Rollup merge of #106997 - Sp00ph:introselect, r=scottmcm

Add heapsort fallback in `select_nth_unstable`

Addresses #102451 and #106933.

`slice::select_nth_unstable` uses a quickselect implementation based on the same pattern-defeating quicksort algorithm that `slice::sort_unstable` uses. `slice::sort_unstable` uses a recursion limit and falls back to heapsort if there were too many bad pivot choices, which guarantees an O(n log n) worst case running time (known as introsort). However, `slice::select_nth_unstable` has no such fallback strategy, so its worst case running time is O(n²) instead. #102451 links to a playground which generates pathological inputs that show this quadratic behavior. On my machine, a randomly generated slice of length `1 << 19` takes ~200µs to calculate its median, whereas a pathological input of the same length takes over 2.5s.

This PR adds an iteration limit to `select_nth_unstable` with a heapsort fallback, which ensures an O(n log n) worst case running time (introselect). With this change, there was no noticeable slowdown for the random input, but the same pathological input now takes only ~1.2ms. In the future it might be worth implementing something like Median of Medians or Fast Deterministic Selection instead, which guarantee O(n) running time for all possible inputs. I've left this as a `FIXME` for now and only implemented the heapsort fallback to minimize the needed code changes.

I still think we should clarify in the `select_nth_unstable` docs that the worst case running time isn't currently O(n) (the original reason that #102451 was opened), but I think it's a lot better to be able to guarantee O(n log n) instead of O(n²) for the worst case.
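For context (not part of this PR), a minimal example of the API in question, using it to find a median:

```rust
fn main() {
    let mut v: Vec<u32> = (0..1024).rev().collect();
    let mid = v.len() / 2;

    // After this call, the element at `mid` is the one a full sort would put there;
    // everything before it compares <= to it and everything after compares >= to it,
    // in arbitrary order.
    let (_, median, _) = v.select_nth_unstable(mid);
    println!("median = {median}");
}
```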
Matthias Krüger 2023-01-18 06:59:22 +01:00 committed by GitHub
commit 788671c1c6

@@ -831,6 +831,15 @@ fn partition_at_index_loop<'a, T, F>(
) where
    F: FnMut(&T, &T) -> bool,
{
    // Limit the amount of iterations and fall back to heapsort, similarly to `slice::sort_unstable`.
    // This lowers the worst case running time from O(n^2) to O(n log n).
    // FIXME: Investigate whether it would be better to use something like Median of Medians
    // or Fast Deterministic Selection to guarantee O(n) worst case.
    let mut limit = usize::BITS - v.len().leading_zeros();

    // True if the last partitioning was reasonably balanced.
    let mut was_balanced = true;

    loop {
        // For slices of up to this length it's probably faster to simply sort them.
        const MAX_INSERTION: usize = 10;
@@ -839,6 +848,18 @@ fn partition_at_index_loop<'a, T, F>(
            return;
        }

        if limit == 0 {
            heapsort(v, is_less);
            return;
        }

        // If the last partitioning was imbalanced, try breaking patterns in the slice by shuffling
        // some elements around. Hopefully we'll choose a better pivot this time.
        if !was_balanced {
            break_patterns(v);
            limit -= 1;
        }

        // Choose a pivot
        let (pivot, _) = choose_pivot(v, is_less);
@@ -863,6 +884,7 @@ fn partition_at_index_loop<'a, T, F>(
        }

        let (mid, _) = partition(v, pivot, is_less);
        was_balanced = cmp::min(mid, v.len() - mid) >= v.len() / 8;

        // Split the slice into `left`, `pivot`, and `right`.
        let (left, right) = v.split_at_mut(mid);
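
For intuition (not part of the patch), a small standalone sketch of the arithmetic behind the two new variables, `limit` and `was_balanced`:

```rust
fn main() {
    // `limit` from the diff: for n > 0, usize::BITS - n.leading_zeros() is
    // floor(log2(n)) + 1, i.e. how many imbalanced partitioning rounds are
    // tolerated before the heapsort fallback kicks in.
    for n in [16usize, 1 << 10, 1 << 19] {
        let limit = usize::BITS - n.leading_zeros();
        println!("len = {n:>7}: at most {limit} imbalanced partitions before heapsort");
    }

    // `was_balanced` from the diff: a partition counts as balanced if the smaller
    // side still holds at least 1/8 of the slice. Only imbalanced rounds shuffle
    // the slice (`break_patterns`) and decrement `limit`.
    let (len, mid) = (1usize << 19, 100usize);
    let was_balanced = std::cmp::min(mid, len - mid) >= len / 8;
    assert!(!was_balanced); // 100 < 65536, so this split would count as imbalanced
}
```

So a slice of length `1 << 19` gets at most 20 pattern-breaking retries before heapsort takes over, which is where the O(n log n) worst case bound described above comes from.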