Commit Graph

60 Commits

Author SHA1 Message Date
Ben Kimock
ee05de8e5e Re-enable android tests/benches in alloc 2024-08-28 10:45:30 -04:00
Nicholas Nethercote
84ac80f192 Reformat use declarations.
The previous commit updated `rustfmt.toml` appropriately. This commit is
the outcome of running `x fmt --all` with the new formatting options.
2024-07-29 08:26:52 +10:00
Ralf Jung
c0b564b767 disable benches in Miri 2024-04-07 09:58:10 +02:00
Ralf Jung
a6803b9de4 add 'x.py miri', and make it work for 'library/{core,alloc,std}' 2024-04-03 20:27:20 +02:00
surechen
40ae34194c remove redundant imports
Detects redundant imports that can be eliminated.

For #117772:

To make review and modification easier, the checking code and the
code that removes redundant imports are split into two PRs.
2023-12-10 10:56:22 +08:00
AngelicosPhosphoros
964df019d2 Split Vec::dedup_by into 2 cycles
The first loop runs until it finds two equal adjacent elements; the second runs only if such a pair was found. This avoids any memory writes until an item that should be removed is found.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.
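
A minimal sketch of the two-loop idea (a simplified, safe-Rust illustration with a hypothetical function name; the actual `Vec::dedup_by` works with raw pointers and a caller-supplied predicate):

```
fn dedup_two_loops<T: PartialEq>(v: &mut Vec<T>) {
    // First loop: scan for the first adjacent pair of equal elements.
    // No memory writes happen in this phase.
    let mut first_dup = None;
    for i in 1..v.len() {
        if v[i] == v[i - 1] {
            first_dup = Some(i);
            break;
        }
    }
    // Second loop: runs only if a duplicate was found; compacts in place.
    if let Some(start) = first_dup {
        let mut write = start; // v[..start] is already deduplicated
        for read in start + 1..v.len() {
            if v[read] != v[write - 1] {
                v.swap(read, write);
                write += 1;
            }
        }
        v.truncate(write);
    }
}

fn main() {
    let mut v = vec![1, 1, 2, 3, 3, 4];
    dedup_two_loops(&mut v);
    assert_eq!(v, [1, 2, 3, 4]);
}
```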

Results of benchmarks before the implementation (including the new benchmarks where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns
 *   vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs
 *   vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns
 *   vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns
 *   vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns
 *   vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772
2023-12-05 21:01:00 +01:00
AngelicosPhosphoros
0c6901a487 Add more benchmarks of Vec::dedup
They cover more specific cases than the old benches.
Also, `black_box` is now used more thoroughly.
2023-11-25 02:08:43 +01:00
The 8472
114d5f221c s/drain_filter/extract_if/ for Vec, Btree{Map,Set} and LinkedList 2023-06-14 09:28:54 +02:00
KaDiWa
ad2b34d0e3
remove some unneeded imports 2023-04-12 19:27:18 +02:00
bors
4507fdaaa2 Auto merge of #106241 - Sp00ph:vec_deque_iter_methods, r=the8472
Implement more methods for `vec_deque::IntoIter`

This implements several `Iterator` methods on `vec_deque::IntoIter` (`(try_)fold`, `(try_)rfold`, `advance_(back_)by`, `next_chunk`, `count`, and `last`) so they can be more efficient than their default implementations. This also lets the many other `Iterator` methods that use these under the hood take advantage of the manual implementations. `vec::IntoIter` has similar implementations for many of these methods. This PR does not yet implement `TrustedRandomAccess` and friends, as I'm not very familiar with the required safety guarantees.
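
As a rough illustration of why manual implementations can win here (simplified; the PR works on the by-value `IntoIter`, not on borrowed slices): a `VecDeque`'s storage is at most two contiguous slices, so a specialized fold can run tight slice loops instead of a per-element `next()` loop:

```
use std::collections::VecDeque;

fn main() {
    let mut dq: VecDeque<u32> = (0..8).collect();
    dq.rotate_left(3); // rotating can make the ring buffer wrap around

    // The deque's contents are exposed as at most two contiguous slices,
    // so consuming it can be done as two tight slice passes.
    let (front, back) = dq.as_slices();
    let sum: u32 = front.iter().chain(back.iter()).sum();
    assert_eq!(sum, 28);
}
```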

r? `@the8472` (since you also took over my last PR)
2023-02-18 20:12:35 +00:00
Markus Everling
ccba6c5151 Add vec_deque::IntoIter benchmarks 2023-01-18 00:33:05 +01:00
Thom Chiovoloni
a4bf36e87b
Update rand in the stdlib tests, and remove the getrandom feature from it 2023-01-04 14:52:41 -08:00
Ralf Jung
644a5a34dd enable fuzzy_provenance_casts lint in liballoc 2022-11-20 19:12:18 +01:00
The 8472
467b299e53 update str.contains benchmarks 2022-11-14 23:03:16 +01:00
The 8472
4844e5162c black_box test strings in str.contains(str) benchmarks 2022-11-14 23:01:25 +01:00
est31
2c72ea7748 Stabilize map_first_last 2022-09-30 17:00:07 +02:00
The 8472
2f9f2e507e Optimized vec::IntoIter::next_chunk impl
```
x86_64v1, default
test vec::bench_next_chunk                               ... bench:         696 ns/iter (+/- 22)
x86_64v1, pr
test vec::bench_next_chunk                               ... bench:         309 ns/iter (+/- 4)

znver2, default
test vec::bench_next_chunk                               ... bench:      17,272 ns/iter (+/- 117)
znver2, pr
test vec::bench_next_chunk                               ... bench:         211 ns/iter (+/- 3)
```

The znver2 default impl seems to be slow due to inlining decisions. It goes through `core::array::iter_next_chunk`
which has a deeper call tree.
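
For reference, a minimal nightly-only example of the API being optimized (requires the unstable `iter_next_chunk` feature):

```
#![feature(iter_next_chunk)] // unstable, nightly-only

fn main() {
    let mut it = vec![1, 2, 3, 4, 5].into_iter();
    // Pulls the next N items out as an array in one step; the specialized
    // vec::IntoIter impl can copy straight out of the buffer instead of
    // looping through next().
    let chunk: [i32; 4] = it.next_chunk().unwrap();
    assert_eq!(chunk, [1, 2, 3, 4]);
    assert_eq!(it.next(), Some(5));
}
```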
2022-07-26 20:31:43 +02:00
Paolo Barbolini
ac2c21a623 Add VecDeque::extend TrustedLen benchmark 2022-06-17 23:41:03 +02:00
Thom Chiovoloni
0812759840
Avoid use of rand::thread_rng in stdlib benchmarks 2022-05-02 00:08:21 -07:00
Paolo Barbolini
84b8898d63 Add VecDeque::extend benchmark 2022-04-27 21:10:20 +02:00
Ben Kimock
6e6d0cbf83 Add debug assertions to some unsafe functions
These debug assertions are implemented only at runtime using
`const_eval_select`, and in the error path they execute
`intrinsics::abort` instead of a normal debug assertion, to
minimize their impact on code size when enabled.

Of all these changes, the bounds checks for unchecked indexing are
expected to be most impactful (case in point, they found a problem in
rustc).
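
A stable-Rust sketch of the pattern (hypothetical helper, not the actual libcore code, which uses the unstable `const_eval_select` intrinsic so the check also vanishes during const evaluation):

```
// Hypothetical helper illustrating the checked-unchecked-indexing idea.
#[inline]
unsafe fn get_unchecked_checked(slice: &[u32], index: usize) -> u32 {
    if cfg!(debug_assertions) && index >= slice.len() {
        // Abort instead of panicking: the error path stays tiny, which
        // minimizes the code-size impact of the assertion when enabled.
        std::process::abort();
    }
    unsafe { *slice.get_unchecked(index) }
}

fn main() {
    let data = [10, 20, 30];
    assert_eq!(unsafe { get_unchecked_checked(&data, 1) }, 20);
}
```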
2022-03-29 11:05:24 -04:00
The 8472
d0f38cc4b4 update vec::retain benchmarks
Add `into_iter().filter().collect()` as a comparison point, since it was reported to be faster than `retain`.
Remove the clone inside the benchmark loop to reduce allocator noise.
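
The two approaches being compared, for illustration:

```
fn main() {
    let v: Vec<u32> = (0..1000).collect();

    // In-place removal:
    let mut a = v.clone();
    a.retain(|x| x % 2 == 0);

    // The new comparison point: rebuild through the iterator pipeline.
    let b: Vec<u32> = v.into_iter().filter(|x| x % 2 == 0).collect();

    assert_eq!(a, b);
}
```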
2021-12-04 16:20:35 +01:00
John Kugelman
fb2d0f5c03 Add #[must_use] to remaining alloc functions 2021-10-15 11:46:49 -04:00
Jubilee
19d9a147be
Rollup merge of #88452 - xu-cheng:vecdeque-from-array, r=m-ou-se
VecDeque: improve performance for From<[T; N]>

Create `VecDeque` directly from the array instead of inserting items one-by-one.
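
Usage of the conversion in question:

```
use std::collections::VecDeque;

fn main() {
    // One bulk conversion instead of N push_backs.
    let dq = VecDeque::from([1, 2, 3, 4]);
    assert_eq!(dq.len(), 4);
}
```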

Benchmark
```
./x.py bench library/alloc --test-args vec_deque::bench_from_array_1000
```

* Before
```
test vec_deque::bench_from_array_1000                    ... bench:       3,991 ns/iter (+/- 717)
```

* After
```
test vec_deque::bench_from_array_1000                    ... bench:         268 ns/iter (+/- 37)
```
2021-10-04 13:58:08 -07:00
TennyZhuang
20e14e4030 Add benchmark for Vec::retain 2021-09-17 02:55:12 +08:00
Cheng XU
2ab73cf63d
add benchmark for From<[T; N]> in VecDeque 2021-08-28 19:46:58 -07:00
Cheng XU
6a6885c6bd
add benchmark for BTreeMap::from_iter 2021-08-28 17:18:43 -07:00
Ali Malik
e43254aad1 Fix `may not` to the appropriate `might not` or `must not` 2021-07-29 01:15:20 -04:00
The8472
e015e9da71 implement fold() on array::IntoIter to improve flatten().collect() perf
```
# old
test vec::bench_flat_map_collect                         ... bench:   2,244,024 ns/iter (+/- 18,903)

# new
test vec::bench_flat_map_collect                         ... bench:     172,863 ns/iter (+/- 2,141)
```
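
A sketch of the kind of pipeline this benchmark exercises (sizes illustrative):

```
fn main() {
    // flat_map yields fixed-size arrays; the fold implemented on
    // array::IntoIter lets the consuming pipeline process each array
    // without per-element next() calls.
    let v: Vec<u32> = (0..1_000).flat_map(|i| [i; 4]).collect();
    assert_eq!(v.len(), 4_000);
}
```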
2021-07-24 19:24:11 +02:00
Stein Somers
b9d43c603b BTree: encapsulate LeafRange better & some debug asserts 2021-06-09 12:03:07 +02:00
Muhammad Mominul Huque
507d97b26e Update expressions where we can use array's IntoIterator implementation 2021-06-02 16:09:04 +06:00
The8472
60a900ee10 remove InPlaceIterable marker from Peekable due to unsoundness
The unsoundness is not in Peekable per se; rather, it is due to the
interaction between Peekable being able to hold an extra item
and vec::IntoIter's clone implementation shortening the allocation.

An alternative solution would be to change IntoIter's clone implementation
to keep enough spare capacity available.
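
For illustration, the two behaviors whose interaction the commit describes (each safe on its own):

```
fn main() {
    let mut it = vec![1, 2, 3].into_iter().peekable();
    assert_eq!(it.peek(), Some(&1)); // Peekable now buffers one item

    // Per the commit message, cloning the inner vec::IntoIter shortens the
    // allocation to the remaining items, while the Peekable pipeline as a
    // whole still yields one more item than that allocation holds.
    let _clone = it.clone();
}
```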
2021-05-19 01:41:09 +02:00
Ben Kimock
8c88418114 Try to make Vec benchmarks only run code they are benchmarking
Many of the Vec benchmarks assert what values should be produced by the
benchmarked code. In some cases, these asserts dominate the runtime of
the benchmarks they are in, causing the benchmarks to understate the
impact of an optimization or regression.
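
A hypothetical bench showing the resulting shape: verification happens once, outside the timed closure (nightly-only, as the stdlib benches are):

```
#![feature(test)]
extern crate test;
use test::{black_box, Bencher};

#[bench]
fn bench_filter_count(b: &mut Bencher) {
    let data: Vec<u32> = (0..100_000).collect();
    // The timed closure runs only the code under test.
    b.iter(|| black_box(&data).iter().filter(|&&x| x % 2 == 0).count());
    // Verified once, outside the measurement.
    assert_eq!(data.iter().filter(|&&x| x % 2 == 0).count(), 50_000);
}
```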
2021-03-25 00:14:00 -04:00
The8472
a1a04e0842 add transmute-via-iterators bench 2021-03-21 20:54:05 +01:00
Soveu
b0092bc995 Vec::dedup optimization - add benches 2021-03-16 14:41:26 +01:00
Stein Somers
d9daedd433 BTreeMap: correct tests for alternative choices of B 2021-02-21 19:06:46 +01:00
Ivan Tham
55ba9e4755
Reorder benches const variable
Move LEN so it is read in order.
2020-09-29 21:39:24 +08:00
Ivan Tham
939fd37643
Rust vec bench import specific rand::RngCore 2020-09-25 22:19:28 +08:00
Ivan Tham
4a6bc77a01
Liballoc bench vec use mem take not replace 2020-09-22 14:26:15 +08:00
Ralf Jung
4b362bbbb6
Rollup merge of #76981 - pickfire:patch-5, r=Mark-Simulacrum
liballoc bench use imported path Bencher

test is already in scope, no need to use the full path
2020-09-21 15:30:44 +02:00
bors
a409a233e0 Auto merge of #75974 - SkiFire13:peekmut-opt-sift, r=LukasKalbertodt
Avoid useless sift_down when std::collections::binary_heap::PeekMut is never mutably dereferenced

If `deref_mut` is never called, the element cannot have been mutated (absent interior mutability), so there is no need to call `sift_down`.

This can be a small improvement in cases where you want to mutate the biggest element of the heap only if it satisfies a predicate that needs only read access to the element.
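
A usage example of the read-only fast path (illustrative):

```
use std::collections::BinaryHeap;

fn main() {
    let mut heap = BinaryHeap::from([3, 1, 4]);
    if let Some(mut top) = heap.peek_mut() {
        if *top > 10 {  // read-only check: goes through Deref
            *top = 0;   // only this write triggers DerefMut, and thus
        }               // a sift_down when `top` is dropped
    }
    assert_eq!(heap.peek(), Some(&4));
}
```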
2020-09-21 05:31:01 +00:00
Ivan Tham
d99bb9d31c
liballoc bench use imported path Bencher
test is already in scope, no need to use the full path
2020-09-21 00:46:40 +08:00
Giacomo Stevanato
924cd135b6 Added benchmarks for BinaryHeap 2020-09-20 01:12:02 +02:00
Ivan Tham
685f04220e
Clean up vec benches bench_in_place style 2020-09-06 12:00:22 +08:00
The8472
a62cd1b44c fix benchmark compile errors 2020-09-03 20:59:33 +02:00
The8472
6ad133443a add benchmark to cover in-place extend 2020-09-03 20:59:28 +02:00
The8472
c731648e77 fix: bench didn't black_box its results 2020-09-03 20:59:23 +02:00
The8472
e1151844fa bench larger allocations 2020-09-03 20:59:22 +02:00
The8472
3d5e9f1904 bench in-place zip 2020-09-03 20:59:18 +02:00
The8472
a596ff36b5 exercise more of the in-place pipeline in the bench 2020-09-03 20:59:14 +02:00