Commit Graph

2373 Commits

Author SHA1 Message Date
Matthias Krüger
97bcf0da7c
Rollup merge of #119801 - zachs18:zachs18-patch-1, r=steffahn,Nilstrieb
Fix deallocation with wrong allocator in (A)Rc::from_box_in

Deallocate the `Box` with the original allocator (via `&A`), not `Global`.

Fixes #119749

<details> <summary>Example code with error and Miri output</summary>

(Note that this UB is not observable on stable, because the only usable allocator on stable is `Global` anyway.)

Code ([playground link](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=96193c2c6a1912d7f669fbbe39174b09)):

```rs
#![feature(allocator_api)]
use std::alloc::System;

// uncomment one of these
use std::rc::Rc;
//use std::sync::Arc as Rc;

fn main() {
    let x: Box<[u32], System> = Box::new_in([1,2,3], System);
    let _: Rc<[u32], System> = Rc::from(x);
}
```

Miri output:

```text
error: Undefined Behavior: deallocating alloc904, which is C heap memory, using Rust heap deallocation operation
   --> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:117:14
    |
117 |     unsafe { __rust_dealloc(ptr, layout.size(), layout.align()) }
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ deallocating alloc904, which is C heap memory, using Rust heap deallocation operation
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
    = note: BACKTRACE:
    = note: inside `std::alloc::dealloc` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:117:14: 117:64
    = note: inside `<std::alloc::Global as std::alloc::Allocator>::deallocate` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:254:22: 254:51
    = note: inside `<std::boxed::Box<std::mem::ManuallyDrop<[u32]>> as std::ops::Drop>::drop` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/boxed.rs:1244:17: 1244:66
    = note: inside `std::ptr::drop_in_place::<std::boxed::Box<std::mem::ManuallyDrop<[u32]>>> - shim(Some(std::boxed::Box<std::mem::ManuallyDrop<[u32]>>))` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:507:1: 507:56
    = note: inside `std::mem::drop::<std::boxed::Box<std::mem::ManuallyDrop<[u32]>>>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:992:24: 992:25
    = note: inside `std::rc::Rc::<[u32], std::alloc::System>::from_box_in` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/rc.rs:1928:13: 1928:22
    = note: inside `<std::rc::Rc<[u32], std::alloc::System> as std::convert::From<std::boxed::Box<[u32], std::alloc::System>>>::from` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/rc.rs:2504:9: 2504:27
note: inside `main`
   --> src/main.rs:10:32
    |
10  |     let _: Rc<[u32], System> = Rc::from(x);
    |                                ^^^^^^^^^^^

note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace

error: aborting due to 1 previous error
```

</details>
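
A minimal sketch of the idea behind the fix (not the actual `from_box_in` code): take the `Box` apart together with its allocator and hand the memory back to that same allocator rather than to `Global`. The helper name is made up for illustration, and the sketch assumes `T` is not zero-sized.

```rs
#![feature(allocator_api)]
use std::alloc::{Allocator, Layout, System};
use std::ptr::NonNull;

// Hypothetical helper illustrating the corrected pattern: deallocate via the
// Box's own allocator, never via `Global`. Assumes `T` is not zero-sized.
fn free_with_original_allocator<T>(b: Box<T, System>) {
    let (ptr, alloc) = Box::into_raw_with_allocator(b);
    unsafe {
        ptr.drop_in_place();
        alloc.deallocate(NonNull::new_unchecked(ptr).cast(), Layout::new::<T>());
    }
}

fn main() {
    free_with_original_allocator(Box::new_in(42u32, System));
}
```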
2024-01-22 16:54:57 +01:00
bors
6fff796eac Auto merge of #120196 - matthiaskrgr:rollup-id2zocf, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #120005 (Update Readme)
 - #120045 (Un-hide `iter::repeat_n`)
 - #120128 (Make stable_mir::with_tables sound)
 - #120145 (fix: Drop guard was deallocating with the incorrect size)
 - #120158 (`rustc_mir_dataflow`: Restore removed exports)
 - #120167 (Capture the rationale for `-Zallow-features=` in bootstrap.py)
 - #120174 (Warn users about limited review for tier 2 and 3 code)
 - #120180 (Document some alternatives to `Vec::split_off`)

Failed merges:

 - #120171 (Fix assume and assert in jump threading)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-01-22 08:56:22 +00:00
Matthias Krüger
3eb7fe32a1
Rollup merge of #120180 - Zalathar:vec-split-off-alternatives, r=dtolnay
Document some alternatives to `Vec::split_off`

One of the discussion points that came up in #119917 is that some people use `Vec::split_off` in cases where they probably shouldn't, because the alternatives (like `mem::take`) are hard to discover.

This PR adds some suggestions to the documentation of `split_off` that should point people towards alternatives that might be more appropriate for their use-case.

I've deliberately tried to keep these changes as simple and uncontroversial as possible, so that they don't depend on how the team decides to handle the concerns raised in #119917. That's why I haven't touched the existing documentation for `split_off`, and haven't added links to `split_off` to the documentation of other methods.
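
As an illustrative example (not the exact wording added to the docs): when the goal is "take everything and leave an empty `Vec`", `mem::take` is usually the better fit than `split_off(0)`, because it reuses the existing buffer instead of allocating a new one.

```rs
use std::mem;

fn main() {
    let mut v = vec![1, 2, 3];
    // `v.split_off(0)` would also yield all the elements, but it allocates a
    // new buffer for them; `mem::take` just swaps in an empty Vec.
    let taken: Vec<i32> = mem::take(&mut v);
    assert!(v.is_empty());
    assert_eq!(taken, [1, 2, 3]);
}
```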
2024-01-21 12:28:55 +01:00
Matthias Krüger
4a941d384d
Rollup merge of #120145 - the8472:fix-inplace-dest-drop, r=cuviper
fix: Drop guard was deallocating with the incorrect size

InPlaceDstBufDrop holds onto the allocation before the shrinking happens, which means it must drop the destination elements but deallocate the source allocation.

Thanks `@cuviper` for spotting this.
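
Not the std guard itself, but a small illustration of the underlying rule the fix restores: an allocation must be freed with the layout it was allocated with (the source layout), regardless of how many destination elements actually got written.

```rs
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // Source allocation: room for 8 u32s.
    let src_layout = Layout::array::<u32>(8).unwrap();
    unsafe {
        let ptr = alloc(src_layout);
        assert!(!ptr.is_null());
        // ...imagine only the first 3 destination elements were written...
        // Deallocating with a smaller (destination-sized) layout would be UB;
        // the original source layout must be used.
        dealloc(ptr, src_layout);
    }
}
```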
2024-01-21 12:28:53 +01:00
bors
fa404339c9 Auto merge of #85528 - the8472:iter-markers, r=dtolnay
Implement iterator specialization traits on more adapters

This adds

* `TrustedLen` to `Skip` and `StepBy`
* `TrustedRandomAccess` to `Skip`
* `InPlaceIterable` and `SourceIter` to  `Copied` and `Cloned`

The first two might improve performance in the compiler itself since `skip` is used in several places. Scenarios that would exercise the last point are probably rare, since they would require an owning iterator that has references as items somewhere in its iterator pipeline.

Improvements for `Skip`:

```
# old
test iter::bench_skip_trusted_random_access                     ... bench:       8,335 ns/iter (+/- 90)

# new
test iter::bench_skip_trusted_random_access                     ... bench:       2,753 ns/iter (+/- 27)
```
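
These traits are internal, so user code never names them; ordinary pipelines like the hedged sketch below are the kind of code the new impls can speed up, for example by letting `collect` rely on an exact length.

```rs
fn main() {
    // `skip` and `step_by` now forward `TrustedLen` (and `skip` also
    // `TrustedRandomAccess`) where the underlying iterator supports it.
    let skipped: Vec<u32> = (0..1_000u32).skip(100).collect();
    let stepped: Vec<u32> = (0..1_000u32).step_by(10).collect();
    assert_eq!(skipped.len(), 900);
    assert_eq!(stepped.len(), 100);
}
```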
2024-01-21 11:17:46 +00:00
Zalathar
6f1944d394 Document some alternatives to Vec::split_off 2024-01-21 11:56:55 +11:00
Guillaume Gomez
b917753d79
Rollup merge of #120116 - the8472:only-same-alignments, r=cuviper
Remove alignment-changing in-place collect

This removes the alignment-changing in-place collect optimization introduced in #110353
Currently, stable users can't benefit from the optimization because `GlobalAlloc` doesn't support alignment-changing realloc and neither do most POSIX allocators. So in practice it has a negative impact on performance.

Explanation from https://github.com/rust-lang/rust/issues/120091#issuecomment-1899071681:

> > You mention that in case of alignment mismatch -- when the new alignment is less than the old -- the implementation calls `mremap`.
>
> I was trying to note that this isn't really the case in practice, due to the semantics of Rust's allocator APIs. The only use of the allocator within the `in_place_collect` implementation itself is [a call to `Allocator::shrink()`](db7125f008/library/alloc/src/vec/in_place_collect.rs (L299-L303)), which per its documentation [allows decreasing the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.Allocator.html). However, in stable Rust, the only available `Allocator` is [`Global`](https://doc.rust-lang.org/1.75.0/alloc/alloc/struct.Global.html), which delegates to the registered `GlobalAlloc`. Since `GlobalAlloc::realloc()` [cannot change the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.GlobalAlloc.html#method.realloc), the implementation of [`<Global as Allocator>::shrink()`](db7125f008/library/alloc/src/alloc.rs (L280-L321)) must fall back to creating a brand-new allocation, `memcpy`ing the data into it, and freeing the old allocation, whenever the alignment doesn't remain exactly the same.
>
> Therefore, the underlying allocator, provided by libc or some other source, has no opportunity to internally `mremap()` the data when the alignment is changed, since it has no way of knowing that the allocation is the same.
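
A hedged sketch of the kind of pipeline the removed optimization targeted: source and destination element types with different alignments. Such code still works after the removal; it simply no longer tries to reuse the source buffer when the required alignment shrinks.

```rs
fn main() {
    // u64 and u32 generally have different alignments, so reusing the source
    // allocation here would require an alignment-changing shrink -- the case
    // this PR stops optimizing.
    let src: Vec<u64> = (0..100).collect();
    let dst: Vec<u32> = src.into_iter().map(|x| x as u32).collect();
    assert_eq!(dst.len(), 100);
}
```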
2024-01-20 20:06:35 +01:00
The 8472
5796b3c167 fix: Drop guard was deallocating with the incorrect size
InPlaceDstBufDrop holds onto the allocation before the shrinking happens,
which means it must drop the destination elements but deallocate the
source allocation.
2024-01-19 23:05:30 +01:00
Matthias Krüger
7219bd22ed
Rollup merge of #120110 - invpt:patch-1, r=the8472
Update documentation for Vec::into_boxed_slice to be more clear about excess capacity

Currently, the documentation for Vec::into_boxed_slice says that "if the vector has excess capacity, its items will be moved into a newly-allocated buffer with exactly the right capacity." This is misleading, as copies do not necessarily occur, depending on whether the allocator supports in-place shrinking. I copied some of the wording from shrink_to_fit, though it could potentially still be worded better than this.
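
A small example of the behavior being documented; whether the excess capacity is released by copying or by shrinking in place is up to the allocator.

```rs
fn main() {
    let mut v = Vec::with_capacity(10);
    v.extend([1, 2, 3]);
    // The boxed slice has exactly 3 elements; the 7 spare slots are gone.
    let b: Box<[i32]> = v.into_boxed_slice();
    assert_eq!(b.len(), 3);
}
```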
2024-01-19 08:15:05 +01:00
invpt
35a9fc3472 Clarify docs for Vec::into_boxed_slice, Vec::shrink_to_fit 2024-01-18 18:01:36 -05:00
The 8472
85d1787962 remove alignment-changing in-place collect
Currently stable users can't benefit from this because `GlobalAlloc` doesn't support
alignment-changing realloc and neither do most POSIX allocators.
So in practice it always results in an extra memcpy.
2024-01-18 22:50:14 +01:00
The 8472
b28a95391b update internal ASCII art comment for vec specializations 2024-01-18 22:47:20 +01:00
Jake Goulding
fb7762b1c5 Remove no-longer-needed allow(dead_code) from the standard library
`repr(transparent)` now silences the lint.
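
For example, a wrapper like the hypothetical one below previously needed `#[allow(dead_code)]` because its field is never read directly; with the lint change it compiles warning-free.

```rs
// The field of a #[repr(transparent)] wrapper exists for layout/FFI reasons,
// so dead_code no longer flags it even though it is never read.
#[repr(transparent)]
struct Handle(u32);

fn main() {
    let _h = Handle(7);
}
```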
2024-01-18 13:14:42 -05:00
Robert Grosse
db7125f008
Fix typo in comments (in_place_collect) 2024-01-16 20:48:22 -08:00
Matthias Krüger
b3d15ebb08
Rollup merge of #119853 - klensy:rustfmt-ignore, r=cuviper
rustfmt.toml: don't ignore just any tests path, only root one

Previously ignored any `tests` path, now only /tests at repo root.

For reference, https://git-scm.com/docs/gitignore#_pattern_format
2024-01-11 19:42:53 +01:00
klensy
aa696c5a22 apply fmt 2024-01-11 15:04:48 +03:00
hi-rustin
784b50cece chore: remove unnecessary blank line
Signed-off-by: hi-rustin <rustin.liu@gmail.com>
2024-01-11 09:45:21 +08:00
The8472
37d26c719d Implement in-place iteration markers for iter::{Copied, Cloned} 2024-01-10 19:03:57 +01:00
zachs18
bfe04e08c0 Fix deallocation with wrong allocator in (A)Rc::from_box_in 2024-01-10 02:17:47 -06:00
The 8472
93b34a5ffa mark vec::IntoIter pointers as !nonnull 2024-01-07 03:44:04 +01:00
The 8472
fd8ba7bc3c typo fix 2024-01-07 03:42:45 +01:00
Matthias Krüger
923578e6f9
Rollup merge of #118781 - RalfJung:core-panic-feature, r=the8472
merge core_panic feature into panic_internals

I don't know why those are two separate features, but it does not seem intentional. This merge is useful because with https://github.com/rust-lang/rust/pull/118123, panic_internals is recognized as an internal feature, but core_panic is not -- and core_panic definitely should be internal.
2024-01-06 16:07:46 +01:00
bors
5113ed28ea Auto merge of #118297 - shepmaster:warn-dead-tuple-fields, r=WaffleLapkin
Merge `unused_tuple_struct_fields` into `dead_code`

This implicitly upgrades the lint from `allow` to `warn` and places it into the `unused` lint group.

[Discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Moving.20.60unused_tuple_struct_fields.60.20from.20allow.20to.20warn)
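
An assumed illustration of the effect (not taken from the PR): a never-read tuple struct field now produces a warning by default instead of being silently allowed.

```rs
// After the merge this warns under `dead_code` that field `1` is never read
// (previously the allow-by-default `unused_tuple_struct_fields` lint).
struct Point(i32, i32);

fn main() {
    let p = Point(1, 2);
    println!("{}", p.0);
}
```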
2024-01-05 04:51:55 +00:00
León Orell Valerian Liehr
34ef194859
Rollup merge of #119434 - taiki-e:rc-is-dangling, r=Mark-Simulacrum
rc: Take *const T in is_dangling

It is not important which one is used since `is_dangling` does not access memory, but `*const` removes the need for `*const T` -> `*mut T` casts in `from_raw_in`.
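
A hypothetical sketch of the helper's shape after the change (the real `is_dangling` is a private item in `alloc`, and the sentinel-address detail below is an assumption): it only inspects the address, so taking `*const T` suffices and callers need no `*mut` cast.

```rs
// Sketch only: `Weak::new` is assumed to use usize::MAX as its dangling
// sentinel address; the check never dereferences the pointer.
fn is_dangling<T: ?Sized>(ptr: *const T) -> bool {
    ptr.cast::<()>() as usize == usize::MAX
}

fn main() {
    assert!(!is_dangling(&42i32 as *const i32));
}
```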
2024-01-03 16:08:25 +01:00
Jake Goulding
5772818dc8 Adjust library tests for unused_tuple_struct_fields -> dead_code 2024-01-02 15:34:37 -05:00
Matthias Krüger
c67ab2e0b4
Rollup merge of #119158 - JohnTheCoolingFan:arc-weak-clone-pretty, r=cuviper
Clean up alloc::sync::Weak Clone implementation

Since both return points (the tail and the early return) return the same expression and the only difference is whether `inner` is available, the code that does the atomic operations and checks on `inner` was moved into the `if` body, leaving the only return at the tail. Original comments preserved.
2023-12-30 11:42:02 +01:00
Taiki Endo
2c23c06c32 rc: Take *const T in is_dangling
It is not important which one is used since `is_dangling` does not access
memory, but `*const` removes the need for `*const T` -> `*mut T` casts
in `from_raw_in`.
2023-12-30 16:28:00 +09:00
Gurinder Singh
e3aca01343 Italicise "bytes" in the docs of some Vec methods
because on a cursory read it's easy to miss that the limit is
in terms of bytes, not the number of elements. The italics should help
with that.
2023-12-29 09:53:29 +05:30
Matthias Krüger
89c3236789
Rollup merge of #119205 - mumbleskates:vecdeque-comment-fix, r=Mark-Simulacrum
fix minor mistake in comments describing VecDeque resizing

Avoiding confusion where one of the items in the deque seems to disappear in two of the three cases
2023-12-24 01:08:09 +01:00
Pietro Albini
c00486c9bb
update version placeholders 2023-12-22 11:01:42 +01:00
Kent Ross
f2e711e4c2 fix minor mistake in comments describing VecDeque resizing 2023-12-21 15:20:14 -08:00
JohnTheCoolingFan
0453d5fe6f
Cleaned up alloc::sync::Weak Clone implementation
Since both return points (the tail and the early return) return the same
expression and the only difference is whether `inner` is available, the
code that does the atomic operations and checks on `inner` was moved
into the `if` body, leaving the only return at the tail. Original
comments preserved.
2023-12-20 12:13:34 +03:00
bors
51c0db6a91 Auto merge of #106790 - the8472:rawvec-niche, r=scottmcm
add more niches to rawvec

Previously RawVec only had a single niche in its `NonNull` pointer. With this change it now has `isize::MAX` niches, since half the value-space of the capacity field is never needed: a capacity can never exceed `isize::MAX`.
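
A one-line illustration of the "half the value-space" observation (the niche machinery itself is internal to `RawVec` and not shown here):

```rs
fn main() {
    // isize::MAX is exactly half of the usize range; everything above it is
    // unreachable as a capacity and can therefore serve as layout niches.
    assert_eq!(isize::MAX as usize, usize::MAX / 2);
}
```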
2023-12-20 02:19:10 +00:00
Daniel Huang
f2f9b1d82a
Update c_str.rs 2023-12-14 19:08:36 -05:00
The 8472
6a2f44e9d8 add comment to RawVec::cap field 2023-12-11 23:38:48 +01:00
The 8472
502df1b7d4 add more niches to rawvec 2023-12-11 23:38:48 +01:00
bors
8a3765582c Auto merge of #117758 - Urgau:lint_pointer_trait_comparisons, r=davidtwco
Add lint against ambiguous wide pointer comparisons

This PR is the resolution of https://github.com/rust-lang/rust/issues/106447 decided in https://github.com/rust-lang/rust/issues/117717 by T-lang.

## `ambiguous_wide_pointer_comparisons`

*warn-by-default*

The `ambiguous_wide_pointer_comparisons` lint checks comparisons whose operands are wide pointers, i.e. `*const`/`*mut` pointers to `?Sized` types.

### Example

```rust
let ab = (A, B);
let a = &ab.0 as *const dyn T;
let b = &ab.1 as *const dyn T;

let _ = a == b;
```

### Explanation

The comparison includes metadata which may not be expected.
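
When only the data address matters, the ambiguity can be avoided by comparing thin pointers explicitly. A self-contained sketch mirroring the example above (the trait and types are made up for illustration):

```rs
trait Trait {}
struct A;
struct B;
impl Trait for A {}
impl Trait for B {}

fn main() {
    let ab = (A, B);
    let a = &ab.0 as *const dyn Trait;
    let b = &ab.1 as *const dyn Trait;
    // Casting away the metadata compares only the data addresses, so the
    // vtable pointers (which may or may not be unique) no longer matter.
    let _ = std::ptr::eq(a as *const (), b as *const ());
}
```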

-------

This PR also drops `clippy::vtable_address_comparisons` which is superseded by this one.

~~One thing: is the current naming right? `invalid` seems a bit too much.~~

Fixes https://github.com/rust-lang/rust/issues/117717
2023-12-11 14:33:16 +00:00
bors
84f6130fe3 Auto merge of #118692 - surechen:remove_unused_imports, r=petrochenkov
remove redundant imports

detects redundant imports that can be eliminated.

for #117772 :

In order to facilitate review and modification, the checking code and the code that removes redundant imports were split into two PRs.

r? `@petrochenkov`
2023-12-10 11:55:48 +00:00
bors
61afc9c928 Auto merge of #116949 - hamza1311:stablize-arc_unwrap_or_clone, r=dtolnay
Stabilize arc_unwrap_or_clone

Fixes: #93610

This likely needs FCP. I created this PR as its stabilization is trivial and the FCP can simply be conducted here. Not sure how to ping the libs API team (the last attempt apparently didn't work, according to the GitHub UI).
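
A quick usage example of the API being stabilized:

```rs
use std::sync::Arc;

fn main() {
    let a = Arc::new(String::from("hello"));
    let b = Arc::clone(&a);
    // Two strong references exist, so the inner String must be cloned here.
    let s1 = Arc::unwrap_or_clone(a);
    // `b` is now the only strong reference, so the String is moved out.
    let s2 = Arc::unwrap_or_clone(b);
    assert_eq!(s1, "hello");
    assert_eq!(s2, "hello");
}
```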
2023-12-10 05:01:00 +00:00
surechen
40ae34194c remove redundant imports
detects redundant imports that can be eliminated.

for #117772 :

In order to facilitate review and modification, the checking code and the
code that removes redundant imports were split into two PRs.
2023-12-10 10:56:22 +08:00
Ralf Jung
af4913fcf4 merge core_panic feature into panic_internals 2023-12-09 14:49:00 +01:00
bors
1a3aa4ad14 Auto merge of #114136 - TennyZhuang:linked-list-retain, r=thomcc
add LinkedList::{retain,retain_mut}

Implement #114135

The API is consistent with other collections.
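
A usage sketch of the added methods; depending on the toolchain this may still require nightly and the `linked_list_retain` feature gate (the gate name is an assumption here).

```rs
#![feature(linked_list_retain)] // may be unnecessary once stabilized
use std::collections::LinkedList;

fn main() {
    let mut list: LinkedList<i32> = (1..=6).collect();
    // Same contract as Vec::retain / HashMap::retain: keep the elements for
    // which the predicate returns true.
    list.retain(|&x| x % 2 == 0);
    assert_eq!(list.into_iter().collect::<Vec<_>>(), vec![2, 4, 6]);
}
```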
2023-12-09 02:38:45 +00:00
bors
c9d85d67c4 Auto merge of #117960 - zhiqiangxu:dry, r=workingjubilee
chore: avoid duplicate code in `Weak::inner`
2023-12-07 02:27:41 +00:00
Matthias Krüger
49d6594278
Rollup merge of #117563 - 0xalpharush:docs/into-raw, r=workingjubilee
docs: clarify explicitly freeing heap allocated memory

The documentation for `Box::into_raw` didn't mention `drop`, and I wondered if I was doing something wrong. Based off [this](https://stackoverflow.com/questions/75441199/rust-how-do-i-correctly-free-heap-allocated-memory), I think it's helpful to include the more concise yet explicit way to free heap-allocated memory. This is my first Rust PR and I went through https://std-dev-guide.rust-lang.org/development/, but let me know if I missed something :)
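
Roughly, the idiom the clarified docs point at: reconstruct the `Box` from the raw pointer and let `drop` free it.

```rs
fn main() {
    let ptr = Box::into_raw(Box::new(42u32));
    // Without this, the allocation would simply leak; rebuilding the Box and
    // dropping it is the explicit way to free it.
    unsafe { drop(Box::from_raw(ptr)) };
}
```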
2023-12-06 17:21:57 +01:00
bors
15bb3e204a Auto merge of #118460 - the8472:fix-vec-realloc, r=saethlin
Fix in-place collect not reallocating when necessary

Regression introduced in https://github.com/rust-lang/rust/pull/110353.
This was [caught by miri](https://rust-lang.zulipchat.com/#narrow/stream/269128-miri/topic/Cron.20Job.20Failure.20.28miri-test-libstd.2C.202023-11.29/near/404764617)

r? `@saethlin`
2023-12-06 08:45:11 +00:00
Urgau
5e1bfb538f Adjust tests for newly added ambiguous_wide_pointer_comparisons lint 2023-12-06 09:03:48 +01:00
bors
1dd4db5062 Auto merge of #118655 - compiler-errors:rollup-vrngyzn, r=compiler-errors
Rollup of 9 pull requests

Successful merges:

 - #117793 (Update variable name to fix `unused_variables` warning)
 - #118123 (Add support for making lib features internal)
 - #118268 (Pretty print `Fn<(..., ...)>` trait refs with parentheses (almost) always)
 - #118346 (Add `deeply_normalize_for_diagnostics`, use it in coherence)
 - #118350 (Simplify Default for tuples)
 - #118450 (Use OnceCell in cell module documentation)
 - #118585 (Fix parser ICE when recovering `dyn`/`impl` after `for<...>`)
 - #118587 (Cleanup error handlers some more)
 - #118642 (bootstrap(builder.rs): Don't explicitly warn against `semicolon_in_expressions_from_macros`)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-12-06 04:20:51 +00:00
zhiqiangxu
75d76c8ffe Don't repeat yourself 2023-12-06 09:02:19 +08:00
bors
e9013ac0e4 Auto merge of #118273 - AngelicosPhosphoros:dedup_2_loops_version_77772_2, r=the8472
Split `Vec::dedup_by` into 2 cycles

The first loop runs until two equal adjacent elements are found; the second runs afterwards only if the first one found any. This avoids any memory writes until an element that has to be removed is found.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.
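
A minimal sketch of the two-loop idea (not the actual `dedup_by` implementation, which works with raw pointers and a caller-supplied predicate): the first loop only scans, so a vector with no duplicates is never written to.

```rs
fn dedup_two_loops<T: PartialEq>(v: &mut Vec<T>) {
    let len = v.len();
    // First loop: find the first adjacent duplicate without writing anything.
    let mut first_dup = 1;
    while first_dup < len && v[first_dup] != v[first_dup - 1] {
        first_dup += 1;
    }
    if first_dup >= len {
        return; // no duplicates: the vector is left untouched
    }
    // Second loop: the usual compacting pass, starting where the scan stopped.
    let mut write = first_dup;
    for read in first_dup + 1..len {
        if v[read] != v[write - 1] {
            v.swap(write, read);
            write += 1;
        }
    }
    v.truncate(write);
}

fn main() {
    let mut v = vec![1, 1, 2, 3, 3, 3, 4];
    dedup_two_loops(&mut v);
    assert_eq!(v, [1, 2, 3, 4]);
}
```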

Results of benchmarks before implementation (including new benchmark where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   __vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns__
 *   __vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns__
 *   __vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs__
 *   __vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs__
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   __vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns__
 *   __vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns__
 *   __vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns__
 *   __vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs__
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772

P.S. Note that this is the same PR as #92104; I just missed the review and then forgot about it.
Also, I cannot reopen that pull request, so I am creating a new one.
I responded to the remaining questions directly by adding comments to my code.
2023-12-05 21:40:02 +00:00
AngelicosPhosphoros
964df019d2 Split Vec::dedup_by into 2 cycles
The first loop runs until two equal adjacent elements are found; the second runs afterwards only if the first one found any. This avoids any memory writes until an element that has to be removed is found.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.

Results of benchmarks before implementation (including new benchmark where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   __vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns__
 *   __vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns__
 *   __vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs__
 *   __vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs__
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   __vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns__
 *   __vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns__
 *   __vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns__
 *   __vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs__
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772
2023-12-05 21:01:00 +01:00