Fix in-place collection leak when remaining element destructor panics
Fixes #101628
cc `@the8472`
I went for the drop guard route, placing it immediately before the `forget_allocation_drop_remaining` call and after the comment, so as to signal that they are closely related.
I also updated the test to check for the leak, though the only change really needed was removing the leak cleanup for Miri, since that allocation is no longer leaked.
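For context, a generic sketch of the drop-guard pattern (not the actual code from the in-place-collect specialization): cleanup registered in a guard's `Drop` impl still runs when the guarded code unwinds, which is what keeps the allocation from leaking here.
```rust
// Generic illustration only: a guard whose closure runs on drop,
// including while unwinding from a panic.
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let result = std::panic::catch_unwind(|| {
        // If the code after this point panics, `Guard::drop` still executes
        // during unwinding, so the cleanup is not skipped.
        let _cleanup = Guard(|| println!("cleanup still runs"));
        panic!("element destructor panicked");
    });
    assert!(result.is_err());
}
```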
Add `vec::Drain{,Filter}::keep_rest`
This PR adds `keep_rest` methods to `vec::Drain` and `vec::DrainFilter` under the `drain_keep_rest` feature gate:
```rust
// mod alloc::vec
impl<T, A: Allocator> Drain<'_, T, A> {
pub fn keep_rest(self);
}
impl<T, F, A: Allocator> DrainFilter<'_, T, F, A>
where
F: FnMut(&mut T) -> bool,
{
pub fn keep_rest(self);
}
```
Both of these methods cancel the draining of elements that have not yet been yielded from the iterators. While this needs more testing & documentation, I want to at least start the discussion. This may be a potential way out of the "should `DrainFilter` exhaust itself on drop?" argument.
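For illustration, a usage sketch of the proposed API (nightly, behind the `drain_keep_rest` feature gate):
```rust
#![feature(drain_keep_rest)]

fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    let mut drain = v.drain(..);

    // Pull the first two elements out of the vector.
    assert_eq!(drain.next(), Some(1));
    assert_eq!(drain.next(), Some(2));

    // Cancel the rest of the drain: the elements not yet yielded stay in `v`.
    drain.keep_rest();
    assert_eq!(v, [3, 4, 5]);
}
```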
Add tests that check `Vec::retain` predicate execution order.
This behaviour is documented for `Vec::retain`, which means there is code that relies on it, but there were no tests for it.
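For illustration, the property being tested is roughly the following (a sketch, not the actual test code):
```rust
fn main() {
    let mut v = vec![1, 2, 3, 4];
    let mut visited = Vec::new();

    // The predicate is called exactly once per element, in the original order.
    v.retain(|&x| {
        visited.push(x);
        x % 2 == 0
    });

    assert_eq!(visited, [1, 2, 3, 4]);
    assert_eq!(v, [2, 4]);
}
```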
Optimized vec::IntoIter::next_chunk impl
```
x86_64v1, default
test vec::bench_next_chunk ... bench: 696 ns/iter (+/- 22)
x86_64v1, pr
test vec::bench_next_chunk ... bench: 309 ns/iter (+/- 4)
znver2, default
test vec::bench_next_chunk ... bench: 17,272 ns/iter (+/- 117)
znver2, pr
test vec::bench_next_chunk ... bench: 211 ns/iter (+/- 3)
```
On znver2 the default impl seems to be slow due to different inlining decisions; it goes through `core::array::iter_next_chunk`, which has a deep call tree.
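For context, a small sketch of the method whose `vec::IntoIter` impl is specialized here (nightly; this assumes the `iter_next_chunk` feature gate):
```rust
#![feature(iter_next_chunk)]

fn main() {
    let v = vec![1u32, 2, 3, 4, 5];
    let mut it = v.into_iter();

    // `next_chunk` yields fixed-size arrays; a specialized `vec::IntoIter`
    // impl can copy them straight out of the vector's buffer.
    assert_eq!(it.next_chunk::<2>().unwrap(), [1, 2]);
    assert_eq!(it.next_chunk::<2>().unwrap(), [3, 4]);

    // Fewer than N items remain: the error carries the remainder.
    let rest = it.next_chunk::<2>().unwrap_err();
    assert_eq!(rest.as_slice(), &[5]);
}
```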
Enforce that layout size fits in isize in Layout
As it turns out, enforcing this _in APIs that already check for `usize` overflow_ is fairly trivial. `Layout::from_size_align_unchecked` continues to "allow" sizes which (when rounded up) would overflow `isize`, but these are now declared as library UB for `Layout`, meaning that consumers of `Layout` no longer have to check this before making an allocation.
(Note that this is "immediate library UB;" IOW it is valid for a future release to make this immediate "language UB," and there is an extant patch to do so, to allow Miri to catch this misuse.)
See also #95252, [Zulip discussion](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Layout.20Isn't.20Enforcing.20The.20isize.3A.3AMAX.20Rule).
Fixes https://github.com/rust-lang/rust/issues/95334
Some relevant quotes:
`@eddyb`, https://github.com/rust-lang/rust/pull/95252#issuecomment-1078513769
> [B]ecause of the non-trivial presence of both of these among code published on e.g. crates.io:
>
> 1. **`Layout` "producers" / `GlobalAlloc` "users"**: smart pointers (including `alloc::rc` copies with small tweaks), collections, etc.
> 2. **`Layout` "consumers" / `GlobalAlloc` "providers"**: perhaps fewer of these, but anything built on top of OS APIs like `mmap` will expose `> isize::MAX` allocations (on 32-bit hosts) if they lack extra checks
>
> IMO the only responsible option is to enforce the `isize::MAX` limit in `Layout`, which:
>
> * makes `Layout` _sound_ in terms of only ever allowing allocations where `(alloc_base_ptr: *mut u8).offset(size)` is never UB
> * frees both "producers" and "consumers" of `Layout` from manually reimplementing the checks
> * manual checks can be risky, e.g. if the final size passed to the allocator isn't the one being checked
> * this applies retroactively, fixing the overall soundness of existing code with zero transition period or _any_ changes required from users (as long as going through `Layout` is mandatory, making a "choke point")
>
>
> Feel free to quote this comment onto any relevant issue, I might not be able to keep track of developments.
`@Gankra`, https://github.com/rust-lang/rust/pull/95252#issuecomment-1078556371
> As someone who spent way too much time optimizing libcollections checks for this stuff and tried to splatter docs about it everywhere on the belief that it was a reasonable thing for people to manually take care of: I concede the point, it is not reasonable. I am wholly spiritually defeated by the fact that _liballoc_ of all places is getting this stuff wrong. This isn't throwing shade at the folks who implemented these Rc features, but rather a statement of how impractical it is to expect anyone out in the wider ecosystem to enforce them if _some of the most audited rust code in the library that defines the very notion of allocating memory_ can't even reliably do it.
>
> We need the nuclear option of Layout enforcing this rule. Code that breaks this rule is _deeply_ broken and any "regressions" from changing Layout's contract is a _correctness_ fix. Anyone who disagrees and is sufficiently motivated can go around our backs but the standard library should 100% refuse to enable them.
cc also `@RalfJung` `@rust-lang/wg-allocators`. Even though this technically supersedes #95252, those potential failure points should almost certainly still get nicer panics than just "unwrap failed" (which they would get by this PR).
It might additionally be worth recommending that users of the `Layout` API chain the entire layout calculation with `.and_then`/`?` and `panic!` from a single location at the end, both to reduce the overhead of the checks and to avoid optimizations having to preserve the exact location of each `panic` when, conceptually, there is just one failure: allocation too big.
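For illustration, a hypothetical sketch of that pattern (the `pair_layout` helper is made up):
```rust
use std::alloc::{Layout, LayoutError};

// Chain every fallible step of the layout calculation with `?`...
fn pair_layout(n: usize) -> Result<Layout, LayoutError> {
    let a = Layout::array::<u64>(n)?;
    let b = Layout::array::<u32>(n)?;
    let (layout, _offset) = a.extend(b)?;
    Ok(layout.pad_to_align())
}

fn main() {
    // ...and panic from a single location at the end, since all the
    // intermediate failures amount to the same thing: allocation too big.
    let layout = pair_layout(1024).expect("allocation too large");
    assert_eq!(layout.align(), 8);
}
```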
Probably deserves a T-lang and/or T-libs-api FCP (this technically solidifies the [objects must be no larger than `isize::MAX`](https://rust-lang.github.io/unsafe-code-guidelines/layout/scalars.html#isize-and-usize) rule further, and the UCG document says this hasn't been RFCd) and a crater run. Ideally, no code exists that will start failing with this addition; if it does, it was _likely_ (but not certainly) causing UB.
This changes the `raw_vec` allocation path, so it deserves a perf run as well.
I suggest hiding whitespace-only changes in the diff view.
Documentation for the following methods:

- `with_capacity`
- `with_capacity_in`
- `with_capacity_and_hasher`
- `reserve`
- `reserve_exact`
- `try_reserve`
- `try_reserve_exact`

was inconsistent and often not entirely correct where they existed on the following types:

- `Vec`
- `VecDeque`
- `String`
- `OsString`
- `PathBuf`
- `BinaryHeap`
- `HashSet`
- `HashMap`
- `BufWriter`
- `LineWriter`

since the allocator is allowed to allocate more than the requested capacity in all such cases, and will frequently "allocate" much more in the case of zero-sized types (I also checked `BufReader`, but there the docs appear to be accurate, as it actually allocates the exact capacity).
Some effort was made to make the documentation more consistent between types as well.
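For illustration, a small sketch of the behaviour the clarified docs describe (the zero-sized-type case is what the new `Vec` capacity test covers):
```rust
fn main() {
    // `with_capacity` guarantees *at least* the requested capacity;
    // the allocator is free to hand back more.
    let v: Vec<i32> = Vec::with_capacity(10);
    assert!(v.capacity() >= 10);

    // For zero-sized types nothing is actually allocated, and the reported
    // capacity is usize::MAX regardless of the request.
    let z: Vec<()> = Vec::with_capacity(10);
    assert_eq!(z.capacity(), usize::MAX);
}
```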
- Fix with_capacity* methods for Vec
- Fix *reserve* methods for Vec
- Fix docs for *reserve* methods of VecDeque
- Fix docs for String::with_capacity
- Fix docs for *reserve* methods of String
- Fix docs for OsString::with_capacity
- Fix docs for *reserve* methods on OsString
- Fix docs for with_capacity* methods on HashSet
- Fix docs for *reserve* methods of HashSet
- Fix docs for with_capacity* methods of HashMap
- Fix docs for *reserve* methods on HashMap
- Fix expect messages about OOM in doctests
- Fix docs for BinaryHeap::with_capacity
- Fix docs for *reserve* methods of BinaryHeap
- Fix typos
- Fix docs for with_capacity on BufWriter and LineWriter
- Fix consistent use of `hasher` between `HashMap` and `HashSet`
- Fix warning in doc test
- Add test for capacity of vec with ZST
- Fix doc test error
This updates the standard library's documentation to use the new syntax. The
documentation is worthwhile to update as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it'll
hurt git blame, and generally updates can be done by folks updating the code if
(and when) that makes things more readable with the new format.
A few places in the compiler and library code are updated (mostly just because
they had already been updated when this commit was first authored).
The `advance_by(n)` docs state that in the error case `Err(k)`, `k` is always less than `n`.
They also state that `advance_by(0)` may return `Err(0)` to indicate an exhausted iterator.
These statements are inconsistent.
Since only one implementation (`Skip`) actually made use of that, I changed it to return `Ok(())` in that case too.
While adding some tests I also found a bug in `Take::advance_back_by`.
This also checks the contents and not only the capacity, in case `IntoIter`'s clone implementation is changed to add capacity at the end. Extra capacity at the beginning would be needed to make `InPlaceIterable` work.
Co-authored-by: Giacomo Stevanato <giaco.stevanato@gmail.com>
The unsoundness is not in `Peekable` per se; rather, it is due to the
interaction between `Peekable` being able to hold an extra item
and `vec::IntoIter`'s clone implementation shortening the allocation.
An alternative solution would be to change `IntoIter`'s clone implementation
to keep enough spare capacity available.
Discovered this kind of issue in an unrelated library.
The author copied the tests from here, and AFAIK there are no tests for this particular case.
Signed-off-by: Hanif Bin Ariffin <hanif.ariffin.4326@gmail.com>
Optimize Vec::retain
Use `copy_non_overlapping` instead of `swap` to reduce memory writes, like what we've done in #44355 and `String::retain`.
#48065 already tried to do this optimization but it was reverted in #67300 due to bad codegen of `DrainFilter::drop`.
This PR re-implements the drop-then-move approach. I did a [benchmark](https://gist.github.com/oxalica/3360eec9376f22533fcecff02798b698) on small-no-drop, small-need-drop, and large-no-drop elements with different predicate functions. It turns out that the new implementation is >20% faster on average in almost all cases. Only 2 of the 24 cases are slower, by 3% and 5%. See the link above for more detail.
I think the regression in may-panic cases is due to the drop guard preventing some optimizations. If it were permitted to leak elements when the predicate function or an element's `drop` panics, the new implementation should be almost always faster than the current one.
I'm not sure if we should leak on panic, since there is indeed an issue (#52267) complaining about it.
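For illustration, a minimal sketch of the drop-then-move idea (not the actual std code; unlike the drop-guard version in this PR, this sketch takes the leak-on-panic route discussed above):
```rust
use std::ptr;

fn retain_by_copy<T>(v: &mut Vec<T>, mut pred: impl FnMut(&T) -> bool) {
    let len = v.len();
    // Treat the vector as empty while elements are shuffled, so a panic in
    // `pred` or in a destructor can at worst leak elements, never double-drop.
    unsafe { v.set_len(0) };
    let base = v.as_mut_ptr();
    let mut write = 0;
    for read in 0..len {
        unsafe {
            let item = base.add(read);
            if pred(&*item) {
                // Each kept element is moved at most once with a single
                // `copy_nonoverlapping` instead of a pairwise swap.
                if read != write {
                    ptr::copy_nonoverlapping(item, base.add(write), 1);
                }
                write += 1;
            } else {
                ptr::drop_in_place(item);
            }
        }
    }
    unsafe { v.set_len(write) };
}

fn main() {
    let mut v = vec![1, 2, 3, 4, 5, 6];
    retain_by_copy(&mut v, |x| x % 2 == 0);
    assert_eq!(v, [2, 4, 6]);
}
```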
Implement <https://github.com/rust-lang/rfcs/pull/2714>, changes from the RFC:
- Rename the method `append_from_within` => `extend_from_within`
- Loosen the `T: Copy` bound to `T: Clone`
- Specialize in the `T: Copy` case
This commit also adds a private `Vec::split_at_spare` method and uses it to implement
`Vec::spare_capacity_mut` and `Vec::extend_from_within`. This method returns two
slices: the initialized elements (same as `&mut vec[..]`) and the uninitialized but
allocated space (same as `vec.spare_capacity_mut()`).
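For illustration, a usage sketch of the method added by the RFC:
```rust
fn main() {
    let mut v = vec![0, 1, 2, 3, 4];

    // Clone a range of existing elements and append it to the end.
    v.extend_from_within(2..);
    assert_eq!(v, [0, 1, 2, 3, 4, 2, 3, 4]);

    v.extend_from_within(..2);
    assert_eq!(v, [0, 1, 2, 3, 4, 2, 3, 4, 0, 1]);
}
```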