The implementation is unsound when a partially consumed iterator still has some elements buffered in its front/back parts and cloning the iterator discards the capacity of the backing `vec::IntoIter`.
Remove special-case handling of `vec.split_off(0)`
#76682 added special handling to `Vec::split_off` for the case where `at == 0`. Instead of copying the vector's contents into a freshly-allocated vector and returning it, the special-case code steals the old vector's allocation, and replaces it with a new (empty) buffer with the same capacity.
That eliminates the need to copy the existing elements, but comes at a surprising cost, as seen in #119913. The returned vector's capacity is no longer determined by the size of its contents (as would be expected for a freshly-allocated vector), and instead uses the full capacity of the old vector.
In cases where the capacity is large but the size is small, that results in a much larger capacity than would be expected from reading the documentation of `split_off`. This is especially bad when `split_off` is called in a loop (to recycle a buffer), and the returned vectors have a wide variety of lengths.
I believe it's better to remove the special-case code, and treat `at == 0` just like any other value:
- The current documentation states that `split_off` returns a “newly allocated vector”, which is not actually true in the current implementation when `at == 0`.
- If the value of `at` could be non-zero at runtime, then the caller has already agreed to the cost of a full memcpy of the taken elements in the general case. Avoiding that copy would be nice if it were close to free, but the different handling of capacity means that it is not.
- If the caller specifically wants to avoid copying in the case where `at == 0`, they can easily implement that behaviour themselves using `mem::replace`.
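For the last point, a minimal sketch using `mem::take` (equivalent to `mem::replace` with a new empty vector):
```rust
use std::mem;

let mut v = Vec::with_capacity(1024);
v.extend([1, 2, 3]);

// Steal the whole buffer without copying; `v` is left empty.
let taken = mem::take(&mut v); // or: mem::replace(&mut v, Vec::new())
assert_eq!(taken, [1, 2, 3]);
assert!(taken.capacity() >= 1024); // keeps the original allocation
assert_eq!(v.capacity(), 0);
```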
Fixes #119913.
Implement iterator specialization traits on more adapters
This adds
* `TrustedLen` to `Skip` and `StepBy`
* `TrustedRandomAccess` to `Skip`
* `InPlaceIterable` and `SourceIter` to `Copied` and `Cloned`
The first two might improve performance in the compiler itself, since `skip` is used in several places. Scenarios that would exercise the last point are probably rare, since they would require an owning iterator that has references as items somewhere in its iterator pipeline.
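As a behavioural sketch of what `TrustedLen` on `Skip` enables (the specialization itself is internal to the standard library), a collect like the following can reserve the exact final length up front instead of growing geometrically:
```rust
let v: Vec<u32> = (0..10_000).skip(100).collect();
assert_eq!(v.len(), 9_900); // exact size known ahead of time via TrustedLen
```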
Improvements for `Skip`:
```
# old
test iter::bench_skip_trusted_random_access ... bench: 8,335 ns/iter (+/- 90)
# new
test iter::bench_skip_trusted_random_access ... bench: 2,753 ns/iter (+/- 27)
```
Add lint against ambiguous wide pointer comparisons
This PR is the resolution of https://github.com/rust-lang/rust/issues/106447 decided in https://github.com/rust-lang/rust/issues/117717 by T-lang.
## `ambiguous_wide_pointer_comparisons`
*warn-by-default*
The `ambiguous_wide_pointer_comparisons` lint checks comparisons that take `*const`/`*mut` pointers to `?Sized` types as the operands.
### Example
```rust
trait T {}
struct A;
struct B;
impl T for A {}
impl T for B {}

let ab = (A, B);
let a = &ab.0 as *const dyn T;
let b = &ab.1 as *const dyn T;
let _ = a == b; // ambiguous: compares vtable pointers as well as addresses
```
### Explanation
The comparison includes the wide pointers' metadata (for trait objects, the vtable pointer), which may not be expected: two pointers to the same data can compare unequal if they carry different vtable pointers.
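When only the addresses are meant to be compared, `std::ptr::addr_eq` ignores the metadata and makes the intent explicit (a sketch using `std::fmt::Debug` as a stand-in trait):
```rust
use std::fmt::Debug;
use std::ptr;

let x = 1u8;
let p = &x as *const dyn Debug;
let q = &x as *const dyn Debug;

// Compares only the data addresses, ignoring the vtable metadata.
assert!(ptr::addr_eq(p, q));
```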
-------
This PR also drops `clippy::vtable_address_comparisons` which is superseded by this one.
~~One thing: is the current naming right? `invalid` seems a bit too much.~~
Fixes https://github.com/rust-lang/rust/issues/117717
Detect redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.
Expand in-place iteration specialization to Flatten, FlatMap and ArrayChunks
This enables the following cases to collect in-place:
```rust
use std::num::NonZeroUsize;

let v = vec![[0u8; 4]; 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v: Vec<Option<NonZeroUsize>> = vec![NonZeroUsize::new(0); 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v = vec![0u8; 4096];
let v: Vec<_> = v.into_iter().array_chunks::<4>().collect();
```
The niche-filled `Option` flattening in particular should be useful in real code.
Implement `From<{&,&mut} [T; N]>` for `Vec<T>` where `T: Clone`
Currently, if `T` implements `Clone`, we can create a `Vec<T>` from an `&[T]` or an `&mut [T]`. Can we also support creating a `Vec<T>` from an `&[T; N]` or an `&mut [T; N]`? Also, do I need to add `#[inline]` to the implementation?
ACP: rust-lang/libs-team#220. [Accepted]
Closes #100880.
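A usage sketch of the new impls:
```rust
let arr = [1, 2, 3];
let v = Vec::from(&arr); // clones the elements out of the borrowed array
assert_eq!(v, [1, 2, 3]);

let mut arr = [4, 5, 6];
let v = Vec::from(&mut arr);
assert_eq!(v, [4, 5, 6]);
```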
Eliminate ZST allocations in `Box` and `Vec`
This PR fixes 2 issues with `Box` and `RawVec` related to ZST allocations. Specifically, the `Allocator` trait requires that:
- If you allocate a zero-sized layout then you must later deallocate it, otherwise the allocator may leak memory.
- You cannot pass a ZST pointer to the allocator that you haven't previously allocated.
These restrictions exist because an allocator implementation is allowed to allocate non-zero amounts of memory for a zero-sized allocation. For example, `malloc` in libc does this.
Currently, ZSTs are handled differently in `Box` and `Vec`:
- `Vec` never allocates when `T` is a ZST or if the vector capacity is 0.
- `Box` just blindly passes everything on to the allocator, including ZSTs.
This causes problems due to the free conversions between `Box<[T]>` and `Vec<T>`, specifically that ZST allocations could get leaked or a dangling pointer could be passed to `deallocate`.
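A sketch of the hazardous pattern (illustrative; an actual failure requires an allocator that, unlike `Global`, really hands out memory for zero-sized layouts):
```rust
let v: Vec<()> = vec![(); 10]; // `Vec` never allocates: `()` is zero-sized
let b: Box<[()]> = v.into_boxed_slice(); // free conversion, still no allocation
drop(b); // must not pass the never-allocated (dangling) pointer to `deallocate`
```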
This PR fixes this by changing `Box` to not allocate for zero-sized values and slices. It also fixes a bug in `RawVec::shrink` where shrinking to a size of zero did not actually free the backing memory.
A successful advance is now signalled by returning `0`; other values represent the remaining number of steps that could not be advanced, as opposed to the number of steps that were advanced during a partial `advance_by`.
This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends on whether `advance_by` is mostly used as a building block for other iterator methods and adapters, or whether it also sees direct use by users, where `Result` might be more useful.
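A minimal sketch of that simplification, under the convention described above; the free function stands in for the real trait method and is purely illustrative:
```rust
// Assumed convention: returns the number of steps that could NOT be
// advanced, `0` meaning complete success.
fn advance_by<I: Iterator>(it: &mut I, n: usize) -> usize {
    for i in 0..n {
        if it.next().is_none() {
            return n - i;
        }
    }
    0
}

// A chain-style adapter no longer needs to match on a `Result`:
// whatever the first iterator could not advance is forwarded to the second.
fn chain_advance_by<A, B>(a: &mut A, b: &mut B, n: usize) -> usize
where
    A: Iterator,
    B: Iterator<Item = A::Item>,
{
    let remaining = advance_by(a, n);
    advance_by(b, remaining)
}
```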
Fix in-place collection leak when remaining element destructor panics
Fixes#101628
cc `@the8472`
I went for the drop guard route, placing it immediately before the `forget_allocation_drop_remaining` call and after the comment, so as to signal they are closely related.
I also updated the test to check for the leak, though the only change really needed was removing the leak cleanup for Miri, since that memory is no longer leaked.
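For reference, a minimal sketch of the drop-guard pattern (illustrative names, not the actual library internals): cleanup placed in a guard's `Drop` impl runs even while unwinding from a panicking element destructor.
```rust
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)(); // runs on normal exit and during unwinding
    }
}

fn drop_remaining_then_free(remaining: Vec<Box<i32>>, free_allocation: impl FnMut()) {
    let _guard = Guard(free_allocation);
    drop(remaining); // even if a destructor panics, `_guard` still runs
}
```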
Add `vec::Drain{,Filter}::keep_rest`
This PR adds `keep_rest` methods to `vec::Drain` and `vec::DrainFilter` under `drain_keep_rest` feature gate:
```rust
// mod alloc::vec

impl<T, A: Allocator> Drain<'_, T, A> {
    pub fn keep_rest(self);
}

impl<T, F, A: Allocator> DrainFilter<'_, T, F, A>
where
    F: FnMut(&mut T) -> bool,
{
    pub fn keep_rest(self);
}
```
Both these methods cancel draining of elements that were not yet yielded from the iterators. While this needs more testing & documentation, I want to at least start the discussion. This may be a potential way out of the "should `DrainFilter` exhaust itself on drop?" argument.
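A usage sketch of `Drain::keep_rest` (nightly-only, under the `drain_keep_rest` feature gate):
```rust
#![feature(drain_keep_rest)]

let mut v = vec![1, 2, 3, 4, 5];
let mut drain = v.drain(..);

// Yield only the first two elements from the drain...
assert_eq!(drain.next(), Some(1));
assert_eq!(drain.next(), Some(2));

// ...then cancel the rest: the unyielded elements stay in the vector
// instead of being removed and dropped.
drain.keep_rest();
assert_eq!(v, [3, 4, 5]);
```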
Add tests that check `Vec::retain` predicate execution order.
This behaviour is documented for `Vec::retain`, which means there may be code that relies on it, but there were no tests for it.
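A sketch of the documented guarantee being tested:
```rust
let mut v = vec![1, 2, 3, 4];
let mut visited = Vec::new();

// `retain` visits each element exactly once, in the original order.
v.retain(|&x| {
    visited.push(x);
    x % 2 == 0
});

assert_eq!(visited, [1, 2, 3, 4]); // predicate ran in order, once per element
assert_eq!(v, [2, 4]);
```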