stabilise array methods
Closes #76118
Stabilises the remaining array methods
FCP has not yet been carried out for this.
There wasn't a clear consensus on the naming, but all the other alternatives had some flaws, as discussed in the tracking issue, and the issue has been silent on this question for a year.
Stabilize `slice_first_last_chunk`
This PR does a few different things centered on stabilizing `slice_first_last_chunk`. They are split up so this PR can be reviewed commit by commit; I can move parts to a separate PR if desired.
This feature provides a very elegant API to extract arrays from either end of a slice, such as for parsing integers from binary data.
## Stabilize `slice_first_last_chunk`
ACP: https://github.com/rust-lang/libs-team/issues/69
Implementation: https://github.com/rust-lang/rust/issues/90091
Tracking issue: https://github.com/rust-lang/rust/issues/111774
This stabilizes the functionality from https://github.com/rust-lang/rust/issues/111774:
```rust
impl<T> [T] {
    pub const fn first_chunk<const N: usize>(&self) -> Option<&[T; N]>;
    pub fn first_chunk_mut<const N: usize>(&mut self) -> Option<&mut [T; N]>;
    pub const fn last_chunk<const N: usize>(&self) -> Option<&[T; N]>;
    pub fn last_chunk_mut<const N: usize>(&mut self) -> Option<&mut [T; N]>;
    pub const fn split_first_chunk<const N: usize>(&self) -> Option<(&[T; N], &[T])>;
    pub fn split_first_chunk_mut<const N: usize>(&mut self) -> Option<(&mut [T; N], &mut [T])>;
    pub const fn split_last_chunk<const N: usize>(&self) -> Option<(&[T], &[T; N])>;
    pub fn split_last_chunk_mut<const N: usize>(&mut self) -> Option<(&mut [T], &mut [T; N])>;
}
```
Const stabilization is included for all non-`mut` methods; the `_mut` variants are not made `const` because they are blocked on `const_mut_refs`. This change also includes marking the trivial function `slice_split_at_unchecked` const-stable for internal use (but not fully stable).
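As a small illustration of the parsing use case mentioned above (my own sketch with a hypothetical `read_u32_le` helper, not code from the PR), `split_first_chunk` pairs nicely with the `from_le_bytes`-style constructors:
```rust
// Read a little-endian u32 prefix from a byte slice without any panicking
// indexing; the chunk length is given explicitly via the turbofish.
fn read_u32_le(bytes: &[u8]) -> Option<(u32, &[u8])> {
    let (head, rest) = bytes.split_first_chunk::<4>()?;
    Some((u32::from_le_bytes(*head), rest))
}

fn main() {
    let buf = [0x78, 0x56, 0x34, 0x12, 0xFF];
    assert_eq!(read_u32_le(&buf), Some((0x1234_5678, &[0xFF][..])));
}
```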
## Remove `split_array` slice methods
Tracking issue: https://github.com/rust-lang/rust/issues/90091
Implementation: https://github.com/rust-lang/rust/pull/83233#pullrequestreview-780315524
This PR also removes the following unstable methods from the `split_array` feature, https://github.com/rust-lang/rust/issues/90091:
```rust
impl<T> [T] {
    pub fn split_array_ref<const N: usize>(&self) -> (&[T; N], &[T]);
    pub fn split_array_mut<const N: usize>(&mut self) -> (&mut [T; N], &mut [T]);
    pub fn rsplit_array_ref<const N: usize>(&self) -> (&[T], &[T; N]);
    pub fn rsplit_array_mut<const N: usize>(&mut self) -> (&mut [T], &mut [T; N]);
}
```
This is done because discussion at #90091 and its implementation PR indicate a strong preference for nonpanicking APIs that return `Option`. The only difference between functions under the `split_array` and `slice_first_last_chunk` features is `Option` vs. panic, so remove the duplicates as part of this stabilization.
This does not affect the array methods from `split_array`. We will want to revisit these once `generic_const_exprs` is further along.
## Reverse order of return tuple for `split_last_chunk{,_mut}`
An unresolved question for #111774 is whether `split_last_chunk` and `split_last_chunk_mut` should return `(preceding_slice, last_chunk)` (i.e. `(&[T], &[T; N])`) or the reverse (`(&[T; N], &[T])`). They are currently implemented as `(last_chunk, preceding_slice)`, which matches `split_last -> (&T, &[T])`. The first commit changes these to `(&[T], &[T; N])` for these reasons:
- More consistent with other splitting methods that return multiple values: `str::rsplit_once`, `slice::split_at{,_mut}`, `slice::align_to` all return tuples with the items in order
- More intuitive (arguably a matter of opinion, but it is consistent with other language elements like pattern matching: `let [a, b, rest @ ..] ...`)
- If we ever added a variadic way to obtain multiple chunks, it would likely return something in order: `.split_many_last::<(2, 4)>() -> (&[T], &[T; 2], &[T; 4])`
- It is the ordering used in the `rsplit_array` methods
I think the inconsistency with `split_last` could be acceptable in this case, since for `split_last` the scalar `&T` doesn't have any internal order to maintain with the other items.
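As a quick usage sketch (mine, not test code from the PR), this is how the reordered API reads after the first commit:
```rust
fn main() {
    // With the reordering, the pieces come back in slice order:
    // everything before the chunk first, then the N-element chunk.
    let data = [1u8, 2, 3, 4, 5];
    if let Some((rest, last)) = data.split_last_chunk::<2>() {
        assert_eq!(rest, &[1, 2, 3]);
        assert_eq!(last, &[4, 5]);
    }
}
```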
## Unresolved questions
Do we want to reserve the same names on `[u8; N]` to avoid inference confusion? https://github.com/rust-lang/rust/pull/117561#issuecomment-1793388647
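For context, a minimal illustration of the concern (my own sketch of the scenario, nothing this PR changes): the slice methods are already reachable through an array receiver, so a same-named inherent method later added to `[u8; N]` would be found first for the same call syntax.
```rust
fn main() {
    // Today this resolves to the slice method through unsized coercion of
    // the receiver; a hypothetical inherent `first_chunk` on the array type
    // itself would take precedence for this exact call.
    let arr: [u8; 8] = [0; 8];
    let head: Option<&[u8; 4]> = arr.first_chunk::<4>();
    assert!(head.is_some());
}
```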
---
`slice_first_last_chunk` has only been around since early 2023, but `split_array` has been around since 2021.
`@rustbot` label -T-libs +T-libs-api -T-libs +needs-fcp
cc `@rust-lang/wg-const-eval`, `@scottmcm` who raised this topic, `@clarfonthey` (implementer of `slice_first_last_chunk`), and `@jethrogb` (implementer of `split_array`)
Zulip discussion: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Stabilizing.20array-from-slice.20*something*.3F
Fixes: #111774
Detects redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.
The functionality of these methods from `split_array` has been absorbed by the
`slice_first_last_chunk` feature. This only affects the methods on slices,
not those with the same name that are implemented on array types.
Also adjusts testing to reflect this change.
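A migration sketch for callers of the removed methods (illustrative only; `head4` is a hypothetical helper, and it assumes the caller wants to keep the panicking behaviour):
```rust
// Before (unstable `split_array` feature): `buf.split_array_ref::<4>()`,
// which panicked if the slice was too short.
// After (stabilized `slice_first_last_chunk` methods): handle the Option.
fn head4(buf: &[u8]) -> (&[u8; 4], &[u8]) {
    buf.split_first_chunk::<4>().expect("slice shorter than 4 bytes")
}
```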
optimize zipping over array iterators
Fixes #115339 (somewhat)
The new assembly:
```asm
zip_arrays:
.cfi_startproc
vmovups (%rdx), %ymm0
leaq 32(%rsi), %rcx
vxorps %xmm1, %xmm1, %xmm1
vmovups %xmm1, -24(%rsp)
movq $0, -8(%rsp)
movq %rsi, -88(%rsp)
movq %rdi, %rax
movq %rcx, -80(%rsp)
vmovups %ymm0, -72(%rsp)
movq $0, -40(%rsp)
movq $32, -32(%rsp)
movq -24(%rsp), %rcx
vmovups (%rsi,%rcx), %ymm0
vorps -72(%rsp,%rcx), %ymm0, %ymm0
vmovups %ymm0, (%rsi,%rcx)
vmovups (%rsi), %ymm0
vmovups %ymm0, (%rdi)
vzeroupper
retq
```
This is still longer than the slice version given in the issue, but at least it eliminates the terrible `vpextrb`/`orb` chain. I guess this is due to excessive memcpys again (I haven't looked at the LLVM IR)?
The `TrustedLen` specialization is a drive-by change since I had to do something for the default impl anyway to be able to specialize the `TrustedRandomAccessNoCoerce` impl.
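For reference, roughly the shape of code under discussion (my assumption based on the issue, not the exact benchmark from #115339): a zip over array iterators with an elementwise OR.
```rust
// Assumed reconstruction of the `zip_arrays` function whose assembly is
// shown above: zip the array iterators and OR elementwise.
pub fn zip_arrays(mut a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
    a.iter_mut().zip(b).for_each(|(a, b)| *a |= b);
    a
}
```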
`[T; N]::zip` is "eager" but most zips are mapped.
This causes poor optimization in generated code.
This is a fundamental design issue and "zip" is
"prime real estate" in terms of function names,
so let's free it up again.
Rollup of 6 pull requests
Successful merges:
- #111936 (Include test suite metadata in the build metrics)
- #111952 (Remove DesugaringKind::Replace.)
- #111966 (Add #[inline] to array TryFrom impls)
- #111983 (Perform MIR type ops locally in new solver)
- #111997 (Fix re-export of doc hidden macro not showing up)
- #112014 (rustdoc: get unnormalized link destination for suggestions)
r? `@ghost`
`@rustbot` modify labels: rollup
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
A successful advance is now signalled by returning `0`; other values represent the remaining number of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial `advance_by`. This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends on whether `advance_by` is mostly used as a building block for other iterator methods and adapters, or whether we also see uses by users where `Result` might be more useful.
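A standalone sketch of the arithmetic this enables (hypothetical helpers, not the actual `advance_by` signature): reporting the shortfall lets a chain-style adapter forward the remainder with plain subtraction instead of matching on a success/failure value.
```rust
// Advance a slice cursor by up to `n` items, returning how many steps could
// NOT be taken (0 means the full advance succeeded).
fn advance_slice<T>(cursor: &mut &[T], n: usize) -> usize {
    let taken = n.min(cursor.len());
    let current = *cursor; // copy out the shared reference
    *cursor = &current[taken..];
    n - taken
}

// A chain-like adapter hands whatever the first half could not advance
// straight to the second half -- no `match` needed.
fn advance_chain<T>(first: &mut &[T], second: &mut &[T], n: usize) -> usize {
    let rest = advance_slice(first, n);
    advance_slice(second, rest)
}

fn main() {
    let mut a: &[u32] = &[1, 2, 3];
    let mut b: &[u32] = &[4, 5];
    assert_eq!(advance_chain(&mut a, &mut b, 4), 0); // fully advanced
    assert_eq!(b, &[5]);
}
```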
Improve the `array::map` codegen
The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.
There are three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit since to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straightforward.
(No libs-api changes; everything should be completely implementation-detail-internal.)
It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before. And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM16 than the previous subobject copies 🤞
r? `@thomcc`
As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
%array.i.i.i.i = alloca [64 x i32], align 4
%_3.sroa.5.i.i.i = alloca [65 x i32], align 4
%_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)
But with this PR it's only `sub rsp, 520`
```llvm
start:
%array.i.i.i.i.i.i = alloca [64 x i32], align 4
%array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```
Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
mov esi, 64
mov edi, 0
cmp rdx, 64
je .LBB0_3
lea rsi, [rdx + 1]
mov qword ptr [rsp + 784], rsi
mov r8d, dword ptr [rsp + 4*rdx + 528]
mov edi, 1
lea edx, [r8 + 2*r8]
lea r8d, [r8 + 4*rdx]
add r8d, 7
.LBB0_3:
test edi, edi
je .LBB0_11
mov dword ptr [rsp + 4*rcx + 272], r8d
cmp rsi, 64
jne .LBB0_6
xor r8d, r8d
mov edx, 64
test r8d, r8d
jne .LBB0_8
jmp .LBB0_11
.LBB0_6:
lea rdx, [rsi + 1]
mov qword ptr [rsp + 784], rdx
mov edi, dword ptr [rsp + 4*rsi + 528]
mov r8d, 1
lea esi, [rdi + 2*rdi]
lea edi, [rdi + 4*rsi]
add edi, 7
test r8d, r8d
je .LBB0_11
.LBB0_8:
mov dword ptr [rsp + 4*rcx + 276], edi
add rcx, 2
cmp rcx, 64
jne .LBB0_1
```
whereas with this PR it's unrolled and vectorized
```nasm
vpmulld ymm1, ymm0, ymmword ptr [rsp + 64]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 328], ymm1
vpmulld ymm1, ymm0, ymmword ptr [rsp + 96]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
The type is unsafe and now exposed to the whole crate.
Document it properly and add an unsafe method so the
caller can make it visible that something unsafe is happening.
Clarify `array::from_fn` documentation
I've seen quite a few people on social media confused about where the length of the array comes from in the newly stabilized `array::from_fn` example.
This PR tries to clarify the documentation on this.
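For reference, a minimal example of the inference in question (my own sketch, not the snippet from the docs): the length comes from the array type the result is used as, not from the closure.
```rust
fn main() {
    // N = 5 is taken from the annotated type; the closure only maps an
    // index to an element value.
    let squares: [usize; 5] = std::array::from_fn(|i| i * i);
    assert_eq!(squares, [0, 1, 4, 9, 16]);
}
```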
Optimize `array::IntoIter`
`.into_iter()` on arrays was slower than it needed to be (especially compared to slice iterator) since it uses `Range<usize>`, which needs to handle degenerate ranges like `10..4`.
This PR adds an internal `IndexRange` type that's like `Range<usize>` but with a safety invariant that means it doesn't need to worry about those cases -- it only handles `start <= end` -- and thus can give LLVM more information to optimize better.
I added one simple demonstration of the improvement as a codegen test.
(`vec::IntoIter` uses pointers instead of indexes, so doesn't have this problem, but that only works because its elements are boxed. `array::IntoIter` can't use pointers because that would keep it from being movable.)
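A simplified sketch of the idea (my own illustration, not the actual core-internal type): with `start <= end` upheld as an invariant, length and stepping never have to consider the degenerate case.
```rust
// Invariant: start <= end, checked once at construction (the PR describes
// the real internal type as upholding this via a safety invariant).
struct IndexRange {
    start: usize,
    end: usize,
}

impl IndexRange {
    fn new(start: usize, end: usize) -> Option<Self> {
        (start <= end).then_some(IndexRange { start, end })
    }

    // No saturating or checked subtraction needed thanks to the invariant;
    // this is the extra information the optimizer gets to work with.
    fn len(&self) -> usize {
        self.end - self.start
    }

    fn next_index(&mut self) -> Option<usize> {
        if self.start < self.end {
            let i = self.start;
            self.start += 1;
            Some(i)
        } else {
            None
        }
    }
}

fn main() {
    let mut r = IndexRange::new(2, 5).unwrap();
    assert_eq!(r.len(), 3);
    assert_eq!(r.next_index(), Some(2));
}
```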