Commit Graph

177 Commits

Author SHA1 Message Date
Daniel Paoliello
d261647c93 Import the 2021 prelude in the core crate 2024-03-25 13:12:06 -07:00
Konrad Höffner
533add895c
add missing PartialOrd impl doc for array 2024-03-06 10:28:56 +01:00
Markus Reiter
14ed426eec
Use generic NonZero everywhere in core. 2024-02-22 15:17:33 +01:00
Markus Reiter
a90cc05233
Replace NonZero::<_>::new with NonZero::new. 2024-02-15 08:09:42 +01:00
Markus Reiter
746a58d435
Use generic NonZero internally. 2024-02-15 08:09:42 +01:00
Mark Rousskov
8043821b3a Bump version placeholders 2024-02-08 07:43:38 -05:00
Matthias Krüger
2624bfbc0d
Rollup merge of #120384 - wackbyte:array-equality-generics, r=Mark-Simulacrum
Use `<T, U>` for array/slice equality `impl`s

Makes the trait implementation documentation for arrays and slices appear more consistent.

[Example](https://doc.rust-lang.org/1.75.0/std/primitive.array.html): mixed `A`, `B`, and `U`.
![List of PartialEq implementations for arrays](https://github.com/wackbyte/rust/assets/29505620/823c010e-ee57-4de1-885b-a1cd6dcaf85f)

This change makes them all `U`.
2024-02-05 11:07:27 +01:00
Matthias Krüger
c9ab37bf4f
Rollup merge of #103522 - Dylan-DPC:76118/array-methods-stab, r=dtolnay
stabilise array methods

Closes #76118

Stabilises the remaining array methods

FCP is yet to be carried out for this

There wasn't a clear consensus on the naming, but all the other alternatives had flaws of their own, as discussed in the tracking issue, and the issue had been silent for a year.
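
As a hedged sketch, assuming the remaining `array_methods` APIs tracked in #76118 are `each_ref`/`each_mut`, the stabilized surface looks like this:

```rust
// Sketch of the assumed `array_methods` APIs: per-element borrows of an array.
fn main() {
    let mut arr = [1, 2, 3];
    let refs: [&i32; 3] = arr.each_ref(); // [&T; N] without consuming the array
    assert_eq!(refs, [&1, &2, &3]);
    let muts: [&mut i32; 3] = arr.each_mut(); // [&mut T; N]
    *muts[0] += 10;
    assert_eq!(arr, [11, 2, 3]);
}
```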
2024-01-26 23:15:47 +01:00
wackbyte
3f3a153056 Use <T, U> for array/slice equality impls
Makes the trait implementation documentation for arrays and slices appear more consistent.
2024-01-26 12:40:04 -05:00
Matthias Krüger
64461dab01
Rollup merge of #117561 - tgross35:split-array, r=scottmcm
Stabilize `slice_first_last_chunk`

This PR does a few different things based around stabilizing `slice_first_last_chunk`. They are split up so this PR can be by-commit reviewed, I can move parts to a separate PR if desired.

This feature provides a very elegant API to extract arrays from either end of a slice, such as for parsing integers from binary data.

## Stabilize `slice_first_last_chunk`

ACP: https://github.com/rust-lang/libs-team/issues/69
Implementation: https://github.com/rust-lang/rust/issues/90091
Tracking issue: https://github.com/rust-lang/rust/issues/111774

This stabilizes the functionality from https://github.com/rust-lang/rust/issues/111774:

```rust
impl [T] {
    pub const fn first_chunk<const N: usize>(&self) -> Option<&[T; N]>;
    pub fn first_chunk_mut<const N: usize>(&mut self) -> Option<&mut [T; N]>;
    pub const fn last_chunk<const N: usize>(&self) -> Option<&[T; N]>;
    pub fn last_chunk_mut<const N: usize>(&mut self) -> Option<&mut [T; N]>;
    pub const fn split_first_chunk<const N: usize>(&self) -> Option<(&[T; N], &[T])>;
    pub fn split_first_chunk_mut<const N: usize>(&mut self) -> Option<(&mut [T; N], &mut [T])>;
    pub const fn split_last_chunk<const N: usize>(&self) -> Option<(&[T], &[T; N])>;
    pub fn split_last_chunk_mut<const N: usize>(&mut self) -> Option<(&mut [T], &mut [T; N])>;
}
```
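
For illustration, a minimal sketch of the "parsing integers from binary data" use case mentioned above (the `parse_u32_le` helper is invented here, not part of this PR):

```rust
// Sketch: pull a 4-byte little-endian u32 off the front of a byte slice.
fn parse_u32_le(data: &[u8]) -> Option<(u32, &[u8])> {
    let (head, rest) = data.split_first_chunk::<4>()?;
    Some((u32::from_le_bytes(*head), rest))
}

fn main() {
    let bytes = [0x78, 0x56, 0x34, 0x12, 0xFF];
    assert_eq!(parse_u32_le(&bytes), Some((0x1234_5678, &[0xFF][..])));
}
```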

Const stabilization is included for all non-mut methods; the `_mut` variants are blocked on `const_mut_refs`. This change includes marking the trivial function `slice_split_at_unchecked` const-stable for internal use (but not fully stable).

## Remove `split_array` slice methods

Tracking issue: https://github.com/rust-lang/rust/issues/90091
Implementation: https://github.com/rust-lang/rust/pull/83233#pullrequestreview-780315524

This PR also removes the following unstable methods from the `split_array` feature, https://github.com/rust-lang/rust/issues/90091:

```rust
impl<T> [T] {
    pub fn split_array_ref<const N: usize>(&self) -> (&[T; N], &[T]);
    pub fn split_array_mut<const N: usize>(&mut self) -> (&mut [T; N], &mut [T]);

    pub fn rsplit_array_ref<const N: usize>(&self) -> (&[T], &[T; N]);
    pub fn rsplit_array_mut<const N: usize>(&mut self) -> (&mut [T], &mut [T; N]);
}
```

This is done because discussion at #90091 and its implementation PR indicate a strong preference for nonpanicking APIs that return `Option`. The only difference between functions under the `split_array` and `slice_first_last_chunk` features is `Option` vs. panic, so remove the duplicates as part of this stabilization.

This does not affect the array methods from `split_array`. We will want to revisit these once `generic_const_exprs` is further along.

## Reverse order of return tuple for `split_last_chunk{,_mut}`

An unresolved question for #111774 is whether to return `(preceding_slice, last_chunk)` (`(&[T], &[T; N])`) or the reverse (`(&[T; N], &[T])`) from `split_last_chunk` and `split_last_chunk_mut`. They are currently implemented as `(last_chunk, preceding_slice)`, which matches `split_last -> (&T, &[T])`. The first commit changes these to `(&[T], &[T; N])` for these reasons:

- More consistent with other splitting methods that return multiple values: `str::rsplit_once`, `slice::split_at{,_mut}`, `slice::align_to` all return tuples with the items in order
- More intuitive (arguably an opinion, but it is consistent with other language elements like the pattern match `let [a, b, rest @ ..] = ...`)
- If we ever added a variadic way to obtain multiple chunks, it would likely return something in order: `.split_many_last::<(2, 4)>() -> (&[T], &[T; 2], &[T; 4])`
- It is the ordering used in the `rsplit_array` methods

I think the inconsistency with `split_last` could be acceptable in this case, since for `split_last` the scalar `&T` doesn't have any internal order to maintain with the other items.
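
A small sketch of the chosen ordering (the `split_trailer` helper and the CRC framing are invented for illustration):

```rust
// With the `(preceding_slice, last_chunk)` order, the tuple reads in the
// same order as the bytes in the frame: payload first, trailer last.
fn split_trailer(frame: &[u8]) -> Option<(&[u8], u16)> {
    let (payload, crc) = frame.split_last_chunk::<2>()?;
    Some((payload, u16::from_be_bytes(*crc)))
}

fn main() {
    let frame = [b'h', b'i', 0xAB, 0xCD];
    assert_eq!(split_trailer(&frame), Some((&b"hi"[..], 0xABCD)));
}
```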

## Unresolved questions

Do we want to reserve the same names on `[u8; N]` to avoid inference confusion? https://github.com/rust-lang/rust/pull/117561#issuecomment-1793388647

---

`slice_first_last_chunk` has only been around since early 2023, but `split_array` has been around since 2021.

`@rustbot` label -T-libs +T-libs-api -T-libs +needs-fcp
cc `@rust-lang/wg-const-eval`, `@scottmcm` who raised this topic, `@clarfonthey` (implementer of `slice_first_last_chunk`), `@jethrogb` (implementer of `split_array`)

Zulip discussion: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Stabilizing.20array-from-slice.20*something*.3F

Fixes: #111774
2024-01-19 19:26:59 +01:00
surechen
40ae34194c remove redundant imports
Detects redundant imports that can be eliminated.

For #117772:

To facilitate review and modification, the checking code and the code that
removes redundant imports were split into two PRs.
2023-12-10 10:56:22 +08:00
Trevor Gross
01337bf1fd Remove {,r}split_array_ref{,_mut} methods from slices
The functionality of these methods from `split_array` has been absorbed by the
`slice_first_last_chunk` feature. This only affects the methods on slices,
not those with the same name that are implemented on array types.

Also adjusts testing to reflect this change.
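
As a hedged sketch, the migration for code that used the removed slice methods looks roughly like this:

```rust
fn main() {
    let data: &[u8] = &[1, 2, 3, 4, 5];
    // Before (removed, panicking): let (head, tail) = data.split_array_ref::<2>();
    // After (Option-returning):
    let (head, tail) = data.split_first_chunk::<2>().unwrap();
    assert_eq!(head, &[1, 2]);
    assert_eq!(tail, &[3, 4, 5]);
}
```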
2023-11-29 23:21:57 -05:00
Matthias Krüger
aa2289d3bc
Rollup merge of #117549 - DaniPopes:more-copied, r=b-naber
Use `copied` instead of manual `map`
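
A sketch of the cleanup pattern this applies across the library:

```rust
// `.copied()` replaces the manual dereferencing map and states intent clearly.
fn main() {
    let v = [1, 2, 3];
    let manual: Vec<i32> = v.iter().map(|&x| x).collect();
    let idiomatic: Vec<i32> = v.iter().copied().collect();
    assert_eq!(manual, idiomatic);
}
```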
2023-11-17 23:04:22 +01:00
DaniPopes
e6779d98ee
library: use copied instead of manual map 2023-11-03 17:18:45 +01:00
ltdk
8337e86b28 Add insta-stable std::hash::{DefaultHasher, RandomState} exports 2023-11-02 20:35:20 -04:00
bors
0d410be23c Auto merge of #115515 - the8472:zip-for-arrays, r=scottmcm
optimize zipping over array iterators

Fixes #115339 (somewhat)

the new assembly:

```asm
zip_arrays:
        .cfi_startproc
        vmovups (%rdx), %ymm0
        leaq    32(%rsi), %rcx
        vxorps  %xmm1, %xmm1, %xmm1
        vmovups %xmm1, -24(%rsp)
        movq    $0, -8(%rsp)
        movq    %rsi, -88(%rsp)
        movq    %rdi, %rax
        movq    %rcx, -80(%rsp)
        vmovups %ymm0, -72(%rsp)
        movq    $0, -40(%rsp)
        movq    $32, -32(%rsp)
        movq    -24(%rsp), %rcx
        vmovups (%rsi,%rcx), %ymm0
        vorps   -72(%rsp,%rcx), %ymm0, %ymm0
        vmovups %ymm0, (%rsi,%rcx)
        vmovups (%rsi), %ymm0
        vmovups %ymm0, (%rdi)
        vzeroupper
        retq
```

This is still longer than the slice version given in the issue, but at least it eliminates the terrible `vpextrb`/`orb` chain. I guess this is due to excessive memcpys again (I haven't looked at the LLVM IR).

The `TrustedLen` specialization is a drive-by change since I had to do something for the default impl anyway to be able to specialize the `TrustedRandomAccessNoCoerce` impl.
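
For reference, a plausible Rust source for the `zip_arrays` function above — a reconstruction under assumptions about the #115339 benchmark, not the exact code:

```rust
// Assumed shape of the benchmark: OR two 32-byte arrays element-wise by
// zipping their `array::IntoIter`s, which is the path this PR optimizes.
pub fn zip_arrays(a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (o, (x, y)) in out.iter_mut().zip(a.into_iter().zip(b)) {
        *o = x | y;
    }
    out
}
```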
2023-10-15 00:49:21 +00:00
The 8472
b018ad3d41 optimize zipping over array iterators 2023-10-06 18:33:25 +02:00
Jason Newcomb
d464b72970 Add more diagnostic items for clippy 2023-10-05 18:21:47 -04:00
Mark Rousskov
cc907f80b9 Re-format let-else per rustfmt update 2023-07-12 21:49:27 -04:00
Jubilee Young
472230d192 Remove array_zip
`[T; N]::zip` is "eager" but most zips are mapped.
This causes poor optimization in generated code.
This is a fundamental design issue and "zip" is
"prime real estate" in terms of function names,
so let's free it up again.
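
A sketch of the lazy replacement for the removed method (iterator `zip` + `map`, collected back via `from_fn`):

```rust
// The eager `a.zip(b)` materialized a tuple array; zipping the iterators
// keeps the subsequent map fused and lazy.
fn main() {
    let a = [1u32, 2, 3];
    let b = [10u32, 20, 30];
    let mut it = a.into_iter().zip(b).map(|(x, y)| x + y);
    let sums: [u32; 3] = std::array::from_fn(|_| it.next().unwrap());
    assert_eq!(sums, [11, 22, 33]);
}
```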
2023-05-30 00:40:39 -07:00
bors
82b311b418 Auto merge of #112016 - GuillaumeGomez:rollup-fhqn4i6, r=GuillaumeGomez
Rollup of 6 pull requests

Successful merges:

 - #111936 (Include test suite metadata in the build metrics)
 - #111952 (Remove DesugaringKind::Replace.)
 - #111966 (Add #[inline] to array TryFrom impls)
 - #111983 (Perform MIR type ops locally in new solver)
 - #111997 (Fix re-export of doc hidden macro not showing up)
 - #112014 (rustdoc: get unnormalized link destination for suggestions)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-05-27 12:29:07 +00:00
Ben Kimock
e1b8fad664 Add #[inline] to array TryFrom impls 2023-05-25 18:24:27 -04:00
Scott McMurray
ba5a3968b8 Stabilize BuildHasher::hash_one 2023-05-24 23:47:50 -07:00
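
A minimal sketch of the stabilized one-shot hashing API:

```rust
use std::collections::hash_map::RandomState;
use std::hash::BuildHasher;

fn main() {
    let s = RandomState::new();
    // Hash a single value without manually driving a Hasher.
    let h: u64 = s.hash_one("hello");
    println!("{h:016x}");
}
```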
Scott McMurray
1cfcf71e04 Add an example that depends on is_ascii in a const 2023-05-04 14:46:17 -07:00
Scott McMurray
370d31b93d Constify [u8]::is_ascii (unstably)
UTF-8 checking in `const fn` was stabilized back in 1.63, but apparently somehow ASCII checking was never const-ified, despite being simpler.
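
A minimal sketch of what the const-ified check enables, as it works on later toolchains where `[u8]::is_ascii` is const-stable:

```rust
// With a const `is_ascii`, the check runs at compile time.
const MESSAGE: &[u8] = b"hello, world";
const MESSAGE_IS_ASCII: bool = MESSAGE.is_ascii();

fn main() {
    assert!(MESSAGE_IS_ASCII);
}
```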
2023-05-04 14:30:20 -07:00
Scott McMurray
8c781b0906 Add the basic ascii::Char type 2023-05-03 22:09:33 -07:00
Michael Goulet
33871c97ab Make sure that signatures aren't accidental refinements 2023-04-28 17:36:49 +00:00
Scott McMurray
1de2257c3f Add intrinsics::transmute_unchecked
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does; it just doesn't have the fancy checking in `hir_typeck`.

Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.

It also simplifies a couple places in `core`.
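
A hedged, nightly-only sketch of the difference from `mem::transmute` (this is an internal API, and the `(T, T)`/`[T; 2]` layout match is assumed here, not guaranteed by repr(Rust)):

```rust
#![feature(core_intrinsics)]
// `mem::transmute` rejects generic, "dependently-sized" casts like this at
// type-check time; `transmute_unchecked` skips that check and lowers straight
// to `CastKind::Transmute`.
use std::intrinsics::transmute_unchecked;

fn pair_to_array<T>(pair: (T, T)) -> [T; 2] {
    // SAFETY: assumes `(T, T)` and `[T; 2]` share a layout; illustration only.
    unsafe { transmute_unchecked(pair) }
}

fn main() {
    assert_eq!(pair_to_array((1, 2)), [1, 2]);
}
```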
2023-04-22 17:22:03 -07:00
Deadbeef
76dbe29104 rm const traits in libcore 2023-04-16 06:49:27 +00:00
The 8472
e29b27b4a4 replace advance_by returning usize with Result<(), NonZeroUsize> 2023-03-27 16:03:14 +02:00
The 8472
69db91b8b2 Change advance(_back)_by to return usize instead of Result<(), usize>
A successful advance is now signalled by returning `0`; other values represent the remaining number
of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial `advance_by`.

This simplifies adapters a bit, replacing some `match`/`if` logic with arithmetic. Whether this is beneficial overall depends
on whether `advance_by` is mostly used as a building block for other iterator methods and adapters, or whether
we also see uses by users where `Result` might be more useful.
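
A free-function sketch of the new contract (the real method is the unstable `Iterator::advance_by`; this stand-in just mirrors the described return convention):

```rust
// Returns the number of steps that could NOT be advanced; 0 means success.
fn advance_by<I: Iterator>(it: &mut I, n: usize) -> usize {
    for i in 0..n {
        if it.next().is_none() {
            return n - i;
        }
    }
    0
}

fn main() {
    let mut it = [1, 2, 3].into_iter();
    assert_eq!(advance_by(&mut it, 2), 0); // fully advanced
    assert_eq!(advance_by(&mut it, 5), 4); // one item left, four steps short
}
```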
2023-03-27 14:11:49 +02:00
Scott McMurray
44eec1d9b0 Merge two different equality specialization traits in core 2023-03-01 14:42:06 -08:00
bors
2d91939bb7 Auto merge of #107634 - scottmcm:array-drain, r=thomcc
Improve the `array::map` codegen

The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.

There's three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit as to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straightforward.

(No libs-api changes; everything should be completely implementation-detail-internal.)

It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before.  And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM16 than the previous subobject copies 🤞

r? `@thomcc`

As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
  %array.i.i.i.i = alloca [64 x i32], align 4
  %_3.sroa.5.i.i.i = alloca [65 x i32], align 4
  %_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)

But with this PR it's only `sub rsp, 520`
```llvm
start:
  %array.i.i.i.i.i.i = alloca [64 x i32], align 4
  %array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```

Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
        mov     esi, 64
        mov     edi, 0
        cmp     rdx, 64
        je      .LBB0_3
        lea     rsi, [rdx + 1]
        mov     qword ptr [rsp + 784], rsi
        mov     r8d, dword ptr [rsp + 4*rdx + 528]
        mov     edi, 1
        lea     edx, [r8 + 2*r8]
        lea     r8d, [r8 + 4*rdx]
        add     r8d, 7
.LBB0_3:
        test    edi, edi
        je      .LBB0_11
        mov     dword ptr [rsp + 4*rcx + 272], r8d
        cmp     rsi, 64
        jne     .LBB0_6
        xor     r8d, r8d
        mov     edx, 64
        test    r8d, r8d
        jne     .LBB0_8
        jmp     .LBB0_11
.LBB0_6:
        lea     rdx, [rsi + 1]
        mov     qword ptr [rsp + 784], rdx
        mov     edi, dword ptr [rsp + 4*rsi + 528]
        mov     r8d, 1
        lea     esi, [rdi + 2*rdi]
        lea     edi, [rdi + 4*rsi]
        add     edi, 7
        test    r8d, r8d
        je      .LBB0_11
.LBB0_8:
        mov     dword ptr [rsp + 4*rcx + 276], edi
        add     rcx, 2
        cmp     rcx, 64
        jne     .LBB0_1
```

whereas with this PR it's unrolled and vectorized
```nasm
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 64]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 328], ymm1
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 96]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
2023-02-13 10:18:48 +00:00
Tobias Bucher
77c85e9cba Remove a couple of #[doc(hidden)] pub fn and their #[feature] gates 2023-02-10 08:06:35 +01:00
Scott McMurray
5bc328fdef Allow canonicalizing the array::map loop in trusted cases 2023-02-04 16:44:51 -08:00
Scott McMurray
52df0558ea Stop forcing array::map through an unnecessary Result 2023-02-04 16:41:35 -08:00
Scott McMurray
5a7342c3dd Stop using into_iter in array::map 2023-02-04 16:41:35 -08:00
André Vennberg
0b35f448f8 Remove various double spaces in source comments. 2023-01-14 17:22:04 +01:00
Hannes Körber
9671dd239d doc: Fix a few small issues
* A few typos around generic types (`;` vs `,`)
* Use inline code formatting for code fragments
* One instance of wrong wording
2022-12-15 14:05:03 +01:00
Fabian Hintringer
69d562d684 change example of array_from_fn to match suggestion 2022-11-25 10:05:07 +01:00
Fabian Hintringer
480f850868 improve array_from_fn documentation 2022-11-24 19:30:46 +01:00
The 8472
3925fc0c8e document and improve array Guard type
The type is unsafe and is now exposed to the whole crate.
Document it properly and add an unsafe method so that
callers make it visible that something unsafe is happening.
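
A hedged sketch of the guard pattern being documented here (not the actual core-internal type):

```rust
use std::mem::MaybeUninit;

// If array initialization panics partway through, `Drop` runs and cleans up
// only the elements written so far, preventing leaks and double-drops.
struct Guard<'a, T, const N: usize> {
    array: &'a mut [MaybeUninit<T>; N],
    initialized: usize,
}

impl<T, const N: usize> Drop for Guard<'_, T, N> {
    fn drop(&mut self) {
        for slot in &mut self.array[..self.initialized] {
            // SAFETY: only the prefix `..initialized` has been written.
            unsafe { slot.assume_init_drop() };
        }
    }
}
```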
2022-11-08 00:13:26 +01:00
The 8472
eb3f001d37 make the array initialization guard available to other modules 2022-11-07 21:44:25 +01:00
Dylan DPC
2326f42ce2 stabilise array methods 2022-10-25 18:07:21 +05:30
Michael Howell
acc269d65b
Rollup merge of #100462 - zohnannor:master, r=thomcc
Clarify `array::from_fn` documentation

I've seen quite a few people on social media confused about where the length of the array comes from in the newly stabilized `array::from_fn` example.

This PR tries to clarify the documentation on this.
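
The point at issue, as a tiny sketch — the length comes from the annotated array type, not from the closure:

```rust
fn main() {
    // `from_fn`'s length N is inferred from the annotation `[u32; 4]`.
    let squares: [u32; 4] = std::array::from_fn(|i| (i as u32) * (i as u32));
    assert_eq!(squares, [0, 1, 4, 9]);
}
```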
2022-10-23 14:48:13 -07:00
Alex Saveau
55d71c61b8
Remove all uses of array_assume_init
Signed-off-by: Alex Saveau <saveau.alexandre@gmail.com>
2022-10-17 13:03:54 -07:00
Pietro Albini
3975d55d98
remove cfg(bootstrap) 2022-09-26 10:14:45 +02:00
onestacked
449326aaad Added const Default impls for Arrays and Tuples. 2022-09-23 17:53:59 +02:00
bors
4ecfdfac51 Auto merge of #100214 - scottmcm:strict-range, r=thomcc
Optimize `array::IntoIter`

`.into_iter()` on arrays was slower than it needed to be (especially compared to slice iterator) since it uses `Range<usize>`, which needs to handle degenerate ranges like `10..4`.

This PR adds an internal `IndexRange` type that's like `Range<usize>` but with a safety invariant that means it doesn't need to worry about those cases -- it only handles `start <= end` -- and thus can give LLVM more information to optimize better.

I added one simple demonstration of the improvement as a codegen test.

(`vec::IntoIter` uses pointers instead of indexes, so doesn't have this problem, but that only works because its elements are boxed.  `array::IntoIter` can't use pointers because that would keep it from being movable.)
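
A hedged sketch of the `IndexRange` idea (the real core type differs; this just shows the invariant's payoff):

```rust
// Maintaining `start <= end` as an invariant removes the degenerate-range
// (`10..4`) handling that `Range<usize>` must do on every operation.
struct IndexRange {
    start: usize,
    end: usize, // invariant: start <= end
}

impl IndexRange {
    fn new(start: usize, end: usize) -> Self {
        assert!(start <= end);
        Self { start, end }
    }

    fn len(&self) -> usize {
        // Cannot underflow thanks to the invariant.
        self.end - self.start
    }

    fn next(&mut self) -> Option<usize> {
        // No wraparound check needed: start < end implies start + 1 <= end.
        if self.start < self.end {
            let i = self.start;
            self.start += 1;
            Some(i)
        } else {
            None
        }
    }
}

fn main() {
    let mut r = IndexRange::new(2, 5);
    assert_eq!(r.len(), 3);
    assert_eq!(r.next(), Some(2));
}
```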
2022-09-21 00:41:33 +00:00
Scott McMurray
6dbd9a29c2 Optimize array::IntoIter
`.into_iter()` on arrays was slower than it needed to be (especially compared to slice iterator) since it uses `Range<usize>`, which needs to handle degenerate ranges like `10..4`.

This PR adds an internal `IndexRange` type that's like `Range<usize>` but with a safety invariant that means it doesn't need to worry about those cases -- it only handles `start <= end` -- and thus can give LLVM more information to optimize better.

I added one simple demonstration of the improvement as a codegen test.
2022-09-19 23:24:34 -07:00