A successful advance is now signalled by returning `0`, and other return values now represent the remaining number
of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial `advance_by`.
This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends
on whether `advance_by` is mostly used as a building-block for other iterator methods and adapters or whether
we also see uses by users where `Result` might be more useful.
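To make the new contract concrete, here's a minimal illustrative sketch (an assumption about the shape of the return value, not the actual `Iterator::advance_by` signature):
```rust
/// Illustrative only: advance `iter` by up to `n` steps, returning `0` on full
/// success or the number of steps that could not be advanced.
fn advance_by_sketch<I: Iterator>(iter: &mut I, n: usize) -> usize {
    for taken in 0..n {
        if iter.next().is_none() {
            return n - taken; // remaining steps that couldn't be advanced
        }
    }
    0
}
```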
Improve the `array::map` codegen
The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.
There are three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit since doing it meant adding a new `pub(crate)` trait, but hopefully those changes are still straightforward.
(No libs-api changes; everything should be completely implementation-detail-internal.)
It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before. And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM16 than the previous subobject copies 🤞
r? `@thomcc`
As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
%array.i.i.i.i = alloca [64 x i32], align 4
%_3.sroa.5.i.i.i = alloca [65 x i32], align 4
%_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)
But with this PR it's only `sub rsp, 520`
```llvm
start:
%array.i.i.i.i.i.i = alloca [64 x i32], align 4
%array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```
Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
mov esi, 64
mov edi, 0
cmp rdx, 64
je .LBB0_3
lea rsi, [rdx + 1]
mov qword ptr [rsp + 784], rsi
mov r8d, dword ptr [rsp + 4*rdx + 528]
mov edi, 1
lea edx, [r8 + 2*r8]
lea r8d, [r8 + 4*rdx]
add r8d, 7
.LBB0_3:
test edi, edi
je .LBB0_11
mov dword ptr [rsp + 4*rcx + 272], r8d
cmp rsi, 64
jne .LBB0_6
xor r8d, r8d
mov edx, 64
test r8d, r8d
jne .LBB0_8
jmp .LBB0_11
.LBB0_6:
lea rdx, [rsi + 1]
mov qword ptr [rsp + 784], rdx
mov edi, dword ptr [rsp + 4*rsi + 528]
mov r8d, 1
lea esi, [rdi + 2*rdi]
lea edi, [rdi + 4*rsi]
add edi, 7
test r8d, r8d
je .LBB0_11
.LBB0_8:
mov dword ptr [rsp + 4*rcx + 276], edi
add rcx, 2
cmp rcx, 64
jne .LBB0_1
```
whereas with this PR it's unrolled and vectorized
```nasm
vpmulld ymm1, ymm0, ymmword ptr [rsp + 64]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 328], ymm1
vpmulld ymm1, ymm0, ymmword ptr [rsp + 96]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
avoid mixing accesses of ptrs derived from a mutable ref and parent ptrs
``@Vanille-N`` is working on a successor for Stacked Borrows. It will mostly accept strictly more code than Stacked Borrows did, with one exception: the following pattern no longer works.
```rust
let mut root = 6u8;
let mref = &mut root;
let ptr = mref as *mut u8;
*ptr = 0; // Write
assert_eq!(root, 0); // Parent Read
*ptr = 0; // Attempted Write
```
This worked in Stacked Borrows kind of by accident: when doing the "parent read", under SB we Disable `mref`, but the raw ptrs derived from it remain usable. The fact that we can still use the "children" of a reference that is no longer usable is quite nasty and leads to some undesirable effects (in particular it is the major blocker for resolving https://github.com/rust-lang/unsafe-code-guidelines/issues/257). So in Tree Borrows we no longer do that; instead, reading from `root` makes `mref` and all its children read-only.
Due to other improvements in Tree Borrows, the entire Miri test suite still passes with this new behavior, and even the entire libcore and liballoc test suite, except for these 2 cases this PR fixes. Both of these involve code where the programmer wrote `&mut` but then used pointers derived from that reference in ways that alias with the parent pointer, which arguably is violating uniqueness. They are fixed by properly using raw pointers throughout.
Use associated items of `char` instead of freestanding items in `core::char`
The associated functions and constants on `char` have been stable since 1.52 and the freestanding items have been soft-deprecated since 1.62 (https://github.com/rust-lang/rust/pull/95566). This PR ~~marks them as "deprecated in future", similar to the integer and floating point modules (`core::{i32, f32}` etc)~~ replaces all uses of `core::char::*` with `char::*` to prepare for future deprecation of `core::char::*`.
Context is no longer Sync so this doesn't work.
```console
error[E0277]: `*mut ()` cannot be shared between threads safely
--> library/core/tests/task.rs:24:21
|
24 | static CONTEXT: Context<'static> = Context::from_waker(&WAKER);
| ^^^^^^^^^^^^^^^^ `*mut ()` cannot be shared between threads safely
|
= help: within `Context<'static>`, the trait `Sync` is not implemented for `*mut ()`
= note: required because it appears within the type `PhantomData<*mut ()>`
= note: required because it appears within the type `Context<'static>`
= note: shared static variables must have a type that implements `Sync`
```
More inference-friendly API for lazy
The signature for new was
```
fn new<F>(f: F) -> Lazy<T, F>
```
Notably, with `F` unconstrained, `T` can be literally anything, and just `let _ = Lazy::new(|| 92)` would not typecheck.
This historically was a necessity -- `new` is a `const` function, so it couldn't have any bounds. Today though, we can move `new` under the `F: FnOnce() -> T` bound, which gives the compiler enough data to infer the type of `T` from the closure.
According to Godbolt¹, on x86_64 using a bitwise AND produces slightly
better code than using subtraction. Readability of both is pretty
much equivalent, so we might as well use the shorter option.
¹ https://rust.godbolt.org/z/9jM3ejbMx
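The commit doesn't show the exact expression here, so purely as an illustration of the trade-off (my assumption, not the actual diff): both of the following compute the same power-of-two round-down, and the bitwise-AND form is the shorter one.
```rust
// Round `len` down to a multiple of a power-of-two `chunk`.
fn round_down_sub(len: usize, chunk: usize) -> usize {
    len - (len % chunk)
}

fn round_down_and(len: usize, chunk: usize) -> usize {
    debug_assert!(chunk.is_power_of_two());
    len & !(chunk - 1)
}
```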
Add slice methods for indexing via an array of indices.
Disclaimer: It's been a while since I contributed to the main Rust repo, apologies in advance if this is large enough already that it should've been an RFC.
---
# Update:
- Based on feedback, removed the `&[T]` variant of this API, and removed the requirements for the indices to be sorted.
# Description
This adds the following slice methods to `core`:
```rust
impl<T> [T] {
pub unsafe fn get_many_unchecked_mut<const N: usize>(&mut self, indices: [usize; N]) -> [&mut T; N];
pub fn get_many_mut<const N: usize>(&mut self, indices: [usize; N]) -> Option<[&mut T; N]>;
}
```
This allows creating multiple mutable references to disjoint positions in a slice, which previously required writing some awkward code with `split_at_mut()` or `iter_mut()`. For the bounds-checked variant, the indices are checked against each other and against the bounds of the slice, which requires `N * (N + 1) / 2` comparison operations.
This has a proof-of-concept standalone implementation here: https://crates.io/crates/index_many
Care has been taken that the implementation passes miri borrow checks, and generates straight-forward assembly (though this was only checked on x86_64).
# Example
```rust
let v = &mut [1, 2, 3, 4];
let [a, b] = v.get_many_mut([0, 2]).unwrap();
std::mem::swap(a, b);
*b += 100;
assert_eq!(v, &[3, 2, 101, 4]);
```
# Codegen Examples
<details>
<summary>Click to expand!</summary>
Disclaimer: Taken from local tests with the standalone implementation.
## Unchecked Indexing:
```rust
pub unsafe fn example_unchecked(slice: &mut [usize], indices: [usize; 3]) -> [&mut usize; 3] {
slice.get_many_unchecked_mut(indices)
}
```
```nasm
example_unchecked:
mov rcx, qword, ptr, [r9]
mov r8, qword, ptr, [r9, +, 8]
mov r9, qword, ptr, [r9, +, 16]
lea rcx, [rdx, +, 8*rcx]
lea r8, [rdx, +, 8*r8]
lea rdx, [rdx, +, 8*r9]
mov qword, ptr, [rax], rcx
mov qword, ptr, [rax, +, 8], r8
mov qword, ptr, [rax, +, 16], rdx
ret
```
## Checked Indexing (Option):
```rust
pub unsafe fn example_option(slice: &mut [usize], indices: [usize; 3]) -> Option<[&mut usize; 3]> {
slice.get_many_mut(indices)
}
```
```nasm
mov r10, qword, ptr, [r9, +, 8]
mov rcx, qword, ptr, [r9, +, 16]
cmp rcx, r10
je .LBB0_7
mov r9, qword, ptr, [r9]
cmp rcx, r9
je .LBB0_7
cmp rcx, r8
jae .LBB0_7
cmp r10, r9
je .LBB0_7
cmp r9, r8
jae .LBB0_7
cmp r10, r8
jae .LBB0_7
lea r8, [rdx, +, 8*r9]
lea r9, [rdx, +, 8*r10]
lea rcx, [rdx, +, 8*rcx]
mov qword, ptr, [rax], r8
mov qword, ptr, [rax, +, 8], r9
mov qword, ptr, [rax, +, 16], rcx
ret
.LBB0_7:
mov qword, ptr, [rax], 0
ret
```
## Checked Indexing (Panic):
```rust
pub fn example_panic(slice: &mut [usize], indices: [usize; 3]) -> [&mut usize; 3] {
let len = slice.len();
match slice.get_many_mut(indices) {
Some(s) => s,
None => {
let tmp = indices;
index_many::sorted_bound_check_failed(&tmp, len)
}
}
}
```
```nasm
example_panic:
sub rsp, 56
mov rax, qword, ptr, [r9]
mov r10, qword, ptr, [r9, +, 8]
mov r9, qword, ptr, [r9, +, 16]
cmp r9, r10
je .LBB0_6
cmp r9, rax
je .LBB0_6
cmp r9, r8
jae .LBB0_6
cmp r10, rax
je .LBB0_6
cmp rax, r8
jae .LBB0_6
cmp r10, r8
jae .LBB0_6
lea rax, [rdx, +, 8*rax]
lea r8, [rdx, +, 8*r10]
lea rdx, [rdx, +, 8*r9]
mov qword, ptr, [rcx], rax
mov qword, ptr, [rcx, +, 8], r8
mov qword, ptr, [rcx, +, 16], rdx
mov rax, rcx
add rsp, 56
ret
.LBB0_6:
mov qword, ptr, [rsp, +, 32], rax
mov qword, ptr, [rsp, +, 40], r10
mov qword, ptr, [rsp, +, 48], r9
lea rcx, [rsp, +, 32]
mov edx, 3
call index_many::bound_check_failed
ud2
```
</details>
# Extensions
There are multiple optional extensions to this.
## Indexing With Ranges
This could easily be expanded to allow indexing with `[I; N]` where `I: SliceIndex<Self>`. I wanted to keep the initial implementation simple, so I didn't include it yet.
## Panicking Variant
We could also add this method:
```rust
impl<T> [T] {
fn index_many_mut<const N: usize>(&mut self, indices: [usize; N]) -> [&mut T; N];
}
```
This would work similarly to the regular index operator and panic on out-of-bounds indices. The advantage would be that we could more easily ensure good codegen with a useful panic message, which is non-trivial with the `Option` variant.
This is implemented in the standalone implementation, and was used as the basis for the codegen examples above.
`VecDeque::resize` should re-use the buffer in the passed-in element
Today it clones the passed-in element for *every* appended slot, but one of those clones is avoidable.
This adds `iter::repeat_n` (https://github.com/rust-lang/rust/issues/104434) as the primitive needed to do this. If this PR is acceptable, I'll also use this in `Vec` rather than its custom `ExtendElement` type & infrastructure that is harder to share between multiple different containers:
101e1822c3/library/alloc/src/vec/mod.rs (L2479-L2492)
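As a rough sketch of why `repeat_n` helps here (illustrative; the API sat behind the `iter_repeat_n` feature gate at the time): the repeated value is cloned for all but the last yielded item and moved out for the final one, so the passed-in element's buffer can be reused instead of being cloned one extra time.
```rust
use std::iter;

fn main() {
    // `repeat_n` clones the value for the first two items and moves it for the
    // last, so its heap buffer is reused rather than cloned and then dropped.
    let fill = String::from("fill");
    let mut v: Vec<String> = Vec::new();
    v.extend(iter::repeat_n(fill, 3));
    assert_eq!(v, ["fill", "fill", "fill"]);
}
```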
Fix mod_inv termination for the last iteration
On usize=u64 platforms, the 4th iteration would overflow the `mod_gate` back to 0. Similarly for usize=u32 platforms, the 3rd iteration would overflow much the same way.
I tested various approaches to resolving this, including approaches with `saturating_mul` and `widening_mul` to a double usize. Turns out LLVM likes `mul_with_overflow` the best. In fact, now that LLVM can see the iteration count is limited, it will happily unroll the loop into a nice linear sequence.
You will also notice that the code around the loop got simplified somewhat. Now that LLVM is handling the loop nicely, there isn't any more reason to manually unroll the first iteration out of the loop (though looking at the code today I'm not sure all that complexity was necessary in the first place).
Fixes #103361
Fix inconsistent rounding of 0.5 when formatted to 0 decimal places
As described in #70336, when displaying values to zero decimal places the value of 0.5 is rounded to 1, which is inconsistent with the display of other half-integer values which round to even.
From testing the flt2dec implementation, it looks like this comes down to the condition in the fixed-width Dragon implementation where an empty buffer is treated as a case to apply rounding up. I believe the change below fixes it and updates only the relevant tests.
Nevertheless I am aware this is very much a core piece of functionality, so please take a very careful look to make sure I haven't missed anything. I hope this change does not break anything in the wider ecosystem as having a consistent rounding behaviour in floating point formatting is in my opinion a useful feature to have.
Resolves #70336
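For reference, the behaviour after the fix (consistent round-half-to-even at zero decimal places) looks like this:
```rust
fn main() {
    assert_eq!(format!("{:.0}", 0.5), "0"); // previously rounded to "1"
    assert_eq!(format!("{:.0}", 1.5), "2");
    assert_eq!(format!("{:.0}", 2.5), "2");
}
```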
Make `Hash`, `Hasher` and `BuildHasher` `#[const_trait]` and make `Sip` const `Hasher`
This PR enables using Hashes in const context.
r? ``@fee1-dead``
Stabilize `duration_checked_float`
## Stabilization Report
This stabilization report is for a stabilization of `duration_checked_float`, tracking issue: https://github.com/rust-lang/rust/issues/83400.
### Implementation History
- https://github.com/rust-lang/rust/pull/82179
- https://github.com/rust-lang/rust/pull/90247
- https://github.com/rust-lang/rust/pull/96051
- Changed error type to `FromFloatSecsError` in https://github.com/rust-lang/rust/pull/90247
- https://github.com/rust-lang/rust/pull/96051 changes the rounding mode to round-to-nearest instead of truncate.
## API Summary
This stabilization report proposes the following API to be stabilized in `core`, along with their re-exports in `std`:
```rust
// core::time
impl Duration {
pub const fn try_from_secs_f32(secs: f32) -> Result<Duration, TryFromFloatSecsError>;
pub const fn try_from_secs_f64(secs: f64) -> Result<Duration, TryFromFloatSecsError>;
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct TryFromFloatSecsError { ... }
impl core::fmt::Display for TryFromFloatSecsError { ... }
impl core::error::Error for TryFromFloatSecsError { ... }
```
These functions are made const unstable under `duration_consts_float`, tracking issue #72440.
There is an open question in the tracking issue around what the error type should be called, which I was hoping to resolve in the context of an FCP.
In this stabilization PR, I have altered the name of the error type to `TryFromFloatSecsError`. In my opinion, the error type shares the name of the method (adjusted to accommodate both types of floats), which is consistent with other error types in `core`, `alloc` and `std` like `TryReserveError` and `TryFromIntError`.
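A brief usage sketch of the API as proposed (values chosen to be exactly representable):
```rust
use std::time::Duration;

fn main() {
    assert_eq!(
        Duration::try_from_secs_f32(2.5),
        Ok(Duration::new(2, 500_000_000))
    );
    assert!(Duration::try_from_secs_f64(f64::NAN).is_err());
    assert!(Duration::try_from_secs_f64(-1.0).is_err());
}
```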
## Experience Report
Code such as this is ready to be converted to a checked API to ensure it is panic free:
```rust
impl Time {
pub fn checked_add_f64(&self, seconds: f64) -> Result<Self, TimeError> {
// Fail safely during `f64` conversion to duration
if seconds.is_nan() || seconds.is_infinite() {
return Err(TzOutOfRangeError::new().into());
}
if seconds.is_sign_positive() {
self.checked_add(Duration::from_secs_f64(seconds))
} else {
self.checked_sub(Duration::from_secs_f64(-seconds))
}
}
}
```
See: https://github.com/artichoke/artichoke/issues/2194.
`@rustbot` label +T-libs-api -T-libs
cc `@mbartlett21`
introduce `{char, u8}::is_ascii_octdigit`
This feature adds two new APIs: `char::is_ascii_octdigit` and `u8::is_ascii_octdigit`, under the feature gate `is_ascii_octdigit`. These methods are shorthands for `char::is_digit(self, 8)` and `u8::is_digit(self, 8)`:
```rust
// core::char
impl char {
pub fn is_ascii_octdigit(self) -> bool;
}
// core::num
impl u8 {
pub fn is_ascii_octdigit(self) -> bool;
}
```
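A quick usage sketch (behind the `is_ascii_octdigit` feature gate mentioned above):
```rust
#![feature(is_ascii_octdigit)]

fn main() {
    assert!('7'.is_ascii_octdigit());
    assert!(!'8'.is_ascii_octdigit());
    assert!(b'0'.is_ascii_octdigit());
    assert!(!b'a'.is_ascii_octdigit());
}
```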
---
Couple of things I need help understanding:
- `const`ness: have I used the right attribute in this case?
- is there a way to run the tests for `core::char` alone, instead of `./x.py test library/core`?
Reimplement `carrying_add` and `borrowing_sub` for signed integers.
As per the discussion in #85532, this PR reimplements `carrying_add` and `borrowing_sub` for signed integers.
It also adds unit tests for both unsigned and signed integers, with emphasis on the behaviour of these methods.
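As a small sketch of the behaviour being exercised (assuming the `bigint_helper_methods` feature gate these methods live under):
```rust
#![feature(bigint_helper_methods)]

fn main() {
    // 5 + 2 with no incoming carry
    assert_eq!(5i8.carrying_add(2, false), (7, false));
    // i8::MAX + 0 + carry wraps and reports overflow
    assert_eq!(i8::MAX.carrying_add(0, true), (i8::MIN, true));
    // 5 - 2 - borrow
    assert_eq!(5i8.borrowing_sub(2, true), (2, false));
}
```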
Make use of `[wrapping_]byte_{add,sub}`
These new methods trivially replace old `.cast().wrapping_offset().cast()` & similar code.
Note that [`arith_offset`](https://doc.rust-lang.org/std/intrinsics/fn.arith_offset.html) and `wrapping_offset` are the same thing.
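For illustration, the kind of replacement this enables (a sketch, not an actual hunk from the PR; `wrapping_byte_add` sat behind the `pointer_byte_offsets` feature gate at the time):
```rust
fn main() {
    let x = 0u64;
    let p: *const u64 = &x;

    // old pattern: cast to bytes, offset, cast back
    let old = p.cast::<u8>().wrapping_offset(3).cast::<u64>();
    // new pattern
    let new = p.wrapping_byte_add(3);

    assert_eq!(old, new);
}
```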
r? ``@scottmcm``
_split off from #100746_
Expose `Utf8Lossy` as `Utf8Chunks`
This PR changes the feature for `Utf8Lossy` from `str_internals` to `utf8_lossy` and improves the API. This is done to eventually expose the API as stable.
Proposal: rust-lang/libs-team#54
Tracking Issue: #99543
Refactor iteration logic in the `Flatten` and `FlatMap` iterators
The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
iter: Fuse<I>,
frontiter: Option<U>,
backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.
I have extracted this common logic of iterating the inner iterators with the option to exit early into an `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
where
Fold: FnMut(Acc, &mut U) -> R,
R: Try<Output = Acc>,
{ ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
#[inline]
fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
match iter.next() {
None => ControlFlow::CONTINUE,
Some(x) => ControlFlow::Break(x),
}
}
self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).
This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).
Benchmark results:
```
before after
bench_flat_map_sum 423,255 ns/iter 414,338 ns/iter
bench_flat_map_ref_sum 1,942,139 ns/iter 2,216,643 ns/iter
bench_flat_map_chain_sum 1,616,840 ns/iter 1,246,445 ns/iter
bench_flat_map_chain_ref_sum 4,348,110 ns/iter 3,574,775 ns/iter
bench_flat_map_chain_option_sum 780,037 ns/iter 780,679 ns/iter
bench_flat_map_chain_option_ref_sum 2,056,458 ns/iter 834,932 ns/iter
```
I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take it with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
Add `Iterator::array_chunks` (take N+1)
A revival of https://github.com/rust-lang/rust/pull/92393.
r? `@Mark-Simulacrum`
cc `@rossmacarthur` `@scottmcm` `@the8472`
I've tried to address most of the review comments on the previous attempt. The only thing I didn't address is the `try_fold` implementation; I've left the "custom" one for now, as I'm not sure what exactly it should use.
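For context, a usage sketch of the adapter being added (behind the `iter_array_chunks` feature gate):
```rust
#![feature(iter_array_chunks)]

fn main() {
    let mut chunks = [1, 2, 3, 4, 5].into_iter().array_chunks::<2>();
    assert_eq!(chunks.next(), Some([1, 2]));
    assert_eq!(chunks.next(), Some([3, 4]));
    assert_eq!(chunks.next(), None);
    // leftover elements that didn't fill a whole chunk
    assert_eq!(chunks.into_remainder().unwrap().as_slice(), &[5]);
}
```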
Reoptimize layout array
This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞
Quick demonstration:
```rust
pub fn demo(n: usize) -> Option<Layout> {
Layout::array::<i32>(n).ok()
}
```
Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
mov rax, rdi
mov ecx, 4
mul rcx
seto cl
movabs rdx, 9223372036854775805
xor esi, esi
cmp rax, rdx
setb sil
shl rsi, 2
xor edx, edx
test cl, cl
cmove rdx, rsi
ret
```
This PR (note no `mul`, in addition to being much shorter):
```nasm
xor edx, edx
lea rax, [4*rcx]
shr rcx, 61
sete dl
shl rdx, 2
ret
```
This is built atop `@CAD97` 's #99136; the new changes are cb8aba66ef6a0e17f08a0574e4820653e31b45a0.
I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
Fix slice::ChunksMut aliasing
Fixes https://github.com/rust-lang/rust/issues/94231, details in that issue.
cc `@RalfJung`
This isn't done just yet, all the safety comments are placeholders. But otherwise, it seems to work.
I don't really like this approach though. There's a lot of unsafe code where there wasn't before, but as far as I can tell the only other way to uphold the aliasing requirement imposed by `__iterator_get_unchecked` is to use raw slices, which I think require the same amount of unsafe code. All that would do is tie the `len` and `ptr` fields together.
Oh I just looked and I'm pretty sure that `ChunksExactMut`, `RChunksMut`, and `RChunksExactMut` also need to be patched. Even more reason to put up a draft.
core::any: replace some generic types with impl Trait
This gives a cleaner API since the caller only specifies the concrete type they usually want to.
r? `@yaahc`
Fix `Skip::next` for non-fused inner iterators
`iter.skip(n).next()` will currently call `nth` and `next` in succession on `iter`, without checking whether `nth` exhausts the iterator. Using `?` to propagate a `None` value returned by `nth` avoids this.
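A minimal sketch of the fixed control flow (not the exact library code):
```rust
// Skip `n` items, then yield the next one. If `nth` already exhausted the
// iterator, propagate `None` instead of calling `next` again on a possibly
// non-fused iterator.
fn skip_then_next<I: Iterator>(iter: &mut I, n: usize) -> Option<I::Item> {
    if n > 0 {
        iter.nth(n - 1)?;
    }
    iter.next()
}
```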
Add assertion that `transmute_copy`'s U is not larger than T
This is called out as a safety requirement in the docs, but because the check can be done at compile time and constant-folded (just like the `align_of` branch is removed), we can just panic here.
I've looked at the asm (using `cargo-asm`) for both a correct and an incorrect use, and the panic is either completely removed or made unconditional, without needing build-std.
I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.
That doesn't rule out it happening in crates tested that fail earlier for some other reason, but it indicates that doing this is rare, if it happens at all. A crater run for this would need to be build-and-test, since this is a runtime thing.
Also added a few more transmute_copy tests.
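For reference, the safety requirement that's being turned into an assertion (both sizes are compile-time constants, so the check folds away):
```rust
use std::mem::transmute_copy;

fn main() {
    let x: u64 = 0x1122_3344_5566_7788;
    // Fine: `U` (u32) is not larger than `T` (u64).
    let low: u32 = unsafe { transmute_copy::<u64, u32>(&x) };
    let _ = low;
    // By contrast, `transmute_copy::<u8, u64>(&0u8)` would read past the source
    // value; with this change it panics instead of being silent UB.
}
```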
Add a special case for align_offset /w stride != 1
This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:
```
leaq 15(%rdi), %rax
andq $-16, %rax
```
This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:
```rust
pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
    slice.offset(slice.align_offset(16) as isize)
}
// =>
```
```
movl %edi, %eax
andl $3, %eax
leaq 15(%rdi), %rcx
andq $-16, %rcx
subq %rdi, %rcx
shrq $2, %rcx
negq %rax
sbbq %rax, %rax
orq %rcx, %rax
leaq (%rdi,%rax,4), %rax
```
Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.
This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.
Fixes #98809.
Sadly, this does not help with #72356.
Stabilize `core::ffi::c_*` and re-export in `std::ffi`
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.
This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.
I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.
API change proposal: https://github.com/rust-lang/libs-team/issues/60
Fixes #95492
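A rough sketch of the tagged-pointer manipulation this enables (these methods live behind `strict_provenance_atomic_ptr`):
```rust
#![feature(strict_provenance_atomic_ptr)]
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut value = 0i32;
    let untagged = &mut value as *mut i32;
    let ptr = AtomicPtr::new(untagged);

    // Set and then clear a tag in the low bit without converting to usize.
    let old = ptr.fetch_or(0b1, Ordering::Relaxed);
    assert_eq!(old, untagged);
    ptr.fetch_and(!0b1, Ordering::Relaxed);
    assert_eq!(ptr.load(Ordering::Relaxed), untagged);
}
```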
ptr::copy and ptr::swap are doing untyped copies
The consensus in https://github.com/rust-lang/rust/issues/63159 seemed to be that these operations should be "untyped", i.e., they should treat the data as raw bytes, should work when these bytes violate the validity invariant of `T`, and should exactly preserve the initialization state of the bytes that are being copied. This is already somewhat implied by the description of "copying/swapping size*N bytes" (rather than "N instances of `T`").
The implementations mostly already work that way (well, for LLVM's intrinsics the documentation is not precise enough to say what exactly happens to poison, but if this ever gets clarified to something that would *not* perfectly preserve poison, then I strongly assume there will be some way to make a copy that *does* perfectly preserve poison). However, I had to adjust `swap_nonoverlapping`; after ``@scottmcm's`` [recent changes](https://github.com/rust-lang/rust/pull/94212), that one (sometimes) made a typed copy. (Note that `mem::swap`, which works on mutable references, is unchanged. It is documented as "swapping the values at two mutable locations", which to me strongly indicates that it is indeed typed. It is also safe and can rely on `&mut T` pointing to a valid `T` as part of its safety invariant.)
On top of adding a test (that will be run by Miri), this PR then also adjusts the documentation to indeed stably promise the untyped semantics. I assume this means the PR has to go through t-libs (and maybe t-lang?) FCP.
Fixes https://github.com/rust-lang/rust/issues/63159
Add the Provider api to core::any
This is an implementation of [RFC 3192](https://github.com/rust-lang/rfcs/pull/3192) ~~(which is yet to be merged, thus why this is a draft PR)~~. It adds an API for type-driven requests and provision of data from trait objects. A primary use case is for the `Error` trait, though that is not implemented in this PR. The only major difference to the RFC is that the functionality is added to the `any` module, rather than being in a sibling `provide_any` module (as discussed in the RFC thread).
~~Still todo: improve documentation on items, including adding examples.~~
cc `@yaahc`
Extend ptr::null and null_mut to all thin (including extern) types
Fixes https://github.com/rust-lang/rust/issues/93959
This change was accepted in https://rust-lang.github.io/rfcs/2580-ptr-meta.html
Note that this changes the signature of **stable** functions. The change should be backward-compatible, but it is **insta-stable** since it cannot (easily, at all?) be made available only through a `#![feature(…)]` opt-in.
The RFC also proposed the same change for `NonNull::dangling`, which makes sense in terms of its signature but not in terms of its implementation. `dangling` uses `align_of()` as an address. But what `align_of()` should be for extern types, or whether it should be allowed at all, remains an open question.
This commit depends on https://github.com/rust-lang/rust/pull/93977, which is not yet part of the bootstrap compiler. So `#[cfg]` is used to only apply the change in stage 1+. As far as I know, bounds cannot be made conditional with `#[cfg]`, so the entire functions are duplicated. This is unfortunate but temporary.
Since this duplication makes it less obvious in the diff, the new definitions differ in:
* More permissive bounds (`Thin` instead of implied `Sized`)
* Different implementation
* Having `rustc_allow_const_fn_unstable(const_fn_trait_bound)`
* Having `rustc_allow_const_fn_unstable(ptr_metadata)`
These guards changed to pointers in #97027, but their `Display` was
formatting that field directly, which made it show the raw pointer
value. Now we go through `Deref` to display the real value again.
Add a dedicated length-prefixing method to `Hasher`
This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths
This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.
Fixes#94026
r? rust-lang/libs
---
The core of this change is the following two new methods on `Hasher`:
```rust
pub trait Hasher {
/// Writes a length prefix into this hasher, as part of being prefix-free.
///
/// If you're implementing [`Hash`] for a custom collection, call this before
/// writing its contents to this `Hasher`. That way
/// `(collection![1, 2, 3], collection![4, 5])` and
/// `(collection![1, 2], collection![3, 4, 5])` will provide different
/// sequences of values to the `Hasher`
///
/// The `impl<T> Hash for [T]` includes a call to this method, so if you're
/// hashing a slice (or array or vector) via its `Hash::hash` method,
/// you should **not** call this yourself.
///
/// This method is only for providing domain separation. If you want to
/// hash a `usize` that represents part of the *data*, then it's important
/// that you pass it to [`Hasher::write_usize`] instead of to this method.
///
/// # Examples
///
/// ```
/// #![feature(hasher_prefixfree_extras)]
/// # // Stubs to make the `impl` below pass the compiler
/// # struct MyCollection<T>(Option<T>);
/// # impl<T> MyCollection<T> {
/// # fn len(&self) -> usize { todo!() }
/// # }
/// # impl<'a, T> IntoIterator for &'a MyCollection<T> {
/// # type Item = T;
/// # type IntoIter = std::iter::Empty<T>;
/// # fn into_iter(self) -> Self::IntoIter { todo!() }
/// # }
///
/// use std::hash::{Hash, Hasher};
/// impl<T: Hash> Hash for MyCollection<T> {
/// fn hash<H: Hasher>(&self, state: &mut H) {
/// state.write_length_prefix(self.len());
/// for elt in self {
/// elt.hash(state);
/// }
/// }
/// }
/// ```
///
/// # Note to Implementers
///
/// If you've decided that your `Hasher` is willing to be susceptible to
/// Hash-DoS attacks, then you might consider skipping hashing some or all
/// of the `len` provided in the name of increased performance.
#[inline]
#[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
fn write_length_prefix(&mut self, len: usize) {
self.write_usize(len);
}
/// Writes a single `str` into this hasher.
///
/// If you're implementing [`Hash`], you generally do not need to call this,
/// as the `impl Hash for str` does, so you can just use that.
///
/// This includes the domain separator for prefix-freedom, so you should
/// **not** call `Self::write_length_prefix` before calling this.
///
/// # Note to Implementers
///
/// The default implementation of this method includes a call to
/// [`Self::write_length_prefix`], so if your implementation of `Hasher`
/// doesn't care about prefix-freedom and you've thus overridden
/// that method to do nothing, there's no need to override this one.
///
/// This method is available to be overridden separately from the others
/// as `str` being UTF-8 means that it never contains `0xFF` bytes, which
/// can be used to provide prefix-freedom cheaper than hashing a length.
///
/// For example, if your `Hasher` works byte-by-byte (perhaps by accumulating
/// them into a buffer), then you can hash the bytes of the `str` followed
/// by a single `0xFF` byte.
///
/// If your `Hasher` works in chunks, you can also do this by being careful
/// about how you pad partial chunks. If the chunks are padded with `0x00`
/// bytes then just hashing an extra `0xFF` byte doesn't necessarily
/// provide prefix-freedom, as `"ab"` and `"ab\u{0}"` would likely hash
/// the same sequence of chunks. But if you pad with `0xFF` bytes instead,
/// ensuring at least one padding byte, then it can often provide
/// prefix-freedom cheaper than hashing the length would.
#[inline]
#[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
fn write_str(&mut self, s: &str) {
self.write_length_prefix(s.len());
self.write(s.as_bytes());
}
}
```
With updates to the `Hash` implementations for slices and containers to call `write_length_prefix` instead of `write_usize`.
`write_str` defaults to using `write_length_prefix` since, as was pointed out in the issue, the `write_u8(0xFF)` approach is insufficient for hashers that work in chunks, as those would hash `"a\u{0}"` and `"a"` to the same thing. But since `SipHash` works byte-wise (there's an internal buffer to accumulate bytes until a full chunk is available) it overrides `write_str` to continue to use the add-non-UTF-8-byte approach.
---
Compatibility:
Because the default implementation of `write_length_prefix` calls `write_usize`, the changed hash implementation for slices will do the same thing the old one did on existing `Hasher`s.
Update `int_roundings` methods from feedback
This updates `#![feature(int_roundings)]` (#88581) from feedback. All methods now take `NonZeroX`. The documentation makes clear that they panic in debug mode and wrap in release mode.
r? `@joshtriplett`
`@rustbot` label +T-libs +T-libs-api +S-waiting-on-review
Add slice::remainder
This adds a remainder function to the Slice iterator, so that a caller can access unused
elements if iteration stops.
Addresses #91733
Faster parsing for lower numbers for radix up to 16 (cont.)
( Continuation of https://github.com/rust-lang/rust/pull/83371 )
With LingMan's change I think this is potentially ready.
Make non-power-of-two alignments a validity error in `Layout`
Inspired by the zulip conversation about how `Layout` should better enforce `size <= isize::MAX as usize`, this uses an N-variant enum on N-bit platforms to require at the validity level that the existing invariant of "must be a power of two" is upheld.
This way Miri can catch it, and it means there's a more-specific type for `Layout` to store than just `NonZeroUsize`.
It's left as `pub(crate)` here; a future PR could consider giving it a tracking issue for non-internal usage.
Implement provenance preserving methods on NonNull
### Description
Add the `addr`, `with_addr`, `map_addr` methods to the `NonNull` type, and map the address type to `NonZeroUsize`.
### Motivation
The `NonNull` type is useful for implementing pointer types which have the 0-niche. It is currently possible to implement these provenance-preserving functions by calling `NonNull::as_ptr` and `new_unchecked`. Adding these methods makes this more ergonomic.
### Testing
Added a unit test of a non-null tagged pointer type. This is based on some real code I have elsewhere, that currently routes the pointer through a `NonZeroUsize` and back out to produce a usable pointer. I wanted to produce an ideal version of the same tagged pointer struct that preserved pointer provenance.
### Related
Extension of APIs proposed in #95228 . I can also split this out into a separate tracking issue if that is better (though I may need some pointers on how to do that).
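A minimal sketch of that tagged-pointer use case with the new methods (the API sat behind the `strict_provenance` feature gate at the time):
```rust
#![feature(strict_provenance)]
use std::num::NonZeroUsize;
use std::ptr::NonNull;

fn tag(ptr: NonNull<u32>) -> NonNull<u32> {
    // Set the low bit while keeping the pointer's provenance.
    ptr.map_addr(|addr| addr | 1)
}

fn untag(ptr: NonNull<u32>) -> NonNull<u32> {
    ptr.map_addr(|addr| {
        NonZeroUsize::new(addr.get() & !1).expect("untagged address must stay non-null")
    })
}

fn main() {
    let mut value = 7u32;
    let ptr = NonNull::from(&mut value);
    let tagged = tag(ptr);
    assert_eq!(tagged.addr().get() & 1, 1);
    assert_eq!(untag(tagged), ptr);
}
```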
Handle rustc_const_stable attribute in library feature collector
The library feature collector in [compiler/rustc_passes/src/lib_features.rs](551b4fa395/compiler/rustc_passes/src/lib_features.rs) has only been looking at `#[stable(…)]`, `#[unstable(…)]`, and `#[rustc_const_unstable(…)]` attributes, while ignoring `#[rustc_const_stable(…)]`. The consequences of this were:
- When any const feature got stabilized (changing one or more `rustc_const_unstable` to `rustc_const_stable`), users who had previously enabled that unstable feature using `#![feature(…)]` would get told "unknown feature", rather than rustc's nicer "the feature … has been stable since … and no longer requires an attribute to enable".
This can be seen in the way that https://github.com/rust-lang/rust/pull/93957#issuecomment-1079794660 failed after rebase:
```console
error[E0635]: unknown feature `const_ptr_offset`
--> $DIR/offset_from_ub.rs:1:35
|
LL | #![feature(const_ptr_offset_from, const_ptr_offset)]
| ^^^^^^^^^^^^^^^^
```
- We weren't enforcing that a particular feature is either stable everywhere or unstable everywhere, and that a feature that has been stabilized has the same stabilization version everywhere, both of which we enforce for the other stability attributes.
This PR updates the library feature collector to handle `rustc_const_stable`, and fixes places in the standard library and test suite where `rustc_const_stable` was being used in a way that does not meet the rules for a stability attribute.
skip slow int_log tests in Miri
Iterating over i16::MAX many things takes a long time in Miri; let's not do that.
I added https://github.com/rust-lang/miri/pull/2044 on the Miri side to still give us some test coverage.
Derive Eq for std::cmp::Ordering, instead of using manual impl.
This allows consts of type Ordering to be used in patterns, and with feature(adt_const_params) allows using `Ordering` as a const generic parameter.
Currently, `std::cmp::Ordering` implements `Eq` using a manually written `impl Eq for Ordering {}`, instead of `derive(Eq)`. This means that it does not implement `StructuralEq`.
This commit removes the manually written impl, and adds `derive(Eq)` to `Ordering`, so that it will implement `StructuralEq`.
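For illustration, the kind of code this enables: a const of type `Ordering` used directly in a pattern.
```rust
use std::cmp::Ordering;

const LESS: Ordering = Ordering::Less;

fn describe(o: Ordering) -> &'static str {
    // Requires `Ordering` to be structurally matchable, i.e. `derive(PartialEq, Eq)`.
    match o {
        LESS => "less",
        _ => "not less",
    }
}

fn main() {
    assert_eq!(describe(1.cmp(&2)), "less");
    assert_eq!(describe(2.cmp(&1)), "not less");
}
```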
Let `try_collect` take advantage of `try_fold` overrides
No public API changes.
With this change, `try_collect` (#94047) is no longer going through the `impl Iterator for &mut impl Iterator`, and thus will be able to use `try_fold` overrides instead of being forced through `next` for every element.
Here's the test added, to see that it fails before this PR (once a new enough nightly is out): https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=462f2896f2fed2c238ee63ca1a7e7c56
This might as well go to the same person as my last `try_process` PR (#93572), so
r? ``@yaahc``
Enable conditional checking of values in the Rust codebase
This pull request enables conditional checking of (well known) values in the Rust codebase.
Well known values were added in https://github.com/rust-lang/rust/pull/94362. All the `target_*` values are taken from all the built-in targets, which is why some extra values needed to be added, as they are not (yet?) defined in any built-in targets.
r? `@Mark-Simulacrum`
This updates the standard library's documentation to use the new syntax. The
documentation is worthwhile to update as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it'll
hurt git blame, and generally updates can be done by folks updating the code if
(and when) that makes things more readable with the new format.
A few places in the compiler and library code are updated (mostly just due to
already having been done when this commit was first authored).
Add Iterator::collect_into
This PR adds `Iterator::collect_into` as proposed by ``@cormacrelf`` in #48597 (see https://github.com/rust-lang/rust/pull/48597#issuecomment-842083688).
Followup of #92982.
This adds the following method to the Iterator trait:
```rust
fn collect_into<E: Extend<Self::Item>>(self, collection: &mut E) -> &mut E
```
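A short usage sketch (the method is gated behind `iter_collect_into`):
```rust
#![feature(iter_collect_into)]

fn main() {
    let mut evens = Vec::with_capacity(5);
    (0..10).filter(|n| n % 2 == 0).collect_into(&mut evens);
    assert_eq!(evens, [0, 2, 4, 6, 8]);
}
```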
Implement `RawWaker` and `Waker` getters for underlying pointers
implement #87021
New APIs:
- `RawWaker::data(&self) -> *const ()`
- `RawWaker::vtable(&self) -> &'static RawWakerVTable`
- ~`Waker::as_raw_waker(&self) -> &RawWaker`~ `Waker::as_raw(&self) -> &RawWaker`
This third one is an auxiliary function to make the two APIs above more useful. Since we can only get `&Waker` in `Future::poll`, without this, we need to `transmute` it into `&RawWaker` (relying on `repr(transparent)`) in order to access its data/vtable pointers.
~Not sure if it should be named `as_raw` or `as_raw_waker`. Seems we always use `as_<something-raw>` instead of just `as_raw`. But `as_raw_waker` seems not quite consistent with `Waker::from_raw`.~ As suggested in https://github.com/rust-lang/rust/pull/91828#discussion_r770729837, use `as_raw`.
Carefully remove bounds checks from some chunk iterator functions
So, I was writing code that requires the equivalent of `rchunks(N).rev()` (which isn't the same as forward `chunks(N)` — in particular, if the buffer length is not a multiple of `N`, I must handle the "remainder" first).
I happened to look at the codegen output of the function (I was actually interested in whether or not a nested loop was being unrolled — it was), and noticed that in the outer `rchunks(n).rev()` loop, LLVM seemed to be unable to remove the bounds checks from the iteration: https://rust.godbolt.org/z/Tnz4MYY8f (this panic was from the split_at in `RChunks::next_back`).
After doing some experimentation, it seems all of the `next_back` in the non-exact chunk iterators have the issue: (`Chunks::next_back`, `RChunks::next_back`, `ChunksMut::next_back`, and `RChunksMut::next_back`)...
Even worse, the forward `rchunks` iterators sometimes have the issue as well (... but only sometimes). For example https://rust.godbolt.org/z/oGhbqv53r has bounds checks, but if I uncomment the loop body, it manages to remove the check (which is bizarre, since I'd expect the opposite...). I suspect it's highly dependent on the surrounding code, so I decided to remove the bounds checks from them anyway. Overall, this change includes:
- All `next_back` functions on the non-`Exact` iterators (e.g. `R?Chunks(Mut)?`).
- All `next` functions on the non-exact rchunks iterators (e.g. `RChunks(Mut)?`).
I wasn't able to catch any of the other chunk iterators failing to remove the bounds checks (I checked iterations over `r?chunks(_exact)?(_mut)?` with constant chunk sizes under `-O3`, `-Os`, and `-Oz`), which makes sense, since these were the cases where it was harder to prove the bounds check correct to remove...
In fact, it took quite a bit of thinking to convince myself that using unchecked_ here was valid — so I'm not really surprised that LLVM had trouble (although compilers are slightly better at this sort of reasoning than humans). A consequence of that is the fact that the `// SAFETY` comment for these are... kinda long...
---
I didn't do this for, or even think about it for, any of the other iteration methods; just `next` and `next_back` (where it mattered). If this PR is accepted, I'll file a follow up for someone (possibly me) to look at the others later (in particular, `nth`/`nth_back` looked like they had similar logic), but I wanted to do this now, as IMO `next`/`next_back` are the most important here, since they're what gets used by the iteration protocol.
---
Note: While I don't expect this to impact performance directly, the panic is a side effect, which would otherwise not exist in these loops. That is, this could prevent the compiler from being able to move/remove/otherwise rework a loop over these iterators (as an example, it could not delete the code for a loop whose body computes a value which doesn't get used).
Also, some like to be able to have confidence this code has no panicking branches in the optimized code, and "no bounds checks" is kinda part of the selling point of Rust's iterators anyway.
Remove deprecated and unstable slice_partition_at_index functions
They have been deprecated since commit 01ac5a97c9
which was part of the 1.49.0 release, so from the point of nightly,
11 releases ago.
Make `char::DecodeUtf16::size_hint` more precise
The new implementation takes the contents of `self.buf` into account and rounds the lower bound up instead of down.
Fixes #88762
Revival of #88763
Add `intrinsics::const_deallocate`
Tracking issue: #79597
Related: #91884
This allows deallocation of memory allocated by `intrinsics::const_allocate`. At the moment, this can only be used to reduce memory usage, but in the future it may be useful to detect memory leaks (if allocated memory remains after evaluation, raise an error...?).
impl Not for !
The lack of this impl caused trouble for me in some degenerate cases of macro-generated code of the form `if !$cond {...}`, even without `feature(never_type)` on a stable compiler. Namely if `$cond` contains a `return` or `break` or similar diverging expression, which would otherwise be perfectly legal in boolean position, the code previously failed to compile with:
```console
error[E0600]: cannot apply unary operator `!` to type `!`
--> library/core/tests/ops.rs:239:8
|
239 | if !return () {}
| ^^^^^^^^^^ cannot apply unary operator `!`
```
Simplification of BigNum::bit_length
As indicated in the comment, the BigNum::bit_length function could be
optimized by using CLZ, which is often a single instruction instead of a
loop.
I think the code is also simpler now without the loop.
I added some additional tests for Big8x3 and Big32x40 to ensure that
there were no regressions.
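A sketch of the CLZ-based idea (not the exact libcore code, which works on the bignum's internal digit array):
```rust
// Bit length of a little-endian big number stored as base-2^32 digits: find the
// most significant non-zero digit and apply `leading_zeros` (CLZ) to it.
fn bit_length(digits: &[u32]) -> usize {
    match digits.iter().rposition(|&d| d != 0) {
        Some(i) => (i + 1) * 32 - digits[i].leading_zeros() as usize,
        None => 0,
    }
}

fn main() {
    assert_eq!(bit_length(&[0b1000]), 4);
    assert_eq!(bit_length(&[0, 1]), 33);
    assert_eq!(bit_length(&[0, 0]), 0);
}
```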
Partially stabilize `maybe_uninit_extra`
This covers:
```rust
impl<T> MaybeUninit<T> {
pub unsafe fn assume_init_read(&self) -> T { ... }
pub unsafe fn assume_init_drop(&mut self) { ... }
}
```
It does not cover the const-ness of `write` under `const_maybe_uninit_write` nor the const-ness of `assume_init_read` (this commit adds `const_maybe_uninit_assume_init_read` for that).
FCP: https://github.com/rust-lang/rust/issues/63567#issuecomment-958590287.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
disable test with self-referential generator on Miri
Running the libcore test suite in Miri currently fails due to the known incompatibility of self-referential generators with Miri's aliasing checks (https://github.com/rust-lang/unsafe-code-guidelines/issues/148). So let's disable that test in Miri for now.
Allow reverse iteration of lowercase'd/uppercase'd chars
The PR implements `DoubleEndedIterator` trait for `ToLowercase` and `ToUppercase`.
This enables reverse iteration of lowercase/uppercase variants of character sequences.
One of the use cases: determining whether a char sequence is a suffix of another one.
Example:
```rust
use itertools::{EitherOrBoth, Itertools}; // external `itertools` crate, for `zip_longest`

fn endswith_ignore_case(s1: &str, s2: &str) -> bool {
for eob in s1
.chars()
.flat_map(|c| c.to_lowercase())
.rev()
.zip_longest(s2.chars().flat_map(|c| c.to_lowercase()).rev())
{
match eob {
EitherOrBoth::Both(c1, c2) => {
if c1 != c2 {
return false;
}
}
EitherOrBoth::Left(_) => return true,
EitherOrBoth::Right(_) => return false,
}
}
true
}
```
Revert "Temporarily rename int_roundings functions to avoid conflicts"
This reverts commit 3ece63b64e.
This should be okay because #90329 has been merged.
r? `@joshtriplett`
Mark defaulted `PartialEq`/`PartialOrd` methods as const
Without it, `const` impls of these traits are unpleasant to write. I think this kind of change is allowed now, although it looks like it might require some Miri tweaks. Let's find out.
r? ```@fee1-dead```
Do array-slice equality via array equality, rather than always via slices
~~Draft because it needs a rebase after #91766 eventually gets through bors.~~
This enables the optimizations from #85828 to be used for array-to-slice comparisons too, not just array-to-array.
For example, <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=5f9ba69b3d5825a782f897c830d3a6aa>
```rust
pub fn demo(x: &[u8], y: [u8; 4]) -> bool {
*x == y
}
```
Currently writes the array to stack for no reason:
```nasm
sub rsp, 4
mov dword ptr [rsp], edx
cmp rsi, 4
jne .LBB0_1
mov eax, dword ptr [rdi]
cmp eax, dword ptr [rsp]
sete al
add rsp, 4
ret
.LBB0_1:
xor eax, eax
add rsp, 4
ret
```
Whereas with the change in this PR it just compares it directly:
```nasm
cmp rsi, 4
jne .LBB1_1
cmp dword ptr [rdi], edx
sete al
ret
.LBB1_1:
xor eax, eax
ret
```
Constify `bool::then{,_some}`
Note on `~const Drop`: it has no effect when called from runtime functions, when called from const contexts, the trait system ensures that the type can be dropped in const contexts.
This'll still go via slices eventually for large arrays, but this way slice comparisons to short arrays can use the same memcmp-avoidance tricks.
Added some tests for all the combinations to make sure I didn't accidentally infinitely-recurse something.