Add negation methods for signed non-zero integers.
Performing negation with defined wrapping semantics (such as `wrapping_neg()`) on a non-zero integer currently requires unpacking to a primitive and re-wrapping. Since negation of non-zero signed integers always produces a non-zero result, it is safe to implement the various `*_neg()` methods for `NonZeroI{N}`.
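A rough sketch of the intended usage, on a toolchain where these methods are available (`wrapping_neg` and `checked_neg` shown here):

```rust
use std::num::NonZeroI32;

fn main() {
    let five = NonZeroI32::new(5).unwrap();
    let min = NonZeroI32::new(i32::MIN).unwrap();

    // Negating a non-zero value can never produce zero, so the result stays NonZero.
    assert_eq!(five.wrapping_neg().get(), -5);
    // `i32::MIN` has no positive counterpart: wrapping negation returns it unchanged...
    assert_eq!(min.wrapping_neg(), min);
    // ...and checked negation reports the overflow instead.
    assert_eq!(min.checked_neg(), None);
}
```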
I'm not sure what to do about the `#[unstable(..., issue = "none")]` here -- should I file a tracking issue, or is that handled by the Rust dev team?
ACP: https://github.com/rust-lang/libs-team/issues/105
Add a niche to `Duration`, unix `SystemTime`, and non-apple `Instant`
As the nanoseconds field is always between `0` and `(NANOS_PER_SEC - 1)` inclusive, use the `rustc_layout_scalar_valid_range` attributes to create a niche in the nanosecond field of `Duration` and `Timespec` (which is used to implement unix `SystemTime` and non-apple unix `Instant`; windows `Instant` is implemented with `Duration` and therefore will also benefit). This change has the benefit of making `Option<T>` the same size as `T` for the previously mentioned types. It also shrinks the nanoseconds field of `Timespec` to a `u32`, as nanoseconds do not need the extra range of an `i64`, shrinking `Timespec` by 4 bytes on 32-bit platforms.
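For illustration, on a toolchain that includes this change the niche should make the `Option` wrapper free for `Duration` (and, per the above, for the `Timespec`-based types on unix):

```rust
use std::mem::size_of;
use std::time::Duration;

fn main() {
    // The niche in the nanoseconds field gives `Option<Duration>` the same size
    // as `Duration` itself.
    assert_eq!(size_of::<Option<Duration>>(), size_of::<Duration>());
}
```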
r? ```@joshtriplett```
Add `#[rustc_safe_intrinsic]`
This PR adds the `#[rustc_safe_intrinsic]` attribute as mentioned on Zulip. The goal of this attribute is to avoid keeping a list of symbols as the source for stable intrinsics, and instead rely on an attribute. This is similar to `#[rustc_const_stable]` and `#[rustc_const_unstable]`, which, among other things, are used to mark the constness of intrinsic functions.
Suggest unwrapping `???<T>` if a method cannot be found on it but is present on `T`.
This suggests various ways to get inside wrapper types if the method cannot be found on the wrapper type, but is present on the wrappee.
For this PR, those wrapper types include `LocalKey`, `MaybeUninit`, `RefCell`, `RwLock` and `Mutex`.
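As an illustration of the kind of code this helps with (the exact diagnostic wording is up to the compiler):

```rust
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1, 2, 3]);
    // `push` exists on `Vec<i32>`, not on `Mutex<Vec<i32>>`; writing `data.push(4)`
    // fails, and the improved diagnostic suggests going through the lock first:
    data.lock().unwrap().push(4);
    assert_eq!(data.lock().unwrap().len(), 4);
}
```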
Stabilize bench_black_box
This PR stabilizes `feature(bench_black_box)`.
```rust
pub fn black_box<T>(dummy: T) -> T;
```
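Typical benchmarking-style usage looks something like this (illustrative only; `black_box` only hints at the optimizer and makes no hard guarantees):

```rust
use std::hint::black_box;

fn main() {
    // Hide the input from the optimizer so the multiply below isn't constant-folded,
    // and keep the result "used" so the computation isn't dropped entirely.
    let x = black_box(42u64);
    let y = x.wrapping_mul(x);
    black_box(y);
}
```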
The FCP was completed in https://github.com/rust-lang/rust/issues/64102.
`@rustbot` label +T-libs-api -T-libs
Stabilize `#![feature(mixed_integer_ops)]`
Tracked and FCP completed in #87840.
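For reference, a small example of the newly stable mixed-sign methods (shown here on `u8`/`i8`):

```rust
fn main() {
    let x: u8 = 10;
    // Add a signed value to an unsigned integer without manual casting.
    assert_eq!(x.checked_add_signed(-3), Some(7));
    assert_eq!(x.checked_add_signed(-11), None);

    let y: i8 = 100;
    // And the other direction: add an unsigned value to a signed integer.
    assert_eq!(y.checked_add_unsigned(27), Some(127));
    assert_eq!(y.checked_add_unsigned(28), None);
}
```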
````@rustbot```` label +T-libs-api +S-waiting-on-review +relnotes
r? rust-lang/t-libs-api
Constify Default impls for Arrays and Tuples.
Allows creating arrays and tuples in a const context using the `~const Default` implementation of the inner type.
Constify slice.split_at_mut(_unchecked)
Tracking Issue: [Tracking Issue for const_slice_split_at_mut](https://github.com/rust-lang/rust/issues/101804)
Feature gate: `#![feature(const_slice_split_at_mut)]`
Still requires `const_mut_refs` to actually be used, but this feature removes the need to manually re-implement these functions in a user crate.
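A minimal sketch of what this enables on a nightly with both gates (feature names as above; illustrative only):

```rust
#![feature(const_slice_split_at_mut, const_mut_refs)]

// Split a mutable buffer into two halves in a const fn.
const fn halves(buf: &mut [u8]) -> (&mut [u8], &mut [u8]) {
    buf.split_at_mut(buf.len() / 2)
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let (a, b) = halves(&mut data);
    a[0] = 0;
    b[0] = 0;
    assert_eq!(data, [0, 2, 0, 4]);
}
```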
Clarify `[T]::select_nth_unstable*` return values
In cases where the nth element is not unique within the slice, it is not
correct to say that the slices in the returned triplet contain "all
elements" less/greater than the one at the given index: one (or both) of
those slices may then also contain elements equal to the one at the
given index.
The text proposed here clarifies exactly what is returned, but in so
doing it is also documenting an implementation detail that previously
wasn't detailed: namely that the returned slices are slices into the
reordered slice. I don't think this can be contentious, because the
lifetimes of those returned slices are bound to that of the original
(now reordered) slice—so there really isn't any other reasonable
implementation that could have this behaviour; but nevertheless it's
probably best if `@rust-lang/libs-api` give it a nod?
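For concreteness, an example with duplicates of the nth value, matching the clarified wording:

```rust
fn main() {
    let mut v = [5, 3, 3, 1, 3, 9];
    let (less, nth, greater) = v.select_nth_unstable(2);
    // `nth` is the value that would be at index 2 if `v` were sorted.
    assert_eq!(*nth, 3);
    // With duplicates, the surrounding slices can also contain values equal to it,
    // so only <= / >= holds, not strict ordering.
    assert!(less.iter().all(|&x| x <= 3));
    assert!(greater.iter().all(|&x| x >= 3));
}
```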
Fixes #97982
r? `@m-ou-se`
`@rustbot` label +A-docs +C-bug +T-libs-api -T-libs
Make ZST checks in core/alloc more readable
There's a bunch of these checks because of special handling for ZSTs in various unsafe implementations of stuff.
This lets them be `T::IS_ZST` instead of `mem::size_of::<T>() == 0` every time, making them both more readable and more terse.
*Not* proposed for stabilization. Would be `pub(crate)` except `alloc` wants to use it too.
(And while it doesn't matter now, if we ever get something like #85836 making it a const can help codegen be simpler.)
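For context, one way such a helper can look (a sketch; the trait and constant names here are illustrative, not necessarily the ones used in `core`):

```rust
use std::mem;

// Blanket helper so call sites can write `T::IS_ZST` instead of spelling out
// `mem::size_of::<T>() == 0` every time.
trait SizedTypeProperties: Sized {
    const IS_ZST: bool = mem::size_of::<Self>() == 0;
}

impl<T> SizedTypeProperties for T {}

fn needs_real_allocation<T>() -> bool {
    // Zero-sized types never need backing memory.
    !T::IS_ZST
}

fn main() {
    assert!(!needs_real_allocation::<()>());
    assert!(needs_real_allocation::<u64>());
}
```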
Add const_closure, Constify Try trait
Adds a struct for creating const `FnMut` closures (for now just copy-pasted from my [const_closure](https://crates.io/crates/const_closure) crate).
I'm not sure if this way is how it should be done.
The `ConstFnClosure` and `ConstFnOnceClosure` structs can probably also be entirely removed.
This is then used to constify the try trait.
Not sure if I should add `const_closure` in its own PR and maybe make it public behind a perma-unstable feature gate.
cc ```@fee1-dead``` ```@rust-lang/wg-const-eval```
Refactor some `std` code that works with pointer offsets
This PR replaces `pointer::offset` in standard library with `pointer::add` and `pointer::sub`, [re]moving some casts and using `.addr()` while we are at it.
This is a more complicated refactor than all other sibling PRs, so take a closer look when reviewing, please 😃 (though I've checked this multiple times and it looks fine).
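The flavor of the change, roughly (a contrived sketch, not code from `std`): `add`/`sub` take `usize`, which drops the casts that `offset` (taking `isize`) forced on callers.

```rust
// Before: element_ptr = base.offset(index as isize)
// After:
unsafe fn element_ptr<T>(base: *const T, index: usize) -> *const T {
    // Safety: the caller must ensure `index` is in bounds of the allocation.
    base.add(index)
}
```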
r? ````@scottmcm````
_split off from #100746, continuation of #100822_
Add `#[inline]` to trivial functions on `core::sync::Exclusive`
When optimizing for size, things like these sometimes don't get inlined even though they're generic. This is bad because they're no-ops.
The only dodgy one is `poll`, I guess, since it forwards to the inner poll, but it's not like we're doing `#[inline(always)]` here.
Use internal iteration in `Iterator` comparison methods
Updates the `Iterator` methods `cmp_by`, `partial_cmp_by`, and `eq_by` to use internal iteration on `self`. I've also extracted their shared logic into a private helper function `iter_compare`, which will either short-circuit once the comparison result is known or return the comparison of the lengths of the iterators.
This change also indirectly benefits calls to `cmp`, `partial_cmp`, `eq`, `lt`, `le`, `gt`, and `ge`.
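A simplified sketch of the approach (not the actual library code): drive the comparison by internal iteration on the first iterator via `try_fold`, short-circuiting as soon as the ordering is known and falling back to comparing remaining lengths.

```rust
use std::cmp::Ordering;

fn iter_compare<A, B, T, F>(mut a: A, mut b: B, mut cmp: F) -> Ordering
where
    A: Iterator<Item = T>,
    B: Iterator<Item = T>,
    F: FnMut(&T, &T) -> Ordering,
{
    // `try_fold` lets `a` use its own, often faster, internal iteration,
    // while returning `Err` short-circuits the fold.
    let res = a.try_fold((), |(), x| match b.next() {
        // `b` ran out first, so `a` is the longer sequence.
        None => Err(Ordering::Greater),
        Some(y) => match cmp(&x, &y) {
            Ordering::Equal => Ok(()),
            non_eq => Err(non_eq),
        },
    });
    match res {
        Err(ord) => ord,
        // `a` is exhausted: the result is decided by whether `b` has items left.
        Ok(()) => {
            if b.next().is_some() {
                Ordering::Less
            } else {
                Ordering::Equal
            }
        }
    }
}

fn main() {
    assert_eq!(iter_compare([1, 2].into_iter(), [1, 2, 3].into_iter(), Ord::cmp), Ordering::Less);
    assert_eq!(iter_compare([1, 4].into_iter(), [1, 2, 3].into_iter(), Ord::cmp), Ordering::Greater);
}
```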
Unsurprising benchmark results: iterators that benefit from internal iteration (like `Chain`) see a speedup, while other iterators are unaffected.
```
name                            before ns/iter  after ns/iter  diff ns/iter  diff %   speedup
iter::bench_chain_partial_cmp   208,301         54,978         -153,323      -73.61%  x 3.79
iter::bench_partial_cmp         55,527          55,702         175           0.32%    x 1.00
iter::bench_lt                  55,502          55,322         -180          -0.32%   x 1.00
```
Add examples to `bool::then` and `bool::then_some`
Added examples to `bool::then` and `bool::then_some` to show the distinction between the eager evaluation of `bool::then_some` and the lazy evaluation of `bool::then`.
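The distinction being illustrated, roughly (counting evaluations through a `Cell`):

```rust
use std::cell::Cell;

fn main() {
    let calls = Cell::new(0);
    let expensive = || {
        calls.set(calls.get() + 1);
        42
    };

    // `then_some` takes a value, so its argument is evaluated eagerly,
    // even when the bool is false.
    assert_eq!(false.then_some(expensive()), None);
    assert_eq!(calls.get(), 1);

    // `then` takes a closure, which only runs when the bool is true.
    assert_eq!(false.then(expensive), None);
    assert_eq!(calls.get(), 1);
    assert_eq!(true.then(expensive), Some(42));
    assert_eq!(calls.get(), 2);
}
```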
Extend const_convert with const {FromResidual, Try} for ControlFlow.
Very small change so I just used the existing `const_convert` feature flag. #88674
Newly const API:
```rust
impl<B, C> const ops::Try for ControlFlow<B, C>;
impl<B, C> const ops::FromResidual for ControlFlow<B, C>;
```
`@usbalbin` I hope it is ok that I added to your feature.
Optimize `array::IntoIter`
`.into_iter()` on arrays was slower than it needed to be (especially compared to the slice iterator) since it uses `Range<usize>`, which needs to handle degenerate ranges like `10..4`.
This PR adds an internal `IndexRange` type that's like `Range<usize>` but with a safety invariant that means it doesn't need to worry about those cases -- it only handles `start <= end` -- and thus can give LLVM more information to optimize better.
I added one simple demonstration of the improvement as a codegen test.
(`vec::IntoIter` uses pointers instead of indexes, so doesn't have this problem, but that only works because its elements are boxed. `array::IntoIter` can't use pointers because that would keep it from being movable.)
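A rough sketch of the idea (illustrative; not the actual `core` type): the `start <= end` invariant means length and iteration never have to consider `end < start`.

```rust
struct IndexRange {
    start: usize,
    end: usize, // invariant: start <= end
}

impl IndexRange {
    fn new(start: usize, end: usize) -> Self {
        // Establish the invariant once, at construction.
        assert!(start <= end);
        IndexRange { start, end }
    }

    fn len(&self) -> usize {
        // No `end < start` case to worry about, unlike `Range<usize>`.
        self.end - self.start
    }

    fn next(&mut self) -> Option<usize> {
        if self.start < self.end {
            let i = self.start;
            self.start += 1;
            Some(i)
        } else {
            None
        }
    }
}

fn main() {
    let mut r = IndexRange::new(2, 5);
    assert_eq!(r.len(), 3);
    assert_eq!(r.next(), Some(2));
}
```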
Make `from_waker`, `waker` and `from_raw` unstably `const`
Make
- `Context::from_waker`
- `Context::waker`
- `Waker::from_raw`
`const`.
Also added a small test.
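A sketch of what this enables on nightly (the `const_waker` gate name is an assumption here): building a no-op `Waker`, and a `Context` from it, in const/static position.

```rust
#![feature(const_waker)]

use std::ptr;
use std::task::{Context, RawWaker, RawWakerVTable, Waker};

// A vtable whose wake/drop hooks do nothing; cloning yields another no-op raw waker.
const VTABLE: RawWakerVTable = RawWakerVTable::new(|_| RAW, |_| {}, |_| {}, |_| {});
const RAW: RawWaker = RawWaker::new(ptr::null(), &VTABLE);
// `Waker::from_raw` being `const` lets this live in a `static`.
static NOOP_WAKER: Waker = unsafe { Waker::from_raw(RAW) };

fn main() {
    let cx = Context::from_waker(&NOOP_WAKER);
    let _waker: &Waker = cx.waker();
}
```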
Tone down explanation on RefCell::get_mut
The language around `RefCell::get_mut` is remarkably sketchy and, especially to the novice, seems to quite strongly discourage using the method ("be cautious", "Also, please be aware", "special circumstances", "usually not what you want"). It was added six years ago in #40634 due to confusion about when to use `get_mut` and `borrow_mut`.
While its signature limits the use-cases for `get_mut`, there is no chance for a safety footgun, and readers can be made aware of `borrow_mut` more softly. I've also just sent a [PR](https://github.com/rust-lang/rust-clippy/issues/9044) to lint situations where `get_mut` could be used to improve ergonomics and performance.
So this PR tones down the language around `get_mut` and also brings it more in line with [`std::sync::Mutex::get_mut()`](https://doc.rust-lang.org/stable/std/sync/struct.Mutex.html#method.get_mut).
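For context, the distinction the docs are describing (mirroring `Mutex::get_mut`):

```rust
use std::cell::RefCell;

fn main() {
    let mut cell = RefCell::new(vec![1, 2]);

    // With exclusive access (`&mut RefCell<T>`), `get_mut` hands out `&mut T`
    // directly, with no runtime borrow tracking involved.
    cell.get_mut().push(3);

    // `borrow_mut` is for shared access (`&RefCell<T>`) and checks at runtime.
    cell.borrow_mut().push(4);

    assert_eq!(*cell.borrow(), vec![1, 2, 3, 4]);
}
```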
array docs - advertise how to get array from slice
On my first Rust project, I spent more time than I care to admit figuring out how to efficiently get an array from a slice. Update the array documentation to explain this a bit more clearly.
(As a side note, it's a bit unfortunate that get-array-from-slice is only available via trait since that means it can't be used from const functions yet.)
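For reference, the pattern being advertised is the `TryFrom`/`TryInto` conversion from a slice:

```rust
use std::convert::TryInto;

fn main() {
    let slice: &[u8] = &[1, 2, 3, 4, 5];

    // Fallible conversion of a sub-slice into a fixed-size array (copies the data).
    let first_four: [u8; 4] = slice[..4].try_into().unwrap();
    assert_eq!(first_four, [1, 2, 3, 4]);

    // Or borrow it as `&[u8; 4]` without copying.
    let borrowed: &[u8; 4] = slice[..4].try_into().unwrap();
    assert_eq!(borrowed, &[1, 2, 3, 4]);
}
```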
Constify impl Fn* &(mut) Fn*
Tracking Issue: [101803](https://github.com/rust-lang/rust/issues/101803)
Feature gate: `#![feature(const_fn_trait_ref_impls)]`
This feature allows using references to `Fn*` items as `Fn*` items themselves in a const context.