Implement `pin!()` using `super let`
Tracking issue for super let: https://github.com/rust-lang/rust/issues/139076
This uses `super let` to implement `pin!()`.
This means we can remove [the hack](https://github.com/rust-lang/rust/pull/138717) we had to put in to fix https://github.com/rust-lang/rust/issues/138596.
It also means we can remove the original hack to make `pin!()` work, which used a questionable public-but-unstable field rather than a proper private field.
While `super let` is still unstable and subject to change, it seems safe to assume that future Rust will always have a way to express `pin!()` in a compatible way, considering `pin!()` is already stable.
It'd help [the experiment](https://github.com/rust-lang/rust/issues/139076) to have `pin!()` use `super let`, so we can get some more experience with it.
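For context, the implementation can then take roughly this shape (a sketch, not the exact libcore source, which uses `$crate`; requires the nightly `decl_macro` and `super_let` features):

```rust
#![feature(decl_macro, super_let)]

pub macro pin($value:expr $(,)?) {
    {
        // `super let` gives `pinned` the lifetime of the temporary scope
        // enclosing this block, so the returned `Pin` stays usable there.
        super let mut pinned = $value;
        // SAFETY: `pinned` is a macro-hygienic local that user code can
        // never name, so it cannot be moved out of or forgotten.
        unsafe { ::core::pin::Pin::new_unchecked(&mut pinned) }
    }
}
```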
Fix drop handling in `hint::select_unpredictable`
The intrinsic does not drop the value that is not selected, so the public function that wraps the intrinsic now drops the unselected value manually.
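A minimal sketch of the drop-handling pattern (not the real intrinsic or its signature; the selection below is an inline stand-in):

```rust
use core::mem::MaybeUninit;

// Both values are moved into `MaybeUninit` so the selection itself
// transfers no ownership; the wrapper then reads out the selected value
// and explicitly drops the other one.
pub fn select_unpredictable<T>(condition: bool, true_val: T, false_val: T) -> T {
    let true_val = MaybeUninit::new(true_val);
    let false_val = MaybeUninit::new(false_val);
    let (selected, unselected) =
        if condition { (true_val, false_val) } else { (false_val, true_val) };
    // SAFETY: both values were just initialized, and each is consumed
    // exactly once (one returned, one dropped).
    unsafe {
        drop(unselected.assume_init());
        selected.assume_init()
    }
}
```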
f*::NAN: guarantee that this is a quiet NaN
I think we should guarantee that this is a quiet NaN. This then implies that programs not using `f*::from_bits` (or unsafe type conversions) are guaranteed to only work with quiet NaNs. It would be awkward if people start to write `0.0 / 0.0` instead of using the constant just because they want to get a guaranteed-quiet NaN.
This is a `@rust-lang/libs-api` change. This constant is currently defined as `0.0 / 0.0`, which is already guaranteed to be a quiet NaN. So all this does is forward that guarantee to our users.
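As an illustration of what the guarantee means at the bit level: in IEEE 754 binary32, the most significant mantissa bit is the "quiet" flag, so the promise is directly checkable (the bit position comes from the IEEE format, not from any std API):

```rust
fn main() {
    // For f32 (binary32), mantissa bit 22 distinguishes quiet from
    // signaling NaNs.
    let bits = f32::NAN.to_bits();
    assert!((bits & 0x0040_0000) != 0, "f32::NAN is a quiet NaN");
}
```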
Enable contracts for const functions
Use the `const_eval_select!()` macro to enable contract checking only at runtime. The existing contract logic relies on closures, which are not supported in constant functions.
This commit also removes one level of indirection for ensures clauses since we no longer build a closure around the ensures predicate.
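For illustration, a sketch of the dispatch using the underlying `const_eval_select` intrinsic directly (the contract-checking names here are made up, and the exact nightly feature gates may differ):

```rust
#![feature(core_intrinsics)] // exact nightly gates may vary
use core::intrinsics::const_eval_select;

// Hypothetical contract check: plain `fn` items work where closures
// would not, because closures cannot be called in `const fn` bodies.
const fn contract_check(cond: bool) {
    const fn during_const_eval(_cond: bool) {
        // Contract checking is skipped during const evaluation.
    }
    fn at_runtime(cond: bool) {
        assert!(cond, "contract violated");
    }
    const_eval_select((cond,), during_const_eval, at_runtime)
}
```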
Resolves #136925
**Call-out:** This is still a draft PR since CI is broken due to a new warning message for unreachable code when the bottom of the function is indeed unreachable. It's not clear to me why the warning wasn't triggered before.
r? `@compiler-errors`
Avoid unused clones in `Cloned<I>` and `Copied<I>`
Avoid cloning in `Cloned<I>` or copying in `Copied<I>` when elements are only needed by reference or not at all. There is already some precedent for this, given that `__iterator_get_unchecked` is implemented, which can skip elements. The reduced clones are technically observable by a user impl of `Clone`.
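To see why the reduced clones are observable, consider a `Clone` impl with side effects (illustration only, not from the PR):

```rust
struct Noisy(u32);

impl Clone for Noisy {
    fn clone(&self) -> Self {
        println!("cloning {}", self.0); // side effect makes clones observable
        Noisy(self.0)
    }
}

fn main() {
    let items = [Noisy(1), Noisy(2), Noisy(3)];
    // `count()` never needs owned elements, so with this change the
    // `Cloned` adapter may skip calling `clone` entirely.
    let n = items.iter().cloned().count();
    assert_eq!(n, 3);
}
```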
r? libs-api
Initial `UnsafePinned` implementation [Part 1: Libs]
Initial libs changes necessary to unblock further work on [RFC 3467](https://rust-lang.github.io/rfcs/3467-unsafe-pinned.html).
Tracking issue: #125735
This PR is split off from #136964, and includes just the libs changes:
- `UnsafePinned` struct
- private `UnsafeUnpin` structural auto trait
- Lang items for both
- Compiler changes necessary to block niches on `UnsafePinned`
This PR does not change codegen, miri, the existing `!Unpin` hack, or anything else. That work is to be split into later PRs.
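For a sense of the intended use (paths and method names as proposed in the RFC; everything here is unstable and may change):

```rust
#![feature(unsafe_pinned)]
use core::pin::UnsafePinned;

// A self-referential, future-like type: the `state` field may be written
// through aliasing pointers while the value is pinned, so it must opt
// out of the usual uniqueness assumptions (and of niche layout, which
// this PR's compiler changes enforce).
struct Inflight {
    state: UnsafePinned<i32>,
}

fn make() -> Inflight {
    Inflight { state: UnsafePinned::new(0) }
}
```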
---
cc `@RalfJung` `@Noratrieb`
`@rustbot` label F-unsafe_pinned T-libs-api
Move `select_unpredictable` to the `hint` module
There has been considerable discussion in both the ACP (rust-lang/libs-team#468) and tracking issue (#133962) about whether the `bool::select_unpredictable` method should be in `core::hint` instead.
I believe this is the right move for the following reasons:
- The documentation explicitly says that it is a hint, not a codegen guarantee.
- `bool` doesn't have a corresponding `select` method, and I don't think we should be adding one.
- This shouldn't be something that people reach for with auto-completion unless they specifically understand the interactions with branch prediction. Using conditional moves can easily make code *slower* by preventing the CPU from speculating past the condition due to the data dependency.
- Although `core::hint` currently only contains no-ops, this isn't a hard rule (for example, `unreachable_unchecked` is a bit of a gray area). The documentation only states that the module contains "hints to compiler that affects how code should be emitted or optimized". This is consistent with what `select_unpredictable` does.
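For illustration, the kind of use the hint is meant for (signature as currently implemented; behind `feature(select_unpredictable)` on nightly at the time of this PR):

```rust
use std::hint;

fn max_branchless(data: &[u64]) -> u64 {
    let mut max = 0;
    for &x in data {
        // The condition is data-dependent and hard to predict, so hint
        // that a conditional move is preferable to a branch.
        max = hint::select_unpredictable(x > max, x, max);
    }
    max
}
```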
Use the chaining methods on PartialOrd for slices too
#138135 added these doc-hidden trait methods to improve the tuple codegen. This PR adds more implementations and callers so that the codegen for slice (and array) comparisons also improves.
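Conceptually, the chaining methods let a lexicographic comparison decide at the first non-equal element without materializing an `Ordering` at every step; the comparison being optimized is equivalent to this plain-code version (illustration, not the doc-hidden API):

```rust
fn slice_lt<T: PartialOrd>(a: &[T], b: &[T]) -> bool {
    let n = a.len().min(b.len());
    for i in 0..n {
        // The first non-equal element decides the whole comparison.
        if a[i] != b[i] {
            return a[i] < b[i];
        }
    }
    // All shared elements equal: the shorter slice compares less.
    a.len() < b.len()
}
```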
docs: clarify uint exponent for `is_power_of_two`
This makes the documentation more explicit for that method. I know this might seem "nit-picky", but `k` could be interpreted as "any real or complex number". A trivial example would be $`3 = 2^{\log_2 3}`$, which "proves that three is a power of two" (according to that vague definition).
BTW, when I read the implementation, I was surprised to see that `1` is considered a power of 2 despite being odd (it does make sense in some contexts, but it's still not intuitive). So I wrote "positive int" before correcting it to "unsigned int".
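Concretely, with `k` restricted to unsigned integers, `k = 0` gives `2^0 = 1`, which is why the odd number 1 qualifies:

```rust
fn main() {
    assert!(1_u32.is_power_of_two());  // 2^0
    assert!(64_u32.is_power_of_two()); // 2^6
    assert!(!3_u32.is_power_of_two()); // log2(3) is not an unsigned integer
}
```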
Polymorphize `array::IntoIter`'s iterator impl
Today we emit all the iterator methods for every different array width. That's wasteful since the actual array length never even comes into it -- the indices used are from the separate `alive: IndexRange` field, not even the `N` const param.
This PR switches things so that an `array::IntoIter<T, N>` stores a `PolymorphicIter<[MaybeUninit<T>; N]>`, which we *unsize* to `PolymorphicIter<[MaybeUninit<T>]>` and call methods on that non-`Sized` type for all the iterator methods.
That also necessarily makes the layout consistent between the different lengths of arrays, because of the unsizing. Compare that to today <https://rust.godbolt.org/z/Prb4xMPrb>, where different widths can't even be deduped because the offset to the indices is different for different array widths.
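A simplified sketch of the trick (`Range<usize>` stands in for the internal `IndexRange`):

```rust
use std::mem::MaybeUninit;
use std::ops::Range;

// The only `N`-dependent part is the last field, so a reference to the
// sized form coerces to the unsized form and every method body is
// shared across array widths.
struct PolymorphicIter<DATA: ?Sized> {
    alive: Range<usize>, // indices of initialized, not-yet-yielded elements
    data: DATA,          // last field, so the struct can be unsized
}

impl<T> PolymorphicIter<[MaybeUninit<T>]> {
    fn remaining(&self) -> usize {
        self.alive.len()
    }
}

fn remaining<T, const N: usize>(it: &PolymorphicIter<[MaybeUninit<T>; N]>) -> usize {
    // Unsizing coercion: one monomorphization serves every `N`.
    let it: &PolymorphicIter<[MaybeUninit<T>]> = it;
    it.remaining()
}
```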
add `core::intrinsics::simd::{simd_extract_dyn, simd_insert_dyn}`
fixes https://github.com/rust-lang/rust/issues/137372
adds `core::intrinsics::simd::{simd_extract_dyn, simd_insert_dyn}`, which, unlike their non-`dyn` counterparts, allow a non-const index. Many platforms (though notably not x86_64 or aarch64) have dedicated instructions for this operation, which stdarch can emit with this change.
Future work is to also make the `Index` operation on the `Simd` type emit this operation, but the intrinsic can't be used directly. We'll need some MIR shenanigans for that.
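A sketch of what the new intrinsics enable (nightly-only; assumes the `u32` index parameter added by this PR):

```rust
#![feature(core_intrinsics, repr_simd)]
use core::intrinsics::simd::{simd_extract_dyn, simd_insert_dyn};

#[repr(simd)]
#[derive(Copy, Clone)]
struct U32x4([u32; 4]);

fn main() {
    let v = U32x4([10, 20, 30, 40]);
    let i: u32 = std::hint::black_box(2); // runtime index, not a constant
    // SAFETY: `i` is in bounds (< 4 lanes) for all three calls.
    unsafe {
        let lane: u32 = simd_extract_dyn(v, i);
        let updated = simd_insert_dyn(v, i, 99_u32);
        let last: u32 = simd_extract_dyn(updated, i);
        assert_eq!((lane, last), (30, 99));
    }
}
```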
r? `@ghost`
Ensure `swap_nonoverlapping` is really always untyped
This replaces #134954, which was arguably overcomplicated.
## Fixes #134713
Actually using the type passed to `ptr::swap_nonoverlapping` for anything other than its size + align turns out to not work, so this goes back to always erasing the types down to just bytes.
(Except in `const`, which keeps doing the same thing as before to preserve `@RalfJung`'s fix from #134689.)
## Fixes #134946
I'd previously moved the swapping to use auto-vectorization *on bytes*, but someone pointed out on Discord that the tail loop handling from that left a whole bunch of byte-by-byte swapping around. This goes back to manual tail handling to avoid that, then still triggers auto-vectorization on pointer-width values. (So you'll see `<4 x i64>` on `x86-64-v3` for example.)
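For a sense of the approach, a conceptual sketch of untyped swapping (the real implementation differs in alignment handling, tail strategy, and `const` support):

```rust
use std::mem::size_of;
use std::ptr;

// The regions are treated purely as bytes: pointer-width chunks that the
// backend can auto-vectorize, plus a manual byte-wise tail. Nothing about
// `T` matters beyond how many bytes it occupies.
unsafe fn swap_nonoverlapping_bytes(x: *mut u8, y: *mut u8, len: usize) {
    const CHUNK: usize = size_of::<usize>();
    let mut i = 0;
    while i + CHUNK <= len {
        let a = ptr::read_unaligned(x.add(i).cast::<usize>());
        let b = ptr::read_unaligned(y.add(i).cast::<usize>());
        ptr::write_unaligned(x.add(i).cast::<usize>(), b);
        ptr::write_unaligned(y.add(i).cast::<usize>(), a);
        i += CHUNK;
    }
    // Manual tail: the remaining bytes, one at a time.
    while i < len {
        let a = *x.add(i);
        *x.add(i) = *y.add(i);
        *y.add(i) = a;
        i += 1;
    }
}
```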
Update `u8`-to-and-from-`i8` suggestions.
`u8::cast_signed` and `i8::cast_unsigned` have been stabilised, but `i8::from_ne_bytes` et al. still suggest using `as i8` or `as u8`.
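The methods the docs now point to, for reference:

```rust
fn main() {
    let x: u8 = 200;
    let y: i8 = x.cast_signed();   // preferred over `x as i8`
    let z: u8 = y.cast_unsigned(); // preferred over `y as u8`
    assert_eq!((y, z), (-56, 200));
}
```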
speed up `String::push` and `String::insert`
Addresses the concerns described in #116235.
The performance gain comes mainly from avoiding temporary buffers.
Complex pattern matching in `encode_utf8` (introduced in #67569) has been simplified to a comparison and an exhaustive `match` in the `encode_utf8_raw_unchecked` helper function. It takes a slice of `MaybeUninit<u8>` because otherwise we'd have to construct a normal slice to uninitialized data, which is not desirable, I guess.
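A sketch of that helper's shape (the name comes from the PR, but the real function is unsafe and elides the bounds checks this safe stand-in keeps):

```rust
use std::mem::MaybeUninit;

// Encode `c` straight into possibly-uninitialized bytes, no temporary
// buffer. Returns the number of bytes written; panics if `dst` is too
// short (the real unchecked helper makes that the caller's obligation).
fn encode_utf8_uninit(c: char, dst: &mut [MaybeUninit<u8>]) -> usize {
    let code = c as u32;
    let len = c.len_utf8();
    match len {
        1 => {
            dst[0].write(code as u8);
        }
        2 => {
            dst[0].write((code >> 6) as u8 | 0xC0);
            dst[1].write((code & 0x3F) as u8 | 0x80);
        }
        3 => {
            dst[0].write((code >> 12) as u8 | 0xE0);
            dst[1].write((code >> 6 & 0x3F) as u8 | 0x80);
            dst[2].write((code & 0x3F) as u8 | 0x80);
        }
        _ => {
            dst[0].write((code >> 18) as u8 | 0xF0);
            dst[1].write((code >> 12 & 0x3F) as u8 | 0x80);
            dst[2].write((code >> 6 & 0x3F) as u8 | 0x80);
            dst[3].write((code & 0x3F) as u8 | 0x80);
        }
    }
    len
}
```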
Several functions still have that [unneeded zeroing](https://rust.godbolt.org/z/5oKfMPo7j), but a single instruction is not that important, I guess.
`@rustbot` label T-libs C-optimization A-str
Promise `array::from_fn` is generated in order of increasing indices
Fixes #139061
I agree this needs to be documented because of the `FnMut`, either with a guarantee or to explicitly disclaim one.
I'm pretty sure this will be non-controversial (like the other "well sure you *could* do it in a different order, but why?" things were), but I couldn't find any previous libs-api decision on it so it's seemingly a new promise that will need FCP.
Basically, yes, it would be plausible to fill in the reverse order, but there's no obvious way we could ever know that that might even be a good idea, so forward seems like an easy thing to promise. We could always add a `from_fn_rev` or something later if there's ever a strong enough need, but it seems unlikely.
Let's just do the obvious thing so it matches what `[gen(0), gen(1), …, gen(N-1)]` does.
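The promised behavior, concretely:

```rust
fn main() {
    let mut order = Vec::new();
    let arr: [usize; 4] = std::array::from_fn(|i| {
        order.push(i); // the side effect observes the call order
        i * 10
    });
    assert_eq!(order, [0, 1, 2, 3]); // promised: ascending index order
    assert_eq!(arr, [0, 10, 20, 30]);
}
```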
Make `cfg_match!` a semitransparent macro
IIUC this is preferred when (potentially) stabilizing `macro` items, so we don't end up committing to def-site hygiene where mixed-site hygiene (the `macro_rules!` behavior) is intended.
Tracking issue: #115585
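For reference, a sketch of what "semitransparent" means mechanically (the transparency attribute is compiler-internal, so this is illustrative only, not the real `cfg_match!` definition):

```rust
#![feature(decl_macro, rustc_attrs)]

// `macro` items default to def-site (opaque) hygiene; marking one
// semitransparent gives it the mixed-site hygiene that `macro_rules!`
// macros have, which is what a stabilized `cfg_match!` should keep.
#[rustc_macro_transparency = "semitransparent"]
pub macro forward_tokens($($tt:tt)*) {
    $($tt)*
}
```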