Improve `Iterator::collect_into` documentation
This improves the examples in the documentation of `Iterator::collect_into`, replacing the usages of `println!` with `assert_eq!` as suggested on [IRLO](https://internals.rust-lang.org/t/18534/9).
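For illustration, the revised examples follow roughly this shape (a sketch, not the exact doc text; `collect_into` is unstable behind the `iter_collect_into` gate):

```rust
#![feature(iter_collect_into)]

fn main() {
    let a = [1, 2, 3];
    let mut vec: Vec<i32> = vec![0, 1];

    // collect_into appends via Extend and returns the collection.
    a.iter().map(|&x| x * 2).collect_into(&mut vec);

    assert_eq!(vec, vec![0, 1, 2, 4, 6]);
}
```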
Beautify pin! docs
This makes pin docs a little bit less jargon-y and easier to read, by
* splitting up the sentences
* making them less interrupted by punctuation
* turning the footnotes into paragraphs, as they contain useful information that shouldn't be hidden in footnotes. Footnotes also interrupt the reading flow.
Use `size_of_val` instead of manual calculation
Very minor thing that I happened to notice in passing, but it's both shorter and [means it gets `mul nsw`](https://rust.godbolt.org/z/Y9KxYETv5), so why not.
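For context, the pattern in question looks roughly like this (a sketch, not the exact code touched by the commit):

```rust
use std::mem;

// Number of bytes occupied by a slice's elements.
fn byte_len<T>(xs: &[T]) -> usize {
    // Manual calculation: xs.len() * mem::size_of::<T>()
    // size_of_val is shorter and lets the backend emit a non-wrapping multiply.
    mem::size_of_val(xs)
}

fn main() {
    assert_eq!(byte_len(&[1u32, 2, 3]), 12);
}
```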
The indices are encoded as `u32`s in the range of invalid `char`s, so
that if a mapping's value fails to parse as a `char`, we know to use the
value as an index into the multi-`char` table.
This avoids the second binary search in cases where a multi-`char`
mapping is needed.
Idea from @nikic
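A minimal sketch of the lookup scheme (table names, shapes, and the exact invalid range used for indices are illustrative assumptions, not the real `core` tables):

```rust
// Values in the surrogate range can never be a `char`, so they are free to
// reuse as indices into the multi-char table.
const INDEX_BASE: u32 = 0xD800;

fn map_char(c: char, single: &[(char, u32)], multi: &[[char; 3]]) -> [char; 3] {
    match single.binary_search_by_key(&c, |&(key, _)| key) {
        Err(_) => [c, '\0', '\0'], // no mapping: the char maps to itself
        Ok(i) => {
            let v = single[i].1;
            match char::from_u32(v) {
                // Common case: a single replacement char, padded with NULs.
                Some(m) => [m, '\0', '\0'],
                // Not a valid char: it is an index into the multi-char table,
                // so no second binary search is needed.
                None => multi[(v - INDEX_BASE) as usize],
            }
        }
    }
}
```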
This makes pin docs a little bit less jargon-y and easier to read, by
* splitting up the sentences
* making them less interrupted by punctuation
* turning the footnotes into paragraphs, as they contain useful information
that shouldn't be hidden in footnotes. Footnotes also interrupt the reading flow.
* other improvements and simplifications
Flatten/inline format_args!() and (string and int) literal arguments into format_args!()
Implements https://github.com/rust-lang/rust/issues/78356
Gated behind `-Zflatten-format-args=yes`.
Part of #99012
This change inlines string literals, integer literals and nested `format_args!()` into `format_args!()` during AST lowering, making all of the following pairs result in equivalent HIR:
```rust
println!("Hello, {}!", "World");
println!("Hello, World!");
```
```rust
println!("[info] {}", format_args!("error"));
println!("[info] error");
```
```rust
println!("[{}] {}", status, format_args!("error: {}", msg));
println!("[{}] error: {}", status, msg);
```
```rust
println!("{} + {} = {}", 1, 2, 1 + 2);
println!("1 + 2 = {}", 1 + 2);
```
And so on.
This is useful for macros. E.g. a `log::info!()` macro could just pass the tokens from the user directly into a `format_args!()` that gets efficiently flattened/inlined into a `format_args!("info: {}")`.
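A hypothetical sketch of such a macro (`my_info!` is illustrative, not the real `log` crate API):

```rust
// The nested format_args!() below is what -Zflatten-format-args inlines into
// the outer format string during AST lowering.
macro_rules! my_info {
    ($($arg:tt)*) => {
        println!("info: {}", format_args!($($arg)*))
    };
}

fn main() {
    my_info!("listening on port {}", 8080);
    // With flattening, this lowers as if it were:
    // println!("info: listening on port {}", 8080);
}
```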
It also means that `dbg!(x)` will have its file, line, and expression name inlined:
```rust
eprintln!("[{}:{}] {} = {:#?}", file!(), line!(), stringify!(x), x); // before
eprintln!("[example.rs:1] x = {:#?}", x); // after
```
This can be nice in some cases, but it also means a lot more unique static strings than before if `dbg!()` is used a lot.
The majority of `char` case replacements are single-`char` replacements,
so storing them as `[char; 3]` wastes a lot of space.
This commit splits the replacement tables for both `to_lower` and
`to_upper` into two separate tables, one with single-character mappings
and one with multi-character mappings.
This reduces the binary size of programs using all of these tables
by roughly 24 kilobytes.
Since ASCII chars are already handled by a special case in the
`to_lower` and `to_upper` functions, there's no need to waste space on
them in the LUTs.
Ensure `ptr::read` gets all the same LLVM `load` metadata that dereferencing does
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`. Trying to narrow it down, it seems that was because `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
The root cause is that `ptr::read` is currently implemented via the *untyped* `copy_nonoverlapping`, and thus the `load` doesn't get any type-aware metadata: no `noundef`, no `!range`. This PR solves that by lowering `ptr::read(p)` to `copy *p` in MIR, for which the backends already do the right thing.
Fortuitously, this also improves the IR we give to LLVM for things like `mem::replace`, and fixes a couple of long-standing bugs where `ptr::read` on `Copy` types was worse than `*`ing them.
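As a small illustration of the equivalence relied on here (a sketch, not code from the PR): for a valid, aligned pointer to a `Copy` type, `ptr::read(p)` and `*p` should now produce the same load with the same metadata.

```rust
use std::ptr;

// Both loads should lower identically after this change.
unsafe fn read_both(p: *const i32) -> (i32, i32) {
    (ptr::read(p), *p)
}

fn main() {
    let x = 7;
    assert_eq!(unsafe { read_both(&x) }, (7, 7));
}
```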
Zulip conversation: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Move.20array.3A.3AIntoIter.20to.20ManuallyDrop/near/341189936>
cc `@erikdesjardins` `@JakobDegen` `@workingjubilee` `@the8472`
Fixes #106369, fixes #73258
Remove `identity_future` indirection
This was previously needed because the indirection used to hide some unexplained lifetime errors, which it turned out were related to the `min_choice` algorithm.
Removing the indirection also fixes a couple of cycle errors and some large moves, and makes async blocks support the `#[track_caller]` annotation.
Fixes https://github.com/rust-lang/rust/issues/104826.
Stabilize `atomic_as_ptr`
Fixes #66893
This stabilizes the `as_ptr` methods for atomics. The stabilization feature gate used here is `atomic_as_ptr` which supersedes `atomic_mut_ptr` to match the change in https://github.com/rust-lang/rust/pull/107736.
This needs FCP.
New stable API:
```rust
impl AtomicBool {
    pub const fn as_ptr(&self) -> *mut bool;
}

impl AtomicI32 {
    pub const fn as_ptr(&self) -> *mut i32;
}

// Includes all other atomic types

impl<T> AtomicPtr<T> {
    pub const fn as_ptr(&self) -> *mut *mut T;
}
```
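A brief usage sketch (illustrative only; in practice the pointer would typically be handed to FFI):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let flag = AtomicI32::new(0);

    // Obtain a raw pointer to the underlying integer, e.g. to pass to C code.
    let p: *mut i32 = flag.as_ptr();

    // Safety: no concurrent access here, so a plain write is fine.
    unsafe { p.write(1) };

    assert_eq!(flag.load(Ordering::Relaxed), 1);
}
```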
r? libs-api
``@rustbot`` label +needs-fcp
Move `Option::as_slice` to an always-sound implementation
This approach depends on CSE to not have any branches or selects when the guessed offset is correct -- which it always will be right now -- but to also be *sound* (just less efficient) if the layout algorithms change such that the guess is incorrect.
The codegen test confirms that CSE handles this as expected, leaving the optimal codegen.
cc JakobDegen #108545
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`.
It turned out to be a more general problem: `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
This PR lowers `ptr::read(p)` to `copy *p` in MIR, which fortuitiously also improves the IR we give to LLVM for things like `mem::replace`.
Stabilize `nonzero_min_max`
## Overall
Stabilizes `nonzero_min_max` to allow the "infallible" construction of ordinary minimum and maximum `NonZero*` instances.
The feature is fairly straightforward and already matured for some time in stable toolchains.
```rust
let _ = NonZeroU8::MIN;
let _ = NonZeroI32::MAX;
```
## History
* On 2022-01-25, the implementation was [created](https://github.com/rust-lang/rust/pull/93293).
## Considerations
* This report follows two unsuccessful attempts at getting feedback.
* Other constant variants discussed at https://github.com/rust-lang/rust/issues/89065#issuecomment-923238190 are orthogonal to this feature.
Fixes https://github.com/rust-lang/rust/issues/89065
Fix the docs for pointer method with_metadata_of
The name of the argument to `{*const T, *mut T}::with_metadata_of` was changed from `val` to `meta` recently, but the docs weren't updated to match.
Relevant pull request: #103701
Rollup of 8 pull requests
Successful merges:
- #108754 (Retry `pred_known_to_hold_modulo_regions` with fulfillment if ambiguous)
- #108759 (1.41.1 supported 32-bit Apple targets)
- #108839 (Canonicalize root var when making response from new solver)
- #108856 (Remove DropAndReplace terminator)
- #108882 (Tweak E0740)
- #108898 (Set `LIBC_CHECK_CFG=1` when building Rust code in bootstrap)
- #108911 (Improve rustdoc-gui/tester.js code a bit)
- #108916 (Remove an unused return value in `rustc_hir_typeck`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Guide `panic_immediate_abort` users away from `-Cpanic=unwind` and
towards `-Cpanic=abort` to avoid an accidental use of the feature with
the unwind strategy, e.g., on targets where unwind is the default.
The `-Cpanic=unwind` combination doesn't offer the same benefits, since
the code would still be generated under the assumption that functions
implemented in Rust can unwind.
Use `nuw` when calculating slice lengths from `Range`s
An `assume` would definitely not be worth it, but since the flag is almost free we might as well tell LLVM this, especially on `_unchecked` calls where there's no obvious way for it to deduce it.
(Today neither safe nor unsafe indexing gets it: <https://rust.godbolt.org/z/G1jYT548s>)
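The kind of code this affects, roughly (a sketch; the actual change is in the slice indexing internals):

```rust
use std::ops::Range;

fn subslice(s: &[u8], r: Range<usize>) -> &[u8] {
    // The bounds check guarantees r.start <= r.end <= s.len(), so the internal
    // `r.end - r.start` length computation can carry the `nuw` flag.
    &s[r]
}

fn main() {
    assert_eq!(subslice(b"hello", 1..4), &b"ell"[..]);
}
```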
Add `round_ties_even` to `f32` and `f64`
Tracking issue: #96710
Redux of #82273. See also #55107
Adds a new method, `round_ties_even`, to `f32` and `f64` that rounds the float to the nearest integer, rounding halfway cases to the number with an even least significant bit. Uses the `roundeven` LLVM intrinsic to do this.
Of the five IEEE 754 rounding modes, this is the only one that doesn't already have a round-to-integer function exposed by Rust (the others are `round`, `floor`, `ceil`, and `trunc`). Ties-to-even is also the rounding mode used for int-to-float and float-to-float `as` casts, as well as float arithmetic operations. So not having an explicit rounding method for it seems like an oversight.
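A usage sketch (nightly-only while the tracking issue is open; assuming the `round_ties_even` feature gate):

```rust
#![feature(round_ties_even)]

fn main() {
    assert_eq!(2.5_f64.round_ties_even(), 2.0); // tie: rounds to the even 2
    assert_eq!(3.5_f64.round_ties_even(), 4.0); // tie: rounds to the even 4
    assert_eq!(2.4_f64.round_ties_even(), 2.0); // non-tie: nearest integer
    assert_eq!((-2.5_f32).round_ties_even(), -2.0);
}
```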
Bikeshed: this PR currently uses `round_ties_even` for the name of the method. But maybe `round_ties_to_even` is better, or `round_even`, or `round_to_even`?
Use `partial_cmp` to implement tuple `lt`/`le`/`ge`/`gt`
In today's implementation, `(A, B)::gt` contains calls to *both* `A::eq` *and* `A::gt`.
That's fine for primitives, but for things like `String`s it's kinda weird -- `(String, usize)::gt` has a call to both `bcmp` and `memcmp` (<https://rust.godbolt.org/z/7jbbPMesf>) because when `bcmp` says the `String`s aren't equal, it turns around and calls `memcmp` to find out which one's bigger.
This PR changes the implementation to instead implement `(A, …, C, Z)::gt` using `A::partial_cmp`, `…::partial_cmp`, `C::partial_cmp`, and `Z::gt`. (And analogously for `lt`, `le`, and `ge`.) That way expensive comparisons don't need to be repeated.
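A sketch of the new shape for a 2-tuple (illustrative; the real implementation is generated by a macro for all arities):

```rust
use std::cmp::Ordering;

fn tuple_gt<A: PartialOrd, B: PartialOrd>(lhs: &(A, B), rhs: &(A, B)) -> bool {
    // One partial_cmp on the head instead of separate eq + gt calls,
    // then plain gt on the last element.
    match lhs.0.partial_cmp(&rhs.0) {
        Some(Ordering::Equal) => lhs.1 > rhs.1,
        Some(Ordering::Greater) => true,
        _ => false,
    }
}

fn main() {
    assert!(tuple_gt(&("b".to_string(), 0), &("a".to_string(), 9)));
    assert!(!tuple_gt(&("a".to_string(), 1), &("a".to_string(), 2)));
}
```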
Technically this is an observable change on stable, so I've marked it `needs-fcp` + `T-libs-api` and will
r? rust-lang/libs-api
I'm hoping that this will be non-controversial, however, since it's very similar to the observable changes that were made to the derives (#81384, #98655) -- like those, this only changes behaviour if a type overrode behaviour in a way inconsistent with the rules for the various traits involved.
(The first commit here is #108156, adding the codegen test, which I used to make sure this doesn't regress behaviour for primitives.)
Zulip conversation about this change: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/.60.3E.60.20on.20Tuples/near/328392927>.
Match unmatched backticks in library/
Found with GNU grep:
```
grep -rEn '^(([^`]*`){2})*[^`]*`[^`]*$' library/ | rg -v '\s*[//]?.{1,2}```'
```
Split out from #108685 as per advice.
Add `Atomic*::from_ptr`
This PR adds functions in the following form to all atomic types:
```rust
impl AtomicT {
    pub const unsafe fn from_ptr<'a>(ptr: *mut T) -> &'a AtomicT;
}
```
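A brief usage sketch (illustrative; the API is unsafe because the caller must guarantee validity, alignment, and the chosen lifetime, and it is unstable when this PR lands):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let mut value: i32 = 0;
    let ptr: *mut i32 = &mut value;

    // Safety: `ptr` is valid, aligned, and `value` outlives the reference.
    let atomic: &AtomicI32 = unsafe { AtomicI32::from_ptr(ptr) };

    atomic.store(42, Ordering::Relaxed);
    assert_eq!(atomic.load(Ordering::Relaxed), 42);
}
```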
r? `@m-ou-se` (we've talked about it before)
I'm not sure about docs & safety requirements, I'd appreciate some feedback on them.
Add support for QNX Neutrino to standard library
This change:
- adds standard library support for QNX Neutrino (7.1).
- upgrades `libc` to version `0.2.139`, which supports QNX Neutrino.
`@gh-tr`
⚠️ Backtraces on QNX require https://github.com/rust-lang/backtrace-rs/pull/507 which is not yet merged! (But everything else works without these changes) ⚠️
Tested mainly with an x86_64 virtual machine (see qnx-nto.md) and partially on aarch64 hardware (some tests fail due to constrained resources).
Merge two different equality specialization traits in `core`
Arrays and slices each had their own version of this, without a matching set of `impl`s.
Merge them into one (still-`pub(crate)`) `cmp::BytewiseEq` trait, so we can stop doing all these things twice.
And that means that the `[T]::eq` → `memcmp` specialization picks up a bunch of types where that previously only worked for arrays, so examples like <https://rust.godbolt.org/z/KjsG8MGGT> will use it now instead of emitting loops.
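Conceptually, the merged trait is a marker along these lines (a rough sketch, not the actual `core` definition):

```rust
/// Marker: `==` on this type is equivalent to comparing the raw bytes,
/// so slice/array equality can be lowered to a single memcmp.
///
/// Safety sketch: only implement this for types with no padding and where
/// every bit pattern participates in equality (e.g. integers, not floats).
unsafe trait BytewiseEq: PartialEq + Sized {}

unsafe impl BytewiseEq for u8 {}
unsafe impl BytewiseEq for i32 {}
// ...and similarly for the other integer types, bool, char, etc.

fn slices_eq_bytewise<T: BytewiseEq>(a: &[T], b: &[T]) -> bool {
    // In core, a specialization hook would compare raw bytes here (memcmp);
    // the fallback is the ordinary element-wise loop.
    a == b
}

fn main() {
    assert!(slices_eq_bytewise(&[1i32, 2, 3], &[1, 2, 3]));
}
```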
r? the8472
Add `Option::as_`(`mut_`)`slice`
This adds the following functions:
* `Option<T>::as_slice(&self) -> &[T]`
* `Option<T>::as_mut_slice(&mut self) -> &mut [T]`
The `as_slice` and `as_mut_slice` functions benefit from an optimization that makes them completely branch-free. ~~Unfortunately, this optimization is not available on by-value Options, therefore the `into_slice` implementations use the plain `match` + `slice::from_ref` approach.~~
Note that the optimization's soundness hinges on the fact that either the niche optimization makes the offset of the `Some(_)` contents zero or the memory layout of `Option<T>` is equal to that of `Option<MaybeUninit<T>>`.
The idea has been discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Option.3A.3Aas_slice). Notably, the idea for the `as_slice_mut` and `into_slice` methods came from `@cuviper`, and `@Sp00ph` hardened the optimization against niche-optimized Options.
The [rust playground](https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=74f8e4239a19f454c183aaf7b4a969e0) shows that the generated assembly of the optimized method is basically only a copy while the naive method generates code containing a `test dx, dx` on x86_64.
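A quick usage sketch of the new methods (unstable at the time of writing, assuming the `option_as_slice` feature gate):

```rust
#![feature(option_as_slice)]

fn main() {
    let mut x: Option<i32> = Some(5);

    assert_eq!(x.as_slice(), &[5][..]);
    assert!(None::<i32>.as_slice().is_empty());

    // The mutable variant lets slice patterns update the payload in place.
    if let [v] = x.as_mut_slice() {
        *v += 1;
    }
    assert_eq!(x, Some(6));
}
```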
---
EDIT from reviewer: ACP is https://github.com/rust-lang/libs-team/issues/150
add missing feature in core/tests
https://github.com/rust-lang/rust/pull/104265 introduced the `ip_in_core` feature. For some reason core tests seem to still build without that feature -- no idea how that is possible. Might be related to https://github.com/rust-lang/rust/issues/15702? I was under the impression that `pub use` with different stability doesn't actually work. That's why `intrinsics::transmute` is stable, for example.
Either way, core tests fail to build in miri-test-libstd, and adding the feature fixes that.
r? ```@thomcc```
This adds the following functions:
* `Option<T>::as_slice(&self) -> &[T]`
* `Option<T>::as_slice_mut(&mut self) -> &mut [T]`
The `as_slice` and `as_slice_mut` functions benefit from an
optimization that makes them completely branch-free.
Note that the optimization's soundness hinges on the fact that either
the niche optimization makes the offset of the `Some(_)` contents zero
or the memory layout of `Option<T>` is equal to that of
`Option<MaybeUninit<T>>`.
Inline `Poll` methods
With `opt-level="z"`, the `Poll::map*` methods are sometimes not inlined (see <https://godbolt.org/z/ca5ajKTEK>). This PR adds `#[inline]` to these methods. I have a project that can benefit from this change, but do we want to enable this behavior universally?
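For reference, the affected methods are small adapters like `Poll::map` (usage sketch):

```rust
use std::task::Poll;

fn main() {
    let ready: Poll<i32> = Poll::Ready(2);
    assert_eq!(ready.map(|x| x * 2), Poll::Ready(4));

    let pending: Poll<i32> = Poll::Pending;
    assert_eq!(pending.map(|x| x * 2), Poll::Pending);
}
```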
Fixes #101080.
Stabilize `#![feature(target_feature_11)]`
## Stabilization report
### Summary
Allows for safe functions to be marked with `#[target_feature]` attributes.
Functions marked with `#[target_feature]` are generally considered unsafe functions: they are unsafe to call, cannot be assigned to safe function pointers, and don't implement the `Fn*` traits.
However, calling them from other `#[target_feature]` functions with a superset of features is safe.
```rust
// Demonstration function
#[target_feature(enable = "avx2")]
fn avx2() {}
fn foo() {
    // Calling `avx2` here is unsafe, as we must ensure
    // that AVX is available first.
    unsafe {
        avx2();
    }
}

#[target_feature(enable = "avx2")]
fn bar() {
    // Calling `avx2` here is safe.
    avx2();
}
```
### Test cases
Tests for this feature can be found in [`src/test/ui/rfcs/rfc-2396-target_feature-11/`](b67ba9ba20/src/test/ui/rfcs/rfc-2396-target_feature-11/).
### Edge cases
- https://github.com/rust-lang/rust/issues/73631
Closures defined inside functions marked with `#[target_feature]` inherit the target features of their parent function. They can still be assigned to safe function pointers and implement the appropriate `Fn*` traits.
```rust
#[target_feature(enable = "avx2")]
fn qux() {
    let my_closure = || avx2(); // this call to `avx2` is safe
    let f: fn() = my_closure;
}
```
This means that in order to call a function with `#[target_feature]`, you must show that the target-feature is available while the function executes *and* for as long as whatever may escape from that function lives.
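For example (a sketch of the escaping case, not taken from the test suite):

```rust
#[target_feature(enable = "avx2")]
fn make_callback() -> fn() {
    // The closure inherits `avx2` and may use AVX2 instructions, yet it still
    // coerces to a plain safe `fn()`. The caller of `make_callback` (itself an
    // unsafe call) must therefore keep AVX2 available for as long as the
    // returned pointer may be invoked.
    || { /* code compiled with avx2 enabled */ }
}
```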
### Documentation
- Reference: https://github.com/rust-lang/reference/pull/1181
---
cc tracking issue #69098
r? `@ghost`
Move IpAddr, SocketAddr and V4+V6 related types to `core`
Implements RFC https://github.com/rust-lang/rfcs/pull/2832. The RFC has completed FCP with disposition merge, but is not yet merged.
Moves IP types to `core` as specified in the RFC.
The full list of moved types is: `IpAddr`, `Ipv4Addr`, `Ipv6Addr`, `SocketAddr`, `SocketAddrV4`, `SocketAddrV6`, `Ipv6MulticastScope` and `AddrParseError`.
Doing this move was one of the main driving arguments behind #78802.
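After the move, the types are usable without `std`, roughly like this (a `no_std` library sketch; at the time this sits behind the `ip_in_core` feature gate):

```rust
#![no_std]
#![feature(ip_in_core)]

use core::net::{IpAddr, Ipv4Addr, SocketAddr};

pub const LOCALHOST: IpAddr = IpAddr::V4(Ipv4Addr::new(127, 0, 0, 1));

pub fn local_socket(port: u16) -> SocketAddr {
    SocketAddr::new(LOCALHOST, port)
}
```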
Remove `from` lang item
It was probably a leftover from the old `?` desugaring, but anyway, it's unused now except for Clippy, which can just use a diagnostic item instead.
Require `literal`s for some `(u)int_impl!` parameters
The point of these is to be seen *lexically* in the docs, so they should always be passed as the correct literal, not as an expression.
(Otherwise we could just compute `Min`/`Max` from `BITS`, for example.)
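A hypothetical sketch of the difference the `literal` fragment makes (`int_docs_sketch!` is illustrative, not the real macro):

```rust
// With `$max:literal`, a caller must write the value out (e.g. `127`), so the
// rendered docs show the number itself; an expression like `(1 << 7) - 1`
// would be rejected at macro-match time.
macro_rules! int_docs_sketch {
    ($ty:ty, $max:literal) => {
        #[doc = concat!("The largest value is ", stringify!($max), ".")]
        pub fn describe_max() -> $ty {
            $max
        }
    };
}

int_docs_sketch!(i8, 127);

fn main() {
    assert_eq!(describe_max(), 127i8);
}
```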
r? Nilstrieb
Optimize break patterns
Use `wyrand` instead of calling `XORSHIFT` twice in break patterns on 64-bit platforms. The new PRNG is 2x faster than the previous one.
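For reference, a sketch of the `wyrand` step (constants as in the public wyhash reference; not necessarily byte-for-byte the code used in the sort):

```rust
fn wyrand(seed: &mut u64) -> u64 {
    *seed = seed.wrapping_add(0xa076_1d64_78bd_642f);
    let t = (*seed as u128).wrapping_mul((*seed ^ 0xe703_7ed1_a0b4_28db) as u128);
    ((t >> 64) as u64) ^ (t as u64)
}

fn main() {
    let mut seed = 0x1234_5678_9abc_def0_u64;
    // One 64-bit output per call, versus two xorshift steps previously.
    println!("{:#018x}", wyrand(&mut seed));
    println!("{:#018x}", wyrand(&mut seed));
}
```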
Bench result (via https://gist.github.com/zhangyunhao116/11ef41a150f5c23bb47d86255fbeba89):
```
old time: [1.3258 ns 1.3262 ns 1.3266 ns]
change: [+0.5901% +0.6731% +0.7791%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
7 (7.00%) high mild
6 (6.00%) high severe
new time: [657.65 ps 657.89 ps 658.18 ps]
change: [-1.6910% -1.6110% -1.5256%] (p = 0.00 < 0.05)
Performance has improved.
Found 6 outliers among 100 measurements (6.00%)
2 (2.00%) high mild
4 (4.00%) high severe
```
implement const iterator using `rustc_do_not_const_check`
Previous experiment: #102225.
Explanation: rather than making all default methods work under `const` all at once, this uses `rustc_do_not_const_check` as a workaround to "trick" the compiler into not running any checks on those other default methods. Any const implementations are only required to implement the `next` method. Any actual calls to the trait methods other than `next` will either error at compile time (when CTFE runs), or run the methods correctly if they do not have any non-const operations. This is extremely easy to maintain, remove, or improve.
Rename atomic 'as_mut_ptr' to 'as_ptr' to match Cell (ref #66893)
Originally discussed in https://github.com/rust-lang/rust/issues/66893#issuecomment-1419198623
~~This uses #107706 as a base to avoid a merge conflict once that gets rolled up (so disregard const changes in the diff until it does)~~ all merged & rebased
`@rustbot` label +T-libs-api
r? m-ou-se