uefi: Implement getcwd and chdir
- Using EFI Shell Protocol. These functions do not make much sense unless a shell is present.
- Return the exe dir in case shell protocol is missing.
r? `@joboet`
Rollup of 9 pull requests
Successful merges:
- #122670 (Fix bug where `option_env!` would return `None` when env var is present but not valid Unicode)
- #131095 (Use environment variables instead of command line arguments for merged doctests)
- #131339 (Expand set_ptr_value / with_metadata_of docs)
- #131652 (Move polarity into `PolyTraitRef` rather than storing it on the side)
- #131675 (Update lint message for ABI not supported)
- #131681 (Fix up-to-date checking for run-make tests)
- #131702 (Suppress import errors for traits that could've applied for method lookup error)
- #131703 (Resolved python deprecation warning in publish_toolstate.py)
- #131710 (Remove `'apostrophes'` from `rustc_parse_format`)
r? `@ghost`
`@rustbot` modify labels: rollup
Some float methods are now `const fn` under the `const_float_methods` feature gate.
In order to support `min`, `max`, `abs` and `copysign`, the implementation of some intrinsics had to be moved from Miri to rustc_const_eval.
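As a quick illustration, here is a minimal nightly sketch using only methods named in this PR and the `const_float_methods` gate (not taken from the PR itself):
```rust
// Minimal nightly sketch; abs and min become callable in const context.
#![feature(const_float_methods)]

const CLAMPED: f64 = (-4.0_f64).abs().min(3.0);

fn main() {
    assert_eq!(CLAMPED, 3.0);
}
```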
Stabilise `const_make_ascii`.
Closes: #130698
This PR stabilises the `const_make_ascii` feature gate (i.e. marking the `make_ascii_uppercase` and `make_ascii_lowercase` methods in `char`, `u8`, `[u8]`, and `str` as const).
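For illustration, a small usage sketch of what this stabilisation allows (not taken from the PR itself):
```rust
// Case conversion in a const initializer, now possible on stable.
const UPPER_A: u8 = {
    let mut b = b'a';
    b.make_ascii_uppercase();
    b
};

fn main() {
    assert_eq!(UPPER_A, b'A');
}
```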
Implemented `FromStr` for `CString` and `TryFrom<CString>` for `String`
The motivation for this change is to make it possible to use `CString` in generic methods with `FromStr` and `TryInto<String>` trait bounds. The same traits are already implemented for `OsString`, which is an FFI type too.
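As a rough usage sketch of the new impls (error handling simplified with `unwrap`, not taken from the PR):
```rust
use std::ffi::CString;

fn main() {
    // FromStr: parse a &str into a CString (fails on interior NUL bytes).
    let c: CString = "hello".parse().unwrap();
    // TryFrom<CString> for String (fails if the bytes are not valid UTF-8).
    let s: String = String::try_from(c).unwrap();
    assert_eq!(s, "hello");
}
```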
Expand set_ptr_value / with_metadata_of docs
In preparation of a potential FCP, intends to clean up and expand the documentation of this operation.
Rewrite these doc blocks to explicitly mention the case of a sized operand. The previous wording made that case seem wrong instead of emphasizing that it is nothing but a simple cast. The explanation now emphasizes that the address portion of the argument, together with its provenance, is discarded, which previously had to be inferred by the reader. An example then demonstrates a simple case of incorrect usage based on this idea of provenance.
Tracking issue: https://github.com/rust-lang/rust/issues/75091
Fix bug where `option_env!` would return `None` when env var is present but not valid Unicode
Fixes #122669 by making `option_env!` emit an error when the value of the environment variable is not valid Unicode.
Autodiff Upstreaming - enzyme frontend
This is an upstream PR for the `autodiff` rustc_builtin_macro that is part of the autodiff feature.
For the full implementation, see: https://github.com/rust-lang/rust/pull/129175
**Content:**
It contains a new `#[autodiff(<args>)]` rustc_builtin_macro, as well as a `#[rustc_autodiff]` builtin attribute.
The autodiff macro is applied on function `f` and will expand to a second function `df` (name given by user).
It will add a dummy body to `df` to make sure it type-checks. The body will later be replaced by enzyme on llvm-ir level,
we therefore don't really care about the content. Most of the changes (700 of the 1.2k lines) are in `compiler/rustc_builtin_macros/src/autodiff.rs`, which expands the macro. Nothing except expansion is implemented for now.
I have a fallback implementation for the relevant functions in case rustc is built without autodiff support. The default for now will be off, although we want to flip it to on for nightly later (once everything has landed). For the sake of CI, I have flipped the defaults; I'll revert this before merging.
**Dummy function Body:**
The first line is an `inline_asm` nop to make inlining less likely (I have additional checks to prevent this in the middle end of rustc). If `f` gets inlined too early, we can't pass it to enzyme and thus can't differentiate it.
If `df` gets inlined too early, the call site will just compute this dummy code instead of the derivatives, which is a correctness issue. The following black_box lines make sure that none of the input arguments gets optimized away before we replace the body.
**Motivation:**
The user-facing autodiff macro can verify the user input. I then write the arguments into the rustc_attribute, so from that point on I can assume these values are sensible. A rustc attribute also turned out to be quite a nice way to attach this information to the corresponding function and carry it through to the backend.
This is also just an experiment; I expect to adjust the user-facing autodiff macro based on user feedback to improve usability.
As a simple example of what this will do, we can see this expansion:
From:
```
#[autodiff(df, Reverse, Duplicated, Const, Active)]
pub fn f1(x: &[f64], y: f64) -> f64 {
unimplemented!()
}
```
to
```
#[rustc_autodiff]
#[inline(never)]
pub fn f1(x: &[f64], y: f64) -> f64 {
::core::panicking::panic("not implemented")
}
#[rustc_autodiff(Reverse, Duplicated, Const, Active,)]
#[inline(never)]
pub fn df(x: &[f64], dx: &mut [f64], y: f64, dret: f64) -> f64 {
unsafe { asm!("NOP"); };
::core::hint::black_box(f1(x, y));
::core::hint::black_box((dx, dret));
::core::hint::black_box(f1(x, y))
}
```
I will add a few more tests once I've figured out why rustc rebuilds every time I touch a test.
Tracking:
- https://github.com/rust-lang/rust/issues/124509
try-job: dist-x86_64-msvc
Other cell `into_inner` functions are const and there shouldn't be any
problem here. Make the unstable `LazyCell::into_inner` const under the
same gate as its stability (`lazy_cell_into_inner`).
Tracking issue: https://github.com/rust-lang/rust/issues/125623
Update precondition tests (especially for zero-size access to null)
I don't much like the current way I've updated the precondition check helpers, but I couldn't come up with anything better. Ideas welcome.
I've organized `tests/ui/precondition-checks` mostly with one file per function that has `assert_unsafe_precondition` in it, with revisions that check each precondition. The important new test is `tests/ui/precondition-checks/zero-size-null.rs`.
Stabilize `Pin::as_deref_mut()`
Tracking issue: closes #86918
Stabilizing the following API:
```rust
impl<Ptr: DerefMut> Pin<Ptr> {
pub fn as_deref_mut(self: Pin<&mut Pin<Ptr>>) -> Pin<&mut Ptr::Target>;
}
```
I know that an FCP has not been started yet, but this isn't a very complex stabilization, and I'm hoping this can motivate an FCP to *get* started - this has been pending for a while and it's a very useful function when writing Future impls.
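For illustration, a minimal usage sketch of the method once stabilized (not from the PR itself):
```rust
use std::pin::Pin;

fn main() {
    // A pinning pointer (Pin<Box<i32>>), then a pinned reference to it.
    let mut boxed: Pin<Box<i32>> = Box::pin(5);
    // as_deref_mut goes from Pin<&mut Pin<Box<i32>>> to Pin<&mut i32>.
    let inner: Pin<&mut i32> = Pin::new(&mut boxed).as_deref_mut();
    assert_eq!(*inner, 5);
}
```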
r? ``@jonhoo``
- Using EFI Shell Protocol. These functions do not make much sense
unless a shell is present.
- Return the exe dir in case shell protocol is missing.
Signed-off-by: Ayush Singh <ayush@beagleboard.org>
merge const_ipv4 / const_ipv6 feature gate into 'ip' feature gate
https://github.com/rust-lang/rust/issues/76205 has been closed a while ago, but there are still some functions that reference it. Those functions are all unstable *and* const-unstable. There's no good reason to use a separate feature gate for their const-stability, so this PR moves their const-stability under the same gate as their regular stability, and therefore removes the remaining references to https://github.com/rust-lang/rust/issues/76205.
core/net: add Ipv[46]Addr::from_octets, Ipv6Addr::from_segments.
Adds:
- `Ipv4Addr::from_octets([u8;4])`
- `Ipv6Addr::from_octets([u8;16])`
- `Ipv6Addr::from_segments([u16;8])`
equivalent to the existing `From` impls.
Advantages:
- Consistent with `to_bits, from_bits`.
- More discoverable than the `From` impls.
- Helps with type inference: it's common to want to convert byte slices to IP addrs. If you try this
```rust
fn foo(x: &[u8]) -> Ipv4Addr {
Ipv4Addr::from(x.try_into().unwrap())
}
```
it [doesn't work](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=0e2873312de275a58fa6e33d1b213bec). You have to write `Ipv4Addr::from(<[u8;4]>::try_from(x).unwrap())` instead, which is not great. With `from_octets` it is able to infer the right types.
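For comparison, a sketch of the same function using the new constructor on nightly (the `ip_from` feature-gate name here is an assumption):
```rust
#![feature(ip_from)] // assumed gate name for the unstable from_octets constructors
use std::net::Ipv4Addr;

fn foo(x: &[u8]) -> Ipv4Addr {
    // from_octets fixes the argument type to [u8; 4], so try_into() can infer it.
    Ipv4Addr::from_octets(x.try_into().unwrap())
}

fn main() {
    assert_eq!(foo(&[127, 0, 0, 1]), Ipv4Addr::LOCALHOST);
}
```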
Found this while porting [smoltcp](https://github.com/smoltcp-rs/smoltcp/) from its own IP address types to the `core::net` types.
~~Tracking issues #27709 #76205~~
Tracking issue: https://github.com/rust-lang/rust/issues/131360
std::fs::get_path FreeBSD update.
What matters is that we are doing the right thing by using sizeof, rather than passing KINFO_FILE_SIZE (which is only defined on Intel architectures); the kernel then makes sure the size matches its expectation on its side.
Rollup of 8 pull requests
Successful merges:
- #130356 (don't warn about a missing change-id in CI)
- #130900 (Do not output () on empty description)
- #131066 (Add the Chinese translation entry to the RustByExample build process)
- #131067 (Fix std_detect links)
- #131644 (Clean up some Miri things in `sys/windows`)
- #131646 (sys/unix: add comments for some Miri fallbacks)
- #131653 (Remove const trait bound modifier hack)
- #131659 (enable `download_ci_llvm` test)
r? `@ghost`
`@rustbot` modify labels: rollup
Clean up some Miri things in `sys/windows`
- remove miri hack that is only needed for win7 (we don't support win7 as a target in Miri)
- remove outdated comment now that Miri is on CI
The allocator on Xous is now throwing warnings because the allocator
needs to be mutable, and allocators hand out mutable pointers, which
the `static_mut_refs` lint now catches.
Give the same treatment to Xous as wasm, at least until a solution is
devised for fixing the warning on wasm.
Signed-off-by: Sean Cross <sean@xobs.io>
Process arguments and environment variables are both passed by way of Application Parameters. These use a TLV (type-length-value) format that gets passed in as the second process argument.
This patch combines both, as their decoding is very similar.
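For readers unfamiliar with TLV encodings, here is a rough illustration of the decoding idea; the record layout (little-endian u32 tag and length) is hypothetical and not the actual Xous format:
```rust
// Hypothetical TLV walk: (u32 tag, u32 len, payload) records, little-endian.
fn decode_tlv(mut data: &[u8]) -> Vec<(u32, &[u8])> {
    let mut records = Vec::new();
    while data.len() >= 8 {
        let tag = u32::from_le_bytes(data[0..4].try_into().unwrap());
        let len = u32::from_le_bytes(data[4..8].try_into().unwrap()) as usize;
        if data.len() < 8 + len {
            break; // malformed record; stop decoding
        }
        records.push((tag, &data[8..8 + len]));
        data = &data[8 + len..];
    }
    records
}
```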
Signed-off-by: Sean Cross <sean@osdyne.com>
Optimize `escape_ascii` using a lookup table
Based upon my suggestion here: https://github.com/rust-lang/rust/pull/125340#issuecomment-2130441817
Effectively, we can take advantage of the fact that ASCII only needs 7 bits to make the eighth bit store whether the value should be escaped or not. This adds a 256-byte lookup table, but 256 bytes *should* be small enough that very few people will mind, according to my probably not incontrovertible opinion.
The generated assembly isn't clearly better (although it has fewer branches), so I decided to benchmark on three inputs: first on 200KiB of random data, then on `/bin/cat`, then on `Cargo.toml` for this repo. In all cases, the generated code ran faster on my machine (an old i7-8700).
But, if you want to try my benchmarking code for yourself:
<details><summary>Criterion code below. Replace <code>/home/ltdk/rustsrc</code> with the appropriate directory.</summary>
```rust
#![feature(ascii_char)]
#![feature(ascii_char_variants)]
#![feature(const_option)]
#![feature(let_chains)]
use core::ascii;
use core::ops::Range;
use criterion::{criterion_group, criterion_main, Criterion};
use rand::{thread_rng, Rng};
const HEX_DIGITS: [ascii::Char; 16] = *b"0123456789abcdef".as_ascii().unwrap();
#[inline]
const fn backslash<const N: usize>(a: ascii::Char) -> ([ascii::Char; N], Range<u8>) {
const { assert!(N >= 2) };
let mut output = [ascii::Char::Null; N];
output[0] = ascii::Char::ReverseSolidus;
output[1] = a;
(output, 0..2)
}
#[inline]
const fn hex_escape<const N: usize>(byte: u8) -> ([ascii::Char; N], Range<u8>) {
const { assert!(N >= 4) };
let mut output = [ascii::Char::Null; N];
let hi = HEX_DIGITS[(byte >> 4) as usize];
let lo = HEX_DIGITS[(byte & 0xf) as usize];
output[0] = ascii::Char::ReverseSolidus;
output[1] = ascii::Char::SmallX;
output[2] = hi;
output[3] = lo;
(output, 0..4)
}
#[inline]
const fn verbatim<const N: usize>(a: ascii::Char) -> ([ascii::Char; N], Range<u8>) {
const { assert!(N >= 1) };
let mut output = [ascii::Char::Null; N];
output[0] = a;
(output, 0..1)
}
/// Escapes an ASCII character.
///
/// Returns a buffer and the length of the escaped representation.
const fn escape_ascii_old<const N: usize>(byte: u8) -> ([ascii::Char; N], Range<u8>) {
const { assert!(N >= 4) };
match byte {
b'\t' => backslash(ascii::Char::SmallT),
b'\r' => backslash(ascii::Char::SmallR),
b'\n' => backslash(ascii::Char::SmallN),
b'\\' => backslash(ascii::Char::ReverseSolidus),
b'\'' => backslash(ascii::Char::Apostrophe),
b'\"' => backslash(ascii::Char::QuotationMark),
0x00..=0x1F => hex_escape(byte),
_ => match ascii::Char::from_u8(byte) {
Some(a) => verbatim(a),
None => hex_escape(byte),
},
}
}
/// Escapes an ASCII character.
///
/// Returns a buffer and the length of the escaped representation.
const fn escape_ascii_new<const N: usize>(byte: u8) -> ([ascii::Char; N], Range<u8>) {
/// Lookup table helps us determine how to display character.
///
/// Since ASCII characters will always be 7 bits, we can exploit this to store the 8th bit to
/// indicate whether the result is escaped or unescaped.
///
/// We additionally use 0x80 (escaped NUL character) to indicate hex-escaped bytes, since
/// escaped NUL will not occur.
const LOOKUP: [u8; 256] = {
let mut arr = [0; 256];
let mut idx = 0;
loop {
arr[idx as usize] = match idx {
// use 8th bit to indicate escaped
b'\t' => 0x80 | b't',
b'\r' => 0x80 | b'r',
b'\n' => 0x80 | b'n',
b'\\' => 0x80 | b'\\',
b'\'' => 0x80 | b'\'',
b'"' => 0x80 | b'"',
// use NUL to indicate hex-escaped
0x00..=0x1F | 0x7F..=0xFF => 0x80 | b'\0',
_ => idx,
};
if idx == 255 {
break;
}
idx += 1;
}
arr
};
let lookup = LOOKUP[byte as usize];
// 8th bit indicates escape
let lookup_escaped = lookup & 0x80 != 0;
// SAFETY: We explicitly mask out the eighth bit to get a 7-bit ASCII character.
let lookup_ascii = unsafe { ascii::Char::from_u8_unchecked(lookup & 0x7F) };
if lookup_escaped {
// NUL indicates hex-escaped
if matches!(lookup_ascii, ascii::Char::Null) {
hex_escape(byte)
} else {
backslash(lookup_ascii)
}
} else {
verbatim(lookup_ascii)
}
}
fn escape_bytes(bytes: &[u8], f: impl Fn(u8) -> ([ascii::Char; 4], Range<u8>)) -> Vec<ascii::Char> {
let mut vec = Vec::new();
for b in bytes {
let (buf, range) = f(*b);
vec.extend_from_slice(&buf[range.start as usize..range.end as usize]);
}
vec
}
pub fn criterion_benchmark(c: &mut Criterion) {
let mut group = c.benchmark_group("escape_ascii");
group.sample_size(1000);
let rand_200k = &mut [0; 200 * 1024];
thread_rng().fill(&mut rand_200k[..]);
let cat = include_bytes!("/bin/cat");
let cargo_toml = include_bytes!("/home/ltdk/rustsrc/Cargo.toml");
group.bench_function("old_rand", |b| {
b.iter(|| escape_bytes(rand_200k, escape_ascii_old));
});
group.bench_function("new_rand", |b| {
b.iter(|| escape_bytes(rand_200k, escape_ascii_new));
});
group.bench_function("old_bin", |b| {
b.iter(|| escape_bytes(cat, escape_ascii_old));
});
group.bench_function("new_bin", |b| {
b.iter(|| escape_bytes(cat, escape_ascii_new));
});
group.bench_function("old_cargo_toml", |b| {
b.iter(|| escape_bytes(cargo_toml, escape_ascii_old));
});
group.bench_function("new_cargo_toml", |b| {
b.iter(|| escape_bytes(cargo_toml, escape_ascii_new));
});
group.finish();
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
```
</details>
My benchmark results:
```
escape_ascii/old_rand time: [1.6965 ms 1.7006 ms 1.7053 ms]
Found 22 outliers among 1000 measurements (2.20%)
4 (0.40%) high mild
18 (1.80%) high severe
escape_ascii/new_rand time: [1.6749 ms 1.6953 ms 1.7158 ms]
Found 38 outliers among 1000 measurements (3.80%)
38 (3.80%) high mild
escape_ascii/old_bin time: [224.59 µs 225.40 µs 226.33 µs]
Found 39 outliers among 1000 measurements (3.90%)
17 (1.70%) high mild
22 (2.20%) high severe
escape_ascii/new_bin time: [164.86 µs 165.63 µs 166.58 µs]
Found 107 outliers among 1000 measurements (10.70%)
43 (4.30%) high mild
64 (6.40%) high severe
escape_ascii/old_cargo_toml
time: [23.397 µs 23.699 µs 24.014 µs]
Found 204 outliers among 1000 measurements (20.40%)
21 (2.10%) high mild
183 (18.30%) high severe
escape_ascii/new_cargo_toml
time: [16.404 µs 16.438 µs 16.483 µs]
Found 88 outliers among 1000 measurements (8.80%)
56 (5.60%) high mild
32 (3.20%) high severe
```
Random: 1.7006ms => 1.6953ms (<1% speedup)
Binary: 225.40µs => 165.63µs (26% speedup)
Text: 23.699µs => 16.438µs (30% speedup)
The recent changes to naked `asm!()` macros made this unbuildable
on Xous. The upstream package maintainer released 0.2.3 to fix support
on newer nightly toolchains.
Update the dependency to 0.2.3, which is the oldest version that works
with the current nightly compiler.
This closes #131602 and fixes the build on Xous.
Signed-off-by: Sean Cross <sean@xobs.io>
Use throw intrinsic from stdarch in wasm libunwind
Tracking issue: #118168
This is a very belated followup to #121438; now that rust-lang/stdarch#1542 is merged, we can use the intrinsic exported from `core::arch` instead of defining it inline. I also cleaned up the cfgs a bit and added a more detailed comment.
remove const_cow_is_borrowed feature gate
The two functions guarded by this are still unstable, and there's no reason to require a separate feature gate for their const-ness -- we can just have `cow_is_borrowed` cover both kinds of stability.
Cc #65143
std: fix stdout-before-main
Fixes #130210.
Since #124881, `ReentrantLock` uses `ThreadId` to identify threads. This has the unfortunate consequence of breaking uses of `Stdout` before main: Locking the `ReentrantLock` that synchronizes the output will initialize the thread ID before the handle for the main thread is set in `rt::init`. But since that would overwrite the current thread ID, `thread::set_current` triggers an abort.
This PR fixes the problem by using the already initialized thread ID for constructing the main thread handle and allowing `set_current` calls that do not change the thread's ID.
Stabilize const `ptr::write*` and `mem::replace`
Since `const_mut_refs` and `const_refs_to_cell` have been stabilized, we may now also stabilize the ability to write to places during const evaluation inside our library API. So, we now propose the `const fn` version of `ptr::write` and its variants. This allows us to also stabilize `mem::replace` and `ptr::replace`.
- const `mem::replace`: https://github.com/rust-lang/rust/issues/83164#issuecomment-2338660862
- const `ptr::write{,_bytes,_unaligned}`: https://github.com/rust-lang/rust/issues/86302#issuecomment-2330275266
Their implementation requires an additional internal stabilization of `const_intrinsic_forget`, which is required for `*::write*` and thus `*::replace`. Thus we const-stabilize the internal intrinsics `forget`, `write_bytes`, and `write_via_move`.
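For illustration, a small sketch of what this stabilization permits in user code (not from the PR itself):
```rust
// A const fn that writes through a mutable reference, now allowed on stable.
const fn take_and_reset(slot: &mut u32) -> u32 {
    core::mem::replace(slot, 0)
}

const VALUE: u32 = {
    let mut x = 7;
    take_and_reset(&mut x)
};

fn main() {
    assert_eq!(VALUE, 7);
}
```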
Migrate lib's `&Option<T>` into `Option<&T>`
Trying out my new lint https://github.com/rust-lang/rust-clippy/pull/13336 - according to the [video](https://www.youtube.com/watch?v=6c7pZYP_iIE), this could lead to some performance and memory optimizations.
Basic thoughts expressed in the video that seem to make sense:
* `&Option<T>` in an API breaks encapsulation:
  * the caller must own `T` and move it into an `Option` to call with it
  * if returned, the owner must store it as `Option<T>` internally in order to return it
* Performance is subject to compiler optimization, but at a basic level `&Option<T>` points to memory that holds a presence flag plus the value, whereas `Option<&T>` is by specification always optimized to a single pointer.
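To make the API-shape difference concrete, a small sketch (not from the PR):
```rust
// With Option<&str>, callers can pass a borrow whether or not they own an Option.
fn print_name(name: Option<&str>) {
    if let Some(n) = name {
        println!("{n}");
    }
}

fn main() {
    let owned: Option<String> = Some("ferris".to_string());
    print_name(owned.as_deref()); // from an owned Option<String>

    let s = String::from("crab");
    print_name(Some(&s)); // from a plain borrowed value, no Option<String> needed
}
```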
intrinsics fmuladdf{32,64}: expose llvm.fmuladd.* semantics
Add intrinsics `fmuladd{f32,f64}`. This computes `(a * b) + c`, to be fused if the code generator determines that (i) the target instruction set has support for a fused operation, and (ii) that the fused operation is more efficient than the equivalent, separate pair of `mul` and `add` instructions.
https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic
The codegen_cranelift backend uses the `fma` function from libc, which is a correct implementation, but without the desired performance semantics. I think this requires an update to cranelift to expose a suitable instruction in its IR.
I have not tested with codegen_gcc, but it should behave the same way (using `fma` from libc).
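As a conceptual sketch of the semantics (this is a model, not the intrinsic's actual implementation or path):
```rust
// fmuladd computes (a * b) + c; the backend may lower it fused or unfused.
fn fmuladd_model(a: f64, b: f64, c: f64) -> f64 {
    let separate = a * b + c;    // two operations, two roundings
    let fused = a.mul_add(b, c); // single fma, one rounding
    // A conforming lowering may return either of these results.
    if fused == separate { fused } else { separate }
}

fn main() {
    assert_eq!(fmuladd_model(2.0, 3.0, 1.0), 7.0);
}
```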
---
This topic has been discussed a few times on Zulip and was suggested, for example, by `@workingjubilee` in [Effect of fma disabled](https://rust-lang.zulipchat.com/#narrow/stream/122651-general/topic/Effect.20of.20fma.20disabled/near/274179331).
`IndexRange::len` is justified as an overall invariant, and
`take_prefix` and `take_suffix` are justified by local branch
conditions. A few more UB-checked calls remain in cases that are only
supported locally by `debug_assert!`, which won't do anything in
distributed builds, so those UB checks may still be useful.
We generally expect core's `#![rustc_preserve_ub_checks]` to optimize
away in user's release builds, but the mere presence of that extra code
can sometimes inhibit optimization, as seen in #131563.
Stabilise `const_char_encode_utf8`.
Closes: #130512
This PR stabilises the `const_char_encode_utf8` feature gate (i.e. support for `char::encode_utf8` in const scenarios).
Note that the linked tracking issue is currently awaiting FCP.
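For illustration, a small sketch of what becomes possible once this lands (not from the PR itself):
```rust
// Encoding a char into a stack buffer in a const context.
const fn utf8_len(c: char) -> usize {
    let mut buf = [0u8; 4];
    c.encode_utf8(&mut buf).len()
}

const SNOWMAN_LEN: usize = utf8_len('☃');

fn main() {
    assert_eq!(SNOWMAN_LEN, 3);
}
```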
Port sort-research-rs test suite to Rust stdlib tests
This PR is a followup to https://github.com/rust-lang/rust/pull/124032. It replaces the tests that test the various sort functions in the standard library with a test-suite developed as part of https://github.com/Voultapher/sort-research-rs. The current tests suffer a couple of problems:
- They don't cover important real world patterns that the implementations take advantage of and execute special code for.
- The input lengths tested miss out on code paths. For example, important safety property tests never reach the quicksort part of the implementation.
- The miri side is often limited to `len <= 20` which means it very thoroughly tests the insertion sort, which accounts for 19 out of 1.5k LoC.
- They are split between core and alloc, causing code duplication and uneven coverage.
- ~~The randomness is tied to a caller location, wasting the space exploration capabilities of randomized testing.~~ The randomness is not repeatable, as it relies on `std::hash::RandomState::new().build_hasher()`.
Most of these issues existed before https://github.com/rust-lang/rust/pull/124032, but they are intensified by it. One thing that is new and requires additional testing, is that the new sort implementations specialize based on type properties. For example `Freeze` and non `Freeze` execute different code paths.
Effectively there are three dimensions that matter:
- Input type
- Input length
- Input pattern
The ported test-suite tests various properties along all three dimensions, greatly improving test coverage. It side-steps the miri issue by preferring sampled approaches. For example the test that checks if after a panic the set of elements is still the original one, doesn't do so for every single possible panic opportunity but rather it picks one at random, and performs this test across a range of input length, which varies the panic point across them. This allows regular execution to easily test inputs of length 10k, and miri execution up to 100 which covers significantly more code. The randomness used is tied to a fixed - but random per process execution - seed. This allows for fully repeatable tests and fuzzer like exploration across multiple runs.
Structure wise, the tests are previously found in the core integration tests for `sort_unstable` and alloc unit tests for `sort`. The new test-suite was developed to be a purely black-box approach, which makes integration testing the better place, because it can't accidentally rely on internal access. Because unwinding support is required the tests can't be in core, even if the implementation is, so they are now part of the alloc integration tests. Are there architectures that can only build and test core and not alloc? If so, do such platforms require sort testing? For what it's worth the current implementation state passes miri `--target mips64-unknown-linux-gnuabi64` which is big endian.
The test-suite also contains tests for properties that were and are given by the current and previous implementations, and likely relied upon by users but weren't tested. For example `self_cmp` tests that the two parameters `a` and `b` passed into the comparison function are never references to the same object, which if the user is sorting for example a `&mut [Mutex<i32>]` could lead to a deadlock.
Instead of using the hashed caller location as rand seed, it uses seconds since unix epoch / 10, which given timestamps in the CI should be reasonably easy to reproduce, but also allows fuzzer like space exploration.
---
Test run-time changes:
Setup:
```
Linux 6.10
rustc 1.83.0-nightly (f79a912d9 2024-09-18)
AMD Ryzen 9 5900X 12-Core Processor (Zen 3 micro-architecture)
CPU boost enabled.
```
master: e9df22f
Before core integration tests:
```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/coretests-219cbd0308a49e2f
Time (mean ± σ): 869.6 ms ± 21.1 ms [User: 1327.6 ms, System: 95.1 ms]
Range (min … max): 845.4 ms … 917.0 ms 10 runs
# MIRIFLAGS="-Zmiri-disable-isolation" to get real time
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/core
finished in 738.44s
```
After core integration tests:
```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/coretests-219cbd0308a49e2f
Time (mean ± σ): 865.1 ms ± 14.7 ms [User: 1283.5 ms, System: 88.4 ms]
Range (min … max): 836.2 ms … 885.7 ms 10 runs
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/core
finished in 752.35s
```
Before alloc unit tests:
```
LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloc-19c15e6e8565aa54
Time (mean ± σ): 295.0 ms ± 9.9 ms [User: 719.6 ms, System: 35.3 ms]
Range (min … max): 284.9 ms … 319.3 ms 10 runs
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 322.75s
```
After alloc unit tests:
```
LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloc-19c15e6e8565aa54
Time (mean ± σ): 97.4 ms ± 4.1 ms [User: 297.7 ms, System: 28.6 ms]
Range (min … max): 92.3 ms … 109.2 ms 27 runs
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 309.18s
```
Before alloc integration tests:
```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloctests-439e7300c61a8046
Time (mean ± σ): 103.2 ms ± 1.7 ms [User: 135.7 ms, System: 39.4 ms]
Range (min … max): 99.7 ms … 107.3 ms 28 runs
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 231.35s
```
After alloc integration tests:
```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloctests-439e7300c61a8046
Time (mean ± σ): 379.8 ms ± 4.7 ms [User: 4620.5 ms, System: 1157.2 ms]
Range (min … max): 373.6 ms … 386.9 ms 10 runs
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 449.24s
```
In my opinion the results don't change iterative library development or CI execution in meaningful ways. For example currently the library doc-tests take ~66s and incremental compilation takes 10+ seconds. However I only have limited knowledge of the various local development workflows that exist, and might be missing one that is significantly impacted by this change.
Add intrinsics `fmuladd{f16,f32,f64,f128}`. This computes `(a * b) +
c`, to be fused if the code generator determines that (i) the target
instruction set has support for a fused operation, and (ii) that the
fused operation is more efficient than the equivalent, separate pair
of `mul` and `add` instructions.
https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic
Miri support is included for f32 and f64.
The codegen_cranelift backend uses the `fma` function from libc, which is a correct implementation, but without the desired performance semantics. I think this requires an update to cranelift to expose a suitable instruction in its IR.
I have not tested with codegen_gcc, but it should behave the same
way (using `fma` from libc).
rustc_target: Add sme-b16b16 as an explicit aarch64 target feature
LLVM 20 split what used to be called b16b16, which corresponds to the AArch64 FEAT_SVE_B16B16 extension, into sve-b16b16 and sme-b16b16.
Add sme-b16b16 as an explicit feature and update the codegen accordingly.
Resolves https://github.com/rust-lang/rust/pull/129894.
Stabilize const `{slice,array}::from_mut`
This PR stabilizes the following APIs as const stable as of rust `1.83`:
```rs
// core::array
pub const fn from_mut<T>(s: &mut T) -> &mut [T; 1];
// core::slice
pub const fn from_mut<T>(s: &mut T) -> &mut [T];
```
This is made possible by `const_mut_refs` being stabilized (yay).
Tracking issue: #90206
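For illustration, a small sketch of what the const-stabilization allows (not from the PR):
```rust
// Both functions are callable from const fn once this lands in 1.83.
const fn singleton_slice(x: &mut u8) -> &mut [u8] {
    core::slice::from_mut(x)
}

const fn singleton_array(x: &mut u8) -> &mut [u8; 1] {
    core::array::from_mut(x)
}

fn main() {
    let mut x = 7u8;
    singleton_slice(&mut x)[0] = 9;
    assert_eq!(x, 9);
}
```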
The `Box<T: Default>` impl currently calls `T::default()` before allocating
the `Box`.
Most `Default` impls are trivial, which should in theory allow
LLVM to construct `T: Default` directly in the `Box` allocation when calling
`<Box<T>>::default()`.
However, the allocation may fail, which necessitates calling `T`'s destructor if it has one.
If the destructor is non-trivial, then LLVM has a hard time proving that it's
sound to elide, which makes it construct `T` on the stack first, and then copy it into the allocation.
Create an uninit `Box` first, and then write `T::default` into it, so that LLVM now only needs to prove
that the `T::default` can't panic, which should be trivial for most `Default` impls.
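A minimal sketch of the allocate-then-write idea described above; the actual standard-library change differs in detail (e.g. in how allocation and unwinding are handled):
```rust
// Allocate first, then construct T directly in the allocation, so no
// stack-to-heap copy of the default value is needed.
fn default_boxed<T: Default>() -> Box<T> {
    let mut boxed: Box<std::mem::MaybeUninit<T>> = Box::new_uninit();
    boxed.write(T::default());
    // SAFETY: the value was initialized by the write above.
    unsafe { boxed.assume_init() }
}

fn main() {
    let v: Box<Vec<u8>> = default_boxed();
    assert!(v.is_empty());
}
```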
Decouple WASIp2 sockets from WasiFd
This is a follow up to #129638, decoupling WASIp2's socket implementation from WASIp1's `WasiFd` as discussed with `@alexcrichton.`
Quite a few trait implementations in `std::os::fd` rely on the fact that there is an additional layer of abstraction between `Socket` and `OwnedFd`. I thus had to add a thin `WasiSocket` wrapper struct that just "forwards" to `OwnedFd`. Alternatively, I could have added a lot of conditional compilation to `std::os::fd`, which feels even worse.
Since `WasiFd::sock_accept` is no longer accessible from `TcpListener` and since WASIp2 has proper support for accepting sockets through `Socket::accept`, the `std::os::wasi::net` module has been removed from WASIp2, which only contains a single `TcpListenerExt` trait with a `sock_accept` method as well as an implementation for `TcpListener`. Let me know if this is an acceptable solution.
liballoc: introduce String, Vec const-slicing
This change `const`-qualifies many methods on `Vec` and `String`, notably `as_slice`, `as_str`, `len`. These changes are made behind the unstable feature flag `const_vec_string_slice`.
## Motivation
This is to support simultaneous variance over ownership and constness. I have an enum type that may contain either `String` or `&str`, and I want to produce a `&str` from it in a possibly-`const` context.
```rust
enum StrOrString<'s> {
Str(&'s str),
String(String),
}
impl<'s> StrOrString<'s> {
const fn as_str(&self) -> &str {
match self {
// In a const-context, I really only expect to see this variant, but I can't switch the implementation
// in some mode like #[cfg(const)] -- there has to be a single body
Self::Str(s) => s,
// so this is a problem, since it's not `const`
Self::String(s) => s.as_str(),
}
}
}
```
Currently `String` and `Vec` don't support this, but can without functional changes. Similar logic applies for `len`, `capacity`, `is_empty`.
## Changes
The essential thing enabling this change is that `Unique::as_ptr` is `const`. This lets us convert `RawVec::ptr` -> `Vec::as_ptr` -> `Vec::as_slice` -> `String::as_str`.
I had to move the `Deref` implementations into `as_{str,slice}` because `Deref` isn't `#[const_trait]`, but I would expect this change to be invisible up to inlining. I moved the `DerefMut` implementations as well for uniformity.
This change `const`-qualifies many methods on Vec and String, notably
`as_slice`, `as_str`, `len`. These changes are made behind the unstable
feature flag `const_vec_string_slice` with the following tracking issue:
https://github.com/rust-lang/rust/issues/129041
Android: Debug assertion after setting thread name
While `prctl` cannot fail if it points to a valid buffer, it's still better to assert on the result, as is done in other places.
make Cell unstably const
Now that we can do interior mutability in `const`, most of the Cell API can be `const fn`. :) The main exception is `set`, because it drops the old value. So from const context one has to use `replace`, which delegates the responsibility for dropping to the caller.
Tracking issue: https://github.com/rust-lang/rust/issues/131283
`as_array_of_cells` is itself still unstable, so I added the const-ness to the feature gate for that function and not to `const_cell`, Cc #88248.
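For illustration, a nightly sketch under the `const_cell` gate, assuming `get` and `replace` are among the newly-const methods:
```rust
#![feature(const_cell)]
use std::cell::Cell;

// set() stays non-const because it would drop the old value;
// replace() hands the old value back to the caller instead.
const fn bump(counter: &Cell<u32>) -> u32 {
    counter.replace(counter.get() + 1)
}

fn main() {
    let c = Cell::new(1);
    assert_eq!(bump(&c), 1);
    assert_eq!(c.get(), 2);
}
```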
r? libs-api
std: replace `LazyBox` with `OnceBox`
This PR replaces the `LazyBox` wrapper used to allocate the pthread primitives with `OnceBox`, which has a more familiar API mirroring that of `OnceLock`. This cleans up the code in preparation for larger changes like #128184 (from which this PR was split) and allows some neat optimizations, like avoiding an acquire-load of the allocation pointer in `Mutex::unlock`, where the initialization of the allocation must have already been observed.
Additionally, I've gotten rid of the TEEOS `Condvar` code, it's just a duplicate of the pthread one anyway and I didn't want to repeat myself.
Stabilize the `map`/`value` methods on `ControlFlow`
And fix the stability attribute on the `pub use` in `core::ops`.
libs-api in https://github.com/rust-lang/rust/issues/75744#issuecomment-2231214910 seemed reasonably happy with naming for these, so let's try for an FCP.
Summary:
```rust
impl<B, C> ControlFlow<B, C> {
pub fn break_value(self) -> Option<B>;
pub fn map_break<T>(self, f: impl FnOnce(B) -> T) -> ControlFlow<T, C>;
pub fn continue_value(self) -> Option<C>;
pub fn map_continue<T>(self, f: impl FnOnce(C) -> T) -> ControlFlow<B, T>;
}
```
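For illustration, a short usage sketch of these methods (not from the PR itself):
```rust
use std::ops::ControlFlow;

fn main() {
    let keep_going: ControlFlow<&str, u32> = ControlFlow::Continue(3);
    assert_eq!(keep_going.continue_value(), Some(3));

    // map_break transforms only the Break variant.
    let stopped = ControlFlow::<&str, u32>::Break("stop").map_break(str::len);
    assert_eq!(stopped.break_value(), Some(4));
}
```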
Resolves #75744
``@rustbot`` label +needs-fcp +t-libs-api -t-libs
---
Aside, in case it keeps someone else from going down the same dead end: I looked at the `{break,continue}_value` methods and tried to make them `const` as part of this, but that's disallowed because of not having `const Drop`, so I put it back to not even being unstably const.
Avoid emptiness check in `PeekMut::pop`
This PR avoids an unnecessary emptiness check in `PeekMut::pop` by replacing `Option::unwrap` with `Option::unwrap_unchecked`.
Add `get_line` confusable to `Stdin::read_line()`
This pull request resolves https://github.com/rust-lang/rust/issues/131091
---
I've updated tests for `tests/ui/attributes/rustc_confusables_std_cases` in order to verify this change is working as intended.
Before I submitted this pull request, I had a pull request to my local fork. If you're interested in seeing the conversation on that PR, go to https://github.com/JakenHerman/rust/pull/1.
---
**Testing**:
Run `./x.py test tests/ui/attributes/rustc_confusables_std_cases.rs`
impl `Default` for `HashMap`/`HashSet` iterators that don't already have it
This is a follow-up to #128261 that isn't included in that PR because it depends on:
* [x] rust-lang/hashbrown#542 (`Default`)
* [x] `hashbrown` release containing above
It also wasn't included in #128261 initially and should have its own FCP, since these are also insta-stable.
Changes added:
* `Default for hash_map::{Iter, IterMut, IntoIter, IntoKeys, IntoValues, Keys, Values, ValuesMut}`
* `Default for hash_set::{Iter, IntoIter}`
Changes that were added before FCP, but are being deferred to later:
* `Clone for hash_map::{IntoIter, IntoKeys, IntoValues} where K: Clone, V: Clone`
* `Clone for hash_set::IntoIter where K: Clone`
std: make `thread::current` available in all `thread_local!` destructors
... and thereby allow the panic runtime to always print the right thread name.
This works by modifying the TLS destructor system to schedule a runtime cleanup function after all other TLS destructors registered by `std` have run. Unfortunately, this doesn't affect foreign TLS destructors, `thread::current` will still panic there.
Additionally, the thread ID returned by `current_id` will now always be available, even inside the global allocator, and will not change during the lifetime of one thread (this was previously the case with key-based TLS).
The mechanisms I added for this (`local_pointer` and `thread_cleanup`) will also allow finally fixing #111272 by moving the signal stack to a similar runtime-cleanup TLS variable.
Update hashbrown to 0.15 and adjust some methods
This PR updates `hashbrown` to 0.15 in the standard library and adjusts some methods, as well as removing some that no longer exist in hashbrown itself.
- `HashMap::get_many_mut` changed its API to return an array of `Option`s (see the sketch after this list)
- `HashMap::{replace_entry, replace_key}` are removed, FCP close [already finished](https://github.com/rust-lang/rust/issues/44286#issuecomment-2293825619)
- `HashSet::get_or_insert_owned` is removed as it no longer exists in hashbrown
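A rough sketch of the adjusted shape on nightly; the `map_many_mut` feature-gate name is an assumption here:
```rust
#![feature(map_many_mut)] // assumed gate name for the unstable get_many_mut
use std::collections::HashMap;

fn main() {
    let mut map = HashMap::from([("a", 1), ("b", 2)]);
    // Each requested key now yields its own Option slot; a missing key no
    // longer fails the whole call.
    let [a, z] = map.get_many_mut(["a", "z"]);
    assert!(a.is_some());
    assert!(z.is_none());
}
```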
Closes https://github.com/rust-lang/rust/issues/44286
r? `@Amanieu`
mpmc doctest: make sure main thread waits for child threads
Currently, chances are half the test is not executed because the main thread exits while the other threads are still working.
Cc `@obeis` `@Amanieu`
Add `[Option<T>; N]::transpose`
This PR adds a new unstable libs API, `[Option<T>; N]::transpose`, which permits going from `[Option<T>; N]` to `Option<[T; N]>`.
This new API doesn't have an ACP, as it was directly requested by T-libs-api in https://github.com/rust-lang/rust/issues/97601#issuecomment-2372109119:
> [..] but it'd be trivial to provide a helper method `.transpose()` that turns array-of-Option into Option-of-array (**and we think that method should exist**; it already does for array-of-MaybeUninit).
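For illustration, a nightly usage sketch (the `option_array_transpose` feature-gate name is an assumption):
```rust
#![feature(option_array_transpose)] // assumed gate name

fn main() {
    let all: [Option<u8>; 3] = [Some(1), Some(2), Some(3)];
    assert_eq!(all.transpose(), Some([1, 2, 3]));

    let missing: [Option<u8>; 3] = [Some(1), None, Some(3)];
    assert_eq!(missing.transpose(), None);
}
```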
r? libs
Update Unicode escapes in `/library/core/src/char/methods.rs`
`char::MAX` is inconsistent on how Unicode escapes should be formatted. This PR resolves that.
ptr::add/sub: do not claim equivalence with `offset(c as isize)`
In https://github.com/rust-lang/rust/pull/110837, the `offset` intrinsic got changed to also allow a `usize` offset parameter. The intention is that this will do an unsigned multiplication with the size, and we have UB if that overflows -- and we also have UB if the result is larger than `usize::MAX`, i.e., if a subsequent cast to `isize` would wrap. ~~The LLVM backend sets some attributes accordingly.~~
This updates the docs for `add`/`sub` to match that intent, in preparation for adjusting codegen to exploit this UB. We use this opportunity to clarify what the exact requirements are: we compute the offset using mathematical multiplication (so it's no problem to have an `isize * usize` multiplication, we just multiply integers), and the result must fit in an `isize`.
Cc `@rust-lang/opsem` `@nikic`
https://github.com/rust-lang/rust/pull/130239 updates Miri to detect this UB.
`sub` still has some cases of UB not reflected in the underlying intrinsic semantics (and Miri does not catch): when we subtract `usize::MAX`, then after casting to `isize` that's just `-1` so we end up adding one unit without noticing any UB, but actually the offset we gave does not fit in an `isize`. Miri will currently still not complain for such cases:
```rust
fn main() {
let x = &[0i32; 2];
let x = x.as_ptr();
// This should be UB, we are subtracting way too much.
unsafe { x.sub(usize::MAX).read() };
}
```
However, the LLVM IR we generate here also is UB-free. This is "just" library UB but not language UB.
Cc `@saethlin;` might be worth adding precondition checks against overflow on `offset`/`add`/`sub`?
Fixes https://github.com/rust-lang/rust/issues/130211
Rollup of 5 pull requests
Successful merges:
- #130630 (Support clobber_abi and vector/access registers (clobber-only) in s390x inline assembly)
- #131042 (Instantiate binders in `supertrait_vtable_slot`)
- #131079 (Update wasm-component-ld to 0.5.9)
- #131085 (make test_lots_of_insertions test take less long in Miri)
- #131088 (add fixme to remove LLVM_ENABLE_TERMINFO when minimal llvm version is 19)
r? `@ghost`
`@rustbot` modify labels: rollup
make test_lots_of_insertions test take less long in Miri
This is by far the slowest `std` test in Miri, taking >2min in https://github.com/rust-lang/miri-test-libstd CI. So let's make this `count` smaller. The runtime should be quadratic in `count`, so reducing it to around 2/3 of its previous value should cut the total time down to less than half -- making it still the slowest test, but by less of a margin. (And this way we still insert >64 elements into the HashMap, in case that power of 2 matters.)
There is a MinGW ABI bug that prevents `f16` and `f128` from being
usable on `windows-gnu` targets. This does not affect MSVC; however, we
have `f16` and `f128` tests disabled on all Windows targets.
Update the gating to only affect `windows-gnu`, which means `f16` tests
will be enabled. There is no effect for `f128` since the default
fallback is `false`.
make ptr metadata functions callable from stable const fn
So far this was done with a bunch of `rustc_allow_const_fn_unstable`. But those should be the exception, not the norm. If we are confident we can expose the ptr metadata APIs *indirectly* in stable const fn, we should just mark them as `rustc_const_stable`. And we better be confident we can do that since it's already been done a while ago. ;)
In particular this marks two intrinsics as const-stable: `aggregate_raw_ptr`, `ptr_metadata`. This should be uncontroversial, they are trivial to implement in the interpreter.
Cc `@rust-lang/wg-const-eval` `@rust-lang/lang`
Enable `f16` tests on x86 Apple platforms
These were disabled because Apple uses a special ABI for `f16`. `compiler-builtins` merged a fix for this in [1], which has since propagated to rust-lang/rust. Enable tests since there should be no remaining issues on these platforms.
[1]: https://github.com/rust-lang/compiler-builtins/pull/675
try-job: x86_64-apple-1
try-job: x86_64-apple-2
Mark some more types as having insignificant dtor
These were caught by https://github.com/rust-lang/rust/pull/129864#issuecomment-2376658407, which is implementing a lint for some changes in drop order for temporaries in tail expressions.
Specifically, the destructors of `CString` and the bitpacked repr for `std::io::Error` are insignificant insofar as they don't have side-effects on things like locking or synchronization; they just free memory.
See some discussion on #89144 for what makes a drop impl "significant"
Hook up std::net to wasi-libc on wasm32-wasip2 target
One of the improvements of the `wasm32-wasip2` target over `wasm32-wasip1` is better support for networking. Right now, p2 is just re-using the `std::net` implementation from p1. This PR adds a new net module for p2 that makes use of net from `sys_common` and calls wasi-libc functions directly.
There are currently a few limitations:
- Duplicating a socket is not supported by WASIp2 (directly returns an error)
- Peeking is not yet implemented in wasi-libc (we could let wasi-libc handle this, but I opted to directly return an error instead)
- Vectored reads/writes are not supported by WASIp2 (the necessary functions are available in wasi-libc, but they call WASIp1 functions which do not support sockets, so I opted to directly return an error instead)
- Getting/setting `TCP_NODELAY` is faked in wasi-libc (uses the fake implementation instead of returning an error)
- Getting/setting `SO_LINGER` is not supported by WASIp2 (directly returns an error)
- Setting `SO_REUSEADDR` is faked in wasi-libc (since this is done from `sys_common`, the fake implementation is used instead of returning an error)
- Getting/setting `IPV6_V6ONLY` is not supported by WASIp2 and will always be set for IPv6 sockets (since this is done from `sys_common`, wasi-libc will return an error)
- UDP broadcast/multicast is not supported by WASIp2 (since this is configured from `sys_common`, wasi-libc will return appropriate errors)
- The `MSG_NOSIGNAL` send flag is a no-op because there are no signals in WASIp2 (since explicitly setting this flag would require a change to `sys_common` and the result would be exactly the same, I opted to not set it)
Do those decisions make sense?
While working on this PR, I noticed that there is a `std::os::wasi::net::TcpListenerExt` trait that adds a `sock_accept()` method to `std::net::TcpListener`. Now that WASIp2 supports standard accept, would it make sense to remove this?
cc `@alexcrichton`
stabilize const_cell_into_inner
This const-stabilizes
- `UnsafeCell::into_inner`
- `Cell::into_inner`
- `RefCell::into_inner`
- `OnceCell::into_inner`
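For illustration, a small sketch of what the const-stabilization allows (not from the PR):
```rust
use std::cell::{Cell, RefCell};

// Moving the wrapped value out of the cell in a const initializer.
const FROM_CELL: i32 = Cell::new(1).into_inner();
const FROM_REFCELL: i32 = RefCell::new(2).into_inner();

fn main() {
    assert_eq!(FROM_CELL + FROM_REFCELL, 3);
}
```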
`@rust-lang/wg-const-eval` this uses `rustc_allow_const_fn_unstable(const_precise_live_drops)`, so we'd be committing to always finding *some* way to accept this code. IMO that's fine -- what these functions do is to move out the only field of a struct, and that struct has no destructor itself. The field's destructor does not get run, as it gets returned to the caller.
`@rust-lang/libs-api` this was FCP'd already [years ago](https://github.com/rust-lang/rust/issues/78729#issuecomment-811409860), except that `OnceCell::into_inner` has been added to the same feature gate since then (Cc `@tgross35`). Does that mean we have to re-run the FCP? If yes, I'd honestly prefer to move `OnceCell` into its own feature gate to not risk missing the next release. (That's why it's not great to add new functions to an already FCP'd feature gate.) OTOH if this needs an FCP either way since the previous FCP was so long ago, then we might as well do it all at once.
Improve Ord docs
- Makes wording more clear and re-structures some sections that can be overwhelming for someone not already in the know.
- Adds examples of how *not* to implement Ord, inspired by various anti-patterns found in real world code.
Many of the wording changes are inspired directly by my personal experience of being confused by the `Ord` docs and seeing other people get it wrong as well, especially lately having looked at a number of `Ord` implementations as part of #128899.
Created with help by `@orlp.`
r? `@workingjubilee`
Clarifications for set_nonblocking methods
Closes #129903.
The issue mentions that `send`, `recv` and other operations are interpreted by some users as methods of `TcpSocket`, which led to confusion since it doesn't have them. To fix it, I added "system" to the documentation as the more precise term, for two reasons:
* it makes it clear that these names refer to system operations;
* it doesn't point to a specific location of these methods, such as `libc`, because not every system is POSIX compatible.
Update `catch_unwind` doc comments for `c_unwind`
Updates `catch_unwind` doc comments to indicate that catching a foreign exception _will no longer_ be UB. Instead, there are two possible behaviors, though it is not specified which one an implementation will choose.
Nominated for t-lang to confirm that they are okay with making such a promise based on t-opsem FCP, or whether they would like to be included in the FCP.
Related: https://github.com/rust-lang/rust/issues/74990, https://github.com/rust-lang/rust/issues/115285, https://github.com/rust-lang/reference/pull/1226
Improve autovectorization of to_lowercase / to_uppercase functions
Refactor the code in the `convert_while_ascii` helper function to make it more suitable for auto-vectorization and also process the full ASCII prefix of the string. The generic case-conversion logic will only be invoked starting from the first non-ASCII character.
The runtime on a microbenchmark with a small ASCII-only input decreases from ~55ns to ~18ns per iteration. The new implementation also reduces the amount of unsafe code and encapsulates all unsafe inside the helper function.
Fixes #123712
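As a rough sketch of the fast-path idea (this is not the actual `convert_while_ascii` helper, just the shape of the approach):
```rust
// Handle the leading ASCII run with a cheap byte-wise pass, then fall back to
// the generic char-based conversion for the remainder.
fn to_lowercase_fast(s: &str) -> String {
    let ascii_prefix = s.bytes().take_while(|b| b.is_ascii()).count();
    let (head, tail) = s.split_at(ascii_prefix);
    let mut out = head.to_ascii_lowercase(); // byte-wise, friendly to vectorization
    out.extend(tail.chars().flat_map(char::to_lowercase));
    out
}

fn main() {
    assert_eq!(to_lowercase_fast("Grüße"), "grüße");
}
```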
Enable `f16` tests on platforms that were missing conversion symbols
The only requirement for `f16` support, aside from LLVM not crashing and no ABI issues, is that symbols to convert to and from `f32` are available. Since the update to compiler-builtins in https://github.com/rust-lang/rust/pull/125016, we now provide these on all platforms.
This also enables `f16` math since there are no further requirements.
Still excluded are platforms for which LLVM emits infinitely-recursing code.
try-job: arm-android
try-job: test-various
try-job: x86_64-fuchsia
atomics: allow atomic and non-atomic reads to race
We currently define our atomics in terms of C++ `atomic_ref`. That has the unfortunate side-effect of making it UB for an atomic and a non-atomic read to race (concretely, [this code](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=d1a743774e60923db33def7fe314d754) has UB). There's really no good reason for this; all the academic models of the C++ memory model I am aware of allow this -- C++ just disallows it because of their insistence on an "object model" with typed memory, where `atomic_ref` temporarily creates an "atomic object" that may not be accessed via regular non-atomic operations.
So instead of tying our operations to `atomic_ref`, let us tie them directly to the underlying C++ memory model. I am not sure what is the best way to phrase this, so here's a first attempt.
We also carve out an exception from the "no mixed-size atomic accesses" rule to permit mixed-size atomic reads -- given that we permit mixed-size non-atomic reads, it seems odd that this would be disallowed for atomic reads. However, when an atomic write races with any other atomic operation, they must use the same size.
With this change, it is finally the case that every non-atomic access can be replaced by an atomic access without introducing UB.
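To make the change concrete, here is a sketch of the kind of program that this wording makes unambiguously defined: an atomic and a non-atomic read racing on the same location, with no writes involved.
```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::thread;

static X: AtomicU32 = AtomicU32::new(0);

fn main() {
    thread::scope(|s| {
        s.spawn(|| {
            // Non-atomic read of the same memory through a raw pointer.
            let p = X.as_ptr();
            let _v = unsafe { p.read() };
        });
        s.spawn(|| {
            // Atomic read racing with the non-atomic one above.
            let _v = X.load(Ordering::Relaxed);
        });
    });
}
```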
Cc `@rust-lang/opsem` `@chorman0773` `@m-ou-se` `@WaffleLapkin` `@Amanieu`
Fixes https://github.com/rust-lang/unsafe-code-guidelines/issues/483
Partially update `library/Cargo.lock`
Run `cargo update` in library but exclude updates to `cc` and to `compiler_builtins`. Exclusions were done because `cc` seems to have some issues updating [1], and `compiler_builtins` needs to be updated on its own.
Partially supersedes https://github.com/rust-lang/rust/pull/129538.
[1]: https://github.com/rust-lang/rust/pull/130720
try-job: x86_64-msvc
update `compiler-builtins` to 0.1.126
this requires the addition of a bootstrap variant of the new `naked_asm!` macro
r? `@tgross35`
extracted from https://github.com/rust-lang/rust/pull/128651
Revert Break into the debugger on panic (129019)
This was talked about a bit at a recent libs meeting. While I think experimenting with this is worthwhile, I am nervous about this new behaviour reaching stable. We've already reverted on one tier 1 platform (Linux, https://github.com/rust-lang/rust/pull/130810), which means we have differing semantics on different tier 1 platforms. Also, the fact that it triggers even when `catch_unwind` is used to catch the panic means it can be very noisy in some projects.
At the very least I think it could use some more discussion before being instantly stable. I think this could maybe be re-landed with an environment variable to control/override the behaviour. But that part would likely need a libs-api decision.
cc ````@workingjubilee```` ````@kromych````
[`cfg_match`] Generalize inputs
cc #115585
Changes the input type from `item` to `tt`, which gives the macro the same functionality as `cfg_if`.
Also adds a test to ensure that `stmt_expr_attributes` is not triggered.
Utf8Chunks: add link to Utf8Chunk
It is currently surprisingly non-trivial to go from the `utf8_chunks` method to the docs of the `valid`/`invalid` methods used in the example. This should help.
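For context, a tiny example of the `valid`/`invalid` pair the link is for, using the `utf8_chunks` method on byte slices:
```rust
fn main() {
    // Each chunk is the longest valid UTF-8 run followed by the invalid bytes after it.
    for chunk in b"foo\xFFbar".utf8_chunks() {
        println!("valid: {:?}, invalid: {:?}", chunk.valid(), chunk.invalid());
    }
}
```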
Run `cargo update` in library but exclude updates to `cc` and to
`compiler_builtins`. Exclusions were done because `cc` seems to have
some issues updating [1], and `compiler_builtins` needs to be updated on
its own.
[1]: https://github.com/rust-lang/rust/pull/130720
Since the stabilization in #127679 has reached stage0, 1.82-beta, we can
start using `&raw` freely, and even the soft-deprecated `ptr::addr_of!`
and `ptr::addr_of_mut!` can stop allowing the unstable feature.
I intentionally did not change any documentation or tests, but the rest
of those macro uses are all now using `&raw const` or `&raw mut` in the
standard library.
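A small sketch of the now-stable syntax in question:
```rust
fn main() {
    let mut value = 0u32;
    // Create raw pointers directly, without materializing a reference first.
    let q: *mut u32 = &raw mut value;
    unsafe { *q = 5 };
    let p: *const u32 = &raw const value;
    assert_eq!(unsafe { *p }, 5);
}
```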
fix some cfg logic around optimize_for_size and 16-bit targets
Fixes https://github.com/rust-lang/rust/issues/130818.
Fixes https://github.com/rust-lang/rust/issues/129910.
There are still some warnings when building on a 16bit target:
```
warning: struct `AlignedStorage` is never constructed
--> /home/r/src/rust/rustc.2/library/core/src/slice/sort/stable/mod.rs:135:8
|
135 | struct AlignedStorage<T, const N: usize> {
| ^^^^^^^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
warning: associated items `new` and `as_uninit_slice_mut` are never used
--> /home/r/src/rust/rustc.2/library/core/src/slice/sort/stable/mod.rs:141:8
|
140 | impl<T, const N: usize> AlignedStorage<T, N> {
| -------------------------------------------- associated items in this implementation
141 | fn new() -> Self {
| ^^^
...
145 | fn as_uninit_slice_mut(&mut self) -> &mut [MaybeUninit<T>] {
| ^^^^^^^^^^^^^^^^^^^
warning: function `quicksort` is never used
--> /home/r/src/rust/rustc.2/library/core/src/slice/sort/unstable/quicksort.rs:19:15
|
19 | pub(crate) fn quicksort<'a, T, F>(
| ^^^^^^^^^
warning: `core` (lib) generated 3 warnings
```
However, the cfg stuff here is sufficiently messy that I didn't want to touch more of it. I think all `feature = "optimize_for_size"` should become `any(feature = "optimize_for_size", target_pointer_width = "16")` but I am not entirely certain. Warnings are fine, Miri will just ignore them.
Cc `@Voultapher`
Add `must_use` attribute to `len_utf8` and `len_utf16`.
The `len_utf8` and `len_utf16` methods in `char` should have the `must_use` attribute.
The somewhat similar method `<[T]>::len` has had this attribute since #95274. Considering that these two methods would most likely be used to test the size of a buffer (before a call to `encode_utf8` or `encode_utf16`), *not* using their return values could indicate a bug.
According to ["When to add `#[must_use]`"](https://std-dev-guide.rust-lang.org/policy/must-use.html), this is **not** considered a breaking change (and could be reverted again at a later time).
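A short sketch of the buffer-sizing pattern described above; dropping the returned length on the floor would usually indicate a bug:
```rust
fn main() {
    let c = 'ß';
    // Ask how many bytes the encoding needs before writing it out.
    let needed = c.len_utf8();
    let mut buf = [0u8; 4];
    let encoded = c.encode_utf8(&mut buf);
    assert_eq!(encoded.len(), needed);
}
```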
Rollup of 6 pull requests
Successful merges:
- #130549 (Add RISC-V vxworks targets)
- #130595 (Initial std library support for NuttX)
- #130734 (Fix: ices on virtual-function-elimination about principal trait)
- #130787 (Ban combination of GCE and new solver)
- #130809 (Update llvm triple for OpenHarmony targets)
- #130810 (Don't trap into the debugger on panics under Linux)
r? `@ghost`
`@rustbot` modify labels: rollup
Initial std library support for NuttX
This PR adds the initial libstd support for the NuttX platform (Tier 3). It currently depends on https://github.com/rust-lang/libc/pull/3909, which provides the essential libc definitions.
Add `File` constructors that return files wrapped with a buffer
In addition to the light convenience, these are intended to raise visibility that buffering is something you should consider when opening a file, since unbuffered I/O is a common performance footgun to Rust newcomers.
ACP: https://github.com/rust-lang/libs-team/issues/446
Tracking Issue: #130804
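As a rough sketch, this is the manual wrapping the new constructors are meant to shorten; the names `File::open_buffered`/`File::create_buffered` are taken from the tracking issue and assumed here.
```rust
use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> std::io::Result<()> {
    // Hand-rolled today; the proposed constructor would collapse the two calls
    // into something like `File::open_buffered("Cargo.toml")?` (assumed name).
    let file = BufReader::new(File::open("Cargo.toml")?);
    for line in file.lines() {
        println!("{}", line?);
    }
    Ok(())
}
```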
Pin memchr to 2.5.0 in the library rather than rustc_ast
The latest versions of `memchr` experience LTO-related issues when compiling for windows-gnu [1], so it needs to be pinned. The issue is present in the standard library.
`memchr` has been pinned in `rustc_ast`, but since the workspace was recently split, this pin no longer has any effect on library crates.
Resolve this by adding `memchr` as an _unused_ dependency in `std`, pinned to 2.5. Additionally, remove the pin in `rustc_ast` to allow non-library crates to upgrade to the latest version.
Link: https://github.com/rust-lang/rust/issues/127890 [1]
try-job: x86_64-mingw
try-job: x86_64-msvc
Add `optimize_for_size` variants for stable and unstable sort as well as select_nth_unstable
- Stable sort uses a simple merge-sort that re-uses the existing - rather gnarly - merge function.
- Unstable sort jumps directly to the branchless heapsort fallback.
- select_nth_unstable jumps directly to the median_of_medians fallback, which is augmented with a custom tiny smallsort and partition impl.
Some code is duplicated, but de-duplication would bring its own problems. For example, `swap_if_less` is critical for performance: if the sorting networks don't inline it, perf drops drastically. However, `#[inline(always)]` is also a poor fit; if the provided comparison function is huge, it gives the compiler an out to instantiate `swap_if_less` only once and call it. Another aspect that would suffer when making `swap_if_less` pub is having to cfg out dozens of functions in the smallsort module.
Part of https://github.com/rust-lang/rust/issues/125612
r? `@Kobzol`
The latest versions of `memchr` experience LTO-related issues when
compiling for windows-gnu [1], so it needs to be pinned. The issue is
present in the standard library.
`memchr` has been pinned in `rustc_ast`, but since the workspace was
recently split, this pin no longer has any effect on library crates.
Resolve this by adding `memchr` as an _unused_ dependency in `std`,
pinned to 2.5. Additionally, remove the pin in `rustc_ast` to allow
non-library crates to upgrade to the latest version.
Link: https://github.com/rust-lang/rust/issues/127890 [1]
Mark `make_ascii_uppercase` and `make_ascii_lowercase` in `[u8]` and `str` as const.
Relevant tracking issue: #130698
This PR extends #130697 and #130713 to the similar methods in byte slices (`[u8]`) and string slices (`str`).
For the `str` methods, this simply requires adding the `const` specifier to the function signatures. The `[u8]` methods, however, require (at least a temporary) reimplementation due to the use of iterators and `for` loops.
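A minimal sketch of what this enables, assuming the `const_make_ascii` gate named in these PRs:
```rust
#![feature(const_make_ascii)] // gate name per these PRs

const SHOUTED: [u8; 5] = {
    let mut buf = *b"hello";
    buf.make_ascii_uppercase();
    buf
};

fn main() {
    assert_eq!(&SHOUTED, b"HELLO");
}
```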
Refactor the code in the `convert_while_ascii` helper function to make
it more suitable for auto-vectorization and also process the full ascii
prefix of the string. The generic case conversion logic will only be
invoked starting from the first non-ascii character.
The runtime on microbenchmarks with ascii-only inputs improves between
1.5x for short and 4x for long inputs on x86_64 and aarch64.
The new implementation also encapsulates all unsafe inside the
`convert_while_ascii` function.
Fixes #123712
Add test for `available_parallelism()`
This is a redo of [this PR](https://github.com/rust-lang/rust/pull/104095).
I changed the location of the test as per comments in the original thread. Otherwise the test is practically the same.
try-job: test-various
Mark `u8::make_ascii_uppercase` and `u8::make_ascii_lowercase` as const.
Relevant tracking issue: #130698
This PR extends #130697 by also marking the `make_ascii_uppercase` and `make_ascii_lowercase` methods in `u8` as const.
The `const_char_make_ascii` feature gate is additionally renamed to `const_make_ascii`.
Support `char::encode_utf16` in const scenarios.
Relevant tracking issue: #130660
The method `char::encode_utf16` should be marked "const" to allow compile-time conversions.
This PR additionally rewrites the `encode_utf16_raw` function for better readability whilst also reducing the amount of unsafe code.
try-job: x86_64-msvc
Add str.as_str() for easy Deref to string slices
Working with `Box<str>` is cumbersome, because in places like `iter.filter()` it can end up being `&Box<str>` or even `&&Box<str>`, and such a type doesn't always get auto-dereferenced as expected.
Dereferencing such a box to `&str` requires ugly syntax like `&**boxed_str` or `&***boxed_str`, with the exact number of `*`s.
`Box<str>` is [not easily comparable with other string types](https://github.com/rust-lang/rust/pull/129852) via `PartialEq`. `Box<str>` won't work for lookups in types like `HashSet<String>`, because `Borrow<String>` won't take types like `&Box<str>`. OTOH `set.contains(s.as_str())` works nicely regardless of levels of indirection.
`String` has a simple solution for this: the `as_str()` method, and `Box<str>` should too.
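A small sketch of the ergonomics gap being described; the `as_str` call is hypothetical until this lands:
```rust
fn main() {
    let strings: Vec<Box<str>> = vec!["alpha".into(), "beta".into()];
    // Today: explicit derefs are needed to get to a comparable `&str`.
    let n = strings.iter().filter(|s| &***s == "beta").count();
    assert_eq!(n, 1);
    // With the proposed method the intent reads directly:
    // let n = strings.iter().filter(|s| s.as_str() == "beta").count();
}
```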
Mark `char::make_ascii_uppercase` and `char::make_ascii_lowercase` as const.
Relevant tracking issue: #130698
The `make_ascii_uppercase` and `make_ascii_lowercase` methods in `char` should be marked "const."
With the stabilisation of [`const_mut_refs`](https://github.com/rust-lang/rust/issues/57349/), this simply requires adding the `const` specifier to the function signatures.
ABI compatibility: mention Result guarantee
This has been already documented in https://doc.rust-lang.org/std/result/index.html#representation, but for `Option` we mirrored those docs in the "ABI compatibility" section, so let's do the same here.
Cc ``@workingjubilee`` ``@rust-lang/lang``
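A one-line illustration of the representation guarantee being referenced: one variant is a zero-sized type, the other has a niche, so the `Result` is the same size as the payload.
```rust
use std::mem::size_of;
use std::num::NonZeroI32;

fn main() {
    // The niche in NonZeroI32 encodes the Ok(()) case; no extra discriminant is needed.
    assert_eq!(size_of::<Result<(), NonZeroI32>>(), size_of::<NonZeroI32>());
}
```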
Avoid re-validating UTF-8 in `FromUtf8Error::into_utf8_lossy`
Part of the unstable feature `string_from_utf8_lossy_owned` - #129436
Refactor `FromUtf8Error::into_utf8_lossy` to copy valid UTF-8 bytes into the buffer, avoiding double validation of bytes.
Add tests that mirror the `String::from_utf8_lossy` tests.
Refactor `into_utf8_lossy` to copy valid UTF-8 bytes into the buffer,
avoiding double validation of bytes.
Add tests that mirror the `String::from_utf8_lossy` tests
Address diagnostics regression for `const_char_encode_utf8`.
Relevant tracking issue: #130512
This PR regains full diagnostics for non-const calls to `char::encode_utf8`.
Remove macOS 10.10 dynamic linker bug workaround
Rust's current minimum macOS version is 10.12, so the hack can be removed. This PR also updates the `remove_dir_all` docs to reflect that all supported macOS versions are protected against TOCTOU race conditions (the fallback implementation was already removed in #127683).
try-job: dist-x86_64-apple
try-job: dist-aarch64-apple
try-job: dist-apple-various
try-job: aarch64-apple
try-job: x86_64-apple-1
`pal::unsupported::process::ExitCode`: use an `u8` instead of a `bool`
`ExitCode` should represent “the status code the current process can return to its parent under normal termination”, but it is currently represented as a `bool` on unsupported platforms, making the `impl From<u8> for ExitCode` lossy.
Fixes#130532.
History: [IRLO thread](https://internals.rust-lang.org/t/mini-pre-rfc-redesigning-process-exitstatus/5426) (`ExitCode` as a `main` return), #48618 (initial impl), #93445 (`From<u8>` impl).
[Clippy] Get rid of most `std` `match_def_path` usage, swap to diagnostic items.
Part of https://github.com/rust-lang/rust-clippy/issues/5393.
This was going to remove all `std` paths, but `SeekFrom` has issues being cleanly replaced with a diagnostic item as the paths are for variants, which currently cannot be diagnostic items.
This also, as a last step, categorises the paths to help with future path removals.
Improve documentation for <integer>::from_str_radix
Two improvements to the documentation:
- Document `-` as a valid character for signed integer destinations
- Make the documentation even clearer that extra whitespace and non-digit characters are invalid. Many other languages, e.g. C++, are very permissive in their string-to-integer routines and simply try to consume as much as they can, ignoring the rest. This is trying to make the transition a bit easier for developers who are used to the conversion semantics of those languages.
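For example:
```rust
fn main() {
    // A leading sign is accepted for signed destinations...
    assert_eq!(i32::from_str_radix("-1F", 16), Ok(-31));
    // ...but whitespace or any other non-digit characters make the whole parse fail,
    // unlike the permissive "consume as much as possible" routines in C/C++.
    assert!(i32::from_str_radix(" 42", 10).is_err());
    assert!(i32::from_str_radix("1e5", 10).is_err());
}
```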
Update the minimum external LLVM to 18
With this change, we'll have stable support for LLVM 18 and 19.
For reference, the previous increase to LLVM 17 was #122649.
cc `@rust-lang/wg-llvm`
r? nikic
Win: Open dir for sync access in remove_dir_all
A small follow up to #129800.
We should explicitly open directories for synchronous access. We ultimately use `GetFileInformationByHandleEx` to read directories which should paper over any issues caused by using async directory reads (or else return an error) but it's better to do the right thing in the first place. Note though that `delete` does not read or write any data so it's not necessary there.
Pass `fmt::Arguments` by reference to `PanicInfo` and `PanicMessage`
Resolves #129330
For some reason, after #115974 and #126732 the optimizations applied to the panic handler became worse, and the compiler stopped removing panic locations if they are not used in the panic message. This PR fixes that; maybe we can merge it into beta before Rust 1.81 is released.
Note: optimization only works with `lto = "fat"`.
r? libs-api
Take more advantage of the `isize::MAX` limit in `Layout`
Things like `padding_needed_for` are currently implemented to be super careful about things like `Layout::size` potentially being `usize::MAX`.
But now that #95295 has happened, that's no longer a concern. It's possible to add two `Layout::size`s together without risking overflow now.
So take advantage of that to remove a bunch of checked math that's not actually needed. For example, the round-up-and-add-next-size in `extend` doesn't need any overflow checks at all, just the final check for compatibility with the alignment.
(And while I was doing that I made it all unstably const, because there's nothing in `Layout` that's fundamentally runtime-only.)
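A minimal sketch (not the actual std code) of why the checked math can go away: `Layout` already guarantees that the size, rounded up to the alignment, fits in `isize`, so the usual round-up expression cannot overflow `usize`.
```rust
const fn padding_needed_for(size: usize, align: usize) -> usize {
    // `align` is a nonzero power of two, as `Layout` requires.
    let rounded = (size + align - 1) & !(align - 1);
    rounded - size
}

fn main() {
    assert_eq!(padding_needed_for(10, 8), 6);
}
```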
Things like `padding_needed_for` are currently implemented to be super careful about things like `Layout::size` potentially being `usize::MAX`.
But now that 95295 has happened, that's no longer a concern. It's possible to add two `Layout::size`s together without risking overflow now.
So take advantage of that to remove a bunch of checked math that's not actually needed. For example, the round-up-and-add-next-size in `extend` doesn't need any overflow checks at all, just the final check for compatibility with the alignment.
(And while I was doing that I made it all unstably const, because there's nothing in `Layout` that's fundamentally runtime-only.)
Remove uneeded PartialOrd bound in cmp::Ord::clamp
There is a `Self: PartialOrd` bound in `Ord::clamp`, but it is already required by the trait itself. Likely a left-over from the const trait deletion in 76dbe29104.
Reported-by: `@noeensarguet`
There is a Self: PartialOrd bound in Ord::clamp, but it is already
required by the trait itself. Likely a left-over from the const trait
deletion in 76dbe29104.
Reported-by: @noeensarguet
Add new_cyclic_in for Rc and Arc
Currently, new_cyclic_in does not exist for Rc and Arc. This is an oversight according to https://github.com/rust-lang/wg-allocators/issues/132.
This PR adds new_cyclic_in for Rc and Arc. The implementation is almost exactly the same as new_cyclic, with some small differences to make it allocator-specific. new_cyclic's implementation has been replaced with a call to `new_cyclic_in(data_fn, Global)`.
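A rough usage sketch of the proposed constructor; the shape is taken from the `new_cyclic_in(data_fn, Global)` call mentioned above, and the feature gate for the new method is omitted since it is not named here.
```rust
#![feature(allocator_api)] // plus the (unnamed here) gate for the new constructor

use std::alloc::Global;
use std::rc::{Rc, Weak};

struct Node {
    me: Weak<Node>,
}

fn main() {
    // Same shape as new_cyclic, with the allocator passed explicitly.
    let node: Rc<Node> = Rc::new_cyclic_in(|weak| Node { me: weak.clone() }, Global);
    assert_eq!(Rc::strong_count(&node), 1);
}
```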
Remaining questions:
* ~~Is requiring Allocator to be Clone OK? According to https://github.com/rust-lang/wg-allocators/issues/88, Allocators should be cheap to clone. I'm just hesitant to add unnecessary constraints, though I don't see an obvious workaround for this function since many called functions in new_cyclic_in expect an owned Allocator. I see Allocator.by_ref() as an option, but that doesn't work on when creating Weak { ptr: init_ptr, alloc: alloc.clone() }, because the type of Weak then becomes Weak<T, &A> which is incompatible.~~ Fixed, thank you `@zakarumych!` This PR no longer requires the allocator to be Clone.
* Currently, new_cyclic_in's documentation is almost entirely copy-pasted from new_cyclic, with minor tweaks to make it more accurate (e.g. Rc<T> -> Rc<T, A>). The example section is removed to mitigate redundancy and instead redirects to cyclic_in. Is this appropriate?
* ~~The comments in new_cyclic_in (and much of the implementation) are also copy-pasted from new_cyclic. Would it be better to make a helper method new_cyclic_in_internal that both functions call, with either Global or the custom allocator? I'm not sure if that's even possible, since the internal method would have to return Arc<T, Global> and I don't know if it's possible to "downcast" that to an Arc<T>. Maybe transmute would work here?~~ Done, thanks `@zakarumych`
* Arc::new_cyclic is #[inline], but Rc::new_cyclic is not. Which is preferred?
* nit: does it matter where in the impl block new_cyclic_in is defined?
In the implementation of `force_mut`, I chose performance over safety.
For `LazyLock` this isn't really a choice; the code has to be unsafe.
But for `LazyCell`, we could have a fully safe implementation; it would
be a bit less performant, though, so I went with the unsafe approach.
fix: Remove duplicate `LazyLock` example.
The top-level docs for `LazyLock` included two lines of code that were identical, each with a nearly-identical accompanying comment. This looks like an oversight from a past edit which was perhaps trying to rewrite an existing example but ended up duplicating rather than replacing, though I haven't gone back through the Git history to check.
This commit removes what I personally think is the less-clear of the two examples.
[library/std/src/process.rs] `PartialEq` for `ExitCode`
I was converting a third-party CLI to a library, so I started passing around [`std::process::ExitCode`](https://doc.rust-lang.org/std/process/struct.ExitCode.html) in an `Either`. Then I realised the tests couldn't be modified to compare equality of `ExitCode`s.
This PR fixes this oversight.
The top-level docs for `LazyLock` included two lines of code, each
with an accompanying comment, that were identical and with nearly-
identical comments. This looks like an oversight from a past edit
which was perhaps trying to rewrite an existing example but ended
up duplicating rather than replacing, though I haven't gone back
through the Git history to check.
This commit removes what I personally think is the less-clear of
the two examples.
Signed-off-by: Andrew Lilley Brinker <alilleybrinker@gmail.com>
Document futility of printing temporary pointers
In the user forum I've seen a few people trying to understand how borrowing and moves are implemented by peppering their code with prints of `{:p}` of references to variables and expressions. This is a bad idea: it gives misleading and confusing results, because of autoderef magic, because the printed pointers often refer to temporaries on the stack, and because exposing a value's address can make LLVM optimize the code differently.
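A small example of why the output is misleading: the second print shows the address of a temporary `&i32` created just for the call, not where `x` lives.
```rust
fn main() {
    let x = 42;
    println!("{:p}", &x);  // address of `x` on this stack frame
    println!("{:p}", &&x); // address of a temporary `&i32`, not of `x`
}
```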
simplify float::classify logic
I played around with the float-classify test in the hope of triggering x87 bugs by strategically adding `black_box`, and still the exact expression `@beetrees` suggested [here](https://github.com/rust-lang/rust/pull/129835#issuecomment-2325661597) remains the only case I found where we get the wrong result on x87. Curiously, this bug only occurs when MIR optimizations are enabled -- probably the extra inlining they enable is required for LLVM to hit the right "bad" case in the backend. But even for that case, it makes no difference whether `classify` is implemented in the simple bit-pattern-based version or the more complicated version we had before.
Without even a single testcase that can distinguish our `classify` from the naive version, I suggest we switch to the naive version.
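For reference, a sketch (not the exact std source) of what the naive bit-pattern-based version looks like for `f32`:
```rust
use std::num::FpCategory;

fn classify(x: f32) -> FpCategory {
    let bits = x.to_bits();
    // Split into exponent and mantissa fields and match on the special cases.
    match (bits & 0x7f80_0000, bits & 0x007f_ffff) {
        (0, 0) => FpCategory::Zero,
        (0, _) => FpCategory::Subnormal,
        (0x7f80_0000, 0) => FpCategory::Infinite,
        (0x7f80_0000, _) => FpCategory::Nan,
        _ => FpCategory::Normal,
    }
}

fn main() {
    assert_eq!(classify(1.0), FpCategory::Normal);
    assert_eq!(classify(f32::NAN), FpCategory::Nan);
}
```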
Add `core::panic::abort_unwind`
`abort_unwind` is like `catch_unwind` except that it aborts the process if it unwinds, using the `#[rustc_nounwind]` mechanism also used by `extern "C" fn` to abort unwinding. The docs attempt to make it clear when to (rarely) and when not to (usually) use the function.
Although usage of the function is discouraged, having it available will help to normalize the experience when abort_unwind shims are hit, as opposed to the current ecosystem where there exist multiple common patterns for converting unwinding into a process abort.
For further information and justification, see the linked ACP.
- Tracking issue: https://github.com/rust-lang/rust/issues/130338
- ACP: https://github.com/rust-lang/libs-team/issues/441
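A minimal usage sketch; the feature gate name is assumed from the tracking issue.
```rust
#![feature(abort_unwind)] // assumed gate name

fn main() {
    // If the closure panicked, the process would abort instead of unwinding.
    let value = core::panic::abort_unwind(|| 40 + 2);
    assert_eq!(value, 42);
}
```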
Implement feature `string_from_utf8_lossy_owned` for lossy conversion from `Vec<u8>` to `String` methods
Accepted ACP: https://github.com/rust-lang/libs-team/issues/116
Tracking issue: #129436
Implement feature for lossily converting from `Vec<u8>` to `String`
- Add `String::from_utf8_lossy_owned`
- Add `FromUtf8Error::into_utf8_lossy`
---
Related to #64727, but unsure whether to mark it "fixed" by this PR.
That issue partly asks for in-place replacement of the original allocation. We fulfill the other half of that request with these functions.
closes #64727
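A short sketch of the new method under the gate, using the name listed above:
```rust
#![feature(string_from_utf8_lossy_owned)]

fn main() {
    // Invalid bytes are replaced with U+FFFD, taking ownership of the Vec.
    let bytes = vec![0x66, 0x6f, 0x6f, 0xFF];
    let s = String::from_utf8_lossy_owned(bytes);
    assert_eq!(s, "foo\u{FFFD}");
}
```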
library: Compute Rust exception class from its string repr
Noticed this awkwardness while scanning through the code. I think we can do better than that.
move Option::unwrap_unchecked into const_option feature gate
That's where `unwrap` and `expect` are, so IMO it makes more sense to group them together.
Part of #91930, #67441
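A tiny sketch under the regrouped gate:
```rust
#![feature(const_option)]

// `unwrap_unchecked` in const, now grouped with `unwrap`/`expect` under `const_option`.
const FIVE: i32 = unsafe { Some(5).unwrap_unchecked() };

fn main() {
    assert_eq!(FIVE, 5);
}
```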
Stabilize `&mut` (and `*mut`) as well as `&Cell` (and `*const Cell`) in const
This stabilizes `const_mut_refs` and `const_refs_to_cell`. That allows a bunch of new things in const contexts:
- Mentioning `&mut` types
- Creating `&mut` and `*mut` values
- Creating `&T` and `*const T` values where `T` contains interior mutability
- Dereferencing `&mut` and `*mut` values (both for reads and writes)
The same rules as at runtime apply: mutating immutable data is UB. This includes mutation through pointers derived from shared references; the following is diagnosed with a hard error:
```rust
#[allow(invalid_reference_casting)]
const _: () = {
let mut val = 15;
let ptr = &val as *const i32 as *mut i32;
unsafe { *ptr = 16; }
};
```
The main limitation that is enforced is that the final value of a const (or non-`mut` static) may not contain `&mut` values nor interior mutable `&` values. This is necessary because the memory those references point to becomes *read-only* when the constant is done computing, so (interior) mutable references to such memory would be pretty dangerous. We take a multi-layered approach here to ensuring no mutable references escape the initializer expression:
- A static analysis rejects (interior) mutable references when the referee looks like it may outlive the current MIR body.
- To be extra sure, this static check is complemented by a "safety net" of dynamic checks. ("Dynamic" in the sense of "running during/after const-evaluation, e.g. at runtime of this code" -- in contrast to "static" which works entirely by looking at the MIR without evaluating it.)
- After the final value is computed, we do a type-driven traversal of the entire value, and if we find any `&mut` or interior-mutable `&` we error out.
- However, the type-driven traversal cannot traverse `union` or raw pointers, so there is a second dynamic check where if the final value of the const contains any pointer that was not derived from a shared reference, we complain. This is currently a future-compat lint, but will become an ICE in #128543. On the off-chance that it's actually possible to trigger this lint on stable, I'd prefer if we could make it an ICE before stabilizing const_mut_refs, but it's not a hard blocker. This part of the "safety net" is only active for mutable references since with shared references, it has false positives.
Altogether this should prevent people from leaking (interior) mutable references out of the const initializer.
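A minimal sketch of code that this stabilization newly accepts: mutation through `&mut` inside the initializer, with nothing mutable escaping into the final value.
```rust
const SUM: u32 = {
    let mut total = 0;
    let r = &mut total;
    *r += 40;
    *r += 2;
    total
};
```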
While updating the tests I learned that surprisingly, this code gets rejected:
```rust
const _: Vec<i32> = {
let mut x = Vec::<i32>::new(); //~ ERROR destructor of `Vec<i32>` cannot be evaluated at compile-time
let r = &mut x;
let y = x;
y
};
```
The analysis that rejects destructors in `const` is very conservative when it sees an `&mut` being created to `x`, and then considers `x` to be always live. See [here](https://github.com/rust-lang/rust/issues/65394#issuecomment-541499219) for a longer explanation. `const_precise_live_drops` will solve this, so I consider this problem to be tracked by https://github.com/rust-lang/rust/issues/73255.
Cc `@rust-lang/wg-const-eval` `@rust-lang/lang`
Cc https://github.com/rust-lang/rust/issues/57349
Cc https://github.com/rust-lang/rust/issues/80384
Add `NonNull` convenience methods to `Box` and `Vec`
Implements the ACP: https://github.com/rust-lang/libs-team/issues/418.
The docs for the added methods are mostly copied from the existing methods that use raw pointers instead of `NonNull`.
I'm new to this "contributing to rustc" thing, so I'm sorry if I did something wrong. In particular, I don't know what the process is for creating a new unstable feature. Please advise me if I should do something. Thank you.
Stabilize entry_insert
This stabilises `HashMap::Entry::insert_entry`, following the FCP in tracking issue #65225.
This was implemented in #64656 five years ago.
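A small usage example of the stabilized method:
```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<&str, u32> = HashMap::new();
    // insert_entry writes the value and returns an OccupiedEntry,
    // so the slot can be used further without a second lookup.
    let entry = map.entry("a").insert_entry(1);
    assert_eq!(entry.get(), &1);
}
```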
simd_shuffle: require index argument to be a vector
Remove some codegen hacks by forcing the SIMD shuffle `index` argument to be a vector, which means (thanks to https://github.com/rust-lang/rust/pull/128537) that it will automatically be passed as an immediate in LLVM. The only special-casing we still have is for the extra sanity-checks we add that ensure that the indices are all in-bounds. (And the GCC backend needs to do a bunch of work since the Rust intrinsic is modeled after what LLVM expects, which seems to be quite different from what GCC expects.)
Fixes https://github.com/rust-lang/rust/issues/128738, see that issue for more context.