Update doc comments to make the guarantee explicit. However, some
implementations do not have the statement yet.
* `HashMap`, `HashSet`: require guarantees on hashbrown side.
* `PathBuf`: simply redirecting to `OsString`.
Fixes #99606.
This function is available on Linux since glibc 2.12, musl 1.1.16, and
uClibc 1.0.20. The main advantage over `prctl` is that it properly
represents the pointer argument, rather than a multi-purpose `long`,
so we're better representing strict provenance (#95496).
test: skip terminfo parsing in Miri
Terminfo parsing takes a significant amount of time in Miri, making libtest startup very slow. To work around that Miri in fact unsets the `TERM` variable. However, this means we don't get colors in `cargo miri test`.
So I propose we add some logic in libtest that skips parsing terminfo files under Miri, and just uses the regular basic coloring commands (taken from the `colored` crate).
As far as I can see, these two commands are all that libtest ever needs from terminfo, so Miri doesn't even lose any functionality through this. If you want I can entirely remove the terminfo parsing code and just use these commands instead.
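A rough sketch of what that fallback could look like (not the actual libtest code; the escape sequences are the standard ANSI ones the `colored` crate emits):
```rust
use std::io::{self, Write};

// Set one of the 8 basic foreground colors (0 = black … 7 = white), then
// reset all attributes -- the only two operations libtest needs.
fn set_color(out: &mut impl Write, color: u8) -> io::Result<()> {
    write!(out, "\x1b[{}m", 30 + color)
}

fn reset_color(out: &mut impl Write) -> io::Result<()> {
    write!(out, "\x1b[0m")
}
```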
Cc https://github.com/rust-lang/miri/issues/2292 `@saethlin`
Remove Windows function preloading
After `@Mark-Simulacrum` asked me to provide guidance for when optionally imported functions should be preloaded, I realised my justifications were now quite weak. I think the strongest argument that can be made is that it avoids some degree of nondeterminism when calling these functions (in as far as system API calls can be said to be deterministic). However, I don't think that's particularly convincing unless there's a real world use case where it matters. Further discussion with `@thomcc` has strengthened my feeling that preloading isn't really needed.
Note that `WaitOnAddress` needed some adjustment to work without preloading. I opted not to use a macro for this special case as it seemed silly to do so for just one thing (and I don't like macros tbh).
Add back Send and Sync impls on ChunksMut iterators
Fixes https://github.com/rust-lang/rust/issues/100014
These were accidentally removed in #94247 because the representation was changed from `&mut [T]` to `*mut T`, which has `!Send + !Sync`.
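For reference, the restored impls look roughly like this (std-internal code, shown here against a stand-in struct with the new raw-pointer representation):
```rust
use std::marker::PhantomData;

struct ChunksMut<'a, T> {
    ptr: *mut T, // raw-pointer representation introduced in #94247
    len: usize,
    chunk_size: usize,
    _marker: PhantomData<&'a mut T>,
}

// Safety: the iterator only ever hands out disjoint `&mut [T]` chunks, so it
// can be sent or shared exactly when `&mut [T]` itself can.
unsafe impl<'a, T: Send> Send for ChunksMut<'a, T> {}
unsafe impl<'a, T: Sync> Sync for ChunksMut<'a, T> {}
```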
Fix futex module imports on wasm+atomics
The futex modules were rearranged a bit in #98707, which meant that wasm+atomics would no longer compile on nightly. I don’t believe any other targets were impacted by this.
do not claim that transmute is like memcpy
Saying transmute is like memcpy is not a well-formed statement, since memcpy is by-ref whereas transmute is by-val. The by-val nature of transmute inherently means that padding is lost along the way. (This is not specific to transmute, this is how all by-value operations work.) So adjust the docs to clarify this aspect.
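A small illustration of the "padding is lost" point (not from the PR): transmuting a padded struct by value yields a result whose padding bytes are uninitialized, so treating them as integers is UB.
```rust
use std::mem;

#[repr(C)]
struct Padded {
    a: u8, // followed by 3 padding bytes
    b: u32,
}

fn main() {
    let p = Padded { a: 1, b: 2 };
    assert_eq!(mem::size_of::<Padded>(), 8);
    // Sizes match, so this would compile, but because transmute is by-value
    // the three padding bytes of the result are uninitialized -- undefined
    // behaviour for `[u8; 8]`. A by-reference byte copy raises the separate
    // question of what happens to the destination's bytes, which is exactly
    // why the "transmute is like memcpy" framing is not well-formed.
    // let bytes: [u8; 8] = unsafe { mem::transmute(p) };
    let _ = p;
}
```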
Cc `@workingjubilee`
Remove synchronization from Windows `hashmap_random_keys`
Unfortunately using synchronization when generating hashmap keys can prevent it being used in `DllMain`.
~~Fixes #99341~~
Add validation to const fn CStr creation
Improves upon the existing validation that only worked when building the stdlib from source. `CStr::from_bytes_with_nul_unchecked` now utilizes `const_eval_select` to validate the safety requirements of the function when used as `const FOO: &CStr = CStr::from_bytes_with_nul_unchecked(b"Foobar\0");`.
This can help catch erroneous code written by accident and, assuming a new enough `rustc` in use, remove the need for boilerplate safety comments for this function in `const` contexts.
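A rough sketch of the pattern (made-up names, simplified to byte slices; `const_eval_select` is an unstable intrinsic whose exact signature and feature gates have shifted across nightlies):
```rust
#![feature(core_intrinsics, const_eval_select)]
use std::intrinsics::const_eval_select;

const unsafe fn from_bytes_with_nul_unchecked_demo(bytes: &[u8]) -> &[u8] {
    const fn compiletime_check(bytes: &[u8]) -> &[u8] {
        // A panic during const evaluation turns misuse into a compile error.
        assert!(!bytes.is_empty() && bytes[bytes.len() - 1] == 0, "missing nul terminator");
        let mut i = 0;
        while i < bytes.len() - 1 {
            assert!(bytes[i] != 0, "interior nul byte");
            i += 1;
        }
        bytes
    }
    fn runtime_check(bytes: &[u8]) -> &[u8] {
        bytes // the caller promised the invariant; no O(n) cost at runtime
    }
    const_eval_select((bytes,), compiletime_check, runtime_check)
}
```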
~~I think this might need a UI test, but I don't know where to put it. If this is a worth change, a perf run would be nice to make sure the `O(n)` validation isn't too bad. I didn't notice a difference building the stdlib tests locally.~~
Rollup of 8 pull requests
Successful merges:
- #99156 (`codegen_fulfill_obligation` expect erased regions)
- #99293 (only run --all-targets in stage0 for Std)
- #99779 (Fix item info pos and height)
- #99994 (Remove `guess_head_span`)
- #100011 (Use Parser's `restrictions` instead of `let_expr_allowed`)
- #100017 (kmc-solid: Update `Socket::connect_timeout` to be in line with #78802)
- #100037 (Update rustc man page to match `rustc --help`)
- #100042 (Update books)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
kmc-solid: Update `Socket::connect_timeout` to be in line with #78802
Fixes the build failure of the [`*-kmc-solid_*`](https://doc.rust-lang.org/nightly/rustc/platform-support/kmc-solid.html) Tier 3 targets after #78802.
```
error[E0308]: mismatched types
--> library\std\src\sys\solid\net.rs:234:45
|
234 | cvt(netc::connect(self.0.raw(), addrp, len))
| ------------- ^^^^^ expected *-ptr, found union `SocketAddrCRepr`
| |
| arguments to this function are incorrect
|
= note: expected raw pointer `*const sockets::sockaddr`
found union `SocketAddrCRepr`
note: function defined here
--> library\std\src\sys\solid\abi\sockets.rs:173:12
|
173 | pub fn connect(s: c_int, name: *const sockaddr, namelen: socklen_t) -> c_int;
| ^^^^^^^
```
Support setting file accessed/modified timestamps
Add `struct FileTimes` to contain the relevant file timestamps, since
most platforms require setting all of them at once. (This also allows
for future platform-specific extensions such as setting creation time.)
Add `File::set_file_time` to set the timestamps for a `File`.
Implement the `sys` backends for UNIX, macOS (which needs to fall back
to `futimes` before macOS 10.13 because it lacks `futimens`), Windows,
and WASI.
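A usage sketch based on the description (the builder-style setters `set_accessed`/`set_modified` and the `set_times` method name are assumptions here; the text above calls the `File` method `set_file_time`):
```rust
use std::fs::{File, FileTimes, OpenOptions};
use std::io;
use std::time::SystemTime;

fn touch(path: &str) -> io::Result<()> {
    let file: File = OpenOptions::new().write(true).open(path)?;
    let now = SystemTime::now();
    // Both timestamps are set in one call, since most platforms require
    // setting them together.
    let times = FileTimes::new().set_accessed(now).set_modified(now);
    file.set_times(times)
}
```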
Currently we skip deriving `PartialEq::ne` for C-like (fieldless) enums
and empty structs, thus relying on the default `ne`. This behaviour is
unnecessarily conservative, because the `PartialEq` docs say this:
> Implementations must ensure that eq and ne are consistent with each other:
>
> `a != b` if and only if `!(a == b)` (ensured by the default
> implementation).
This means that the default implementation (`!(a == b)`) is always good
enough. So this commit changes things such that `ne` is never derived.
The motivation for this change is that not deriving `ne` reduces compile
times and binary sizes.
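Roughly what this means for the expanded code (a sketch, not the exact derive output):
```rust
struct Point {
    x: i32,
    y: i32,
}

impl PartialEq for Point {
    fn eq(&self, other: &Self) -> bool {
        self.x == other.x && self.y == other.y
    }
    // The derive no longer emits an `ne` override here; the trait's default
    // `!self.eq(other)` is used instead, which the docs already require to be
    // consistent with `eq`.
}
```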
Observable behaviour may change if a user has defined a type `A` with an
inconsistent `PartialEq` and then defines a type `B` that contains an
`A` and also derives `PartialEq`. Such code is already buggy and
preserving bug-for-bug compatibility isn't necessary.
Two side-effects of the change:
- There is only one error message produced for types where `PartialEq`
cannot be derived, instead of two.
- For coverage reports, some warnings about generated `ne` methods not
being executed have disappeared.
Both side-effects seem fine, and possibly preferable.
Fix unwinding on certain platforms when debug assertions are enabled
This came up on `armv7-apple-ios` when using `-Zbuild-std`.
Looks like this is a leftover from a [conversion from C to Rust](051c2d14fb), where integer wrapping is implicit.
Not at all sure how the unwinding code works!
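The class of bug involved is roughly this (a sketch, not the actual unwinder code): C arithmetic wraps silently, so a direct translation to Rust panics on overflow once debug assertions are enabled unless the wrapping is made explicit.
```rust
fn next_counter(i: u32) -> u32 {
    // A literal `i + 1` would panic at u32::MAX in debug builds; the C
    // original relied on the value wrapping around to 0.
    i.wrapping_add(1)
}
```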
Implement network primitives with ideal Rust layout, not C system layout
This PR is the result of this internals forum thread: https://internals.rust-lang.org/t/why-are-socketaddrv4-socketaddrv6-based-on-low-level-sockaddr-in-6/13321.
Instead of basing `std::net::{Ipv4Addr, Ipv6Addr, SocketAddrV4, SocketAddrV6}` on system (C) structs, they are encoded in a more optimal and idiomatic Rust way.
This changes the public API of std by introducing structural equality impls for all four types here, which means that `match ipv4addr { SOME_CONSTANT => ... }` will now compile, whereas previously this was an error. No other intentional changes are introduced to public API.
It's possible to observe the current layout of these types (e.g., by pointer casting); most but not all libraries which were found by Crater to do this have had updates issued and affected versions yanked. See report below.
### Benefits of this change
- It will become possible to move these fundamental network types from `std` into `core` ([RFC](https://github.com/rust-lang/rfcs/pull/2832)).
- Some methods that can't be made `const fn`s today can be made `const fn`s with this change.
- `SocketAddrV4` only occupies 6 bytes instead of 16 bytes.
- These simple primitives become easier to read and use less `unsafe`.
- Makes these types support structural equality, which means you can now (for instance) match an `Ipv4Addr` against a constant
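For example, the structural-equality point means code like this now compiles, where it was previously rejected:
```rust
use std::net::Ipv4Addr;

const LOCALHOST: Ipv4Addr = Ipv4Addr::new(127, 0, 0, 1);

fn is_loopback(addr: Ipv4Addr) -> bool {
    // Matching against a constant requires the type to have structural
    // equality, which these types gain with this change.
    match addr {
        LOCALHOST => true,
        _ => false,
    }
}
```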
### ~Remaining~ Previous problems
This change obviously changes the memory layout of the types. And it turns out some libraries invalidly assume the memory layout and do very dangerous pointer casts to convert them. These libraries will have undefined behaviour and perform invalid memory accesses until patched.
- [x] - `mio` - Issue: https://github.com/tokio-rs/mio/issues/1386.
- [x] `0.7` branch https://github.com/tokio-rs/mio/pull/1388
- [x] `0.7.6` published https://github.com/tokio-rs/mio/pull/1398
- [x] Yank all `0.7` versions older than `0.7.6`
- [x] Report `<0.7.6` to RustSec Advisory Database https://rustsec.org/advisories/RUSTSEC-2020-0081.html
- [x] - `socket2` - Issue: https://github.com/rust-lang/socket2-rs/issues/119.
- [x] `0.3.x` branch https://github.com/rust-lang/socket2-rs/pull/120
- [x] `0.3.16` published
- [x] `master` branch https://github.com/rust-lang/socket2-rs/pull/122
- [x] Yank all `0.3` versions older than `0.3.16`
- [x] Report `<0.3.16` to RustSec Advisory Database https://rustsec.org/advisories/RUSTSEC-2020-0079.html
- [x] - `net2` - Issue: https://github.com/deprecrated/net2-rs/issues/105
- [x] https://github.com/deprecrated/net2-rs/pull/106
- [x] `0.2.36` published
- [x] Yank all `0.2` versions older than `0.2.36`
- [x] Report `<0.2.36` to RustSec Advisory Database https://rustsec.org/advisories/RUSTSEC-2020-0078.html
- [x] - `miow` - Issue: https://github.com/yoshuawuyts/miow/issues/38
- [x] `0.3.x` - https://github.com/yoshuawuyts/miow/pull/39
- [x] `0.3.6` published
- [x] `0.2.x` - https://github.com/yoshuawuyts/miow/pull/40
- [x] `0.2.2` published
- [x] Yanked all `0.2` versions older than `0.2.2`
- [x] Yanked all `0.3` versions older than `0.3.6`
- [x] Report `<0.2.2` and `<0.3.6` to RustSec Advisory Database https://rustsec.org/advisories/RUSTSEC-2020-0080.html
- [x] - `quinn master` (aka what became 0.7) - https://github.com/quinn-rs/quinn/issues/968 and https://github.com/quinn-rs/quinn/pull/987
- [x] - `quinn 0.6` - https://github.com/quinn-rs/quinn/pull/1045
- [x] - `quinn 0.5` - https://github.com/quinn-rs/quinn/pull/1046
- [x] - Release `0.7.0`, `0.6.2` and `0.5.4`
- [x] - `nb-connect` - https://github.com/smol-rs/nb-connect/issues/1
- [x] - Release `1.0.3`
- [x] - Yank all versions older than `1.0.3`
- [x] - `shadowsocks-rust` - https://github.com/shadowsocks/shadowsocks-rust/issues/462
- [ ] - `rio` - https://github.com/spacejam/rio/issues/44
- [ ] - `seaslug` - https://github.com/spacejam/seaslug/issues/1
#### Fixed crate versions
All crates I have found that assumed the memory layout have been fixed and published. The crates and versions that will continue working even as/if this PR is merged are (please upgrade these to help unblock this PR):
* `net2 0.2.36`
* `socket2 0.3.16`
* `miow 0.2.2`
* `miow 0.3.6`
* `mio 0.7.6`
* `mio 0.6.23` - Never had the invalid assumption itself, but has now been bumped to only allow fixed dependencies (`net2` + `miow`)
* `nb-connect 1.0.3`
* `quinn 0.5.4`
* `quinn 0.6.2`
### Release notes draft
This release changes the memory layout of `Ipv4Addr`, `Ipv6Addr`, `SocketAddrV4` and `SocketAddrV6`. The standard library no longer implements these as the corresponding `libc` structs (`sockaddr_in`, `sockaddr_in6` etc.). This internal representation was never exposed, but some crates relied on it anyway by unsafely transmuting. This change will cause those crates to make invalid memory accesses. Notably `net2 <0.2.36`, `socket2 <0.3.16`, `mio <0.7.6`, `miow <0.3.6` and a few other crates are affected. All known affected crates have been patched and have had fixed versions published over a year ago. If any affected crate is still in your dependency tree, you need to upgrade them before using this version of Rust.
Rewrite Windows `compat_fn` macro
This allows using most delay loaded functions before the init code initializes them. It also only preloads a select few functions, rather than all functions.
This is optimized for the common case where a function is used after already being loaded (or failed to load). The only change in codegen at the call site is to use an atomic load instead of a plain load, which should have negligible or no impact.
I've split the old `compat_fn` macro in two so as not to mix two different use cases. If/when Windows 7 support is dropped `compat_fn_optional` can be removed entirely.
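A much-simplified sketch of the described pattern (not the real macro expansion; the function type and lookup are placeholders): the resolved address lives in a static atomic, is resolved lazily on first use, and every later call pays only a relaxed atomic load.
```rust
use std::mem;
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

type FnSig = unsafe extern "system" fn() -> u32; // made-up signature

static ADDR: AtomicPtr<()> = AtomicPtr::new(ptr::null_mut());

fn lookup() -> *mut () {
    // Stand-in for a GetModuleHandle/GetProcAddress lookup that may fail.
    // (The real code also caches a failed lookup with a sentinel value.)
    ptr::null_mut()
}

pub fn get() -> Option<FnSig> {
    let mut p = ADDR.load(Ordering::Relaxed); // the only cost on the hot path
    if p.is_null() {
        p = lookup();
        ADDR.store(p, Ordering::Relaxed);
    }
    (!p.is_null()).then(|| unsafe { mem::transmute::<*mut (), FnSig>(p) })
}
```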
r? rust-lang/libs
This is done by having the crossbeam dependency inserted into the
proc_macro server code from the server side, to avoid adding a
dependency to proc_macro.
In addition, this introduces a -Z command-line option which will switch
rustc to run proc-macros using this cross-thread executor. With the
changes to the bridge in #98186, #98187, #98188 and #98189, the
performance of the executor should be much closer to same-thread
execution.
In local testing, the crossbeam executor was substantially more
performant than either of the two existing CrossThread strategies, so
they have been removed to keep things simple.
mem::uninitialized: mitigate many incorrect uses of this function
Alternative to https://github.com/rust-lang/rust/pull/98966: fill memory with `0x01` rather than leaving it uninit. This is definitely bitwise valid for all `bool` and nonnull types, and also those `Option<&T>` that we started putting `noundef` on. However it is still invalid for `char` and some enums, and on references the `dereferenceable` attribute is still violated, so the generated LLVM IR still has UB -- but in fewer cases, and `dereferenceable` is hopefully less likely to cause problems than clearly incorrect range annotations.
This can make using `mem::uninitialized` a lot slower, but that function has been deprecated for years and we keep telling everyone to move to `MaybeUninit` because it is basically impossible to use `mem::uninitialized` correctly. For the cases where that hasn't helped (and all the old code out there that nobody will ever update), we can at least mitigate the effect of using this API. Note that this is *not* in any way a stable guarantee -- it is still UB to call `mem::uninitialized::<bool>()`, and Miri will call it out as such.
This is somewhat similar to https://github.com/rust-lang/rust/pull/87032, which proposed to make `uninitialized` return a buffer filled with 0x00. However
- That PR also proposed to reduce the situations in which we panic, which I don't think we should do at this time.
- The 0x01 bit pattern means that nonnull requirements are satisfied, which (due to references) is the most common validity invariant.
`@5225225` I hope I am using `cfg(sanitize)` the right way; I was not sure for which ones to test here.
Cc https://github.com/rust-lang/rust/issues/66151
Fixes https://github.com/rust-lang/rust/issues/87675
This initial implementation handles transmutations between types with specified layouts, except when references are involved.
Co-authored-by: Igor null <m1el.2027@gmail.com>
Fix slice::ChunksMut aliasing
Fixes https://github.com/rust-lang/rust/issues/94231, details in that issue.
cc `@RalfJung`
This isn't done just yet, all the safety comments are placeholders. But otherwise, it seems to work.
I don't really like this approach though. There's a lot of unsafe code where there wasn't before, but as far as I can tell the only other way to uphold the aliasing requirement imposed by `__iterator_get_unchecked` is to use raw slices, which I think require the same amount of unsafe code. All that would do is tie the `len` and `ptr` fields together.
Oh I just looked and I'm pretty sure that `ChunksExactMut`, `RChunksMut`, and `RChunksExactMut` also need to be patched. Even more reason to put up a draft.
Rollup of 5 pull requests
Successful merges:
- #99079 (Check that RPITs constrained by a recursive call in a closure are compatible)
- #99704 (Add `Self: ~const Trait` to traits with `#[const_trait]`)
- #99769 (Sync rustc_codegen_cranelift)
- #99783 (rustdoc: remove Clean trait impls for more items)
- #99789 (Refactor: use `pluralize!`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Remove some redundant checks from BufReader
The implementation of BufReader contains a lot of redundant checks. While no single one of these checks is particularly expensive to execute, taken together they dramatically inhibit LLVM's ability to make subsequent optimizations by confusing data flow, and they increase the code size of anything that uses BufReader.
In particular, these changes give a ~2x improvement on the benchmark that this PR adds a `black_box` to. I'm adding that `black_box` here just in case LLVM gets clever enough to remove the reads entirely. Right now it can't, but these optimizations are really setting it up to do so.
We get this optimization by factoring all the actual buffer management and bounds-checking logic into a new module inside `bufreader` with a new `Buffer` type. This makes it much easier to ensure that we have correctly encapsulated the management of the region of the buffer that we have read bytes into, and it lets us provide a new faster way to do small reads. `Buffer::consume_with` lets a caller do a read from the buffer with a single bounds check, instead of the double-check that's required to use `buffer` + `consume`.
Unfortunately I'm not aware of a lot of open-source usage of `BufReader` in perf-critical environments. Some time ago I tweaked this code because I saw `BufReader` in a profile at work, and I contributed some benchmarks to the `bincode` crate which exercise `BufReader::buffer`. These changes appear to help those benchmarks a little, but all these sorts of benchmarks are kind of fragile so I'm wary of quoting anything specific.
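A hypothetical sketch of the `consume_with` idea (field and method shapes assumed, not the real `bufreader::Buffer`): hand the caller the first `amt` unread bytes and advance the cursor behind a single bounds check.
```rust
struct Buffer {
    buf: Vec<u8>,
    pos: usize,    // index of the first unread byte
    filled: usize, // one past the last byte read from the underlying reader
}

impl Buffer {
    fn consume_with(&mut self, amt: usize, mut visitor: impl FnMut(&[u8])) -> bool {
        if let Some(claimed) = self.buf[..self.filled].get(self.pos..self.pos + amt) {
            visitor(claimed);
            self.pos += amt; // one check covered both the read and the consume
            true
        } else {
            false
        }
    }
}
```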
Rollup of 8 pull requests
Successful merges:
- #98583 (Stabilize Windows `FileTypeExt` with `is_symlink_dir` and `is_symlink_file`)
- #99698 (Prefer visibility map parents that are not `doc(hidden)` first)
- #99700 (Add a clickable link to the layout section)
- #99712 (passes: port more of `check_attr` module)
- #99759 (Remove dead code from cg_llvm)
- #99765 (Don't build std for *-uefi targets)
- #99771 (Update pulldown-cmark version to 0.9.2 (fixes url encoding for some chars))
- #99775 (rustdoc: do not allocate String when writing path full name)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Stabilize Windows `FileTypeExt` with `is_symlink_dir` and `is_symlink_file`
These calls allow detecting whether a symlink is a file or a directory,
a distinction Windows maintains, and one important to software that
wants to do further operations on the symlink (e.g. removing it).
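A usage sketch of the now-stable API, following the removal example mentioned above:
```rust
#[cfg(windows)]
fn remove_symlink(path: &std::path::Path) -> std::io::Result<()> {
    use std::os::windows::fs::FileTypeExt;

    let ft = std::fs::symlink_metadata(path)?.file_type();
    if ft.is_symlink_dir() {
        std::fs::remove_dir(path) // directory symlinks are removed as directories
    } else if ft.is_symlink_file() {
        std::fs::remove_file(path)
    } else {
        Err(std::io::Error::new(std::io::ErrorKind::InvalidInput, "not a symlink"))
    }
}
```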
Optimized vec::IntoIter::next_chunk impl
```
x86_64v1, default
test vec::bench_next_chunk ... bench: 696 ns/iter (+/- 22)
x86_64v1, pr
test vec::bench_next_chunk ... bench: 309 ns/iter (+/- 4)
znver2, default
test vec::bench_next_chunk ... bench: 17,272 ns/iter (+/- 117)
znver2, pr
test vec::bench_next_chunk ... bench: 211 ns/iter (+/- 3)
```
On znver2 the default impl seems to be slow due to different inlining decisions. It goes through `core::array::iter_next_chunk`
which has a deep call tree.
codegen: use new {re,de,}allocator annotations in llvm
This obviates the patch that teaches LLVM internals about
_rust_{re,de}alloc functions by putting annotations directly in the IR
for the optimizer.
The sole test change is required to anchor FileCheck to the body of the
`box_uninitialized` method, so it doesn't see the `allocalign` on
`__rust_alloc` and get mad about the string `alloca` showing up. Since I
was there anyway, I added some checks on the attributes to prove the
right attributes got set.
r? `@nikic`
This obviates the patch that teaches LLVM internals about
_rust_{re,de}alloc functions by putting annotations directly in the IR
for the optimizer.
The sole test change is required to anchor FileCheck to the body of the
`box_uninitialized` method, so it doesn't see the `allocalign` on
`__rust_alloc` and get mad about the string `alloca` showing up. Since I
was there anyway, I added some checks on the attributes to prove the
right attributes got set.
While we're here, we also emit allocator attributes on
__rust_alloc_zeroed. This should allow LLVM to perform more
optimizations for zeroed blocks, and probably fixes #90032. [This
comment](https://github.com/rust-lang/rust/issues/24194#issuecomment-308791157)
mentions "weird UB-like behaviour with bitvec iterators in
rustc_data_structures" so we may need to back this change out if things
go wrong.
The new test cases require LLVM 15, so we copy them into LLVM
14-supporting versions, which we can delete when we drop LLVM 14.
This allows using most delay loaded functions before the init code initializes them. It also only preloads a select few functions, rather than all functions.
Co-Authored-By: Mark Rousskov <mark.simulacrum@gmail.com>
interpret, ptr_offset_from: refactor and test too-far-apart check
We didn't have any tests for the "too far apart" message, and indeed that check mostly relied on the in-bounds check and was otherwise probably not entirely correct... so I rewrote that check, and it now runs before the in-bounds check so we can test it separately.
Expose size_hint() for TokenStream's iterator
The iterator for `proc_macro::TokenStream` is a wrapper around a `Vec` iterator:
babff2211e/library/proc_macro/src/lib.rs (L363-L371)
so it can cheaply provide a perfectly precise size hint, with just a pointer subtraction:
babff2211e/library/alloc/src/vec/into_iter.rs (L170-L177)
I need the size hint in syn (https://github.com/dtolnay/syn/blob/1.0.98/src/buffer.rs) to reduce allocations when converting TokenStream into syn's internal TokenBuffer representation.
Aside from `size_hint`, the other non-default methods in `std::vec::IntoIter`'s `Iterator` impl are `advance_by`, `count`, and `__iterator_get_unchecked`. I've included `count` in this PR since it is trivial. I did not include `__iterator_get_unchecked` because it is spoopy and I did not feel like dealing with that. Lastly, I did not include `advance_by` because that requires `feature(iter_advance_by)` (https://github.com/rust-lang/rust/issues/77404) and I noticed this comment at the top of libproc_macro:
babff2211e/library/proc_macro/src/lib.rs (L20-L22)
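The change amounts to the usual delegation pattern (a sketch; the real type wraps the `Vec` iterator shown in the first link):
```rust
struct IntoIter<T>(std::vec::IntoIter<T>);

impl<T> Iterator for IntoIter<T> {
    type Item = T;

    fn next(&mut self) -> Option<T> {
        self.0.next()
    }

    // Forwarding these keeps the hint exact (a pointer subtraction) and makes
    // `count` trivial.
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.0.size_hint()
    }

    fn count(self) -> usize {
        self.0.count()
    }
}
```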
correct the output of a `capacity` method example
The output of this example in `std::alloc` is different from the one shown in the comment. I have tested it on both Linux and Windows.
Constify a few `(Partial)Ord` impls
Only a few `impl`s are constified for now, as #92257 has not landed in the bootstrap compiler yet and quite a few impls would need that fix.
This unblocks #92228, which unblocks marking iterator methods as `default_method_body_is_const`.
add miri-track-caller to more intrinsic-exposing methods
Follow-up to https://github.com/rust-lang/rust/pull/98674: I went through the Miri test suite to find more functions that would benefit from Miri backtrace pruning, and this is what I found.
Basically anything that just exposes a potentially-UB intrinsic to the user should get this treatment.
kmc-solid: Use `libc::abort` to abort a program
This PR updates the target-specific abort subroutine for the [`*-kmc-solid_*`](https://doc.rust-lang.org/nightly/rustc/platform-support/kmc-solid.html) Tier 3 targets.
The current implementation uses a `hlt` instruction, which is the most direct way to notify a connected debugger but is not the most flexible way. This PR changes it to call the `abort` libc function, making it possible for a system designer to override its behavior as they see fit.
Support vec zero-alloc optimization for tuples and byte arrays
* Implement IsZero trait for tuples up to 8 IsZero elements;
* Implement IsZero for u8/i8, leading to implementation of it for arrays of them too;
* Add more codegen tests for this optimization.
* Lower size of array for IsZero trait because it fails to inline checks
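Roughly how the tuple case works (a sketch; the real `IsZero` trait is std-internal and not public API):
```rust
trait IsZero {
    fn is_zero(&self) -> bool;
}

impl IsZero for u8 {
    fn is_zero(&self) -> bool {
        *self == 0
    }
}

// A tuple of zeroable elements is all-zero exactly when every element is,
// which lets `vec![(0u8, 0u8); n]`-style constructors take the zero-alloc
// (calloc-like) fast path.
impl<A: IsZero, B: IsZero> IsZero for (A, B) {
    fn is_zero(&self) -> bool {
        self.0.is_zero() && self.1.is_zero()
    }
}
```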
miri: prune some atomic operation and raw pointer details from stacktrace
Since Miri removes `track_caller` frames from the stacktrace, adding that attribute can help make backtraces more readable (similar to how it makes panic locations better). I made them only show up with `cfg(miri)` to make sure the extra arguments induced by `track_caller` do not cause any runtime performance trouble.
This is also testing the waters for whether the libs team is okay with having these attributes in their code, or whether you'd prefer if we find some other way to do this. If you are fine with this, we will probably want to add it to a lot more functions (all the other atomic operations, to start).
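The attribute pattern in question, gated so the extra caller-location argument only exists under Miri:
```rust
use std::sync::atomic::{AtomicUsize, Ordering};

#[cfg_attr(miri, track_caller)] // pruned frames then point at the user's call site
pub fn load_seqcst(x: &AtomicUsize) -> usize {
    x.load(Ordering::SeqCst)
}
```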
Before:
```
error: Undefined Behavior: Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
--> /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2594:23
|
2594 | SeqCst => intrinsics::atomic_load_seqcst(dst),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: inside `std::sync::atomic::atomic_load::<usize>` at /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2594:23
= note: inside `std::sync::atomic::AtomicUsize::load` at /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:1719:26
note: inside closure at ../miri/tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
--> ../miri/tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
|
22 | (&*c.0).load(Ordering::SeqCst)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
After:
```
error: Undefined Behavior: Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
--> tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
|
22 | (&*c.0).load(Ordering::SeqCst)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: inside closure at tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
```
Lock stdout once when listing tests
This is a marginal optimization for normal operation, but for `cargo miri nextest list` (which is invoked by `cargo miri nextest run`) this knocks the startup time on `regex` down from 87 seconds to 17 seconds. Still slow, but a nice 5x improvement.
Add cgroupv1 support to available_parallelism
Fixes #97549
My dev machine uses cgroup v2 so I was only able to test that code path. So the v1 code path is written only based on documentation. I could use some help testing that it works on a machine with cgroups v1:
```
$ x.py build --stage 1
# quota.rs
fn main() {
    println!("{:?}", std::thread::available_parallelism());
}
# assuming stage1 is linked in rustup
$ rustc +stage1 quota.rs
# spawn a new cgroup scope for the current user
$ sudo systemd-run -p CPUQuota="300%" --uid=$(id -u) -tdS
# should print Ok(3)
$ ./quota
```
If it doesn't work as expected, an strace log, the contents of `/proc/self/cgroups`, and the structure of `/sys/fs/cgroups` would help.
Add `[f32]::sort_floats` and `[f64]::sort_floats`
It's inconvenient to sort a slice or Vec of floats, compared to sorting integers. To simplify numeric code, add a convenience method to `[f32]` and `[f64]` to sort them using `sort_unstable_by` with `total_cmp`.
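Per the description, the new methods are essentially a shorthand for this:
```rust
fn sort_floats(v: &mut [f32]) {
    // `total_cmp` is a total order even in the presence of NaN, so no
    // `partial_cmp().unwrap()` panics are possible.
    v.sort_unstable_by(f32::total_cmp);
}
```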
Rename `<*{mut,const} T>::as_{const,mut}` to `cast_`
This renames the methods to use the `cast_` prefix instead of `as_` to
make it more readable and avoid confusion with `<*mut T>::as_mut()`
which is `unsafe` and returns a reference.
Sorry, didn't notice ACP process exists, opened https://github.com/rust-lang/libs-team/issues/51
See #92675
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and align from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now.
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
Fix the stable version of `AsFd for Arc<T>` and `Box<T>`
These impls were merged in #97437 for 1.64.0, separately from the main `io_safety`
feature that stabilized in 1.63.0.
std: use futex-based locks on Fuchsia
This switches `Condvar` and `RwLock` to the futex-based implementation currently used on Linux and some BSDs. Additionally, `Mutex` now has its own, priority-inheriting implementation based on the mutex in Fuchsia's `libsync`. It differs from the original in that it panics instead of aborting when reentrant locking is detected.
````@rustbot```` ping fuchsia
r? ````@m-ou-se````
stdlib support for Apple WatchOS
This is a follow-up to https://github.com/rust-lang/rust/pull/95243 (Add Apple WatchOS compiler targets) that adds stdlib support for Apple WatchOS.
`@deg4uss3r`
`@nagisa`
Windows: Use `FindFirstFileW` for getting the metadata of locked system files
Fixes #96980
Opening a file handle with access set to metadata only will usually succeed, even if the file is locked. However some special system files, such as `C:\hiberfil.sys`, are locked by the system in a way that denies even that. So as a fallback we try reading the cached metadata from the directory.
Note that the test is a bit iffy. I don't know if `hiberfil.sys` actually exists in the CI.
r? rust-lang/libs
Improve the function pointer docs
This is #97842 but for function pointers instead of tuples. The concept is basically the same.
* Reduce duplicate impls; show `fn (T₁, T₂, …, Tₙ)` and include a sentence saying that there exist up to twelve of them.
* Show `Copy` and `Clone`.
* Show auto traits like `Send` and `Sync`, and blanket impls like `Any`.
https://notriddle.com/notriddle-rustdoc-test/std/primitive.fn.html
core::any: replace some generic types with impl Trait
This gives a cleaner API, since the concrete type is usually the only one the caller wants to specify.
r? `@yaahc`
Fix `Skip::next` for non-fused inner iterators
`iter.skip(n).next()` will currently call `nth` and `next` in succession on `iter`, without checking whether `nth` exhausts the iterator. Using `?` to propagate a `None` value returned by `nth` avoids this.
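The shape of the fix described above (a sketch, not the actual `Skip` code):
```rust
fn skip_then_next<I: Iterator>(iter: &mut I, n: usize) -> Option<I::Item> {
    if n > 0 {
        // `?` propagates the `None` from `nth`, so a non-fused iterator that
        // was just exhausted is not polled again.
        iter.nth(n - 1)?;
    }
    iter.next()
}
```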
Use split_once in FromStr docs
Current implementation:
```rust
fn from_str(s: &str) -> Result<Self, Self::Err> {
    let coords: Vec<&str> = s.trim_matches(|p| p == '(' || p == ')')
        .split(',')
        .collect();
    let x_fromstr = coords[0].parse::<i32>()?;
    let y_fromstr = coords[1].parse::<i32>()?;
    Ok(Point { x: x_fromstr, y: y_fromstr })
}
```
Creating the vector is not necessary, `split_once` does the job better.
Alternatively we could also remove `trim_matches` with `strip_prefix` and `strip_suffix`:
```rust
let (x, y) = s
    .strip_prefix('(')
    .and_then(|s| s.strip_suffix(')'))
    .and_then(|s| s.split_once(','))
    .unwrap();
```
The question is how much 'correctness' is too much and distracts from the example. In a real implementation you would also not unwrap (or originally access the vector without bounds checks), but implementing a custom Error and adding a `From<ParseIntError>` and implementing the `Error` trait adds a lot of code to the example which is not relevant to the `FromStr` trait.
proc_macro/bridge: stop using a remote object handle for proc_macro Ident and Literal
This is the fourth part of https://github.com/rust-lang/rust/pull/86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. This patch transforms the `Ident` and `Group` types into structs serialized over IPC rather than handles.
Symbol values are interned on both the client and server when deserializing, to avoid unnecessary string copies and keep the size of `TokenTree` down. To do the interning efficiently on the client, the proc-macro crate is given a vendored version of the fxhash hasher, as `SipHash` appeared to cause performance issues. This was done rather than depending on `rustc_hash` as it is unfortunately difficult to depend on crates from within `proc_macro` due to it being built at the same time as `std`.
In addition, a custom arena allocator and symbol store was also added, inspired by those in `rustc_arena` and `rustc_span`. To prevent symbol re-use across multiple invocations of a macro on the same thread, a new range of `Symbol` names are used for each invocation of the macro, and symbols from previous invocations are cleaned-up.
In order to keep `Ident` creation efficient, a special ASCII-only case was added to perform ident validation without using RPC for simple identifiers. Full identifier validation couldn't be easily added, as it would require depending on the `rustc_lexer` and `unicode-normalization` crates from within `proc_macro`. Unicode identifiers are validated and normalized using RPC.
See the individual commit messages for more details on trade-offs and design decisions behind these patches.
This method is still only used for Literal::subspan, however the
implementation only depends on the Span component, so it is simpler and
more efficient for now to pass down only the information that is needed.
In the future, if more information about the Literal is required in the
implementation (e.g. to validate that spans line up as expected with
source text), that extra information can be added back with extra
arguments.
This builds on the symbol infrastructure built for `Ident` to replicate
the `LitKind` and `Lit` structures in rustc within the `proc_macro`
client, allowing literals to be fully created and interacted with from
the client thread. Only parsing and subspan operations still require
sync RPC.
Doing this for all unicode identifiers would require a dependency on
`unicode-normalization` and `rustc_lexer`, which is currently not
possible for `proc_macro` due to it being built concurrently with `std`
and `core`. Instead, ASCII identifiers are validated locally, and an RPC
message is used to validate unicode identifiers when needed.
String values are interned on both the server and client when
deserializing, to avoid unnecessary copies and keep Ident cheap to copy and
move. This appears to be important for performance.
The client-side interner is based roughly on the one from rustc_span, and uses
an arena inspired by rustc_arena.
RPC messages passing symbols always include the full value. This could
potentially be optimized in the future if it is revealed to be a
performance bottleneck.
Despite now having a relevant implementation of Display for Ident, ToString is
still specialized, as it is a hot-path for this object.
The symbol infrastructure will also be used for literals in the next
part.
Unfortunately, as it is difficult to depend on crates from within proc_macro,
this is done by vendoring a copy of the hasher as a module rather than
depending on the rustc_hash crate.
This probably doesn't have a substantial impact up-front, however will be more
relevant once symbols are interned within the proc_macro client.
add missing null ptr check in alloc example
`alloc` can return null on OOM, if I understood correctly. So we should never just deref a pointer we get from `alloc`.
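The pattern the example should follow (per the `std::alloc` docs): check for null and report allocation failure before touching the memory.
```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, Layout};

fn main() {
    let layout = Layout::new::<u32>();
    unsafe {
        let ptr = alloc(layout);
        if ptr.is_null() {
            // Never dereference a null return from `alloc`.
            handle_alloc_error(layout);
        }
        *(ptr as *mut u32) = 42;
        assert_eq!(*(ptr as *mut u32), 42);
        dealloc(ptr, layout);
    }
}
```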
Add assertion that `transmute_copy`'s U is not larger than T
This is called out as a safety requirement in the docs, but because this check can be done at compile time and constant-folded (just like the `align_of` branch is removed), we can just panic here.
I've looked at the asm (using `cargo-asm`) of functions for both the correct and the incorrect case, and the panic is either completely removed or made unconditional, without needing build-std.
I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.
That doesn't rule out it happening in crates tested that fail earlier for some other reason, though, but it indicates that doing this is rare, if it happens at all. A crater run for this would need to be build and test, since this is a runtime thing.
Also added a few more transmute_copy tests.
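A sketch of the kind of check being added (not the exact std code): the sizes are compile-time constants in each monomorphization, so the branch either folds away entirely or becomes an unconditional panic.
```rust
use std::mem;
use std::ptr;

pub unsafe fn transmute_copy_demo<T, U>(src: &T) -> U {
    assert!(
        mem::size_of::<U>() <= mem::size_of::<T>(),
        "cannot transmute_copy if U is larger than T"
    );
    // Read unaligned, since `U` may have stricter alignment than `T`.
    ptr::read_unaligned(src as *const T as *const U)
}
```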
Rearrange slice::split_mut to remove bounds check
Closes https://github.com/rust-lang/rust/issues/86313
Turns out that all we need to do here is reorder the bounds checks to convince LLVM that all the bounds checks can be removed. It seems like LLVM just fails to propagate the original length information past the first bounds check and into the second one. With this implementation it doesn't need to, each check can be proven inbounds based on the one immediately previous.
I've gradually convinced myself that this implementation is unambiguously better based on the above logic, but maybe this is still deserving of a codegen test?
Also the mentioned borrowck limitation no longer seems to exist.