Fix a bunch of typos
This PR will fix some typos detected by [typos].
I only picked the ones I was sure were spelling errors to fix, mostly in
the comments.
[typos]: https://github.com/crate-ci/typos
Before, the text simply asked people to use a symbol which is hard to
search for. Now the text links back to the chapter on error
propagation in The Book. That should help people find the relevant
keywords for further searches.
Strengthen invalid_value lint to forbid uninit primitives, adjust docs to say that's UB
For context: https://github.com/rust-lang/rust/issues/66151#issuecomment-1174477404
This does not make it a future-compatibility warning (FCW), but it does explicitly state in the docs that uninit integers are UB.
This also doesn't affect any runtime behavior: uninit `u32`s will still be successfully created through `mem::uninitialized`.
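To illustrate, a minimal sketch of the kind of code the strengthened lint flags (the `allow` is only there so the snippet compiles):

```rust
#![allow(invalid_value, deprecated)] // silence the very lint under discussion

use std::mem;

fn main() {
    // The strengthened lint denies this, and the docs now call it UB,
    // but at runtime an uninit `u32` is still created as before:
    let _x: u32 = unsafe { mem::uninitialized() };
}
```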
Rollup of 7 pull requests
Successful merges:
- #100898 (Do not report too many expr field candidates)
- #101056 (Add the syntax of references to their documentation summary.)
- #101106 (Rustdoc-Json: Retain Stripped Modules when they are imported, not when they have items)
- #101131 (CTFE: exposing pointers and calling extern fn is just impossible)
- #101141 (Simplify `get_trait_ref` fn used for `virtual_function_elimination`)
- #101146 (Various changes to logging of borrowck-related code)
- #101156 (Remove `Sync` requirement from lint pass objects)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Add the syntax of references to their documentation summary.
Without this change, in <https://doc.rust-lang.org/1.63.0/std/#primitives>, `reference` is the only entry in that list which does not contain the syntax by which the type is named in source code. With this change, it contains them, in roughly the same way as the `pointer` entry does.
Make use of `[wrapping_]byte_{add,sub}`
These new methods trivially replace old `.cast().wrapping_offset().cast()` & similar code.
Note that [`arith_offset`](https://doc.rust-lang.org/std/intrinsics/fn.arith_offset.html) and `wrapping_offset` are the same thing.
r? ``@scottmcm``
_split off from #100746_
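As a rough illustration of the rewrite (a sketch; `wrapping_byte_add` was feature-gated at the time and has since stabilized):

```rust
fn main() {
    let x = 0u16;
    let p: *const u16 = &x;
    // old pattern: cast to bytes, offset, cast back
    let a = p.cast::<u8>().wrapping_offset(4).cast::<u16>();
    // new method: the same four-byte step, stated directly
    let b = p.wrapping_byte_add(4);
    assert_eq!(a, b);
}
```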
Add next_up and next_down for f32/f64 - take 2
This is a revival of https://github.com/rust-lang/rust/pull/88728, which stalled due to inactivity from the original author. I've addressed the last review comment.
---
This is a pull request implementing the features described at https://github.com/rust-lang/rfcs/pull/3173.
`@rustbot` label +T-libs-api -T-libs
r? `@scottmcm`
cc `@orlp`
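A quick sketch of the behaviour being added (gated at the time; the values follow from IEEE 754 spacing around 1.0):

```rust
fn main() {
    let x = 1.0f32;
    // EPSILON is exactly the gap between 1.0 and the next float up...
    assert_eq!(x.next_up(), 1.0 + f32::EPSILON);
    // ...while the spacing just below 1.0 is half as large.
    assert_eq!(x.next_down(), 1.0 - f32::EPSILON / 2.0);
    // Stepping up from MAX yields infinity.
    assert_eq!(f32::MAX.next_up(), f32::INFINITY);
}
```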
Some papercuts on error::Error
Renames the chain method, since `chain` could mean anything and doesn't refer to a chain of sources (cc #58520) (and adds a comment explaining why `sources` is not a provided method on `Error`). Renames arguments to the request method from `req` to `demand`, since the type is `Demand` rather than `Request` or `Requisition`.
r? ``@yaahc``
Add pointer masking convenience functions
This PR adds the following public API:
```rust
impl<T: ?Sized> *const T {
    fn mask(self, mask: usize) -> *const T;
}

impl<T: ?Sized> *mut T {
    fn mask(self, mask: usize) -> *mut T;
}

// mod intrinsics
fn mask<T>(ptr: *const T, mask: usize) -> *const T
```
This is equivalent to `ptr.map_addr(|a| a & mask)` but also uses a cool LLVM intrinsic.
Proposed in https://github.com/rust-lang/rust/pull/95643#issuecomment-1121562352
cc `@Gankra` `@scottmcm` `@RalfJung`
r? rust-lang/libs-api
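For illustration, a nightly-only sketch of the new method aligning a pointer downwards (gate name as in this PR):

```rust
#![feature(ptr_mask)]

fn main() {
    let v = 0u64;
    let p: *const u64 = &v;
    // Clear the low four address bits, i.e. round down to a 16-byte
    // boundary; equivalent to p.map_addr(|a| a & !0xf).
    let aligned = p.mask(!0xf);
    assert_eq!(aligned as usize % 16, 0);
}
```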
Update documentation for `write!` and `writeln!`
https://github.com/rust-lang/rust/pull/37472 added this documentation, but it
needs updating:
- Remove some documentation duplicated between `writeln!` and `write!`
- Update `write!` docs: can now import traits as `_` to avoid conflicts
- Expand the example to show how to use fully qualified trait names
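For instance, the `use ... as _` pattern mentioned above looks like this (minimal sketch):

```rust
use std::fmt::Write as _; // bring the trait into scope without naming it

fn main() {
    let mut s = String::new();
    write!(s, "{} + {} = {}", 1, 2, 1 + 2).unwrap();
    writeln!(s, "!").unwrap();
    assert_eq!(s, "1 + 2 = 3!\n");
}
```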
Reduce code size of `assert_matches_failed`
Using `write_str` instead of `<str as Display>::fmt` avoids the `pad` function which is very expensive to have in size-constrained code.
make slice::{split_at,split_at_unchecked} const functions
Now that `slice::from_raw_parts` is const in stable 1.64, it makes sense to have `split_at` const as well, otherwise unsafe code is required to achieve a const equivalent.
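A minimal sketch of what this enables, on a toolchain where `split_at` is const (it was feature-gated at the time):

```rust
// Split a slice in half entirely at compile time, no unsafe required.
const fn halves(s: &[u8]) -> (&[u8], &[u8]) {
    s.split_at(s.len() / 2)
}

const HALVES: (&[u8], &[u8]) = halves(&[1, 2, 3, 4]);

fn main() {
    assert_eq!(HALVES, (&[1u8, 2][..], &[3u8, 4][..]));
}
```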
is_whitespace() performance improvements
This is my first rust PR, so if I miss anything obvious please let me know and I'll do my best to fix it.
This was a bit more of a challenge than I realized because, while I made working code locally and tested it against the native `is_whitespace()`, this PR required changing `src/tools/unicode-table-generator`, the code that generated the code.
I have benchmarked this locally, using criterion, and have seen meaningful performance improvements. I can add those outputs to this if you'd like, but am guessing that the perf run that `@fmease` recommended is what's needed.
I have run `./x.py test --stage 0 library/std` after building locally with `./x.py build library`. I didn't try to build the whole compiler, but maybe I should have - any guidance would be appreciated.
If this general approach makes sense, I'll take a look at some other candidate categories, e.g., `Cc`, in the future.
Oh, and I wasn't sure whether the generated code should be included in this PR or not. I did include it.
Update stdarch submodule
Changes from stdarch:
* Fix links in documentation of cmpxchg16b
* Use load intrinsic and loop for intrinsic-test programs. Add --release flag back to intrinsic-test programs.
* Properly fix vext intrinsic tests
* Replace some calls to `pointer::offset` with `add` and `sub`
* Allow internal use of stdsimd from detect_feature
* fix target name in contributing.md
* Tweak constant for ARM vext instruction tests
* Use `llvm.ppc.altivec.lvx` intrinsic for `vec_ld`
* Adding doc links for arm neon intrinsics
* Adding doc links for arm crypto and aes intrinsics
* Remove instruction tests for `__mmask*` intrinsics
* Update ubuntu 21.10 docker containers to 22.04
* Adding documentation links for arm crc32 intrinsics
* Remove restrictions on compare-exchange memory ordering.
* Fix a typo in the document.
* Allow mapping a runtime feature to a set of target_features
* Update atomic intrinsics
* Fully qualify recursive macro calls
* Ensure the neon vector aggregates like `float32x4x4_t` are `#[repr(C)]`
* Remove useless conditional compilation
* Fix ARM vbsl* NEON intrinsics
r? `@Amanieu`
Rewrite error index generator to greatly reduce the size of the pages
Fixes https://github.com/rust-lang/rust/issues/100736.
Instead of having all error codes on the same page (making the DOM way too big), I split the output into multiple files and generated, in the already existing file, a list of links to each error code's explanation (when one exists).
I also used this opportunity to greatly simplify the code. Instead of needing a `build.rs`, I simply imported the file we want and wrote the macro which generates a function containing everything we need. We just need to call it to get the error codes and their explanation (if any). Also, since the markdown and HTML implementations diverged even further, the `Formatter` trait was becoming too problematic, so I removed it too.
You can test it [here](https://rustdoc.crud.net/imperio/rewrite-error-index/error-index.html).
cc ``@jsha``
r? ``@notriddle``
Move Error trait into core
This PR moves the error trait from the standard library into a new unstable `error` module within the core library. The goal of this PR is to help unify error reporting across the std and no_std ecosystems, as well as open the door to integrating the error trait into the panic reporting system when reporting panics whose source is an error (such as via `expect`).
This PR is a rewrite of https://github.com/rust-lang/rust/pull/90328 using new compiler features that have been added to support error in core.
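A sketch of what this unlocks for no_std crates (behind the `error_in_core` gate as of this PR):

```rust
#![no_std]
#![feature(error_in_core)] // gate name as of this PR

use core::error::Error;
use core::fmt;

#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("something went wrong")
    }
}

// Previously impossible without std: the trait now lives in core.
impl Error for MyError {}
```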
While the `provide_*` methods already short-circuit when a value has
been provided, there are times where an expensive computation is
needed to determine if the `provide_*` method can even be called.
Use pointer `is_aligned*` methods
This PR replaces some manual alignment checks with calls to `pointer::{is_aligned, is_aligned_to}` and removes a useless pointer cast.
r? `@scottmcm`
_split off from #100746_
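Roughly, the shape of the replacement (a sketch; these methods were feature-gated at the time):

```rust
#![feature(pointer_is_aligned)] // gate name as of this PR

fn main() {
    let x = 0u32;
    let p: *const u32 = &x;
    // Replaces manual checks like `(p as usize) % mem::align_of::<u32>() == 0`:
    assert!(p.is_aligned());
    assert!(p.is_aligned_to(2));
}
```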
Std module docs improvements
My primary goal is to create a cleaner separation between primitive types and primitive type helper modules (fixes #92777). I also changed a few header lines in other top-level std modules (seen at https://doc.rust-lang.org/std/) for consistency.
Some conventions used/established:
* "The \`Box\<T>` type for heap allocation." - if a module mainly provides a single type, name it and summarize its purpose in the module header
* "Utilities for the _ primitive type." - this wording is used for the header of helper modules
* Documentation for primitive types themselves are removed from helper modules
* provided-by-core functionality of primitive types is documented in the primitive type instead of the helper module (such as the "Iteration" section in the slice docs)
I wonder if some content in `std::ptr` should be in `pointer` but I did not address this.
Replace most uses of `pointer::offset` with `add` and `sub`
As PR title says, it replaces `pointer::offset` in compiler and standard library with `pointer::add` and `pointer::sub`. This generally makes code cleaner, easier to grasp and removes (or, well, hides) integer casts.
This is generally trivially correct, `.offset(-constant)` is just `.sub(constant)`, `.offset(usized as isize)` is just `.add(usized)`, etc. However in some cases we need to be careful with signs of things.
r? `@scottmcm`
_split off from #100746_
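The flavour of the mechanical rewrites, as a sketch:

```rust
fn main() {
    let arr = [1u8, 2, 3, 4];
    let p = arr.as_ptr();
    unsafe {
        // before: signs and casts to keep track of
        let _a = p.offset(2);
        let _b = p.offset(4).offset(-2);
        // after: the direction is in the method name
        let _c = p.add(2);
        let _d = p.add(4).sub(2);
    }
}
```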
Make some docs nicer wrt pointer offsets
This PR replaces `pointer::offset` with `pointer::add` and similarly `.cast().wrapping_add().cast()` with `.wrapping_byte_add()` **in docs**.
r? `@scottmcm`
_split off from #100746_
Clamp Function for f32 and f64
I thought the clamp function could use a little improvement for readability purposes. The function now returns early in order to skip the extra bounds checks.
If there was a reason for binding `self` to `x` or if this code is incorrect, please correct me :)
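For concreteness, a sketch of the early-return formulation described above (assumed shape, not necessarily the exact body that landed):

```rust
fn clamp(x: f32, min: f32, max: f32) -> f32 {
    assert!(min <= max, "min > max, or either was NaN");
    if x < min {
        return min; // below the range: no need to also test `max`
    }
    if x > max {
        return max;
    }
    x
}

fn main() {
    assert_eq!(clamp(3.0, 0.0, 1.0), 1.0);
}
```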
This commit adds the following functions all of which have a signature
`pointer, usize -> pointer`:
- `<*mut T>::mask`
- `<*const T>::mask`
- `intrinsics::ptr_mask`
These functions are equivalent to `.map_addr(|a| a & mask)` but they
utilize the `llvm.ptrmask` LLVM intrinsic.
*masks your pointers*
Fix trailing space showing up in example
The current text is rendered as: U+005B ..= U+0060 ``[ \ ] ^ _ ` ``, or (**note the final space!**)
This patch changes that to render as: U+005B ..= U+0060 `` [ \ ] ^ _ ` ``, or (**note no final space!**)
The reason is that CommonMark has a solution for starting or ending inline code with a backtick/grave accent: padding both sides with a space makes that padding disappear.
Expose `Utf8Lossy` as `Utf8Chunks`
This PR changes the feature for `Utf8Lossy` from `str_internals` to `utf8_lossy` and improves the API. This is done to eventually expose the API as stable.
Proposal: rust-lang/libs-team#54
Tracking Issue: #99543
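A sketch of how the improved API reads, assuming the `utf8_chunks()` shape that was eventually stabilized:

```rust
fn main() {
    let bytes = b"foo\xF1\x80bar"; // invalid UTF-8 in the middle
    let mut out = String::new();
    for chunk in bytes.utf8_chunks() {
        out.push_str(chunk.valid()); // longest valid run
        if !chunk.invalid().is_empty() {
            out.push('\u{FFFD}'); // one replacement char per broken sequence
        }
    }
    assert_eq!(out, "foo\u{FFFD}bar");
}
```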
Refactor iteration logic in the `Flatten` and `FlatMap` iterators
The `Flatten` and `FlatMap` iterators both delegate to `FlattenCompat`:
```rust
struct FlattenCompat<I, U> {
    iter: Fuse<I>,
    frontiter: Option<U>,
    backiter: Option<U>,
}
```
Every individual iterator method that `FlattenCompat` implements needs to carefully manage this state, checking whether the `frontiter` and `backiter` are present, and storing the current iterator appropriately if iteration is aborted. This has led to methods such as `next`, `advance_by`, and `try_fold` all having similar code for managing the iterator's state.
I have extracted this common logic of iterating the inner iterators with the option to exit early into an `iter_try_fold` method:
```rust
impl<I, U> FlattenCompat<I, U>
where
    I: Iterator<Item: IntoIterator<IntoIter = U>>,
{
    fn iter_try_fold<Acc, Fold, R>(&mut self, acc: Acc, fold: Fold) -> R
    where
        Fold: FnMut(Acc, &mut U) -> R,
        R: Try<Output = Acc>,
    { ... }
}
```
It passes each of the inner iterators to the given function as long as it keeps succeeding. It takes care of managing `FlattenCompat`'s state, so that the actual `Iterator` methods don't need to. The resulting code that makes use of this abstraction is much more straightforward:
```rust
fn next(&mut self) -> Option<U::Item> {
    #[inline]
    fn next<U: Iterator>((): (), iter: &mut U) -> ControlFlow<U::Item> {
        match iter.next() {
            None => ControlFlow::CONTINUE,
            Some(x) => ControlFlow::Break(x),
        }
    }
    self.iter_try_fold((), next).break_value()
}
```
Note that despite being implemented in terms of `iter_try_fold`, `next` is still able to benefit from `U`'s `next` method. It therefore does not take the performance hit that implementing `next` directly in terms of `Self::try_fold` causes (in some benchmarks).
This PR also adds `iter_try_rfold` which captures the shared logic of `try_rfold` and `advance_back_by`, as well as `iter_fold` and `iter_rfold` for folding without early exits (used by `fold`, `rfold`, `count`, and `last`).
Benchmark results:
```
                                         before               after
bench_flat_map_sum                      423,255 ns/iter     414,338 ns/iter
bench_flat_map_ref_sum                1,942,139 ns/iter   2,216,643 ns/iter
bench_flat_map_chain_sum              1,616,840 ns/iter   1,246,445 ns/iter
bench_flat_map_chain_ref_sum          4,348,110 ns/iter   3,574,775 ns/iter
bench_flat_map_chain_option_sum         780,037 ns/iter     780,679 ns/iter
bench_flat_map_chain_option_ref_sum   2,056,458 ns/iter     834,932 ns/iter
```
I added the last two benchmarks specifically to demonstrate an extreme case where `FlatMap::next` can benefit from custom internal iteration of the outer iterator, so take it with a grain of salt. We should probably do a perf run to see if the changes to `next` are worth it in practice.
I noticed in the MIR for <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=67097e0494363ee27421a4e3bdfaf513> that it had inlined most stuff
```
scope 5 (inlined <Result<i32, u32> as Try>::branch)
```
```
scope 8 (inlined <Result<i32, u32> as Try>::from_output)
```
And yet the do-nothing `from` call was still there:
```
_17 = <u32 as From<u32>>::from(move _18) -> bb9;
```
So let's give this a try and see what perf has to say.
Don't derive `PartialEq::ne`.
Currently we skip deriving `PartialEq::ne` for C-like (fieldless) enums
and empty structs, thus relying on the default `ne`. This behaviour is
unnecessarily conservative, because the `PartialEq` docs say this:
> Implementations must ensure that eq and ne are consistent with each other:
>
> `a != b` if and only if `!(a == b)` (ensured by the default
> implementation).
This means that the default implementation (`!(a == b)`) is always good
enough. So this commit changes things such that `ne` is never derived.
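Conceptually, the derive now expands to `eq` only (sketch):

```rust
struct Point {
    x: i32,
    y: i32,
}

// What `#[derive(PartialEq)]` emits after this change, in spirit:
impl PartialEq for Point {
    fn eq(&self, other: &Self) -> bool {
        self.x == other.x && self.y == other.y
    }
    // No `ne`: the default `!self.eq(other)` is always consistent.
}
```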
The motivation for this change is that not deriving `ne` reduces compile
times and binary sizes.
Observable behaviour may change if a user has defined a type `A` with an
inconsistent `PartialEq` and then defines a type `B` that contains an
`A` and also derives `PartialEq`. Such code is already buggy and
preserving bug-for-bug compatibility isn't necessary.
Two side-effects of the change:
- There is only one error message produced for types where `PartialEq`
cannot be derived, instead of two.
- For coverage reports, some warnings about generated `ne` methods not
being executed have disappeared.
Both side-effects seem fine, and possibly preferable.
Simple Clamp Function
I thought this was more robust and easier to read. I also allowed this function to return early in order to skip the extra bounds check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.
Simple Clamp Function for f64
I thought this was more robust and easier to read. I also allowed this function to return early in order to skip the extra bounds check (I'm sure the difference is negligible). I'm not sure if there was a reason for binding `self` to `x`; if so, please correct me.
Floating point clamp test
f32 clamp using mut self
f64 clamp using mut self
Update library/core/src/num/f32.rs
Update f64.rs
Update x86_64-floating-point-clamp.rs
Update src/test/assembly/x86_64-floating-point-clamp.rs
Update x86_64-floating-point-clamp.rs
Co-Authored-By: scottmcm <scottmcm@users.noreply.github.com>
Update the minimum external LLVM to 13
With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.
r? `@nagisa`
Add `Iterator::array_chunks` (take N+1)
A revival of https://github.com/rust-lang/rust/pull/92393.
r? `@Mark-Simulacrum`
cc `@rossmacarthur` `@scottmcm` `@the8472`
I've tried to address most of the review comments on the previous attempt. The only thing I didn't address is the `try_fold` implementation; I've left the "custom" one for now, not sure what exactly it should use.
make raw_eq precondition more restrictive
Specifically, don't allow comparing pointers that way. Comparing pointers is subtle because you have to talk about what happens to the provenance.
This matches what [Miri already implements](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=9eb1dfb8a61b5a2d4a7cee43df2717af), and all existing users are fine with this.
If raw_eq on pointers is ever desired, we can adjust the intrinsic spec and Miri implementation as needed, but for now that just seems unnecessary. Also, this is a const intrinsic, and in const, comparing pointers this way is *not possible* -- so if we allow the intrinsic to compare pointers in general, we need to impose an extra restriction saying that in const-context, pointers are *not* okay.
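To make the tightened precondition concrete, a nightly-only sketch:

```rust
#![feature(core_intrinsics)]

use std::intrinsics::raw_eq;

fn main() {
    let a = [1u8, 2, 3, 4];
    let b = [1u8, 2, 3, 4];
    // Fine: fully initialized integer bytes, no provenance involved.
    assert!(unsafe { raw_eq(&a, &b) });
    // Out of scope now: comparing values that contain pointers, e.g.
    // `raw_eq(&(&a as *const _), &(&b as *const _))`.
}
```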
Reoptimize layout array
This way it's one check instead of two, so hopefully (cc #99117) it'll be simpler for rustc perf too 🤞
Quick demonstration:
```rust
pub fn demo(n: usize) -> Option<Layout> {
    Layout::array::<i32>(n).ok()
}
```
Nightly: <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=e97bf33508aa03f38968101cdeb5322d>
```nasm
mov rax, rdi
mov ecx, 4
mul rcx
seto cl
movabs rdx, 9223372036854775805
xor esi, esi
cmp rax, rdx
setb sil
shl rsi, 2
xor edx, edx
test cl, cl
cmove rdx, rsi
ret
```
This PR (note no `mul`, in addition to being much shorter):
```nasm
xor edx, edx
lea rax, [4*rcx]
shr rcx, 61
sete dl
shl rdx, 2
ret
```
This is built atop `@CAD97`'s #99136; the new changes are cb8aba66ef6a0e17f08a0574e4820653e31b45a0.
I added a bunch more tests for `Layout::from_size_align` and `Layout::array` too.
Inline CStr::from_bytes_with_nul_unchecked::rt_impl
Currently `CStr::from_bytes_with_nul_unchecked::rt_impl` is not being inlined. The following function:
```rust
pub unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) {
    CStr::from_bytes_with_nul_unchecked(bytes);
}
```
outputs the following assembly on current nightly:
```asm
example::from_bytes_with_nul_unchecked:
jmp qword ptr [rip + _ZN4core3ffi5c_str4CStr29from_bytes_with_nul_unchecked7rt_impl17h026f29f3d6a41333E@GOTPCREL]
```
Meanwhile on beta this provides the following assembly:
```asm
example::from_bytes_with_nul_unchecked:
ret
```
This pull request adds an `#[inline]` annotation to `rt_impl` to fix a code generation regression for `CStr::from_bytes_with_nul_unchecked`.
docs: remove repetition in `is_numeric` function docs
In https://github.com/rust-lang/rust/pull/99628 we introduced new docs for the `is_numeric` function; this is a follow-up PR that removes some unnecessary repetition introduced by rebasing.
`@rustbot` r? `@joshtriplett`
Add back Send and Sync impls on ChunksMut iterators
Fixes https://github.com/rust-lang/rust/issues/100014
These were accidentally removed in #94247 because the representation was changed from `&mut [T]` to `*mut T`, which has `!Send + !Sync`.
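The shape of the restored impls, sketched with a stand-in for the new raw-pointer representation:

```rust
use std::marker::PhantomData;

// Stand-in for the iterator's `*mut T`-based representation.
#[allow(dead_code)]
struct ChunksMut<'a, T> {
    ptr: *mut T,
    len: usize,
    _marker: PhantomData<&'a mut T>,
}

// Raw pointers are `!Send + !Sync`, so these must be written back by hand.
// Safety (as in the fix): the iterator only hands out disjoint `&mut` chunks.
unsafe impl<'a, T: Send> Send for ChunksMut<'a, T> {}
unsafe impl<'a, T: Sync> Sync for ChunksMut<'a, T> {}
```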
do not claim that transmute is like memcpy
Saying transmute is like memcpy is not a well-formed statement, since memcpy is by-ref whereas transmute is by-val. The by-val nature of transmute inherently means that padding is lost along the way. (This is not specific to transmute, this is how all by-value operations work.) So adjust the docs to clarify this aspect.
Cc `@workingjubilee`
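A sketch of the distinction the new wording draws (hypothetical `Padded` type):

```rust
use std::mem::MaybeUninit;
use std::ptr;

#[repr(C)]
struct Padded {
    a: u8, // one padding byte follows `a`
    b: u16,
}

fn main() {
    let p = Padded { a: 1, b: 2 };

    // By-ref, memcpy-style: an untyped copy of all 4 bytes into storage
    // that is allowed to hold uninitialized (padding) bytes.
    let mut buf = [MaybeUninit::<u8>::uninit(); 4];
    unsafe {
        ptr::copy_nonoverlapping(&p as *const Padded as *const u8, buf.as_mut_ptr().cast(), 4);
    }

    // By-value transmute: the padding byte of the result would be
    // uninitialized, making the `[u8; 4]` invalid -- exactly why
    // "transmute is like memcpy" was never a well-formed claim.
    // let bytes: [u8; 4] = unsafe { std::mem::transmute(p) };
}
```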
mem::uninitialized: mitigate many incorrect uses of this function
Alternative to https://github.com/rust-lang/rust/pull/98966: fill memory with `0x01` rather than leaving it uninit. This is definitely bitwise valid for all `bool` and nonnull types, and also those `Option<&T>` that we started putting `noundef` on. However it is still invalid for `char` and some enums, and on references the `dereferenceable` attribute is still violated, so the generated LLVM IR still has UB -- but in fewer cases, and `dereferenceable` is hopefully less likely to cause problems than clearly incorrect range annotations.
This can make using `mem::uninitialized` a lot slower, but that function has been deprecated for years and we keep telling everyone to move to `MaybeUninit` because it is basically impossible to use `mem::uninitialized` correctly. For the cases where that hasn't helped (and all the old code out there that nobody will ever update), we can at least mitigate the effect of using this API. Note that this is *not* in any way a stable guarantee -- it is still UB to call `mem::uninitialized::<bool>()`, and Miri will call it out as such.
This is somewhat similar to https://github.com/rust-lang/rust/pull/87032, which proposed to make `uninitialized` return a buffer filled with 0x00. However:
- That PR also proposed to reduce the situations in which we panic, which I don't think we should do at this time.
- The 0x01 bit pattern means that nonnull requirements are satisfied, which (due to references) is the most common validity invariant.
`@5225225` I hope I am using `cfg(sanitize)` the right way; I was not sure for which ones to test here.
Cc https://github.com/rust-lang/rust/issues/66151
Fixes https://github.com/rust-lang/rust/issues/87675
This initial implementation handles transmutations between types with specified layouts, except when references are involved.
Co-authored-by: Igor null <m1el.2027@gmail.com>
Fix slice::ChunksMut aliasing
Fixes https://github.com/rust-lang/rust/issues/94231, details in that issue.
cc `@RalfJung`
This isn't done just yet; all the safety comments are placeholders. But otherwise, it seems to work.
I don't really like this approach though. There's a lot of unsafe code where there wasn't before, but as far as I can tell the only other way to uphold the aliasing requirement imposed by `__iterator_get_unchecked` is to use raw slices, which I think require the same amount of unsafe code. All that would do is tie the `len` and `ptr` fields together.
Oh I just looked and I'm pretty sure that `ChunksExactMut`, `RChunksMut`, and `RChunksExactMut` also need to be patched. Even more reason to put up a draft.
interpret, ptr_offset_from: refactor and test too-far-apart check
We didn't have any tests for the "too far apart" message, and indeed that check mostly relied on the in-bounds check and was otherwise probably not entirely correct... so I rewrote that check, and it now runs before the in-bounds check so we can test it separately.
Constify a few `(Partial)Ord` impls
Only a few `impl`s are constified for now, as #92257 has not landed in the bootstrap compiler yet and quite a few impls would need that fix.
This unblocks #92228, which unblocks marking iterator methods as `default_method_body_is_const`.
miri: prune some atomic operation and raw pointer details from stacktrace
Since Miri removes `track_caller` frames from the stacktrace, adding that attribute can help make backtraces more readable (similar to how it makes panic locations better). I made them only show up with `cfg(miri)` to make sure the extra arguments induced by `track_caller` do not cause any runtime performance trouble.
This is also testing the waters for whether the libs team is okay with having these attributes in their code, or whether you'd prefer if we find some other way to do this. If you are fine with this, we will probably want to add it to a lot more functions (all the other atomic operations, to start).
Before:
```
error: Undefined Behavior: Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
--> /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2594:23
|
2594 | SeqCst => intrinsics::atomic_load_seqcst(dst),
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: inside `std::sync::atomic::atomic_load::<usize>` at /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:2594:23
= note: inside `std::sync::atomic::AtomicUsize::load` at /home/r/.rustup/toolchains/miri/lib/rustlib/src/rust/library/core/src/sync/atomic.rs:1719:26
note: inside closure at ../miri/tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
--> ../miri/tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
|
22 | (&*c.0).load(Ordering::SeqCst)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
After:
```
error: Undefined Behavior: Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
--> tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
|
22 | (&*c.0).load(Ordering::SeqCst)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Data race detected between Atomic Load on Thread(id = 2) and Write on Thread(id = 1) at alloc1727 (current vector clock = VClock([9, 0, 6]), conflicting timestamp = VClock([0, 6]))
|
= help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
= help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
= note: inside closure at tests/fail/data_race/atomic_read_na_write_race1.rs:22:13
```
Add `[f32]::sort_floats` and `[f64]::sort_floats`
It's inconvenient to sort a slice or Vec of floats, compared to sorting integers. To simplify numeric code, add a convenience method to `[f32]` and `[f64]` to sort them using `sort_unstable_by` with `total_cmp`.
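Illustrative use, behind the `sort_floats` gate:

```rust
#![feature(sort_floats)]

fn main() {
    let mut v = [2.6f32, -5e-8, f32::NAN, 8.29, f32::INFINITY, -1.0];
    v.sort_floats(); // same order as sort_unstable_by(f32::total_cmp)
    assert_eq!(v[0], -1.0);
    assert!(v[5].is_nan()); // a positive NaN sorts last under total_cmp
}
```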
Rename `<*{mut,const} T>::as_{const,mut}` to `cast_`
This renames the methods to use the `cast_` prefix instead of `as_` to
make it more readable and avoid confusion with `<*mut T>::as_mut()`
which is `unsafe` and returns a reference.
Sorry, I didn't notice the ACP process exists; opened https://github.com/rust-lang/libs-team/issues/51
See #92675
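After the rename, the pair reads as (sketch):

```rust
fn main() {
    let mut x = 0i32;
    let p: *mut i32 = &mut x;
    let c: *const i32 = p.cast_const(); // was `<*mut T>::as_const`
    let _m: *mut i32 = c.cast_mut();    // was `<*const T>::as_mut`
}
```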
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and alignment from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now.
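For example, the `ptr::metadata` path that now goes through the new intrinsics (nightly-only sketch):

```rust
#![feature(ptr_metadata)]

use std::fmt::Debug;
use std::ptr;

fn main() {
    let x: &dyn Debug = &1u32;
    // `DynMetadata` reports size/alignment out of the vtable; with this
    // change those reads are intrinsics rather than raw memory loads.
    let meta = ptr::metadata(x as *const dyn Debug);
    assert_eq!(meta.size_of(), 4);
    assert_eq!(meta.align_of(), 4);
}
```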
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
Changed wording in sections on "Reflexivity":
replaced "that is there is" with "i.e. there would be" and removed comma
before "with"
Reason: "there is" somewhat contradicted the "would be" hypothetical.
A slightly redundant wording has now been chosen for better clarity.
The comma seemed to be superfluous.
Improve the function pointer docs
This is #97842 but for function pointers instead of tuples. The concept is basically the same.
* Reduce duplicate impls; show `fn (T₁, T₂, …, Tₙ)` and include a sentence saying that there exist up to twelve of them.
* Show `Copy` and `Clone`.
* Show auto traits like `Send` and `Sync`, and blanket impls like `Any`.
https://notriddle.com/notriddle-rustdoc-test/std/primitive.fn.html
core::any: replace some generic types with impl Trait
This gives a cleaner API, since the caller only has to specify the concrete type they actually care about.
r? `@yaahc`
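The general pattern, shown on a hypothetical function rather than the actual diff:

```rust
use std::fmt::Display;

// before: two type parameters, though callers only ever name `T`
fn describe_generic<T: Display, S: Display>(value: T, source: S) -> String {
    format!("{value} from {source}")
}

// after: `impl Trait` hides the parameter nobody specifies explicitly
fn describe_impl<T: Display>(value: T, source: impl Display) -> String {
    format!("{value} from {source}")
}

fn main() {
    assert_eq!(describe_generic(1, "a"), describe_impl(1, "a"));
}
```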
- Explicitly mention that `AsRef` and `AsMut` do not auto-dereference
  generally for all dereferenceable types (but only if the inner type is
  a shared and/or mutable reference)
- Give advice to not use `AsRef` or `AsMut` for the sole purpose of
dereferencing
- Suggest providing a transitive `AsRef` or `AsMut` implementation for
types which implement `Deref`
- Add new section "Reflexivity" in documentation comments for `AsRef`
and `AsMut`
- Provide better example for `AsMut`
- Added heading "Relation to `Borrow`" in `AsRef`'s docs to improve
structure
Issue #45742 and a corresponding FIXME in libcore suggest that
`AsRef` and `AsMut` should provide a blanket implementation over
`Deref`. As that is difficult to realize at the moment, this commit
updates the documentation to better describe the status-quo and to give
advice on how to use `AsRef` and `AsMut`.
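A sketch of the transitive implementation the updated docs suggest, for a hypothetical `Deref` type:

```rust
use std::ops::Deref;

struct Wrapper(String);

impl Deref for Wrapper {
    type Target = String;
    fn deref(&self) -> &String {
        &self.0
    }
}

// Forward `AsRef` through the inner type rather than through `Deref`:
impl<T: ?Sized> AsRef<T> for Wrapper
where
    String: AsRef<T>,
{
    fn as_ref(&self) -> &T {
        self.0.as_ref()
    }
}

fn takes_str(s: impl AsRef<str>) -> usize {
    s.as_ref().len()
}

fn main() {
    assert_eq!(takes_str(Wrapper("hi".into())), 2);
}
```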
Fix `Skip::next` for non-fused inner iterators
`iter.skip(n).next()` will currently call `nth` and `next` in succession on `iter`, without checking whether `nth` exhausts the iterator. Using `?` to propagate a `None` value returned by `nth` avoids this.
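A standalone sketch of the fix (assumed shape, not the exact std body):

```rust
// Propagate a `None` from `nth` instead of unconditionally calling `next`.
fn skip_then_next<I: Iterator>(iter: &mut I, n: usize) -> Option<I::Item> {
    if n > 0 {
        iter.nth(n - 1)?; // `?` stops here if skipping exhausted the iterator
    }
    iter.next()
}

fn main() {
    let mut it = [1, 2, 3].into_iter();
    assert_eq!(skip_then_next(&mut it, 2), Some(3));
    assert_eq!(skip_then_next(&mut it, 5), None);
}
```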
Use split_once in FromStr docs
Current implementation:
```rust
fn from_str(s: &str) -> Result<Self, Self::Err> {
    let coords: Vec<&str> = s.trim_matches(|p| p == '(' || p == ')')
                             .split(',')
                             .collect();
    let x_fromstr = coords[0].parse::<i32>()?;
    let y_fromstr = coords[1].parse::<i32>()?;
    Ok(Point { x: x_fromstr, y: y_fromstr })
}
```
Creating the vector is not necessary, `split_once` does the job better.
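A sketch of the `split_once` formulation (unwraps standing in for real error handling, as discussed below):

```rust
fn parse_point(s: &str) -> (i32, i32) {
    let (x, y) = s
        .trim_matches(|p| p == '(' || p == ')')
        .split_once(',')
        .unwrap();
    (x.parse().unwrap(), y.parse().unwrap())
}

fn main() {
    assert_eq!(parse_point("(1,2)"), (1, 2));
}
```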
Alternatively we could also remove `trim_matches` with `strip_prefix` and `strip_suffix`:
```rust
let (x, y) = s
    .strip_prefix('(')
    .and_then(|s| s.strip_suffix(')'))
    .and_then(|s| s.split_once(','))
    .unwrap();
```
The question is how much 'correctness' is too much and distracts from the example. In a real implementation you would also not unwrap (or, originally, access the vector without bounds checks), but implementing a custom error type, adding a `From<ParseIntError>` impl, and implementing the `Error` trait adds a lot of code to the example that is not relevant to the `FromStr` trait.
Add assertion that `transmute_copy`'s U is not larger than T
This is called out as a safety requirement in the docs, but because this check can be done at compile time and constant-folded (just like the `align_of` branch is removed), we can just panic here.
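What the assertion catches, sketched:

```rust
use std::mem::transmute_copy;

fn main() {
    let x: u32 = 0x0102_0304;
    // Fine: `U` (u16) is not larger than `T` (u32).
    let _lo: u16 = unsafe { transmute_copy(&x) };
    // Violates the documented requirement and now panics instead of
    // reading out of bounds:
    // let _big: u64 = unsafe { transmute_copy(&x) };
}
```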
I've looked at the asm (using `cargo-asm`) of a function that both is correct and incorrect, and the panic is completely removed, or is unconditional, without needing build-std.
I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.
That doesn't rule out it happening in crates tested that fail earlier for some other reason, though, but it indicates that doing this is rare, if it happens at all. A crater run for this would need to be build-and-test, since this is a runtime thing.
Also added a few more transmute_copy tests.
Rearrange slice::split_mut to remove bounds check
Closes https://github.com/rust-lang/rust/issues/86313
Turns out that all we need to do here is reorder the bounds checks to convince LLVM that all the bounds checks can be removed. It seems like LLVM just fails to propagate the original length information past the first bounds check and into the second one. With this implementation it doesn't need to, each check can be proven inbounds based on the one immediately previous.
I've gradually convinced myself that this implementation is unambiguously better based on the above logic, but maybe this is still deserving of a codegen test?
Also the mentioned borrowck limitation no longer seems to exist.
Add a special case for align_offset w/ stride != 1
This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:
    leaq 15(%rdi), %rax
    andq $-16, %rax
This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:
    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }
    // =>
    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq %rcx, %rax
    leaq (%rdi,%rax,4), %rax
Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.
This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.
Fixes #98809.
Sadly, this does not help with #72356.
Stabilize `core::ffi::CStr`, `alloc::ffi::CString`, and friends
Stabilize the `core_c_str` and `alloc_c_string` feature gates.
Change `std::ffi` to re-export these types rather than creating type
aliases, since they now have matching stability.
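Usable directly from `core`/`alloc` once stabilized, e.g.:

```rust
use core::ffi::CStr;

fn main() {
    let s = CStr::from_bytes_with_nul(b"hello\0").unwrap();
    assert_eq!(s.to_str(), Ok("hello"));
}
```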
Stabilize `core::ffi::c_*` and re-export in `std::ffi`
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
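For example (hypothetical C functions, for illustration only):

```rust
use core::ffi::{c_char, c_int}; // now stable; also re-exported from std::ffi

extern "C" {
    fn c_add(a: c_int, b: c_int) -> c_int;
    fn c_name() -> *const c_char;
}

fn main() {
    // Calling these assumes the C side actually provides the symbols:
    // let n = unsafe { c_add(1, 2) };
}
```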