enable test_join test in Miri
For quite a while now, Miri has had a hack to support self-referential generators: non-`Unique` mutable references are exempt from aliasing restrictions. So we can run this test now. (It passes.)
Also extend a comment in a Vec test, while I am at it.
Rollup of 3 pull requests
Successful merges:
- #113253 (Fixed documentation of from<CString> for Rc<CStr>: Arc -> Rc)
- #113258 (Migrate GUI colors test to original CSS color format)
- #113259 (Suggest `x build library` for a custom toolchain that fails to load `core`)
r? `@ghost`
`@rustbot` modify labels: rollup
Test benchmarks with `-Z panic-abort-tests`
During test execution, when a `#[bench]` benchmark is encountered, it is executed once to check whether it works. Unfortunately that was not compatible with `-Z panic-abort-tests`: the feature works by spawning a subprocess for each test, which rules out dynamic tests (closures cannot be passed to child processes), and before this PR the benchmark-to-test conversion worked by turning each benchmark into a dynamic test whose closure executes the benchmark once.
The approach this PR takes is to add two new `TestFn` variants: `StaticBenchAsTestFn` and `DynBenchAsTestFn` (⚠️ **this is a breaking change** ⚠️). With that change, a `StaticBenchFn` can be converted into a `StaticBenchAsTestFn` without creating a dynamic test, making it possible to test `#[bench]` functions with `-Z panic-abort-tests`. The subprocess test runner also had to be updated to perform the benchmark-to-test conversion when appropriate.
Along with the bug fix, in the first commit I refactored how tests are executed: rather than executing the test function in multiple places across `libtest`, there is now a private `TestFn::into_runnable()` method, which returns either a `RunnableTest` or `RunnableBench`, on which you can call the `run()` method. This simplified the rest of the changes in the PR.
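For orientation, here is a minimal self-contained sketch of that shape. The names `TestFn`, `RunnableTest`, and `RunnableBench` come from the PR description; the variant set and bodies below are assumptions for illustration, not the libtest source:

```rust
// Assumed shapes for illustration only; real libtest carries more variants
// and state (dynamic tests, test descriptors, etc.).
struct Bencher;

enum TestFn {
    StaticTestFn(fn()),
    StaticBenchFn(fn(&mut Bencher)),
    // New in this PR: run a static benchmark once, as if it were a test,
    // without wrapping it in a dynamic closure.
    StaticBenchAsTestFn(fn(&mut Bencher)),
}

enum RunnableTest {
    Static(fn()),
    StaticBenchAsTest(fn(&mut Bencher)),
}

struct RunnableBench(fn(&mut Bencher));

enum Runnable {
    Test(RunnableTest),
    Bench(RunnableBench),
}

impl TestFn {
    fn into_runnable(self) -> Runnable {
        match self {
            TestFn::StaticTestFn(f) => Runnable::Test(RunnableTest::Static(f)),
            TestFn::StaticBenchFn(f) => Runnable::Bench(RunnableBench(f)),
            TestFn::StaticBenchAsTestFn(f) => {
                Runnable::Test(RunnableTest::StaticBenchAsTest(f))
            }
        }
    }
}

impl RunnableTest {
    fn run(self) {
        match self {
            RunnableTest::Static(f) => f(),
            // Running a bench as a test executes the body exactly once.
            RunnableTest::StaticBenchAsTest(f) => f(&mut Bencher),
        }
    }
}
```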
This PR is best reviewed commit-by-commit.
Fixes https://github.com/rust-lang/rust/issues/73509
Mark wrapped intrinsics as inline(always)
This should mitigate the inliner deciding not to inline when the architecture lacks an implementation of `TargetTransformInfo::areInlineCompatible` that is aware of the target features (e.g. PowerPC, as of today).
See https://github.com/rust-lang/stdarch/pull/1443#issuecomment-1613788080
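As a sketch of the pattern (a hypothetical stub, not the actual stdarch code):

```rust
extern "C" {
    // Hypothetical stand-in for a raw compiler intrinsic.
    fn llvm_intrinsic_stub(a: i32, b: i32) -> i32;
}

// With plain `#[inline]` the inliner is merely *allowed* to inline the
// wrapper; on targets whose areInlineCompatible check ignores target
// features it can decide not to. `#[inline(always)]` removes that choice.
#[inline(always)]
pub unsafe fn wrapped_intrinsic(a: i32, b: i32) -> i32 {
    llvm_intrinsic_stub(a, b)
}
```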
Stabilize `const_cstr_methods`
This PR seeks to stabilize `const_cstr_methods`. Fixes most of #101719
## New const stable API
```rust
impl CStr {
    // depends: memchr
    pub const fn from_bytes_with_nul(bytes: &[u8]) -> Result<&Self, FromBytesWithNulError> { ... }
    // depends: const_slice_index
    pub const fn to_bytes(&self) -> &[u8] { ... }
    // depends: pointer casts
    pub const fn to_bytes_with_nul(&self) -> &[u8] { ... }
    // depends: str::from_utf8
    pub const fn to_str(&self) -> Result<&str, str::Utf8Error> { ... }
}
```
I don't think any of these methods will have any issue when `CStr` becomes a thin pointer, as long as `memchr` is const (which also allows for a const `strlen`).
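For illustration, a small (non-normative) example of the stabilized methods in a const context:

```rust
use std::ffi::CStr;

// Const-evaluated parsing of a C string literal; `panic!` in const is stable.
const HELLO: &CStr = match CStr::from_bytes_with_nul(b"hello\0") {
    Ok(s) => s,
    Err(_) => panic!("invalid C string literal"),
};
// `to_bytes` drops the trailing nul byte.
const HELLO_BYTES: &[u8] = HELLO.to_bytes();
```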
## Notes
- `from_bytes_until_nul` relies on `const_slice_index`, which relies on `const_trait_impls`, and generally this should be avoided. After talking with Oli, it should be OK in this case because we could replace the ranges with pointer tricks if needed (worst case being those feature gates disappear). https://github.com/rust-lang/rust/pull/107624#discussion_r1101468480
- Making `from_ptr` const is deferred because it depends on `const_eval_select`. I have moved this under the new flag `const_cstr_from_ptr` https://github.com/rust-lang/rust/pull/107624#discussion_r1101555239
cc ``@oli-obk`` I think you're the const expert
``@rustbot`` modify labels: +T-libs-api +needs-fcp
Allow comparing `Box`es with different allocators
Currently, comparing `Box`es over different allocators is not allowed:
```text
error[E0308]: mismatched types
--> library/alloc/tests/boxed.rs:22:20
|
22 | assert_eq!(b1, b2);
| ^^ expected `Box<{integer}, ConstAllocator>`, found `Box<{integer}, AnotherAllocator>`
|
= note: expected struct `Box<{integer}, ConstAllocator>`
found struct `Box<{integer}, AnotherAllocator>`
For more information about this error, try `rustc --explain E0308`.
error: could not compile `alloc` (test "collectionstests") due to previous error
```
This PR lifts this limitation.
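The shape of the fix, sketched on a toy type (std's actual impl is on `Box` itself, bounded by the unstable `Allocator` trait): make `PartialEq` generic over two independent allocator parameters and compare the pointees.

```rust
// Toy stand-in for Box<T, A>; the allocator only matters at the type level here.
struct MyBox<T, A>(T, A);

// Two independent allocator parameters, so MyBox<T, A1> == MyBox<T, A2> works.
impl<T: PartialEq, A1, A2> PartialEq<MyBox<T, A2>> for MyBox<T, A1> {
    fn eq(&self, other: &MyBox<T, A2>) -> bool {
        self.0 == other.0
    }
}

fn main() {
    let b1 = MyBox(1, "ConstAllocator");
    let b2 = MyBox(1, 42usize); // a different "allocator" type
    assert!(b1 == b2);
}
```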
remove unused field
Follow-up to #104455. The field is no longer needed since `ExtractIf` (previously `DrainFilter`) doesn't keep draining in its drop impl.
Specialize `StepBy<Range<{integer}>>`
OLD:
```text
iter::bench_range_step_by_fold_u16             700.00ns/iter +/- 10.00ns
iter::bench_range_step_by_fold_usize           519.00ns/iter +/-  6.00ns
iter::bench_range_step_by_loop_u32             555.00ns/iter +/-  7.00ns
iter::bench_range_step_by_sum_reducible         37.00ns/iter +/-  0.00ns
```
NEW:
```text
iter::bench_range_step_by_fold_u16              49.00ns/iter +/-  0.00ns
iter::bench_range_step_by_fold_usize           194.00ns/iter +/-  1.00ns
iter::bench_range_step_by_loop_u32              98.00ns/iter +/-  0.00ns
iter::bench_range_step_by_sum_reducible          1.00ns/iter +/-  0.00ns
```
NEW + `-Ctarget-cpu=x86-64-v3`:
```text
iter::bench_range_step_by_fold_u16              22.00ns/iter +/-  0.00ns
iter::bench_range_step_by_fold_usize            80.00ns/iter +/-  1.00ns
iter::bench_range_step_by_loop_u32              41.00ns/iter +/-  0.00ns
iter::bench_range_step_by_sum_reducible          1.00ns/iter +/-  0.00ns
```
I have only optimized for the walltime of those methods; I haven't tested whether this eliminates bounds checks when indexing into slices via things like `(0..slice.len()).step_by(16)`.
Move windows-sys arm32 shim to c.rs
This moves the arm32 shim into c.rs instead of appending it to the generated file itself.
This makes it simpler to change these workarounds if/when needed. The downside is that we need to exclude a couple of functions from being generated (see the comment). A metadata solution could help here, but the excluded functions will be easy enough to add back if that happens.
Remove unnecessary `path` attribute
Follow up to #111401. I missed this at the time but it should now be totally unnecessary since the other include was removed.
r? `@workingjubilee`
Expose `compiler-builtins-weak-intrinsics` feature for `-Zbuild-std`
This was added in rust-lang/compiler-builtins#526 to force all compiler-builtins intrinsics to use weak linkage.
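Presumably it is enabled the same way as existing `-Zbuild-std` features such as `compiler-builtins-mem`; a hypothetical invocation (the target triple is a placeholder):

```text
cargo +nightly build -Zbuild-std -Zbuild-std-features=compiler-builtins-weak-intrinsics --target x86_64-unknown-linux-gnu
```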
Implement `Sync` for `mpsc::Sender`
`mpsc::Sender` is currently `!Sync` because the previous implementation contained an optimization where the channel started out as single-producer and was dynamically upgraded on the first clone, which relied on a unique reference to the sender. This optimization was one of the main reasons the old implementation was so complex, and it was removed in the rewrite in #93563. `mpsc::Sender` can now soundly implement `Sync`.
Note, for any potential confusion: this change does *not* add MPMC behavior. It only affects the already `Send + Clone` *sender*, not the *receiver*.
It's technically possible to rely on the `!Sync` behavior in the same way as a `PhantomData<*mut T>`, but that seems very unlikely in practice. Either way, this change is insta-stable and needs an FCP.
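As an illustration of what `Sender: Sync` enables (a sketch, assuming scoped threads): one sender can now be shared by reference across threads instead of being cloned per thread.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::scope(|s| {
        for i in 0..4 {
            let tx = &tx; // `&Sender<i32>` is `Send` because `Sender` is now `Sync`
            s.spawn(move || tx.send(i).unwrap());
        }
    });
    drop(tx); // close the channel so the iterator below terminates
    let sum: i32 = rx.iter().sum();
    assert_eq!(sum, 6);
}
```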
`@rustbot` label +T-libs-api -T-libs
slice::from_raw_parts: mention no-wrap-around condition
Cc https://github.com/rust-lang/rust/issues/83996. This probably needs to be mentioned in more places, so I am not closing that issue, but this here should help at least.
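For reference, a trivially sound call (the wrap-around condition can only be violated when the pointer and length do not come from a single allocation):

```rust
use std::slice;

fn main() {
    let v = [1u32, 2, 3];
    // Sound: the byte range `data .. data + len * size_of::<u32>()` lies
    // within one live allocation, so it cannot wrap around the address space.
    let s = unsafe { slice::from_raw_parts(v.as_ptr(), v.len()) };
    assert_eq!(s, &[1, 2, 3]);
}
```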
For ranges with item types smaller than `usize`, we determine the number of items `StepBy` would yield and store that in `range.end` instead of the actual end. This significantly simplifies the calculation of the loop induction variable, especially in cases where `StepBy::step` (a `usize`) could overflow the `Range`'s item type.
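A minimal sketch of the idea with hypothetical names (not the std internals): drive the loop with a precomputed `usize` count and derive each item from `start`, so the small item type never has to represent an overflowing end value.

```rust
// Hypothetical illustration, not the std source: a u8 range stepped by a
// usize step, driven by a precomputed item count instead of a real `end`.
struct StepByU8 {
    next: u8,
    remaining: usize, // number of items left to yield
    step: usize,
}

impl Iterator for StepByU8 {
    type Item = u8;

    fn next(&mut self) -> Option<u8> {
        if self.remaining == 0 {
            return None;
        }
        let item = self.next;
        self.remaining -= 1;
        // Wrapping is fine: by the time the add would overflow u8,
        // `remaining` has reached zero and the value is never observed.
        self.next = self.next.wrapping_add(self.step as u8);
        Some(item)
    }
}

fn main() {
    // Equivalent to (0u8..=250).step_by(100): yields 0, 100, 200.
    let it = StepByU8 { next: 0, remaining: 3, step: 100 };
    assert_eq!(it.collect::<Vec<_>>(), vec![0, 100, 200]);
}
```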