Stabilize `const_cstr_methods`
This PR seeks to stabilize `const_cstr_methods`. Fixes most of #101719.
## New const stable API
```rust
impl CStr {
    // depends: memchr
    pub const fn from_bytes_with_nul(bytes: &[u8]) -> Result<&Self, FromBytesWithNulError> {...}
    // depends: const_slice_index
    pub const fn to_bytes(&self) -> &[u8] {...}
    // depends: pointer casts
    pub const fn to_bytes_with_nul(&self) -> &[u8] {...}
    // depends: str::from_utf8
    pub const fn to_str(&self) -> Result<&str, str::Utf8Error> {...}
}
```
I don't think any of these methods will have any issues when `CStr` becomes a thin pointer, as long as `memchr` is const (which also allows for a const `strlen`).
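For illustration, here is a minimal sketch (mine, not from the PR) of the kind of compile-time usage these signatures enable once const-stable:

```rust
use std::ffi::CStr;

// Validate and inspect a C string entirely at compile time.
const HELLO: &CStr = match CStr::from_bytes_with_nul(b"hello\0") {
    Ok(s) => s,
    Err(_) => panic!("invalid C string literal"),
};
const HELLO_LEN: usize = HELLO.to_bytes().len();

fn main() {
    assert_eq!(HELLO_LEN, 5);
    assert_eq!(HELLO.to_str(), Ok("hello"));
}
```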
## Notes
- `from_bytes_until_nul` relies on `const_slice_index`, which relies on `const_trait_impls`, and generally this should be avoided. After talking with Oli, it should be OK in this case because we could replace the ranges with pointer tricks if needed (worst case being those feature gates disappear). https://github.com/rust-lang/rust/pull/107624#discussion_r1101468480
- Making `from_ptr` const is deferred because it depends on `const_eval_select`. I have moved this under the new flag `const_cstr_from_ptr` https://github.com/rust-lang/rust/pull/107624#discussion_r1101555239
cc ``@oli-obk`` I think you're the const expert
``@rustbot`` modify labels: +T-libs-api +needs-fcp
Allow comparing `Box`es with different allocators
Currently, comparing `Box`es over different allocators is not allowed:
```text
error[E0308]: mismatched types
  --> library/alloc/tests/boxed.rs:22:20
   |
22 |     assert_eq!(b1, b2);
   |                    ^^ expected `Box<{integer}, ConstAllocator>`, found `Box<{integer}, AnotherAllocator>`
   |
   = note: expected struct `Box<{integer}, ConstAllocator>`
              found struct `Box<{integer}, AnotherAllocator>`

For more information about this error, try `rustc --explain E0308`.
error: could not compile `alloc` (test "collectionstests") due to previous error
```
This PR lifts that limitation.
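As a rough sketch of what this permits (nightly-only, since it relies on the unstable `allocator_api` feature and on `System`'s unstable `Allocator` impl; the comparison itself only compiles with this PR's impl in place):

```rust
#![feature(allocator_api)]

use std::alloc::{Global, System};

fn main() {
    // Two boxes whose allocator type parameters differ.
    let a: Box<i32, Global> = Box::new_in(42, Global);
    let b: Box<i32, System> = Box::new_in(42, System);

    // With this change, `PartialEq` no longer requires the allocator types
    // to match, so this comparison compiles.
    assert_eq!(a, b);
}
```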
Remove unused field
Follow-up to #104455. The field is no longer needed since `ExtractIf` (previously `DrainFilter`) doesn't keep draining in its `Drop` impl.
Specialize `StepBy<Range<{integer}>>`
| benchmark | OLD | NEW | NEW + `-Ctarget-cpu=x86-64-v3` |
|---|---|---|---|
| `iter::bench_range_step_by_fold_u16` | 700.00ns/iter +/- 10.00ns | 49.00ns/iter +/- 0.00ns | 22.00ns/iter +/- 0.00ns |
| `iter::bench_range_step_by_fold_usize` | 519.00ns/iter +/- 6.00ns | 194.00ns/iter +/- 1.00ns | 80.00ns/iter +/- 1.00ns |
| `iter::bench_range_step_by_loop_u32` | 555.00ns/iter +/- 7.00ns | 98.00ns/iter +/- 0.00ns | 41.00ns/iter +/- 0.00ns |
| `iter::bench_range_step_by_sum_reducible` | 37.00ns/iter +/- 0.00ns | 1.00ns/iter +/- 0.00ns | 1.00ns/iter +/- 0.00ns |
I have only optimized for the walltime of these methods; I haven't tested whether it also eliminates bounds checks when indexing into slices via things like `(0..slice.len()).step_by(16)`.
Move windows-sys arm32 shim to c.rs
This moves the arm32 shim into `c.rs` instead of appending it to the generated file itself.
This makes it simpler to change these workarounds if/when needed. The downside is that we need to exclude a couple of functions from being generated (see the comment). A metadata solution could help here, but the excluded functions will be easy enough to add back if that happens.
Remove unnecessary `path` attribute
Follow up to #111401. I missed this at the time but it should now be totally unnecessary since the other include was removed.
r? `@workingjubilee`
Expose `compiler-builtins-weak-intrinsics` feature for `-Zbuild-std`
This was added in rust-lang/compiler-builtins#526 to force all compiler-builtins intrinsics to use weak linkage.
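Presumably this is then enabled when building std from source via Cargo's unstable `-Zbuild-std-features` flag (e.g. `-Zbuild-std-features=compiler-builtins-weak-intrinsics`), though the exact invocation is an assumption on my part rather than something stated above.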
Implement `Sync` for `mpsc::Sender`
`mpsc::Sender` is currently `!Sync` because the previous implementation contained an optimization where the channel started out as single-producer and was dynamically upgraded on the first clone, which relied on a unique reference to the sender. This optimization is one of the main reasons the old implementation was so complex and was removed in #93563. `mpsc::Sender` can now soundly implement `Sync`.
To avoid any potential confusion: this change does *not* add MPMC behavior. It only affects the already `Send + Clone` *sender*, not the *receiver*.
It's technically possible to rely on the `!Sync` behavior in the same way as a `PhantomData<*mut T>`, but that seems very unlikely in practice. Either way, this change is insta-stable and needs an FCP.
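A small sketch of what the new impl permits: sharing one sender by reference across scoped threads, which requires `&Sender: Send`, i.e. `Sender: Sync` (this only compiles once this change lands):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::scope(|s| {
        // Each thread borrows the same sender instead of cloning it;
        // moving `&Sender` into a spawned closure needs `Sender: Sync`.
        let tx = &tx;
        for i in 0..4 {
            s.spawn(move || tx.send(i).unwrap());
        }
    });
    drop(tx);

    let mut received: Vec<i32> = rx.iter().collect();
    received.sort();
    assert_eq!(received, [0, 1, 2, 3]);
}
```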
`@rustbot` label +T-libs-api -T-libs
slice::from_raw_parts: mention no-wrap-around condition
Cc https://github.com/rust-lang/rust/issues/83996. This probably needs to be mentioned in more places, so I am not closing that issue, but this here should help at least.
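For context, a minimal sketch (my wording, not the PR's) of the kind of safety justification the documented condition asks callers to write:

```rust
use std::slice;

fn main() {
    let data = [1u16, 2, 3, 4];
    // SAFETY: `data` is valid and properly aligned for 4 * size_of::<u16>()
    // bytes, the total size is far below `isize::MAX`, and the pointer range
    // therefore cannot wrap around the address space.
    let s = unsafe { slice::from_raw_parts(data.as_ptr(), data.len()) };
    assert_eq!(s, &[1, 2, 3, 4]);
}
```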
For ranges over item types smaller than `usize`, we determine the number of items `StepBy` would yield and then store that in `range.end` instead of the actual end. This significantly simplifies calculation of the loop induction variable, especially in cases where `StepBy::step` (a `usize`) could overflow the `Range`'s item type.
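A hedged sketch of that trick (hypothetical code for a `u8` range, not the actual `StepBy` specialization): precompute how many items the iterator yields and drive the loop with a `usize` induction variable, so the narrow item type is never stepped past its end.

```rust
// Hypothetical illustration; `sum_stepped` is not a real library function.
// Assumes `step > 0`.
fn sum_stepped(start: u8, end: u8, step: usize) -> u32 {
    let len = (end as usize).saturating_sub(start as usize);
    // Number of items `(start..end).step_by(step)` would yield.
    let n = len / step + (len % step != 0) as usize;

    let mut sum = 0u32;
    // `i` is a plain usize induction variable; the yielded value is
    // reconstructed as `start + i * step`, so no u8 arithmetic can
    // overflow in the loop condition.
    for i in 0..n {
        sum += (start as usize + i * step) as u32;
    }
    sum
}

fn main() {
    assert_eq!(sum_stepped(0, 10, 3), 0 + 3 + 6 + 9);
    assert_eq!(
        (0u8..10).step_by(3).map(u32::from).sum::<u32>(),
        sum_stepped(0, 10, 3)
    );
}
```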
Warn on unused `offset_of!()` result
Using `core::hint::must_use()` means that we don't get a specialized message. I figured that since there are plenty of other methods that just have `#[must_use]` with no message, it'll be fine, but it is a bit unfortunate that the warning mentions `must_use` and not `offset_of!`.
Fixes #111669.
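To make the behavior concrete, a small sketch (assuming `offset_of!` is available; it was still feature-gated when this landed) of code that now triggers the lint:

```rust
use std::mem::offset_of;

#[repr(C)]
struct Point {
    x: u32,
    y: u32,
}

fn main() {
    // The result is discarded, so this now triggers `unused_must_use`
    // (via the `core::hint::must_use` call in the macro expansion).
    offset_of!(Point, y);

    // Using the value silences the warning.
    let off = offset_of!(Point, y);
    assert_eq!(off, 4);
}
```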
Document memory orderings of `thread::{park, unpark}`
Document `thread::park/unpark` as having acquire/release synchronization. Without that guarantee, even the example in the documentation can deadlock:
```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    let flag = Arc::new(AtomicBool::new(false));

    let t2 = thread::spawn({
        let flag = Arc::clone(&flag);
        move || {
            while !flag.load(Ordering::Acquire) {
                thread::park();
            }
        }
    });

    flag.store(true, Ordering::Release);
    t2.thread().unpark();

    // t1: flag.store(true)
    // t1: t2.unpark()
    // t2: flag.load() == false
    // t2 now parks, is immediately unblocked but never
    // acquires the flag, and thus spins forever
}
```
Multiple calls to `unpark` should also maintain a release sequence to make sure operations released by previous `unpark`s are not lost:
```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

fn main() {
    let a = Arc::new(AtomicBool::new(false));
    let b = Arc::new(AtomicBool::new(false));

    let t2 = thread::spawn({
        let (a, b) = (Arc::clone(&a), Arc::clone(&b));
        move || {
            while !a.load(Ordering::Acquire) || !b.load(Ordering::Acquire) {
                thread::park();
            }
        }
    });

    let t2_thread = t2.thread().clone();
    thread::spawn(move || {
        a.store(true, Ordering::Release);
        t2_thread.unpark();
    });

    b.store(true, Ordering::Release);
    t2.thread().unpark();

    // t1: a.store(true)
    // t1: t2.unpark()
    // t3: b.store(true)
    // t3: t2.unpark()
    // t2 now parks, is immediately unblocked but never
    // acquires the store of `a`, only the store of `b` which
    // was released by the most recent unpark, and thus spins forever
}
```
This is of course a contrived example, but the guarantee is reasonable to rely upon in real code.
Note that all implementations of `park`/`unpark` already comply with these rules; they are just not documented.
Implement PartialOrd for `Vec`s over different allocators
It is already possible to compare `Vec`s with different allocators using `PartialEq`, but not to order them with `PartialOrd`.
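A rough nightly-only sketch of the new capability (it relies on the unstable `allocator_api` feature and on `System`'s unstable `Allocator` impl):

```rust
#![feature(allocator_api)]

use std::alloc::{Global, System};

fn main() {
    let mut a: Vec<i32, Global> = Vec::new_in(Global);
    let mut b: Vec<i32, System> = Vec::new_in(System);
    a.extend([1, 2, 3]);
    b.extend([1, 2, 4]);

    // Equality across allocators already worked; with this PR the
    // lexicographic ordering comparison compiles too.
    assert!(a != b);
    assert!(a < b);
}
```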
Add `implement_via_object` to `rustc_deny_explicit_impl` to control object candidate assembly
Some built-in traits are special, since they are used to prove facts about the program that are important for later phases of compilation such as codegen and CTFE. For example, the `Unsize` trait is used to assert to the compiler that we are able to unsize a type into another type. It doesn't have any methods because it doesn't actually *instruct* the compiler how to do this unsizing, but this is later used (alongside an exhaustive match of combinations of unsizeable types) during codegen to generate unsize coercion code.
Due to this, these built-in traits are incompatible with the type erasure provided by object types. For example, the existence of `dyn Unsize<T>` does not mean that the compiler is able to unsize `Box<dyn Unsize<T>>` into `Box<T>`, since `Unsize` is a *witness* to the fact that a type can be unsized, and it doesn't actually encode that unsizing operation in its vtable as mentioned above.
The old trait solver gets around this fact by having complex control flow that never considers object bounds for certain built-in traits:
2f896da247/compiler/rustc_trait_selection/src/traits/select/candidate_assembly.rs (L61-L132)
However, candidate assembly in the new solver is much more lovely, and I'd hate to add this list of opt-out cases into the new solver. Instead of maintaining this complex and hard-coded control flow, we can make this a property of the trait via a built-in attribute. We already have such a built-in attribute that's applied to every single trait that we care about: `rustc_deny_explicit_impl`. This PR adds `implement_via_object` as a meta-item to that attribute, which allows us to opt a trait out of object-bound candidate assembly as well.
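For readers less familiar with the trait-solver jargon, a small self-contained sketch (ordinary user code, nothing compiler-internal) of the "object candidate" being discussed: for a normal object-safe trait, `dyn Trait` itself satisfies a `T: Trait` bound because calls can be dispatched through the vtable, which is exactly the step that makes no sense for a witness trait like `Unsize`.

```rust
trait Greet {
    fn greet(&self) -> String;
}

struct World;

impl Greet for World {
    fn greet(&self) -> String {
        "hello world".to_string()
    }
}

// The `T: Greet` bound can be satisfied by `T = dyn Greet` via the
// object candidate: the call goes through the vtable.
fn greet_twice<T: Greet + ?Sized>(t: &T) -> String {
    format!("{} / {}", t.greet(), t.greet())
}

fn main() {
    let boxed: Box<dyn Greet> = Box::new(World);
    println!("{}", greet_twice(&*boxed));
}
```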
r? `@lcnr`