Implement `signum` with `Ord`
Rather than needing to do things like #105840 for `signum` too, we might as well just implement that method using `Ord`, since it needs the same `-1`/`0`/`+1` behaviour that `cmp` already provides.
This also seems to slightly improve the assembly: <https://rust.godbolt.org/z/5oEEqbxK1>
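As a sketch of the idea (a standalone function for illustration, not the actual library code): `Ordering` is a `#[repr(i8)]` enum whose discriminants are already `-1`/`0`/`+1`, so comparing with zero and casting gives exactly the `signum` result.

```rust
// Illustrative only: the standard library implements this inside its integer
// impl macros, not as a free function.
fn signum_via_ord(x: i32) -> i32 {
    // Ordering is #[repr(i8)] with Less = -1, Equal = 0, Greater = 1,
    // which is exactly the -1/0/+1 result signum needs.
    x.cmp(&0) as i32
}

fn main() {
    assert_eq!(signum_via_ord(-7), -1);
    assert_eq!(signum_via_ord(0), 0);
    assert_eq!(signum_via_ord(42), 1);
}
```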
Skip possible where_clause_object_safety lints when checking `multiple_supertrait_upcastable`
Fixes #106247
To achieve this, I lifted the `WhereClauseReferencesSelf` check out of `object_safety_violations` and moved it into `is_object_safe` (which is changed to a new query).
cc `@dtolnay`
r? `@compiler-errors`
Remove `ControlFlow::{BREAK, CONTINUE}`
Libs-API decided to remove these in #102697.
Follow-up to #107023, which removed them from `compiler/`, but a couple new ones showed up since that was merged.
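For context, call sites migrate roughly like this (illustrative function, not taken from the affected code):

```rust
use std::ops::ControlFlow;

// The removed constants were shorthands for the unit-typed variants, so uses
// are rewritten roughly as:
//   ControlFlow::BREAK    -> ControlFlow::Break(())
//   ControlFlow::CONTINUE -> ControlFlow::Continue(())
fn check(x: i32) -> ControlFlow<()> {
    if x < 0 {
        ControlFlow::Break(())
    } else {
        ControlFlow::Continue(())
    }
}

fn main() {
    assert_eq!(check(-1), ControlFlow::Break(()));
    assert_eq!(check(1), ControlFlow::Continue(()));
}
```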
r? libs
core: Support variety of atomic widths in width-agnostic functions
Before this change, the following functions and macros were annotated with `#[cfg(target_has_atomic = "8")]` or
`#[cfg(target_has_atomic_load_store = "8")]`:
* `atomic_int`
* `strongest_failure_ordering`
* `atomic_swap`
* `atomic_add`
* `atomic_sub`
* `atomic_compare_exchange`
* `atomic_compare_exchange_weak`
* `atomic_and`
* `atomic_nand`
* `atomic_or`
* `atomic_xor`
* `atomic_max`
* `atomic_min`
* `atomic_umax`
* `atomic_umin`
However, none of those functions and macros actually depend on 8-bit width, and they are needed for all atomic widths (16-bit, 32-bit, 64-bit, etc.). Some targets might not support 8-bit atomics (e.g. BPF, if we were to enable atomic CAS for it).
This change fixes that by removing the `"8"` argument from annotations, which results in accepting the whole variety of widths.
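For readers unfamiliar with these cfgs, here is a small illustration of the user-facing `target_has_atomic` predicate (this is not the PR's diff; the helpers listed above are `core`-internal):

```rust
// `target_has_atomic = "<width>"` gates code on the target supporting atomics
// of that width. The core-internal helpers above were gated on "8" even
// though they are width-agnostic, which is what this change fixes.
#[cfg(target_has_atomic = "64")]
fn bump() -> u64 {
    use std::sync::atomic::{AtomicU64, Ordering};
    static COUNTER: AtomicU64 = AtomicU64::new(0);
    COUNTER.fetch_add(1, Ordering::Relaxed)
}

#[cfg(not(target_has_atomic = "64"))]
fn bump() -> u64 {
    // Fallback for targets without 64-bit atomics.
    0
}

fn main() {
    println!("{}", bump());
}
```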
Fixes #106845
Fixes #106795
Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
Rollup of 8 pull requests
Successful merges:
- #106904 (Preserve split DWARF files when building archives.)
- #106971 (Handle diagnostics customization on the fluent side (for one specific diagnostic))
- #106978 (Migrate mir_build's borrow conflicts)
- #107150 (`ty::tls` cleanups)
- #107168 (Use a type-alias-impl-trait in `ObligationForest`)
- #107189 (Encode info for Adt in a single place.)
- #107322 (Custom mir: Add support for some remaining, easy to support constructs)
- #107323 (Disable ConstGoto opt in cleanup blocks)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Custom mir: Add support for some remaining, easy to support constructs
Some documentation for previous changes and support for `Deinit`, checked binops, len, and array repetition
r? `@oli-obk` or `@tmiasko`
Move format_args!() into AST (and expand it during AST lowering)
Implements https://github.com/rust-lang/compiler-team/issues/541
This moves `FormatArgs` from `rustc_builtin_macros` to `rustc_ast_lowering`. For now, the end result is the same. But this allows for future changes to do smarter things with `format_args!()`. It also allows Clippy to directly access the `ast::FormatArgs`, making things a lot easier.
This change turns the format args types into lang items. The builtin macro used to refer to them by their path. After this change, the path is no longer relevant, making it easier to make changes in `core`.
This updates Clippy to use the new language items, but it doesn't yet make Clippy use the `ast::FormatArgs` structure that's now available. That should be done after this is merged.
impl DispatchFromDyn for Cell and UnsafeCell
After some fruitful discussion on [Internals](https://internals.rust-lang.org/t/impl-dispatchfromdyn-for-cell-2/16520) here's my first PR to rust-lang/rust 🎉
Please let me know if there's something I missed.
This adds `DispatchFromDyn` impls for `Cell`, `UnsafeCell` and `SyncUnsafeCell`.
An existing test is also expanded to test the `Cell` impl (which requires the `UnsafeCell` impl).
The different `RefCell` types cannot implement `DispatchFromDyn`, since they have more than one non-ZST field.
**Edit:**
### What:
These changes allow types like `MyRc` (code below) to be object-safe method receivers after implementing `DispatchFromDyn` and `Deref` for them.
This allows for code like this:
```rust
struct MyRc<T: ?Sized>(Cell<NonNull<RcBox<T>>>);
/* impls for DispatchFromDyn, CoerceUnsized and Deref for MyRc */
trait Trait {
    fn foo(self: MyRc<Self>);
}
let impls_trait = ...;
let rc = MyRc::new(impls_trait) as MyRc<dyn Trait>;
rc.foo();
```
Note: `Cell` and `UnsafeCell` won't directly become valid method receivers since they don't implement `Deref`. Making use of these changes requires a wrapper type and nightly features.
### Why:
A custom pointer type with interior mutability allows one to store extra information in the pointer itself.
These changes allow for such a type to be a method receiver.
### Examples:
My use case is a cycle-aware custom `Rc` implementation that, when dropping a cycle, marks some references as dangling.
On the [forum](https://internals.rust-lang.org/t/impl-dispatchfromdyn-for-cell/14762/8) andersk mentioned that they track if a `Gc` reference is rooted with an extra bit in the reference itself.
Suggest using a lock for `*Cell: Sync` bounds
I mostly did this for `OnceCell<T>` at first, because users will be confused to see that the `OnceCell<T>` in `std` isn't `Sync`, but then extended it to `Cell<T>` and `RefCell<T>` as well.
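For context, a minimal sketch of the kind of substitution the new suggestion points towards, assuming the user actually needs to share the value across threads (`OnceLock` being the `Sync` counterpart of `OnceCell`):

```rust
use std::sync::{Mutex, OnceLock};

// Cell, RefCell and OnceCell are !Sync, so a `Cell<T>: Sync`-style bound can
// never hold. The lock-based counterparts are the usual replacements:
static CONFIG: OnceLock<String> = OnceLock::new(); // instead of OnceCell<String>
static COUNTER: Mutex<u64> = Mutex::new(0);        // instead of Cell<u64> / RefCell<u64>

fn main() {
    CONFIG.get_or_init(|| "default".to_owned());
    *COUNTER.lock().unwrap() += 1;
    println!("{} {}", CONFIG.get().unwrap(), *COUNTER.lock().unwrap());
}
```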
`sub_ptr()` is equivalent to `usize::try_from().unwrap_unchecked()`, not `usize::from().unwrap_unchecked()`
`usize::from()` gives a `usize`, not `Result<usize>`, and `usize: From<isize>` is not implemented.
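A small stable-Rust illustration of the distinction being documented (illustrative values; `sub_ptr` itself is unstable, so it is only referenced in the comment):

```rust
fn main() {
    let xs = [1u8, 2, 3, 4];
    let start = xs.as_ptr();
    let end = xs[2..].as_ptr();

    // `offset_from` yields an `isize`, and `usize: From<isize>` is not
    // implemented, so getting a `usize` out of it requires `try_from`;
    // `sub_ptr` is (modulo the unchecked unwrap) this conversion applied
    // to `offset_from`.
    let diff: isize = unsafe { end.offset_from(start) };
    let len: usize = usize::try_from(diff).unwrap();
    assert_eq!(len, 2);
}
```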
Allow fmt::Arguments::as_str() to return more Some(_).
This adjusts the documentation to allow optimization of `format_args!()` to be visible through `fmt::Arguments::as_str()`.
This allows for future changes like https://github.com/rust-lang/rust/pull/106824.
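A hedged illustration of what the relaxed wording permits (the second case is explicitly allowed to go either way):

```rust
fn main() {
    // Guaranteed: a format string with no arguments.
    assert_eq!(format_args!("hello").as_str(), Some("hello"));

    // After this documentation change, implementations are *allowed* (but not
    // required) to flatten cases like this one and return Some as well:
    let maybe = format_args!("{}", "hello").as_str();
    assert!(maybe == Some("hello") || maybe.is_none());
}
```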
Improve the documentation of `black_box`
There don't seem to be many great resources on how `black_box` should be used, so I added some information here.
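For context, a minimal sketch of the canonical benchmarking use (illustrative; not taken from the added docs):

```rust
use std::hint::black_box;

fn sum(xs: &[u64]) -> u64 {
    xs.iter().sum()
}

fn main() {
    let data: Vec<u64> = (0..1_000).collect();
    let start = std::time::Instant::now();
    for _ in 0..10_000 {
        // black_box hides the input and the result from the optimizer, so the
        // call isn't hoisted out of the loop or constant-folded away.
        black_box(sum(black_box(&data)));
    }
    println!("{:?}", start.elapsed());
}
```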
Unify stable and unstable sort implementations in same core module
This moves the stable sort implementation to the `core::slice::sort` module. By virtue of being in core, it can't access `Vec`. The two `Vec`s used by merge sort, `buf` and `runs`, are modelled as custom types that implement the very limited required `Vec` interface with the help of provided allocation and free functions. This is done to allow future re-use of functions and logic between stable and unstable sort, such as `insert_head`.
This is in preparation for #100856 and #104116. It only moves code; it *doesn't* change any of the sort-related logic. This unlocks the ability to share `insert_head`, `insert_tail`, `swap_if_less`, `merge`, and more.
Tagging `@Mark-Simulacrum`. I hope this allows progress on #100856; by moving `merge_sort` here, I hope future changes will be easier to review.
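As a rough sketch of the "limited `Vec` interface" idea described above (all names here are made up for illustration; the actual module defines its own types and gets its allocation hooks from `alloc`):

```rust
use std::mem::MaybeUninit;

// core cannot allocate, so the sort's scratch buffer is modelled as a raw
// allocation plus a caller-provided free hook; the caller in `alloc` passes
// the hooks in when it invokes the core sort.
struct ScratchBuf<T> {
    ptr: *mut MaybeUninit<T>,
    capacity: usize,
    free_fn: unsafe fn(*mut MaybeUninit<T>, usize),
}

impl<T> Drop for ScratchBuf<T> {
    fn drop(&mut self) {
        // Mirrors the allocation hook the caller supplied.
        unsafe { (self.free_fn)(self.ptr, self.capacity) }
    }
}

fn main() {
    // Demonstration: back the buffer with a Vec allocation.
    unsafe fn free_via_vec(ptr: *mut MaybeUninit<u32>, cap: usize) {
        unsafe { drop(Vec::from_raw_parts(ptr, 0, cap)) };
    }

    let mut v: Vec<MaybeUninit<u32>> = Vec::with_capacity(16);
    let buf = ScratchBuf {
        ptr: v.as_mut_ptr(),
        capacity: v.capacity(),
        free_fn: free_via_vec,
    };
    std::mem::forget(v); // ScratchBuf now owns the allocation.
    println!("scratch capacity: {}", buf.capacity);
}
```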
- Eliminates all the `get_context` calls that async lowering created.
- Replaces all `Local` `ResumeTy` types with `&mut Context<'_>`.
The `Local`s that have their types replaced are:
- The `resume` argument itself.
- The argument to `get_context`.
- The yielded value of a `yield`.
The `ResumeTy` hides a `&mut Context<'_>` behind an unsafe raw pointer, and the
`get_context` function is being used to convert that back to a `&mut Context<'_>`.
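Roughly, the indirection being removed looks like this (a simplified sketch, not the exact `core::future` definitions):

```rust
use std::ptr::NonNull;
use std::task::Context;

// A raw, lifetime-erased handle to the Context passed into `resume`.
struct ResumeTy(NonNull<Context<'static>>);

// Converts the handle back into a usable `&mut Context<'_>` inside the
// generator body; the lifetimes are conjured here, which is why the lowering
// wants to stop going through this step.
unsafe fn get_context<'a, 'b>(cx: ResumeTy) -> &'a mut Context<'b> {
    unsafe { &mut *cx.0.as_ptr().cast() }
}
```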
Ideally the async lowering would not use the `ResumeTy`/`get_context` indirection,
but rather directly use `&mut Context<'_>`; however, that would currently
lead to higher-kinded lifetime errors.
See <https://github.com/rust-lang/rust/issues/105501>.
The async lowering step and the type/lifetime inference and checking are
still using the `ResumeTy` indirection for the time being; that indirection is
only removed here. After this transform, the generator body only knows about `&mut Context<'_>`.
Lift `T: Sized` bounds from some `strict_provenance` pointer methods
This PR removes the requirement for `T` (the pointee type) to be `Sized` in order to call `pointer::{addr, expose_addr, with_addr, map_addr}`. These functions don't use `T`'s size, so there is no reason for them to require it. Updated public API:
cc `@Gankra`, #95228
r? libs-api
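As a quick nightly-only sketch of what the lifted bound enables (using the `strict_provenance` feature gate as it existed at the time of this PR):

```rust
#![feature(strict_provenance)]

fn main() {
    // With the `T: Sized` bound lifted, `addr` and friends also work on
    // pointers to unsized pointees such as slices and trait objects.
    let xs: &[u8] = &[1, 2, 3];
    let p: *const [u8] = xs;
    println!("{:#x}", p.addr());

    let s: &dyn std::fmt::Debug = &42;
    let q: *const dyn std::fmt::Debug = s;
    println!("{:#x}", q.addr());
}
```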
Add heapsort fallback in `select_nth_unstable`
Addresses #102451 and #106933.
`slice::select_nth_unstable` uses a quickselect implementation based on the same pattern-defeating quicksort algorithm that `slice::sort_unstable` uses. `slice::sort_unstable` uses a recursion limit and falls back to heapsort if there were too many bad pivot choices, to ensure O(n log n) worst-case running time (known as introsort). However, `slice::select_nth_unstable` does not have such a fallback strategy, which leads to it having a worst-case running time of O(n²) instead.

#102451 links to a playground that generates pathological inputs which show this quadratic behavior. On my machine, a randomly generated slice of length `1 << 19` takes ~200µs to calculate its median, whereas a pathological input of the same length takes over 2.5s.

This PR adds an iteration limit to `select_nth_unstable`, falling back to heapsort, which ensures an O(n log n) worst-case running time (introselect). With this change, there was no noticeable slowdown for the random input, but the same pathological input now takes only ~1.2ms.

In the future it might be worth implementing something like Median of Medians or Fast Deterministic Selection instead, which guarantee O(n) running time for all possible inputs. I've left this as a `FIXME` for now and only implemented the heapsort fallback to minimize the needed code changes.
I still think we should clarify in the `select_nth_unstable` docs that the worst case running time isn't currently O(n) (the original reason that #102451 was opened), but I think it's a lot better to be able to guarantee O(n log n) instead of O(n²) for the worst case.
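For readers who want the shape of the approach, here is a compact, illustrative introselect sketch (absolute indices, a Lomuto partition, and a plain sort as the fallback; the real `select_nth_unstable` uses its own partitioning scheme and a heapsort-based fallback):

```rust
// Lomuto partition of `v` around its last element; returns the pivot's final index.
fn partition<T: Ord>(v: &mut [T]) -> usize {
    let last = v.len() - 1;
    let mut store = 0;
    for i in 0..last {
        if v[i] <= v[last] {
            v.swap(i, store);
            store += 1;
        }
    }
    v.swap(store, last);
    store
}

// Quickselect with an iteration budget: once the budget is used up, fall back
// to sorting the remaining range, which caps the worst case at O(n log n).
fn select_nth<T: Ord>(v: &mut [T], nth: usize) {
    assert!(nth < v.len());
    let mut limit = 2 * (usize::BITS - v.len().leading_zeros());
    let (mut lo, mut hi) = (0, v.len());
    loop {
        if hi - lo <= 1 {
            return;
        }
        if limit == 0 {
            // Budget exhausted (too many bad pivots): the fallback sort still
            // puts the nth element in its final position.
            v[lo..hi].sort_unstable();
            return;
        }
        limit -= 1;
        let pivot = lo + partition(&mut v[lo..hi]);
        if nth == pivot {
            return;
        } else if nth < pivot {
            hi = pivot;
        } else {
            lo = pivot + 1;
        }
    }
}

fn main() {
    let mut v = vec![9, 1, 8, 2, 7, 3, 6, 4, 5];
    select_nth(&mut v, 4);
    assert_eq!(v[4], 5); // the median is now in place
    println!("{v:?}");
}
```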
Remove various double spaces in the libraries.
I was just pretty bothered by this when reading the source for a function, and it was suggested that I check whether this happened elsewhere.
reword Option::as_ref and Option::map examples
The descriptions of the examples for `Option::as_ref` and `Option::map` imply that the examples are only doing type conversion, when they are actually finding the length of a string.
Changes the wording to imply that some operation is being run on the value contained in the `Option`.
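For reference, the kind of example being reworded looks roughly like this (paraphrased from the `Option::map` docs):

```rust
fn main() {
    let maybe_some_string = Some(String::from("Hello, World!"));
    // `map` runs a computation on the contained value (here: taking the
    // string's length), not just a type conversion.
    let maybe_some_len = maybe_some_string.map(|s| s.len());
    assert_eq!(maybe_some_len, Some(13));
}
```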
closes #104476