Convert all the crates that have had their diagnostic migration completed (except save_analysis, because it will be deleted soon, and apfloat, because of its licensing problem).
Implement allow-by-default `multiple_supertrait_upcastable` lint
The lint detects when an object-safe trait has multiple supertraits.
The lint is enabled in libcore and liballoc, as they are low-level enough that many embedded programs will use them.
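For illustration, a minimal sketch of the pattern the lint flags; the trait names are made up, and it is assumed here that the lint is gated behind a nightly feature of the same name:
```rust
#![feature(multiple_supertrait_upcastable)] // assumption: lint is feature-gated under this name
#![warn(multiple_supertrait_upcastable)] // allow-by-default, so opt in explicitly

trait Reader {}
trait Writer {}

// Object-safe trait with more than one supertrait: upcasting a `&dyn ReadWrite`
// to either supertrait requires extra vtable data, which is the cost the lint
// points out.
trait ReadWrite: Reader + Writer {}

fn main() {}
```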
r? `@nikomatsakis`
Stabilize default_alloc_error_handler
Tracking issue: #66741
This turns `feature(default_alloc_error_handler)` on by default, which causes the compiler to automatically generate a default OOM handler that panics if no `#[alloc_error_handler]` is provided.
The FCP completed over 2 years ago, but stabilization was blocked by an issue with unwinding. That was fixed by #88098, so stabilization can be unblocked.
Closes #66741
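As a sketch of what this means in practice: a `#![no_std]` program that uses `alloc` no longer has to supply its own handler. The allocator and panic handler below are placeholders added only to make the example self-contained, not part of this change:
```rust
#![no_std]
#![no_main]

extern crate alloc;

use core::alloc::{GlobalAlloc, Layout};

// Placeholder allocator just to make the example self-contained.
struct DummyAllocator;

unsafe impl GlobalAlloc for DummyAllocator {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        core::ptr::null_mut() // pretend allocation always fails
    }
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

#[global_allocator]
static GLOBAL: DummyAllocator = DummyAllocator;

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}

// Note: no `#[alloc_error_handler]` anywhere. On allocation failure the
// compiler-generated default handler panics, landing in the panic handler above.
```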
Add LLVM KCFI support to the Rust compiler
This PR adds LLVM Kernel Control Flow Integrity (KCFI) support to the Rust compiler. It initially provides forward-edge control flow protection for operating system kernels, for Rust-compiled code only, by aggregating function pointers in groups identified by their return and parameter types. (See llvm/llvm-project@cff5bef.)
Forward-edge control flow protection for "mixed binaries" (i.e., binaries where C- or C++-compiled code and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project, by identifying uses of C char and integer types at the time types are encoded (see Type metadata in the design document in the tracking issue #89653).
LLVM KCFI can be enabled with `-Zsanitizer=kcfi`.
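To illustrate what gets instrumented, a hedged sketch (the function names are made up) of the kind of indirect call that gains a KCFI check when compiling with `-Zsanitizer=kcfi` on nightly:
```rust
fn double(x: i32) -> i32 {
    x * 2
}

fn call_indirect(f: fn(i32) -> i32, x: i32) -> i32 {
    // Indirect call through a function pointer: with KCFI, the callee is
    // checked against an ID derived from the `fn(i32) -> i32` return and
    // parameter types; a mismatch makes the inserted check fail.
    f(x)
}

fn main() {
    println!("{}", call_indirect(double, 21));
}
```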
Thank you again, `@bjorn3`, `@eddyb`, `@nagisa`, and `@ojeda`, for all the help!
Co-authored-by: bjorn3 <17426603+bjorn3@users.noreply.github.com>
Support using `Self` or projections inside an RPIT/async fn
I reuse the same idea as https://github.com/rust-lang/rust/pull/103449 to use variances to encode whether a lifetime parameter is captured by impl-trait.
The current implementation of async fn and RPIT replaces all lifetimes from the parent generics with `'static`. This PR changes that scheme:
```rust
impl<'a> Foo<'a> {
    fn foo<'b, T>() -> impl Into<Self> + 'b { ... }
}

opaque Foo::<'_a>::foo::<'_b, T>::opaque<'b>: Into<Foo<'_a>> + 'b;

impl<'a> Foo<'a> {
    // OLD
    fn foo<'b, T>() -> Foo::<'static>::foo::<'static, T>::opaque::<'b> { ... }
    //                       ^^^^^^^ the `Self` becomes `Foo<'static>`
    // NEW
    fn foo<'b, T>() -> Foo::<'a>::foo::<'b, T>::opaque::<'b> { ... }
    //                       ^^ the `Self` stays `Foo<'a>`
}
```
There is the same issue with projections. In the example, substitute `Self` with `<T as Trait<'b>>::Assoc` in the sugared version, and `Foo<'_a>` with `<T as Trait<'_b>>::Assoc` in the desugared one.
This makes it possible to support `Self` in impl-trait, since we no longer replace lifetimes with `'static`. The same trick allows using projections like `T::Assoc` wherever `Self` is allowed. The feature is gated behind the `impl_trait_projections` feature gate.
The implementation relies on tweaking the rules for opaques in two places:
- we only relate substs that correspond to captured lifetimes during TypeRelation;
- we only list captured lifetimes in choice region computation.
For simplicity, I encoded the "capturedness" of lifetimes as a variance, `Bivariant` vs `Invariant` for unused vs captured lifetimes. The `variances_of` query used to ICE for opaques.
Impl-traits that do not reference `Self` or projections will have their variances be:
- `o` (invariant) for each parent type or const;
- `*` (bivariant) for each parent lifetime, so these do not participate in borrowck;
- `o` (invariant) for each of their own lifetimes.
An impl-trait that does reference `Self` and/or projections will have some parent lifetimes marked as `o` (as in the example above), and those participate in type relation and borrowck. In the example above, `variances_of(opaque) = ['_a: o, '_b: *, T: o, 'b: o]`.
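As a user-facing sketch of what this enables (the type and method below are made up, gated on the new feature):
```rust
#![feature(impl_trait_projections)]

struct Wrapper<'a>(&'a str);

impl<'a> Wrapper<'a> {
    // `Self` here is `Wrapper<'a>`, which mentions the parent lifetime `'a`.
    // Returning it from a return-position impl-trait used to be rejected
    // because `'a` would have been replaced with `'static`.
    fn duplicate(&self) -> impl Iterator<Item = Self> + '_ {
        std::iter::repeat_with(move || Wrapper(self.0)).take(2)
    }
}

fn main() {
    let w = Wrapper("hi");
    assert_eq!(w.duplicate().count(), 2);
}
```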
r? types
cc `@compiler-errors`, as you asked about the issue with `Self` and projections.
Add `rustc_deny_explicit_impl`
Also adjust `E0322` error message to be more general, since it's used for `DiscriminantKind` and `Pointee` as well.
Also add `rustc_deny_explicit_impl` on the `Tuple` and `Destruct` marker traits.
Mark `trait_upcasting` feature no longer incomplete.
This marks the `trait_upcasting` feature no longer incomplete since #101336 has been settled for a little while.
r? `@jackh726`
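For reference, a minimal example (with made-up types) of the upcasting coercion this feature provides:
```rust
#![feature(trait_upcasting)]

trait Base {
    fn base(&self) -> u32 {
        1
    }
}

trait Derived: Base {}

struct Thing;
impl Base for Thing {}
impl Derived for Thing {}

fn main() {
    let d: &dyn Derived = &Thing;
    // Upcasting coercion from `dyn Derived` to its supertrait object type.
    let b: &dyn Base = d;
    assert_eq!(b.base(), 1);
}
```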
The new implementation doesn't use weak lang items and instead changes
`#[alloc_error_handler]` to an attribute macro just like
`#[global_allocator]`.
The attribute will generate the `__rg_oom` function, which is called by
the compiler-generated `__rust_alloc_error_handler`. If no `__rg_oom`
function is defined in any crate, then the compiler shim will call
`__rdl_oom` in the alloc crate, which simply panics.
This also fixes link errors with `-C link-dead-code` with
`default_alloc_error_handler`: `__rg_oom` was previously defined in the
alloc crate and would attempt to reference the `oom` lang item, even if
it didn't exist. This worked as long as `__rg_oom` was excluded from
linking since it was not called.
This is a prerequisite for the stabilization of
`default_alloc_error_handler` (#102318).
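For concreteness, this is roughly what the user-facing side looks like; the handler body is only an example, and the attribute itself is still behind `#![feature(alloc_error_handler)]`:
```rust
#![feature(alloc_error_handler)]

use std::alloc::Layout;

// The attribute macro turns this into the `__rg_oom` symbol that the
// compiler-generated `__rust_alloc_error_handler` calls. If no crate defines
// it, the shim falls back to `__rdl_oom` in alloc, which panics.
#[alloc_error_handler]
fn oom(layout: Layout) -> ! {
    panic!("allocation of {} bytes failed", layout.size())
}

fn main() {
    // Normal allocations are unaffected; `oom` only runs on allocation failure.
    let v = vec![1u8, 2, 3];
    assert_eq!(v.len(), 3);
}
```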
Enable varargs support for calling conventions other than C or cdecl
This patch makes it possible to use varargs with calling conventions
that are either based on C (efiapi) or that C is based on (sysv64 and win64).
Also pinging `@phlopsi`, because he first noticed this oversight when writing a library for UEFI.
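A hedged sketch of what this enables, assuming the feature gate introduced here is named `extended_varargs_abi_support` and using a made-up foreign declaration (x86_64-only, since `sysv64` is target-specific):
```rust
#![feature(extended_varargs_abi_support)] // assumption: gate name from this PR

// Made-up foreign declaration for illustration (declaration only, nothing
// links against it here): `...` was previously only accepted together with
// `extern "C"`/`extern "cdecl"`; it is now also allowed for efiapi, sysv64
// and win64.
extern "sysv64" {
    fn log_values(count: u32, ...) -> i32;
}

fn main() {}
```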
Allow `impl Fn() -> impl Trait` in return position
_This was originally proposed as part of #93082 which was [closed](https://github.com/rust-lang/rust/pull/93082#issuecomment-1027225715) due to allowing `impl Fn() -> impl Trait` in argument position._
This allows writing the following function signatures:
```rust
fn f0() -> impl Fn() -> impl Trait;
fn f3() -> &'static dyn Fn() -> impl Trait;
```
These signatures were already allowed for common traits and associated types; there is no reason why `Fn*` traits should be special in this regard.
`impl Trait` in both `f0` and `f3` means "new existential type", just like with `-> impl Iterator<Item = impl Trait>` and such.
The arrow in `impl Fn() ->` is right-associative and binds from right to left; this is tested by [this test](a819fecb8d/src/test/ui/impl-trait/impl_fn_associativity.rs).
There is even a test that `f0` compiles:
2f004d2d40/src/test/ui/impl-trait/nested_impl_trait.rs (L25-L28)
But it was changed in [PR 48084 (lines)](https://github.com/rust-lang/rust/pull/48084/files#diff-ccecca938872d65ffe8cd1c3ef1956e309fac83bcda547d8b16b89257e53a437R37) to test the opposite, probably unintentionally given [PR 48084 (lines)](https://github.com/rust-lang/rust/pull/48084/files#diff-5a02f1ed43debed1fd24f7aad72490064f795b9420f15d847bac822aa4621a1cR476-R477).
r? `@nikomatsakis`
----
This limitation is especially annoying with async code, since it forces one to write this:
```rust
trait AsyncFn3<A, B, C>: Fn(A, B, C) -> <Self as AsyncFn3<A, B, C>>::Future {
    type Future: Future<Output = Self::Out>;
    type Out;
}

impl<A, B, C, Fut, F> AsyncFn3<A, B, C> for F
where
    F: Fn(A, B, C) -> Fut,
    Fut: Future,
{
    type Future = Fut;
    type Out = Fut::Output;
}

fn async_closure() -> impl AsyncFn3<i32, i32, i32, Out = u32> {
    |a, b, c| async move { (a + b + c) as u32 }
}
```
Instead of:
```rust
fn async_closure() -> impl Fn(i32, i32, i32) -> impl Future<Output = u32> {
    |a, b, c| async move { (a + b + c) as u32 }
}
```
Sort tests at compile time, not at startup
Recently, another Miri user was trying to run `cargo miri test` on the crate `iced-x86` with `--features=code_asm,mvex`. This configuration has a startup time of ~18 minutes. That's ~18 minutes before any tests even start to run. Because this crate has over 26,000 tests and Miri is slow, a lot of code that would otherwise be a bit sloppy but fine turns into a huge runtime issue.
Sorting the tests when the test harness is created, instead of at startup time, knocks just under 4 minutes out of those ~18 minutes. I have ways to remove most of the rest of the startup time, but this change requires coordinating changes to both the compiler and libtest, so I'm sending it separately.
(except for doctests, because there is no compile-time harness)
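A rough sketch of the idea, as a conceptual illustration only (this is not the actual libtest or compiler code, and the names are made up):
```rust
// Conceptual sketch: the generated harness used to hand libtest an unsorted
// list that was sorted at every startup; instead, sort it once, when the
// harness is generated at compile time.
#[derive(Debug)]
struct TestCase {
    name: &'static str,
}

fn build_harness(mut tests: Vec<TestCase>) -> Vec<TestCase> {
    // Done once at harness-creation time, so the startup path can rely on
    // the tests already being in order.
    tests.sort_by_key(|t| t.name);
    tests
}

fn main() {
    let tests = vec![
        TestCase { name: "zz::works" },
        TestCase { name: "aa::works" },
    ];
    for t in build_harness(tests) {
        println!("{t:?}");
    }
}
```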
Split out async_fn_in_trait into a separate feature
PR #101224 added support for async fn in trait desugaring behind the `return_position_impl_trait_in_trait` feature.
Split this out so that it's behind its own feature gate, since async fn in trait doesn't need to follow the same stabilization schedule.
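For reference, the kind of code that now sits behind the separate `async_fn_in_trait` feature gate (the trait and type names are made up):
```rust
#![feature(async_fn_in_trait)]
#![allow(incomplete_features)]

trait Fetch {
    // Desugars to a method returning an anonymous `impl Future<Output = String>`,
    // which is why it was originally implemented on top of
    // `return_position_impl_trait_in_trait`.
    async fn fetch(&self, key: u32) -> String;
}

struct Cache;

impl Fetch for Cache {
    async fn fetch(&self, key: u32) -> String {
        format!("value for {key}")
    }
}

fn main() {}
```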