Rollup of 8 pull requests
Successful merges:
- #114009 (compiler: allow transmute of ZST arrays with generics)
- #122195 (Note that the caller chooses a type for type param)
- #122651 (Suggest `_` for missing generic arguments in turbofish)
- #122784 (Add `tag_for_variant` query)
- #122839 (Split out `PredicatePolarity` from `ImplPolarity`)
- #122873 (Merge my contributor emails into one using mailmap)
- #122885 (Adjust better spastorino membership to triagebot's adhoc_groups)
- #122888 (add a couple more tests)
r? `@ghost`
`@rustbot` modify labels: rollup
Remove `TypeAndMut` from `ty::RawPtr` variant, make it take `Ty` and `Mutability`
Pretty much mechanically converting `ty::RawPtr(ty::TypeAndMut { ty, mutbl })` to `ty::RawPtr(ty, mutbl)` and its fallout.
r? lcnr
cc rust-lang/types-team#124
Split out `PredicatePolarity` from `ImplPolarity`
Because having to deal with a third `Reservation` level in all the trait solver code is kind of weird.
r? `@lcnr` or `@oli-obk`
Add `tag_for_variant` query
This query allows for sharing code between `rustc_const_eval` and `rustc_transmutability`. It's a precursor to a PR I'm working on to entirely replace the bespoke layout computations in `rustc_transmutability`.
r? `@compiler-errors`
compiler: allow transmute of ZST arrays with generics
Extend the `SizeSkeleton` evaluator to shortcut zero-sized arrays, thus considering `[T; 0]` to have a compile-time fixed-size of 0.
The existing evaluator already deals with generic arrays under the feature gate `transmute_const_generics`. However, it only allows comparing fixed-size types with fixed-size types, and generic types with generic types. For generic types, it compares whether their arguments match (after ordering them): even if their exact sizes are not known at compile time, it can ensure that they will eventually be the same.
This patch extends this by shortcutting the size evaluation of zero-sized arrays, thus allowing size comparisons of `()` with `[T; 0]`, where one contains generics and the other does not.
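A hedged sketch of the kind of code this enables (illustrative, using the feature name given above; not the PR's actual test):
```rust
#![feature(transmute_const_generics)]

// With this change, `[T; 0]` is known to be zero-sized even though `T` is
// generic, so the size comparison against `()` succeeds at compile time.
fn forget_zst<T>(arr: [T; 0]) -> () {
    unsafe { core::mem::transmute(arr) }
}
```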
This code is guarded by `transmute_const_generics` (#109929), even though it is unclear whether it should be; the assumption is that a separate stabilization PR will be required to move it out of the feature gate.
Initially reported in #98104.
"Handle" calls to upstream monomorphizations in compiler_builtins
This is pretty cooked, but I think it works.
compiler-builtins has a long-standing problem that at link time, its rlib cannot contain any calls to `core`. And yet, in codegen we _love_ inserting calls to symbols in `core`, generally from various panic entrypoints.
I intend this PR to attack that problem as completely as possible. When we generate a function call, we now check whether we are generating it from within `compiler_builtins` and whether the callee is a function that was not lowered in the current crate, meaning we would have to link to it.
If those conditions are met, actually generating the call is asking for a linker error. So we don't. If the callee diverges, we lower to an abort with the same behavior as `core::intrinsics::abort`. If the callee does not diverge, we produce an error. This means that compiler-builtins can contain panics, but they'll SIGILL instead of panicking. I made non-diverging calls a compile error because I'm guessing that they'd mostly get into compiler-builtins by someone making a mistake while working on the crate, and compile errors are better than linker errors. We could turn such calls into aborts as well if that's preferred.
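A hedged sketch of the decision logic described above (the names and types are illustrative stand-ins, not the actual rustc codegen internals):
```rust
// Illustrative stand-ins for the codegen state; not real rustc types.
struct CallCtx {
    in_compiler_builtins: bool,
    callee_lowered_locally: bool,
    callee_diverges: bool,
}

enum Lowering {
    /// Normal case: actually generate the call.
    EmitCall,
    /// Diverging upstream callee: lower to an abort, like `core::intrinsics::abort`.
    EmitAbort,
    /// Non-diverging upstream callee: report an error now, not a linker error later.
    CompileError,
}

fn lower_call(ctx: &CallCtx) -> Lowering {
    if ctx.in_compiler_builtins && !ctx.callee_lowered_locally {
        // Generating this call would only fail later, at link time.
        if ctx.callee_diverges {
            Lowering::EmitAbort
        } else {
            Lowering::CompileError
        }
    } else {
        Lowering::EmitCall
    }
}
```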
coverage: Clean up marker statements that aren't needed later
Some of the marker statements used by coverage are added during MIR building for use by the InstrumentCoverage pass (during analysis), and are not needed afterwards.
`@rustbot` label +A-code-coverage
interpret/allocation: fix aliasing issue in interpreter and refactor getters a bit
That new raw getter will be needed to let Miri pass pointers to natively executed FFI code ("extern-so" mode).
While doing that, I realized our `get_bytes_mut` getters are named less scarily than `get_bytes_unchecked`, so I rectified that. I also realized `mem_copy_repeatedly` would break if we called it for multiple overlapping copies, so I made sure this does not happen.
And I realized that we are actually [violating Stacked Borrows in the interpreter](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/I.20think.20Miri.20violates.20Stacked.20Borrows.20.F0.9F.99.88).^^ That was introduced in https://github.com/rust-lang/rust/pull/87777.
r? `@oli-obk`
Implement macro-based deref!() syntax for deref patterns
Stop using `box PAT` syntax for deref patterns, and instead use a perma-unstable macro.
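A hedged sketch of the intended usage (assuming the `deref_patterns` gate from the tracking issue and that the macro is in scope; not the PR's exact tests):
```rust
#![feature(deref_patterns)]
#![allow(incomplete_features)]

fn unwrap_box(b: Box<u32>) -> u32 {
    match b {
        // `deref!` replaces the old `box <pat>` spelling for deref patterns.
        deref!(n) => n,
    }
}
```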
Blocked on #122222
r? `@Nadrieril`
Don't ICE when encountering bound regions in generator interior type
I'm pretty sure this was meant to say "`has_free_regions`", probably just a typo in 4a4fc3bb5b. We can have bound regions (because we only convert non-bound regions into existential regions in generator interiors), but we can't have (non-`ReErased`) free regions.
r? lcnr
deref patterns: bare-bones feature gate and typechecking
I am restarting the deref patterns experimentation. This introduces a feature gate under the lang-team [experimental feature](https://github.com/rust-lang/lang-team/blob/master/src/how_to/experiment.md) process, with [`@cramertj` as lang-team liaison](https://github.com/rust-lang/lang-team/issues/88) (it's been a while though, are you still ok with this `@cramertj`?). Tracking issue: https://github.com/rust-lang/rust/issues/87121.
This is the barest-bones implementation I could think of:
- explicit syntax, reusing `box <pat>` because that saves me a ton of work;
- use `Deref` as a marker trait (instead of a yet-to-design `DerefPure`);
- no support for mutable patterns with `DerefMut` for now;
- MIR lowering will come in the next PR. It's the trickiest part.
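A minimal sketch of the bare-bones form described in the first bullet above (hedged: at this stage only typechecking is implemented, so this is about what the front-end accepts, not what runs):
```rust
#![feature(deref_patterns)]
#![allow(incomplete_features)]

fn is_empty(s: &Option<String>) -> bool {
    match s {
        // `box ""` uses `String`'s `Deref` impl to match against a `str` pattern.
        Some(box "") => true,
        _ => false,
    }
}
```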
My goal is to let us figure out the MIR lowering part, which might take some work. And hopefully get something working for std types soon.
This is in large part salvaged from `@fee1-dead`'s https://github.com/rust-lang/rust/pull/119467.
r? `@compiler-errors`
recursively evaluate the constants in everything that is 'mentioned'
This is another attempt at fixing https://github.com/rust-lang/rust/issues/107503. The previous attempt at https://github.com/rust-lang/rust/pull/112879 seems stuck in figuring out where the [perf regression](https://perf.rust-lang.org/compare.html?start=c55d1ee8d4e3162187214692229a63c2cc5e0f31&end=ec8de1ebe0d698b109beeaaac83e60f4ef8bb7d1&stat=instructions:u) comes from. In https://github.com/rust-lang/rust/pull/122258 I learned some things, which informed the approach this PR is taking.
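For context, a hedged sketch of the optimization-dependent failure mode (illustrative; see the issue for the original reproducers):
```rust
struct Fail<T>(T);

impl<T> Fail<T> {
    const C: () = panic!("this constant always fails to evaluate");
}

fn maybe_use<T>() {
    if false {
        // Dead code: an optimized build may never collect this use, while a
        // debug build hits the failing constant. Recursively evaluating the
        // constants of all "mentioned" items makes both builds behave the same.
        let _ = Fail::<T>::C;
    }
}

fn main() {
    maybe_use::<u32>();
}
```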
Quoting from the new collector docs, which explain the high-level idea:
```rust
//! One important role of collection is to evaluate all constants that are used by all the items
//! which are being collected. Codegen can then rely on only encountering constants that evaluate
//! successfully, and if a constant fails to evaluate, the collector has much better context to be
//! able to show where this constant comes up.
//!
//! However, the exact set of "used" items (collected as described above), and therefore the exact
//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
//! a function call that uses a failing constant, so an unoptimized build may fail where an
//! optimized build succeeds. This is undesirable.
//!
//! To fix this, the collector has the concept of "mentioned" items. Some time during the MIR
//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
//! dead code and get optimized away (which makes them no longer "used"), they are still
//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
//! whether we are visiting a used item or merely a mentioned item.
//!
//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
//! need to stay in sync in the following sense:
//!
//! - For every item that the collector gathers that could eventually lead to build failure (most
//! likely due to containing a constant that fails to evaluate), a corresponding mentioned item
must be added. This should use the exact same strategy as the collector to make sure they are
//! in sync. However, while the collector works on monomorphized types, mentioned items are
//! collected on generic MIR -- so any time the collector checks for a particular type (such as
`ty::FnDef`), we have to just unconditionally add this as a mentioned item.
//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
//! would have done during regular MIR visiting. Basically you can think of the collector having
//! two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
//! literally separated by a call to `self.monomorphize`); the pre-monomorphization stage is
//! duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
//! `visit_mentioned_item`.
//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
//! its MIR traversal with exactly what mentioned item gathering would have added in the same
//! situation. This detects mentioned items that have *not* been optimized away and hence don't
//! need a dedicated traversal.
enum CollectionMode {
/// Collect items that are used, i.e., actually needed for codegen.
///
/// Which items are used can depend on optimization levels, as MIR optimizations can remove
/// uses.
UsedItems,
/// Collect items that are mentioned. The goal of this mode is that it is independent of
/// optimizations: the set of "mentioned" items is computed before optimizations are run.
///
/// The exact contents of this set are *not* a stable guarantee. (For instance, it is currently
/// computed after drop-elaboration. If we ever do some optimizations even in debug builds, we
/// might decide to run them before computing mentioned items.) The key property of this set is
/// that it is optimization-independent.
MentionedItems,
}
```
And the `mentioned_items` MIR body field docs:
```rust
/// Further items that were mentioned in this function and hence *may* become monomorphized,
/// depending on optimizations. We use this to avoid optimization-dependent compile errors: the
/// collector recursively traverses all "mentioned" items and evaluates all their
/// `required_consts`.
///
/// This is *not* soundness-critical and the contents of this list are *not* a stable guarantee.
/// All that's relevant is that this set is optimization-level-independent, and that it includes
/// everything that the collector would consider "used". (For example, we currently compute this
/// set after drop elaboration, so some drop calls that can never be reached are not considered
/// "mentioned".) See the documentation of `CollectionMode` in
/// `compiler/rustc_monomorphize/src/collector.rs` for more context.
pub mentioned_items: Vec<Spanned<MentionedItem<'tcx>>>,
```
Fixes #107503
Split an item's bounds and an item's super predicates
This is the moral equivalent of #107614, but for **item bounds** instead of predicates. This PR splits out the item bounds (i.e. *all* predicates that are assumed to hold for the alias) from the item *super predicates*, which are the subset of item bounds that share the same self type as the alias.
## Why?
Much like #107614, there are places in the compiler where we *only* care about super-predicates, and considering predicates that possibly don't have anything to do with the alias is problematic. This includes things like closure signature inference (which is at its core searching for `Self: Fn(..)` style bounds), but also lints like `#[must_use]`, error reporting for aliases, and computing type-outlives predicates.
Even in cases where considering all of the `item_bounds` doesn't lead to bugs, unnecessarily considering irrelevant bounds does lead to a regression (#121121) due to doing extra work in the solver.
## Example 1 - Trait Aliases
This is best explored via an example:
```rust
type TAIT<T> = impl TraitAlias<T>;
trait TraitAlias<T> = A + B where T: C;
```
The `item_bounds` list for `TAIT<T>` will include:
* `TAIT<T>: A`
* `TAIT<T>: B`
* `T: C`
The `item_super_predicates` query, by contrast, will include just the first two predicates.
Side-note: you may wonder why `T: C` is included in the item bounds for `TAIT`. This is because when we elaborate `TraitAlias<T>`, we also elaborate all the predicates on the trait.
## Example 2 - Associated Type Bounds
```rust
type TAIT<T> = impl Iterator<Item: A>;
```
The `item_bounds` list for `TAIT<T>` will include:
* `TAIT<T>: Iterator`
* `<TAIT<T> as Iterator>::Item: A`
But the `item_super_predicates` will just include the first bound, since that's the only bound that is relevant to the *alias* itself.
## So what
This leads to some diagnostics duplication just like #107614, but none of it will be user-facing. We only see it in the UI test suite because we explicitly disable diagnostic deduplication.
Regarding naming, I went with `super_predicates` kind of arbitrarily; this can easily be changed, but I'd consider better names as long as we don't block this PR in perpetuity.
For async closures, cap closure kind, get rid of `by_mut_body`
Right now we have three `AsyncFn*` traits, and three corresponding futures that are returned by their `call_*` functions. This is fine, but it is a bit excessive, since the futures returned by `AsyncFn` and `AsyncFnMut` are identical. Really, the only distinction we need to make with these bodies is "by ref" and "by move".
This PR removes `AsyncFn::CallFuture` and renames `AsyncFnMut::CallMutFuture` to `AsyncFnMut::CallRefFuture`. This simplifies MIR building for async closures, since we no longer need to build an extra "by mut" body; only the "by move" body is materially different.
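A hedged sketch of the resulting trait shape (simplified and renamed; not the exact `std` definitions):
```rust
use std::future::Future;

trait MyAsyncFnOnce<Args> {
    type Output;
    type CallOnceFuture: Future<Output = Self::Output>;
    fn async_call_once(self, args: Args) -> Self::CallOnceFuture;
}

trait MyAsyncFnMut<Args>: MyAsyncFnOnce<Args> {
    // The one shared "by ref" future, used by both `AsyncFnMut` and `AsyncFn`.
    type CallRefFuture<'a>: Future<Output = Self::Output>
    where
        Self: 'a;
    fn async_call_mut(&mut self, args: Args) -> Self::CallRefFuture<'_>;
}

trait MyAsyncFn<Args>: MyAsyncFnMut<Args> {
    // No separate `CallFuture` anymore; `async_call` reuses `CallRefFuture`.
    fn async_call(&self, args: Args) -> Self::CallRefFuture<'_>;
}
```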
We need to do a bit of delicate handling of the `ClosureKind` for async closures, since we need to "cap" it to `AsyncFnMut` in some cases when we only care about which body we're looking for.
This also fixes a bug where `<{async closure} as Fn>::call` was returning a body that takes the async-closure receiver *by move*.
This also helps align the `AsyncFn` traits to the `LendingFn` traits' eventual designs.
various clippy fixes
We need to preserve the order of the given clippy lint rules when passing them along. Since clap doesn't offer any useful interface for this purpose out of the box, we have to handle it manually.
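A hedged sketch of what handling it manually could look like (illustrative; not the actual bootstrap code): scan the raw arguments in order instead of relying on clap's per-flag grouping, which loses the relative order of repeated flags.
```rust
// Collect `-W`/`-D`/`-A`/`-F` lint flags in the order they were given on the
// command line, pairing each flag with the lint name that follows it.
fn ordered_lint_flags(raw_args: &[String]) -> Vec<String> {
    let mut flags = Vec::new();
    let mut args = raw_args.iter();
    while let Some(arg) = args.next() {
        if matches!(arg.as_str(), "-W" | "-D" | "-A" | "-F") {
            if let Some(lint) = args.next() {
                flags.push(format!("{arg} {lint}"));
            }
        }
    }
    flags
}
```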
Additionally, this PR makes `-D` rules work as expected. Previously, lint rules were limited to `-W`. With `-D` enabled, clippy began complaining about numerous lines in the tree, all of which have been resolved in this PR as well.
Fixes #121481
cc `@matthiaskrgr`
Silence unnecessary `!Sized` binding error
When gathering locals, we introduce a `Sized` obligation for each
binding in the pattern. *After* doing so, we typecheck the init
expression. If this has a type failure, we store `{type error}`, for
both the expression and the pattern. But later we store an inference
variable for the pattern.
We now avoid overriding an existing type on a HIR node when it has
already been marked as `{type error}`, and on E0277, when it comes from
`VariableType`, we silence the error in deference to the existing type
error.
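A hedged sketch of the diagnostic shape this addresses (illustrative, and intentionally does not compile):
```rust
fn main() {
    // The init expression has a type error (unknown function). Previously the
    // bindings in the pattern *also* got a spurious E0277 error claiming their
    // size cannot be known; that second error is now silenced.
    let (a, b) = does_not_exist();
}
```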
Fix https://github.com/rust-lang/rust/issues/117846