This handles using deref patterns to choose the correct match arm. This
does not handle bindings or guards.
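As an illustrative sketch (not code from the PR), this is the kind of match it enables, using the `box <pat>` placeholder syntax mentioned later in this log; the gate name follows the tracking issue and may not match current toolchains:

```rust
#![feature(deref_patterns)]

// Hypothetical example: each arm derefs the `String` to a `str` and the
// literal pattern picks the arm; there are no bindings and no guards.
fn describe(s: String) -> &'static str {
    match s {
        box "" => "empty",
        box "hello" => "greeting",
        _ => "other",
    }
}
```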
Co-authored-by: Deadbeef <ent3rm4n@gmail.com>
Move confusing comment about otherwise blocks in `lower_match_tree`
This comment was historically inside a block guarded by `if let Some(otherwise_block) = otherwise`.
When #120978 made the “otherwise block” non-optional, it also flattened that region of code. Doing so left this comment awkwardly stranded above an unrelated line of code, without its original context.
We can restore that context by moving it above the declaration of `otherwise`.
r? ``@Nadrieril``
The suggestion to use `let else` with an uninitialized refutable `let`
statement was erroneous: `let else` cannot be used with deferred
initialization.
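A minimal sketch of the distinction (illustrative code, not from the PR): `let else` must initialize its binding in the same statement, so the old suggestion could never apply to a deferred-initialization `let`.

```rust
fn demo(opt: Option<u32>) -> u32 {
    // Fine: `let else` with an immediate initializer.
    let Some(x) = opt else { return 0 };

    // Not expressible with `let else`: declaring first and assigning later
    // (`let Some(y); y = ...;` is rejected), which is why suggesting
    // `let else` for an uninitialized refutable `let` was wrong.
    x
}
```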
Match ergonomics: implement "`&`pat everywhere"
Implements the eat-two-layers (feature gate `and_pat_everywhere`, all editions) ~and the eat-one-layer (feature gate `and_eat_one_layer_2024`, edition 2024 only, takes priority on that edition when both feature gates are active)~ (EDIT: will be done in a later PR) semantics.
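A sketch of the kind of code the eat-two-layers semantics is meant to accept (my reading of the feature, not code from the PR; the exact rules may differ):

```rust
#![feature(and_pat_everywhere)]

// On stable, `&y` here is a type error, because after match ergonomics the
// inner type is `u32`, not a reference. Under the gate, the `&` pattern can
// instead consume the inherited reference, so `y: u32`.
fn first(x: &Option<u32>) -> u32 {
    match x {
        Some(&y) => y,
        None => 0,
    }
}
```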
cc #123076
r? ``@Nadrieril``
``@rustbot`` label A-patterns A-edition-2024
match lowering: make false edges more precise
When lowering match expressions, we add false edges to hide details of the lowering from borrowck. Morally we pretend we're testing the patterns (and guards) one after the other in order. See the tests for examples. Problem is, the way we implement this today is too coarse for deref patterns.
In deref patterns, a pattern like `deref [1, x]` matches on a `Vec` by creating a temporary to store the output of the call to `deref()`, then continuing to match on that temporary. Here the pattern has a binding, which we set up after the pre-binding block. Currently, however, the false edges tell borrowck that the pre-binding block can be reached from a previous arm as well, so the `deref()` temporary may not be initialized. This triggers an error when we try to use the binding `x`.
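Roughly, the lowering described above behaves like this surface-level sketch (illustrative Rust, not the actual MIR):

```rust
fn sketch(v: Vec<i32>) -> Option<i32> {
    // A temporary stores the result of the `deref()` call...
    let tmp: &[i32] = std::ops::Deref::deref(&v);
    // ...and matching continues on the temporary. The binding `x` is only
    // set up once the slice test has succeeded.
    match tmp {
        [1, x] => Some(*x),
        _ => None,
    }
}
```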
We could call `deref()` a second time, but this opens the door to soundness issues if the deref impl is weird. Instead in this PR I rework false edges a little bit.
What we need from false edges is a (fake) path from each candidate to the next: specifically, from candidate C's pre-binding block to the next candidate D's pre-binding block. Today, we link the pre-binding blocks directly. In this PR, I link them indirectly, by choosing an earlier node on D's success path. Specifically, I choose the earliest block on D's success path that doesn't create a loop; if I chose, say, the start block of the whole match, which is on the success path of all candidates, that would create a loop. This turns out to be rather straightforward to implement.
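Schematically, with C and D as above (the `⇢` edges are the fake ones borrowck sees):

```
before: C.pre_binding ⇢ D.pre_binding
after:  C.pre_binding ⇢ (earliest block on D's success path
                         that does not create a loop in the CFG)
```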
r? `@matthewjasper` if you have the bandwidth, otherwise let me know
match lowering: handle or-patterns one layer at a time
`create_or_subcandidates` and `merge_trivial_subcandidates` both call themselves recursively to handle nested or-patterns, which is hard to follow. In this PR I avoid the need for that; we now process a single "layer" of or-patterns at a time.
By calling back into `match_candidates`, we only need to expand one layer at a time. Conversely, since we always try to simplify a layer that we just expanded (thanks to https://github.com/rust-lang/rust/pull/123067), we only have to merge one layer at a time.
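For example (illustrative), with a nested or-pattern:

```rust
// `1 | (2 | 3)` is first expanded into the two candidates `1` and `(2 | 3)`;
// only when the second candidate is processed again is `2 | 3` expanded,
// so each pass deals with a single layer of or-patterns.
fn f(x: i32) -> bool {
    matches!(x, 1 | (2 | 3))
}
```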
r? `@matthewjasper`
The original proposal allows reference patterns with "compatible" mutability; however, it's not clear what that means, so for now we require an exact match.
I don't know the type system code well, so if something seems to not make sense, it's probably because I made a mistake.
compiler: fix a few `unused_peekable` and `needless_pass_by_ref_mut` clippy lints
This fixes a few instances of `unused_peekable` and `needless_pass_by_ref_mut`. While I expected to fix more warnings, `needless_pass_by_ref_mut` produced too many for one PR, so I stopped here.
Best reviewed commit by commit, as the fixes are split into chunks.
warning: this argument is a mutable reference, but not used mutably
--> compiler\rustc_mir_transform\src\coroutine.rs:1229:11
|
1229 | body: &mut Body<'tcx>,
| ^^^^^^^^^^^^^^^ help: consider changing to: `&Body<'tcx>`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
warning: this argument is a mutable reference, but not used mutably
--> compiler\rustc_mir_transform\src\nrvo.rs:123:11
|
123 | body: &mut mir::Body<'_>,
| ^^^^^^^^^^^^^^^^^^ help: consider changing to: `&mir::Body<'_>`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
warning: this argument is a mutable reference, but not used mutably
--> compiler\rustc_mir_transform\src\nrvo.rs:87:34
|
87 | fn local_eligible_for_nrvo(body: &mut mir::Body<'_>) -> Option<Local> {
| ^^^^^^^^^^^^^^^^^^ help: consider changing to: `&mir::Body<'_>`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
match lowering: build the `Place` instead of keeping a `PlaceBuilder` around
Outside of `MatchPair::new` we don't construct new places, so we don't need to keep a `PlaceBuilder` around.
A bit annoyingly, we have to store an `Option<Place>` even though it's never `None` after simplification, but the alternative would be to re-entangle `MatchPair` construction and simplification, and I'd rather not do that.
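A sketch of the resulting shape (illustrative stand-ins, not the real rustc types):

```rust
// Stand-in for `mir::Place<'tcx>`.
struct Place;

struct MatchPair {
    // Built directly as a `Place` in `MatchPair::new`; stored as an `Option`
    // only because it is not filled in until simplification has run, after
    // which it is always `Some`.
    place: Option<Place>,
}
```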
Replace `mir_built` query with a hook and use `mir_const` everywhere instead
A small perf improvement due to less dep graph handling.
Mostly just a cleanup to get rid of one of our many MIR queries.
The payload of coverage statements was historically a structure with several
fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace
`Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid
accidentally bloating it in the future.
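The size assertion is along these lines (a sketch with a stand-in type; the real macro and asserted size live in `rustc_middle`):

```rust
// Stand-in for the real `StatementKind`, whose `Coverage` variant now holds
// a small `CoverageKind` enum inline instead of a `Box<Coverage>`.
enum StatementKind {
    Coverage(u32),
    Nop,
}

// Compile-time size check: the build fails if the enum accidentally grows.
// rustc expresses this with its `static_assert_size!` macro.
const _: () = assert!(std::mem::size_of::<StatementKind>() <= 8);
```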
Experimental feature postfix match
This adds a basic experimental implementation of the postfix match RFC (rust-lang/rfcs#3295, #121618). [The liaison is](https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Postfix.20Match.20Liaison/near/423301844) ```@scottmcm```, following the lang team's [experimental feature gate process](https://github.com/rust-lang/lang-team/blob/master/src/how_to/experiment.md).
This feature has had an RFC for a while, and there has been discussion on it for a while. It would probably be more valuable to see it out in the field than to continue discussing it. This feature also lets us see how popular postfix expressions like this are, as useful input for the postfix macros RFC, since those will take more time to implement.
It is entirely implemented in the parser, so it should be relatively easy to remove if needed.
This PR is split into 5 commits to ease review.
1. The implementation of the feature & gating.
2. Add a MatchKind field, fix uses, fix pretty.
3. Basic rustfmt impl, as rustfmt crashes upon seeing this syntax without a fix.
4. Add new MatchSource to HIR for Clippy & other HIR consumers
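For reference, a sketch of the surface syntax being gated (the gate name `postfix_match` follows the feature; details may differ):

```rust
#![feature(postfix_match)]

fn describe(n: Option<i32>) -> &'static str {
    // `expr.match { ... }` is sugar for `match expr { ... }`.
    n.match {
        Some(x) if x > 0 => "positive",
        Some(_) => "non-positive",
        None => "nothing",
    }
}
```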
deref patterns: bare-bones feature gate and typechecking
I am restarting the deref patterns experimentation. This introduces a feature gate under the lang-team [experimental feature](https://github.com/rust-lang/lang-team/blob/master/src/how_to/experiment.md) process, with [````@cramertj```` as lang-team liaison](https://github.com/rust-lang/lang-team/issues/88) (it's been a while though, are you still ok with this, ````@cramertj````?). Tracking issue: https://github.com/rust-lang/rust/issues/87121.
This is the barest-bones implementation I could think of:
- explicit syntax, reusing `box <pat>` because that saves me a ton of work;
- use `Deref` as a marker trait (instead of a yet-to-design `DerefPure`);
- no support for mutable patterns with `DerefMut` for now;
- MIR lowering will come in the next PR. It's the trickiest part.
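A sketch of what typechecks under this bare-bones form (illustrative; since MIR lowering is not implemented in this PR, such code cannot actually run yet):

```rust
#![feature(deref_patterns)]
use std::rc::Rc;

// The `box` pattern matches `Rc<[i32]>` through its `Deref` impl to `[i32]`,
// using `Deref` as the marker trait described above.
fn starts_with_one(xs: Rc<[i32]>) -> bool {
    match xs {
        box [1, ..] => true,
        _ => false,
    }
}
```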
My goal is to let us figure out the MIR lowering part, which might take some work. And hopefully get something working for std types soon.
This is in large part salvaged from ````@fee1-dead's```` https://github.com/rust-lang/rust/pull/119467.
r? ````@compiler-errors````
recursively evaluate the constants in everything that is 'mentioned'
This is another attempt at fixing https://github.com/rust-lang/rust/issues/107503. The previous attempt at https://github.com/rust-lang/rust/pull/112879 seems stuck in figuring out where the [perf regression](https://perf.rust-lang.org/compare.html?start=c55d1ee8d4e3162187214692229a63c2cc5e0f31&end=ec8de1ebe0d698b109beeaaac83e60f4ef8bb7d1&stat=instructions:u) comes from. In https://github.com/rust-lang/rust/pull/122258 I learned some things, which informed the approach this PR is taking.
Quoting from the new collector docs, which explain the high-level idea:
```rust
//! One important role of collection is to evaluate all constants that are used by all the items
//! which are being collected. Codegen can then rely on only encountering constants that evaluate
//! successfully, and if a constant fails to evaluate, the collector has much better context to be
//! able to show where this constant comes up.
//!
//! However, the exact set of "used" items (collected as described above), and therefore the exact
//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
//! a function call that uses a failing constant, so an unoptimized build may fail where an
//! optimized build succeeds. This is undesirable.
//!
//! To fix this, the collector has the concept of "mentioned" items. Some time during the MIR
//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
//! dead code and get optimized away (which makes them no longer "used"), they are still
//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
//! whether we are visiting a used item or merely a mentioned item.
//!
//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
//! need to stay in sync in the following sense:
//!
//! - For every item that the collector gathers that could eventually lead to build failure (most
//!   likely due to containing a constant that fails to evaluate), a corresponding mentioned item
//!   must be added. This should use the exact same strategy as the collector to make sure they are
//!   in sync. However, while the collector works on monomorphized types, mentioned items are
//!   collected on generic MIR -- so any time the collector checks for a particular type (such as
//!   `ty::FnDef`), we have to just unconditionally add this as a mentioned item.
//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
//!   would have done during regular MIR visiting. Basically you can think of the collector having
//!   two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
//!   literally separated by a call to `self.monomorphize`); the pre-monomorphization stage is
//!   duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
//!   `visit_mentioned_item`.
//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
//!   its MIR traversal with exactly what mentioned item gathering would have added in the same
//!   situation. This detects mentioned items that have *not* been optimized away and hence don't
//!   need a dedicated traversal.
enum CollectionMode {
    /// Collect items that are used, i.e., actually needed for codegen.
    ///
    /// Which items are used can depend on optimization levels, as MIR optimizations can remove
    /// uses.
    UsedItems,
    /// Collect items that are mentioned. The goal of this mode is that it is independent of
    /// optimizations: the set of "mentioned" items is computed before optimizations are run.
    ///
    /// The exact contents of this set are *not* a stable guarantee. (For instance, it is currently
    /// computed after drop-elaboration. If we ever do some optimizations even in debug builds, we
    /// might decide to run them before computing mentioned items.) The key property of this set is
    /// that it is optimization-independent.
    MentionedItems,
}
```
And the `mentioned_items` MIR body field docs:
```rust
/// Further items that were mentioned in this function and hence *may* become monomorphized,
/// depending on optimizations. We use this to avoid optimization-dependent compile errors: the
/// collector recursively traverses all "mentioned" items and evaluates all their
/// `required_consts`.
///
/// This is *not* soundness-critical and the contents of this list are *not* a stable guarantee.
/// All that's relevant is that this set is optimization-level-independent, and that it includes
/// everything that the collector would consider "used". (For example, we currently compute this
/// set after drop elaboration, so some drop calls that can never be reached are not considered
/// "mentioned".) See the documentation of `CollectionMode` in
/// `compiler/rustc_monomorphize/src/collector.rs` for more context.
pub mentioned_items: Vec<Spanned<MentionedItem<'tcx>>>,
```
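For context, a sketch of the optimization-dependent failure mode this addresses; this program is intentionally rejected, and with mentioned-items collection it is rejected at every optimization level:

```rust
struct Fail<T>(T);

impl<T> Fail<T> {
    // Evaluating this constant always fails.
    const C: () = panic!("this fails to evaluate");
}

fn maybe_use<T>(cond: bool) {
    if cond {
        // Dead at runtime for `cond == false`, and optimizations may remove
        // this block entirely; but `Fail::<T>::C` is still *mentioned*, so
        // the collector evaluates it (and reports the error) regardless.
        let _ = Fail::<T>::C;
    }
}

fn main() {
    maybe_use::<u32>(false);
}
```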
Fixes #107503