This draws a clear distinction between the fields/methods that are needed by
initial span extraction and preprocessing, and those that are needed by the
main "refinement" loop.
Reevaluate `body.should_skip()` after updating the MIR phase to ensure
that injected MIR is processed correctly.
Update a few custom MIR tests that were ill-formed for the injected
phase.
`Diagnostic` has 40 methods that return `&mut Self` and could be
considered setters. Four of them have a `set_` prefix. This doesn't seem
necessary for a type that implements the builder pattern. This commit
removes the `set_` prefixes on those four methods.
coverage: Prepare mappings separately from injecting statements
These two tasks historically needed to be interleaved, but after various recent changes (including #116046 and #116917) they can now be fully separated.
---
`@rustbot` label +A-code-coverage
Implement constant propagation on top of MIR SSA analysis
This implements the idea I proposed in https://github.com/rust-lang/rust/pull/110719#issuecomment-1718324700
Based on https://github.com/rust-lang/rust/pull/109597
The value numbering "GVN" pass formulates each rvalue that appears in MIR with an abstract form (the `Value` enum), and assigns an integer `VnIndex` to each. This abstract form can be used to deduplicate values, reusing an earlier local that holds the same value instead of recomputing. This part is proposed in #109597.
From this abstract representation, we can perform more involved simplifications, for example in https://github.com/rust-lang/rust/pull/111344.
With the abstract representation `Value`, we can also attempt to evaluate each value to a constant using the interpreter. This builds a `VnIndex -> OpTy` map. From this map, we can opportunistically replace an operand or an rvalue with a constant if its value has an associated `OpTy`.
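As a surface-level illustration of the deduplication (a hypothetical example; the pass itself operates on MIR, not on source Rust):
```rust
fn demo(x: u64, y: u64) -> u64 {
    let a = x + y;
    // `x + y` maps to the same abstract `Value`, and thus the same
    // `VnIndex`, as the computation above, so GVN can reuse `a` here
    // instead of recomputing the sum.
    let b = x + y;
    a * b
}
```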
The most relevant commit is [Evaluated computed values to constants.](2767c4912e)
r? `@oli-obk`
Make closures carry their own ClosureKind
Right now, we use the "`movability`" field of `hir::Closure` to distinguish a closure and a coroutine. This is paired together with the `CoroutineKind`, which is located not in the `hir::Closure`, but in the `hir::Body`. This is strange and redundant.
This PR introduces `ClosureKind` with two variants -- `Closure` and `Coroutine`, which is put into `hir::Closure`. The `CoroutineKind` is thus removed from `hir::Body`, and `Option<Movability>` no longer needs to be a stand-in for "is this a closure or a coroutine".
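A minimal sketch of the resulting shape (hypothetical stand-ins, not the exact rustc definitions):
```rust
// Stand-in for the real rustc_hir type.
enum CoroutineKind { Gen, Async, AsyncGen }

// The new two-variant enum stored in `hir::Closure`, replacing the
// `Option<Movability>` / `hir::Body` split described above.
enum ClosureKind {
    Closure,
    Coroutine(CoroutineKind),
}
```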
r? eholk
Split coroutine desugaring kind from source
What a coroutine is desugared from (gen/async gen/async) should be separate from where it comes from (fn/block/closure).
Separate MIR lints from validation
Add a MIR lint pass, enabled with -Zlint-mir, which identifies undefined or
likely erroneous behaviour.
The initial implementation mostly migrates existing checks of this nature
from the MIR validator, where they did not belong (those checks have false
positives, and there is nothing inherently invalid about MIR with undefined
behaviour).
Fixes #104736
Fixes #104843
Fixes #116079
Fixes #116736
Fixes #118990
The old code used a heuristic to detect async functions and adjust their
coverage spans to produce better output. But there's no need to resort to a
heuristic when we can just check whether the current function is actually an
`async fn`.
And make all hand-written `IntoDiagnostic` impls generic, by using
`DiagnosticBuilder::new(dcx, level, ...)` instead of e.g.
`dcx.struct_err(...)`.
This means the `create_*` functions are the source of the error level.
This change will let us remove `struct_diagnostic`.
Note: `#[rustc_lint_diagnostics]` is added to `DiagnosticBuilder::new`;
this is necessary to pass diagnostics tests now that it's used in
`into_diagnostic` functions.
coverage: Skip instrumenting a function if no spans were extracted from MIR
The immediate symptoms of #118643 were fixed by #118666, but some users reported that their builds now encounter another coverage-related ICE:
```
error: internal compiler error: compiler/rustc_codegen_llvm/src/coverageinfo/mapgen.rs:98:17: A used function should have had coverage mapping data but did not: (...)
```
I was able to reproduce at least one cause of this error: if no relevant spans could be extracted from a function, but the function contains `CoverageKind::SpanMarker` statements, then codegen still thinks the function is instrumented and complains about the fact that it has no coverage spans.
This PR prevents that from happening in two ways:
- If we didn't extract any relevant spans from MIR, skip instrumenting the entire function and don't create a `FunctionCoverageInfo` for it.
- If coverage codegen sees a `CoverageKind::SpanMarker` statement, skip it early and avoid creating `func_coverage`.
---
Fixes #118850.
Rollup of 3 pull requests
Successful merges:
- #116888 (Add discussion that concurrent access to the environment is unsafe)
- #118888 (Uplift `TypeAndMut` and `ClosureKind` to `rustc_type_ir`)
- #118929 (coverage: Tidy up early parts of the instrumentor pass)
r? `@ghost`
`@rustbot` modify labels: rollup
coverage: Tidy up early parts of the instrumentor pass
This is extracted from #118237, which needed to be manually rebased anyway.
Unlike that PR, this one only affects the coverage instrumentor, and doesn't attempt to move any code into the MIR builder. That can be left to a future version of #118305, which can still benefit from these improvements.
So this is now mostly a refactoring of some internal parts of the instrumentor.
Fix cases where std accidentally relied on inline(never)
This PR increases the power of `-Zcross-crate-inline-threshold=always` so that it applies through `#[inline(never)]`. Note that although this is called "cross-crate inlining", in this case especially it is _just_ lazy per-CGU codegen. The MIR inliner and LLVM still respect the attribute as much as they ever have.
Trying to bootstrap with the new `-Zcross-crate-inline-threshold=always` change revealed two bugs:
We have special intrinsics `assert_inhabited`, `assert_zero_valid`, and `assert_mem_uninitialized_valid` which codegen backends will lower to nothing or a call to `panic_nounwind`. Since we may not have any call to `panic_nounwind` in MIR but emit one anyway, we need to specially tell `MirUsedCollector` about this situation.
`#[lang = "start"]` is special-cased already so that `MirUsedCollector` will collect it, but then when we make it cross-crate-inlinable it is only assigned to a CGU based on whether `MirUsedCollector` saw a call to it, which of course it didn't.
---
I started looking into this because https://github.com/rust-lang/rust/pull/118683 revealed a case where we were accidentally relying on a function being `#[inline(never)]`, and cranking up cross-crate-inlinability seems like a way to find other situations like that.
r? `@nnethercote` because I don't like what I'm doing to the CGU partitioning code here but I can't come up with something much better
If we want to know whether two byte positions are in the same file, we don't
need to clone and compare `Lrc<SourceFile>`; we can just get their indices and
compare those instead.
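A self-contained sketch of the idea, with a simplified stand-in for the real `SourceMap` (which keeps files sorted by start position):
```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct BytePos(u32);

struct SourceMap {
    // Start position of each file, ascending; first file starts at 0.
    file_starts: Vec<BytePos>,
}

impl SourceMap {
    /// Index of the file containing `pos`.
    fn lookup_source_file_idx(&self, pos: BytePos) -> usize {
        self.file_starts.partition_point(|start| *start <= pos) - 1
    }

    /// Same-file check via indices, with no `Lrc<SourceFile>` clones.
    fn same_file(&self, a: BytePos, b: BytePos) -> bool {
        self.lookup_source_file_idx(a) == self.lookup_source_file_idx(b)
    }
}
```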
Changes in this patch:
- Extract local variable `def_id`
- Check `is_fn_like` without retrieving HIR
- Inline some locals that are used once and aren't needed for clarity
It's necessary for `derive(Diagnostic)`, but is best avoided elsewhere
because there are clearer alternatives.
This required adding `Handler::struct_almost_fatal`.
Renamings:
- find -> opt_hir_node
- get -> hir_node
- find_by_def_id -> opt_hir_node_by_def_id
- get_by_def_id -> hir_node_by_def_id
Fix rebase changes using removed methods
Use `tcx.hir_node_by_def_id()` whenever possible in compiler
Fix clippy errors
Fix compiler
Apply suggestions from code review
Co-authored-by: Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
Add FIXME for `tcx.hir()` returned type about its removal
Simplify with `tcx.hir_node_by_def_id`
The state transform retains storage statements for locals that are not
stored inside a coroutine. It ensures those locals are live when
resuming by inserting StorageLive as appropriate, but it forgot to end
the storage of those locals when suspending, which is fixed here.
While the end of the live range is implicit when executing a return, it
is nevertheless useful for the inliner, which would otherwise extend the
live range beyond the return.
Detect redundant imports that can be eliminated.
For #117772: to facilitate review and modification, the checking code
and the code for removing redundant imports are split into two PRs.
Make async generators fused by default
I actually changed my mind about this since the implementation PR landed. I think it's beneficial for `async gen` blocks to be "fused" by default -- i.e., for them to repeatedly return `Poll::Ready(None)` -- rather than panic.
We have [`FusedStream`](https://docs.rs/futures/latest/futures/stream/trait.FusedStream.html) in futures-rs to represent streams with this capability already anyways.
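For comparison, `Iterator::fuse` in the standard library gives the same guarantee for iterators, which this runnable example demonstrates:
```rust
fn main() {
    let mut it = (0..2).fuse();
    assert_eq!(it.next(), Some(0));
    assert_eq!(it.next(), Some(1));
    assert_eq!(it.next(), None);
    // Fused: once exhausted, it keeps returning None forever. The
    // proposal above gives `async gen` blocks the analogous behaviour
    // of repeatedly returning `Poll::Ready(None)` instead of panicking.
    assert_eq!(it.next(), None);
}
```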
r? eholk
cc `@rust-lang/wg-async`, I'd like to know if anyone else has opinions about this.
coverage: Simplify the heuristic for ignoring `async fn` return spans
The code for extracting coverage spans from MIR has a special heuristic for dealing with `async fn`, so that the function's closing brace does not have a confusing double count.
The code implementing that heuristic is currently mixed in with the code for flushing remaining spans after the main refinement loop, making the refinement code harder to understand.
We can solve that by hoisting the heuristic to an earlier stage, after the spans have been extracted and sorted but before they have been processed by the refinement loop.
The coverage tests verify that the heuristic is still effective, so coverage mappings/reports for `async fn` have not changed.
---
This PR also has the side-effect of fixing the `None some_prev` panic that started appearing after #118525.
The old code assumed that `prev` would always be present after the refinement loop. That was only true if the list of collected spans was non-empty, but prior to #118525 that didn't seem to come up in practice. After that change, the list of collected spans could be empty in some specific circumstances, leading to panics.
The new code uses an `if let` to inspect `prev`, which correctly does nothing if there is no span present.
coverage: Use `SpanMarker` to improve coverage spans for `if !` expressions
Coverage instrumentation works by extracting source code spans from MIR. However, some kinds of syntax are effectively erased during MIR building, so their spans don't necessarily exist anywhere in MIR, making them invisible to the coverage instrumentor (unless we resort to various heuristics and hacks to recover them).
This PR introduces `CoverageKind::SpanMarker`, which is a new variant of `StatementKind::Coverage`. Its sole purpose is to represent spans that would otherwise not appear in MIR, so that the coverage instrumentor can extract them.
When coverage is enabled, the MIR builder can insert these dummy statements as needed, to improve the accuracy of spans used by coverage mappings.
Fixes #115468.
---
`@rustbot` label +A-code-coverage
There are cases where coverage instrumentation wants to show a span for some
syntax element, but there is no MIR node that naturally carries that span, so
the instrumentor can't see it.
MIR building can now use this new kind of coverage statement to deliberately
include those spans in MIR, attached to a dummy statement that has no other
effect.
coverage: Merge refined spans in a separate final pass
Pulling this merge step out of `push_refined_span` and into a separate pass lets us push directly to `refined_spans` instead of calling a helper method.
Because the compiler can now see partial borrows of `refined_spans`, we can remove some extra code that was jumping through hoops to satisfy the borrow checker.
---
`@rustbot` label +A-code-coverage
coverage: Avoid unnecessary macros in unit tests
These macros don't provide enough value to justify their complexity, when they can just as easily be functions instead.
---
`@rustbot` label +A-code-coverage
compile-time evaluation: detect writes through immutable pointers
This has two motivations:
- it unblocks https://github.com/rust-lang/rust/pull/116745 (and therefore takes a big step towards `const_mut_refs` stabilization), because we can now detect if the memory that we find in `const` can be interned as "immutable"
- it would detect the UB that was uncovered in https://github.com/rust-lang/rust/pull/117905, which was caused by accidental stabilization of `copy` functions in `const` that can only be called with UB
When UB is detected, we emit a future-compat warn-by-default lint. This is not a breaking change, so completely in line with [the const-UB RFC](https://rust-lang.github.io/rfcs/3016-const-ub.html), meaning we don't need t-lang FCP here. I made the lint immediately show up for dependencies since it is nearly impossible to even trigger this lint without `const_mut_refs` -- the accidentally stabilized `copy` functions are the only way this can happen, so the crates that popped up in #117905 are the only causes of such UB (in the code that crater covers), and the three cases of UB that we know about have all been fixed in their respective crates already.
The way this is implemented is by making use of the fact that our interpreter is already generic over the notion of provenance. For CTFE we now use the new `CtfeProvenance` type which is conceptually an `AllocId` plus a boolean `immutable` flag (but packed for a more efficient representation). This means we can mark a pointer as immutable when it is created as a shared reference. The flag will be propagated to all pointers derived from this one. We can then check the immutable flag on each write to reject writes through immutable pointers.
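Conceptually (a sketch only; the real type packs the flag into the pointer representation rather than using a separate field):
```rust
#[derive(Clone, Copy, PartialEq, Eq)]
struct AllocId(u64); // stand-in for the real interner handle

#[derive(Clone, Copy)]
struct CtfeProvenance {
    alloc_id: AllocId,
    /// Set when the pointer is created as a shared reference, and
    /// inherited by every pointer derived from it.
    immutable: bool,
}

fn check_write(p: CtfeProvenance) -> Result<(), &'static str> {
    if p.immutable {
        // In the compiler this surfaces as the future-compat lint
        // described above, not a hard error.
        return Err("write through an immutable pointer");
    }
    Ok(())
}
```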
I just hope perf works out.
coverage: Be more strict about what counts as a "visible macro"
This is a follow-up to the workaround in #117827, and I believe it now properly fixes #117788.
The old code treats a span as having a “visible macro” if it is part of a macro-expansion, and its parent callsite's context is the same as the body span's context. But if the body span is itself part of an expansion, the macro in question might not actually be visible from the body span. That results in the macro name's length being meaningless as a span offset.
We now only consider spans whose parent callsite is the same as the source callsite, i.e. the parent has no parent.
---
I've also included some related cleanup for the code added by #117827. That code was more complicated than normal, because I wanted it to be easy to backport to stable/beta.
Streamline MIR dataflow cursors
`rustc_mir_dataflow` has two kinds of results (`Results` and `ResultsCloned`) and three kinds of results cursor (`ResultsCursor`, `ResultsClonedCursor`, `ResultsRefCursor`). I found this quite confusing.
This PR removes `ResultsCloned`, `ResultsClonedCursor`, and `ResultsRefCursor`, leaving just `Results` and `ResultsCursor`. This makes the relevant code shorter and easier to read, and there is no performance penalty.
r? `@cjgillot`
These impls are all needed for just a single `IntoDiagnostic` type, not
a family of them.
Note that `ErrorGuaranteed` is the default type parameter for
`IntoDiagnostic`.
When we extract coverage spans from MIR, we try to "un-expand" them back to
spans that are inside the function's body span.
In cases where that doesn't succeed, the current code just swaps in the entire
body span instead. But that tends to result in coverage spans that are
completely unrelated to the control flow of the affected code, so it's better
to just discard those spans.
ConstProp: Correctly remove const if unknown value assigned to it.
Closes #118328
The problematic sequence of MIR is:
```rust
_1 = const 0_usize;
_1 = const _; // This is an associated constant we can't know before monomorphization.
_0 = _1;
```
1. When `ConstProp::visit_assign` happens on `_1 = const 0_usize;`, it records that `0x0usize` is the value for `_1`.
2. Next, `visit_assign` happens on `_1 = const _;`. Because the rvalue's `.has_param()` is true, it can't be const-evaluated.
3. Finally, `visit_assign` happens on `_0 = _1;`. Here it would still think the value of `_1` is `0x0usize` from step 1.
The solution is to remove the recorded const when checking the rvalue fails, since that local was overwritten and any value previously recorded for it must be invalidated.
This should probably be back-ported to beta. Stable is more iffy, as it's gone unidentified since 1.70, so I only think it's worthwhile if there's another reason for a 1.74.1 release anyway.
Validation introduced in #113124 allows UnwindAction::Continue and
TerminatorKind::Resume to occur only in functions with an ABI that can
unwind. The function ABI depends on the panic strategy, which can vary
across crates.
Usually MIR is built and validated in the same crate. The coroutine drop
glue thus far was an exception. As a result validation could fail when
mixing different panic strategies.
Avoid the problem by executing AbortUnwindingCalls along with the
validation.
By just cloning the entire `Results` in the one place where
`ResultsClonedCursor` was used. This is extra allocations but the
performance effect is negligible.
It's currently used because `requires_storage_results` is used in two
locations: once with a cursor, and once later on without a cursor. The
non-consuming `as_results_cursor` is used for the first location.
But we can instead use the consuming `into_results_cursor` and then use
`into_results` to extract the `Results` from the finished-with cursor
for use at the second location.
coverage: Simplify building coverage expressions based on sums
This is a combination of some interlinked changes to the code that creates coverage counters/expressions for nodes and edges in the coverage graph:
- Some preparatory cleanups in `MakeBcbCounters::make_branch_counters`
- Use `BcbCounter` (instead of `CovTerm`) when building coverage expressions
- This makes it easier to introduce a fold for building sums
- Simplify the creation of coverage expressions based on sums, by having `Iterator::fold` do much of the work
- Get rid of the awkward `BcbBranch` enum, and replace it with graph edges represented as `(from_bcb, to_bcb)`
- This further simplifies the body of the fold
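A self-contained sketch of the fold-based summing, with `Term` and `Builder` standing in for the real counter/expression machinery:
```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Term {
    Counter(u32),
    Expression(u32),
}

#[derive(Default)]
struct Builder {
    // Each entry records the (lhs, rhs) of one `lhs + rhs` expression.
    exprs: Vec<(Term, Term)>,
}

impl Builder {
    fn make_add(&mut self, lhs: Term, rhs: Term) -> Term {
        self.exprs.push((lhs, rhs));
        Term::Expression(self.exprs.len() as u32 - 1)
    }

    /// Sum an arbitrary number of terms; `fold` does all the work.
    fn make_sum(&mut self, mut terms: impl Iterator<Item = Term>) -> Option<Term> {
        let first = terms.next()?;
        Some(terms.fold(first, |acc, term| self.make_add(acc, term)))
    }
}
```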
Currently we always do this:
```
use rustc_fluent_macro::fluent_messages;
...
fluent_messages! { "./example.ftl" }
```
But there is no need, we can just do this everywhere:
```
rustc_fluent_macro::fluent_messages! { "./example.ftl" }
```
which is shorter.
The `fluent_messages!` macro produces uses of
`crate::{D,Subd}iagnosticMessage`, which means that every crate using
the macro must have this import:
```
use rustc_errors::{DiagnosticMessage, SubdiagnosticMessage};
```
This commit changes the macro to instead use
`rustc_errors::{D,Subd}iagnosticMessage`, which avoids the need for the
imports.
`BcbBranch` represented an out-edge of a coverage graph node, but would
silently refer to a node instead in cases where that node only had one in-edge.
Instead we now refer to a graph edge as a `(from_bcb, to_bcb)` pair, or
sometimes as just one of those nodes when the other node is implied by the
surrounding context. The case of sole in-edges is handled by special code added
directly to `get_or_make_edge_counter_operand`.
This was previously a helper method in `MakeBcbCounters`, but putting it in the
graph lets us call it from `BcbBranch`, and gives us a more fine-grained
borrow.
In some cases we need to prepare a coverage expression that is the sum of an
arbitrary number of other terms. This patch simplifies the code paths that
build those sums.
This causes some churn in the mappings, because the previous code was building
its sums in a somewhat idiosyncratic order.
Now that this code path unconditionally calls `make_branch_counters`, we might
as well make that method responsible for creating the node's counter as well,
since it needs the resulting term anyway.
There were three issues previously:
* The self argument was pinned, despite Iterator::next taking an
unpinned mutable reference.
* A resume argument was passed, despite Iterator::next not having one.
* The return value was CoroutineState<Item, ()> rather than Option<Item>.
While these things just so happened to work with the LLVM backend,
cg_clif performs much stricter checks when trying to assign a value to
a place. In addition it can't handle the mismatch between the number of
arguments specified by the FnAbi and the FnSig.
`on_all_children_bits` has two arguments that are unused: `tcx` and
`body`. This was not detected by the compiler because it's a recursive
function.
This commit removes them, and removes lots of other arguments and fields
that are no longer necessary.
Fix insertion of statements to be executed along return edge in inlining
Inlining creates additional statements to be executed along the return
edge: an assignment to the destination, storage end for temporaries.
Previously those statements were inserted directly into a call target,
but this is incorrect when the target has other predecessors.
Avoid the issue by creating a new dedicated block for those statements.
When the block happens to be redundant it will be removed by CFG
simplification that follows inlining.
Fixes #117355
Begin to abstract `rustc_type_ir` for rust-analyzer
This adds the "nightly" feature which is used by the compiler, and falls back to more simple implementations when that is not active.
r? `@lcnr` or `@jackh726`
Remove option_payload_ptr; redundant to offset_of
The `option_payload_ptr` intrinsic is no longer required as `offset_of` supports traversing enums (#114208). This PR removes it in order to dogfood offset_of (as suggested at https://github.com/rust-lang/rust/issues/106655#issuecomment-1790907626). However, it will not build until those changes reach beta (which I think is within the next 8 days?) so I've opened it as a draft.
Remove incorrect transformation from RemoveZsts
Partial removal of storage statements for a local is incorrect, so a decision to optimize cannot be made independently for each statement.
Avoid the issue by performing the transformation completely or not at all.
This method is trying to detect macro invocations, so that it can split a span
into two parts just after the `!` of the invocation.
Under some circumstances (probably involving nested macros), it gets confused
and produces a span that is larger than the original span, and possibly extends
outside its enclosing function and even into an adjacent file.
In extreme cases, that can result in malformed coverage mappings that cause
`llvm-cov` to fail. For now, we at least want to detect these egregious cases
and avoid them, so that coverage reports can still be produced.
generator layout: ignore fake borrows
fixes #117059
We emit fake shallow borrows in case the scrutinee place uses a `Deref` and there is a match guard. This is necessary to prevent the match guard from mutating the scrutinee: fab1054e17/compiler/rustc_mir_build/src/build/matches/mod.rs (L1250-L1265)
These fake borrows end up impacting the generator witness computation in `mir_generator_witnesses`, which causes the issue in #117059. This PR now completely ignores fake borrows during this computation. This is sound as these are always removed after analysis and the actual computation of the generator layout happens afterwards.
Only the second commit impacts behavior, and could be backported by itself.
r? types
Update the alignment checks to match rust-lang/reference#1387
Previously, we had a special case to not check `Rvalue::AddressOf` in this pass because we weren't quite sure if pointers needed to be aligned in the Place passed to it: https://github.com/rust-lang/rust/pull/112026
Since https://github.com/rust-lang/reference/pull/1387 merged, this PR updates this pass to match. The behavior of the check is nearly unchanged, except we also avoid inserting a check for creating references. Most of the changes in this PR are cleanup and new tests.
Support enum variants in offset_of!
This PR implements support for navigating through enum variants in `offset_of!`, placing the enum variant name in the second argument to `offset_of!`. The RFC placed it in the first argument, but I think it interacts better with nested field access in the second, as you can then write things like
```rust
offset_of!(Type, field.Variant.field)
```
Alternatively, a syntactic distinction could be made between variants and fields (e.g. `field::Variant.field`) but I'm not convinced this would be helpful.
[RFC 3308 # Enum Support](https://rust-lang.github.io/rfcs/3308-offset_of.html#enum-support-offset_ofsomeenumstructvariant-field_on_variant)
Tracking Issue #106655.
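A small usage sketch on nightly (the exact feature gate for enum support has varied across nightlies; `offset_of_enum` is assumed here):
```rust
#![feature(offset_of_enum)]

use std::mem::offset_of;

enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn main() {
    // Navigate through the `Square` variant to its `side` field.
    println!("{}", offset_of!(Shape, Square.side));
}
```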
Replace switch to unreachable by assume statements
`UnreachablePropagation` currently keeps some switch terminators alive in order to ensure codegen can infer the inequalities on the discriminants.
This PR proposes to encode those inequalities as `Assume` statements.
This allows MIR to be simplified further by removing some useless terminators.
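A MIR-flavoured sketch of the encoding (pseudocode, not exact MIR syntax):
```
// Before: the switch exists only so codegen can learn `_1 != 0`.
bb0: { switchInt(_1) -> [0: unreachable_bb, otherwise: bb1]; }

// After: record the inequality directly and drop the terminator.
bb0: {
    _2 = Ne(_1, const 0_u8);
    assume(move _2);
    goto -> bb1;
}
```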
Historically, these errors existed so that the coverage debug code could dump
additional information before reporting a compiler bug. That debug code was
removed by #115962, so we can now simplify these methods by making them panic
when they detect a bug.
Enable cross-crate-inlining when MIR inlining is enabled
This would make https://github.com/rust-lang/rust/issues/117355 generally less obscure, and also seems like a good idea, even if for some reason someone wants MIR opts but no codegen opts.
See through aggregates in GVN
This PR is extracted from https://github.com/rust-lang/rust/pull/111344
The first 2 commits are cleanups to avoid repeated work. I propose to stop removing useless assignments as part of this pass, and let a later `SimplifyLocals` do it. This makes tests easier to read (among others).
The next 3 commits add a constant folding mechanism to the GVN pass, presented in https://github.com/rust-lang/rust/pull/116012. ~This pass is designed to only use global allocations, to avoid any risk of accidental modification of the stored state.~
The following commits implement opportunistic simplifications, in particular:
- projections of aggregates: `MyStruct { x: a }.x` gets replaced by `a`, works with enums too;
- projections of arrays: `[a, b][0]` becomes `a`;
- projections of repeat expressions: `[a; N][x]` becomes `a`;
- transform arrays of equal operands into a repeat rvalue.
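Source-level analogues of the projection simplifications above (illustrative only; the pass works on MIR):
```rust
struct MyStruct { x: u32 }

fn demo(a: u32, b: u32, i: usize) -> u32 {
    let p = MyStruct { x: a }.x; // projection of aggregate: becomes `a`
    let q = [a, b][0];           // projection of array: becomes `a`
    let r = [a; 4][i % 4];       // projection of repeat: becomes `a`
    p + q + r
}
```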
Fixes https://github.com/rust-lang/miri/issues/3090
r? `@oli-obk`
Rollup of 5 pull requests
Successful merges:
- #115773 (tvOS simulator support on Apple Silicon for rustc)
- #117162 (Remove `cfg_match` from the prelude)
- #117311 (-Zunpretty help: add missing possible values)
- #117316 (Mark constructor of `BinaryHeap` as const fn)
- #117319 (explain why we don't inline when target features differ)
r? `@ghost`
`@rustbot` modify labels: rollup
Implement `gen` blocks in the 2024 edition
Coroutines tracking issue https://github.com/rust-lang/rust/issues/43122
`gen` block tracking issue https://github.com/rust-lang/rust/issues/117078
This PR implements `gen` blocks that implement `Iterator`. Most of the logic is shared with `async` blocks, and thus I renamed various types that were referring to `async` specifically.
An example usage of `gen` blocks is
```rust
fn foo() -> impl Iterator<Item = i32> {
gen {
yield 42;
for i in 5..18 {
if i.is_even() { continue }
yield i * 2;
}
}
}
```
The limitations (to be resolved) of the implementation are listed in the tracking issue
Only call `mir_const_qualif` if absolutely necessary
Pull the perf change out of https://github.com/rust-lang/rust/pull/113617
This should not have any impact on behaviour (if it does, we'll see an ICE)
Require target features to match exactly during inlining
In general it is not correct to inline a callee whose target features
are a subset of the caller's. Require target features to match exactly
during inlining.
The exact match could potentially be relaxed, but this would require
identifying the specific features that are allowed to differ, those that
need to match, and those that can be present in the caller but not in
the callee.
This resolves the MIR part of #116573. For other concerns with respect
to the previous implementation, also see areInlineCompatible in LLVM.
Separate move path tracking between borrowck and drop elaboration.
The primary goal of this PR is to skip creating a `MovePathIndex` for paths that do not need dropping in drop elaboration.
The first 2 commits are cleanups.
The next 2 commits displace `move` errors from move-path builder to borrowck. Move-path builder keeps the same logic, but does not carry error information any more.
The remaining commits allow filtering `MovePathIndex` creation according to types. This is used in drop elaboration, to avoid computing dataflow for paths that do not need dropping.
Implement jump threading MIR opt
This pass is an attempt to generalize `ConstGoto` and `SeparateConstSwitch` passes into a more complete jump threading pass.
This pass is rather heavy, as it performs a truncated backwards DFS on MIR starting from each `SwitchInt` terminator. This backwards DFS remains very limited, as it only walks through `Goto` terminators.
It is built to support constants and discriminants, propagating them through a very limited set of operations.
The pass successfully manages to disentangle the `Some(x?)` use case and the DFA use case. It still needs a few tests before being ready.
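A MIR-flavoured sketch of the effect (pseudocode):
```
// Before: the value of `_1` is known when bb1 is reached from bb0,
// but only a backwards walk from the SwitchInt can see that.
bb0: { _1 = const 1_u8; goto -> bb1; }
bb1: { switchInt(_1) -> [1: bb2, otherwise: bb3]; }

// After jump threading: bb0 branches straight to the known target.
bb0: { _1 = const 1_u8; goto -> bb2; }
```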
coverage: Fix inconsistent handling of function signature spans
While doing some more cleanup of `spans`, I noticed a strange inconsistency in how function signatures are handled. Normally the function signature span is treated as though it were executable as part of the start of the function, but in some cases the signature span disappears entirely from coverage, for no obvious reason.
This is caused by the fact that spans created by `CoverageSpan::for_fn_sig` don't add the span to their `merged_spans` field (unlike normal statement/terminator spans). In cases where the span-processing code looks at those merged spans, it thinks the signature span is no longer visible and deletes it.
Adding the signature span to `merged_spans` resolves the inconsistency.
(Prior to #116409 this wouldn't have been possible, because there was no case in the old `CoverageStatement` enum representing a signature. Now that `merged_spans` is just a list of spans, that's no longer an obstacle.)
Implement rustc part of RFC 3127 trim-paths
This PR implements (or at least tries to) [RFC 3127 trim-paths](https://github.com/rust-lang/rust/issues/111540), the rustc part. That is `-Zremap-path-scope` with all of its components/scopes.
`@rustbot` label: +F-trim-paths
Even though expression details are now stored in the info structure, we still
need to inject `ExpressionUsed` statements into MIR, because if one is missing
during codegen then we know that it was optimized out and we can remap all of
its associated code regions to zero.
Previously, mappings were attached to individual coverage statements in MIR.
That necessitated special handling in MIR optimizations to avoid deleting those
statements, since otherwise codegen would be unable to reassemble the original
list of mappings.
With this change, a function's list of mappings is now attached to its MIR
body, and survives intact even if individual statements are deleted by
optimizations.
Coverage codegen can now allocate arrays based on the number of
counters/expressions originally used by the instrumentor.
The existing query that inspects coverage statements is still used for
determining the number of counters passed to `llvm.instrprof.increment`. If
some high-numbered counters were removed by MIR optimizations, the instrumented
binary can potentially use less memory and disk space at runtime.
This allows coverage information to be attached to the function as a whole when
appropriate, instead of being smuggled through coverage statements in the
function's basic blocks.
As an example, this patch moves the `function_source_hash` value out of
individual `CoverageKind::Counter` statements and into the per-function info.
When synthesizing unused functions for coverage purposes, the absence of this
info is taken to indicate that a function was not eligible for coverage and
should not be synthesized.
Interacting with `basic_coverage_blocks` directly makes it easier to satisfy
the borrow checker when mutating `pending_dups` while reading other fields.
Format all the let-chains in compiler crates
Since rust-lang/rustfmt#5910 has landed, soon we will have support for formatting let-chains (as soon as rustfmt syncs and beta gets bumped).
This PR applies the changes [from master rustfmt to rust-lang/rust eagerly](https://rust-lang.zulipchat.com/#narrow/stream/122651-general/topic/out.20formatting.20of.20prs/near/374997516), so that the next beta bump does not have to deal with a 200+ file diff and can remain concerned with other things like `cfg(bootstrap)` -- #113637 was a pain to land, for example, because of let-else.
I will also add this commit to the ignore list after it has landed.
The commands that were run -- I'm not great at bash-foo, but this applies rustfmt to every compiler crate, and then reverts the two crates that should probably be formatted out-of-tree.
```
~/rustfmt $ ls -1d ~/rust/compiler/* | xargs -I@ cargo run --bin rustfmt -- @/src/lib.rs --config-path ~/rust --edition=2021 # format all of the compiler crates
~/rust $ git checkout HEAD -- compiler/rustc_codegen_{gcc,cranelift} # revert changes to cg-gcc and cg-clif
```
cc `@rust-lang/rustfmt`
r? `@WaffleLapkin` or `@Nilstrieb` who said they may be able to review this purely mechanical PR :>
cc `@Mark-Simulacrum` and `@petrochenkov,` who had some thoughts on the order of operations with big formatting changes in https://github.com/rust-lang/rust/pull/95262#issue-1178993801. I think the situation has changed since then, given that let-chains support exists on master rustfmt now, and I'm fairly confident that this formatting PR should land even if *bootstrap* rustfmt doesn't yet format let-chains in order to lessen the burden of the next beta bump.
const-eval: make misalignment a hard error
It's been a future-incompat error (showing up in cargo's reports) since https://github.com/rust-lang/rust/pull/104616, Rust 1.68, released in March. That should be long enough.
The question for the lang team is simply -- should we move ahead with this, making const-eval alignment failures a hard error? (It turns out some of them were accidentally already hard errors since #104616. But not all, so this is still a breaking change. Crater found no regression.)
Having to keep passing in a graph reference was a holdover from when the graph
was partly mutated during traversal. As of #114354 that is no longer necessary,
so we can simplify the traversal code by storing a graph reference as a field
in `TraverseCoverageGraphWithLoops`.
The previous code was storing the worklist in a vector, and then selectively
adding items to the start or end of the vector. That's a perfect use-case for a
double-ended queue.
This change also reveals that the existing code was a bit confused about which
end of the worklist is the front or back. For now, items are always removed
from the front of the queue (instead of the back), and code that adds items to
the queue has been flipped, to preserve the existing behaviour.
Do not check for impossible predicates in const-prop lint.
The enclosing query already checks for them, and replaces the body with a single `unreachable` if they are indeed impossible.
Also consider call and yield as MIR SSA.
The SSA analysis on MIR only considered `Assign` statements as defining an SSA local.
This PR adds assignments performed by a `Call` or `Yield` terminator to that category.
This mainly allows CopyProp to be performed on a call return place.
The only subtlety is in the dominance property: the assignment is only complete at the beginning of the target block.
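A MIR-flavoured sketch of the new case (pseudocode):
```
bb0: {
    // `_2` is now treated as an SSA definition even though it is
    // written by a Call terminator rather than an Assign statement.
    _2 = foo() -> [return: bb1, unwind continue];
}
bb1: {
    // The definition is only complete here, at entry to the target
    // block, which is the dominance subtlety noted above. CopyProp
    // can now reuse `_2` in place of `_3`.
    _3 = _2;
}
```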
coverage: Unbox and simplify `bcb_filtered_successors`
This is a small cleanup in the coverage instrumentor's graph-building code.
---
This function already has access to the MIR body, so instead of taking a reference to a terminator, it's simpler and easier to pass in a basic block index.
There is no need to box the returned iterator if we instead add appropriate lifetime captures, and make `short_circuit_preorder` generic over the type of iterator it expects.
We can also greatly simplify the function's implementation by observing that the only difference between its two cases is whether we take all of a BB's successors, or just the first one.
---
`@rustbot` label +A-code-coverage
This enum was mainly needed to track the precise origin of a span in MIR, for
debug printing purposes. Since the old debug code was removed in #115962, we
can replace it with just the span itself.
Generalize small dominators optimization
* Use small dominators optimization from 640ede7b0a more generally.
* Merge `DefLocation` and `LocationExtended` since they serve the same purpose.
coverage: Allow each coverage statement to have multiple code regions
The original implementation of coverage instrumentation was built around the assumption that a coverage counter/expression would be associated with *up to one* code region. When it was discovered that *multiple* regions would sometimes need to share a counter, a workaround was found: for the remaining regions, the instrumentor would create a fresh expression that adds zero to the existing counter/expression.
That got the job done, but resulted in some awkward code, and produces unnecessarily complicated coverage maps in the final binary.
---
This PR removes that tension by changing `StatementKind::Coverage`'s code region field from `Option<CodeRegion>` to `Vec<CodeRegion>`.
The changes on the codegen side are fairly straightforward. As long as each `CoverageKind::Counter` only injects one `llvm.instrprof.increment`, the rest of coverage codegen is happy to handle multiple regions mapped to the same counter/expression, with only minor option-to-vec adjustments.
On the instrumentor/mir-transform side, we can get rid of the code that creates extra (x + 0) expressions. Instead we gather all of the code regions associated with a single BCB, and inject them all into one coverage statement.
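A sketch of the MIR-side shape after the change (stub types, names approximate):
```rust
struct CodeRegion; // file/line/column info in the real compiler
enum CoverageKind { Counter, Expression }

struct Coverage {
    kind: CoverageKind,
    // Previously `code_region: Option<CodeRegion>`; now every region
    // associated with a BCB rides along in one coverage statement.
    code_regions: Vec<CodeRegion>,
}
```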
---
There are several patches here but they can be divided in to three phases:
- Preparatory work
- Actually switching over to multiple regions per coverage statement
- Cleaning up
So viewing the patches individually may be easier.
When these methods were originally written, I wasn't aware that
`newtype_index!` already supports addition with ordinary numbers, without
needing to unwrap and re-wrap.
If a BCB has more than one code region, those extra regions can now all be
stored in the same coverage statement, instead of being stored in additional
statements.
The concrete type `CoverageSpan` is no longer used outside of the `spans`
module.
This is a separate patch to avoid noise in the preceding patch that actually
encapsulates coverage spans.
By encapsulating the coverage spans in a struct, we can change the internal
representation without disturbing existing call sites. This will be useful for
grouping coverage spans by BCB.
This patch includes some changes that were originally in #115912, which avoid
the need for a particular test to deal with coverage spans at all.
(Comments/logs referring to `CoverageSpan` are updated in a subsequent patch.)
Reveal opaque types before drop elaboration
fixes https://github.com/rust-lang/rust/issues/113594
r? `@cjgillot`
cc `@JakobDegen`
This pass was introduced in https://github.com/rust-lang/rust/pull/110714
I moved it before drop elaboration (which only cares about the hidden types of things, not the opaque TAIT or RPIT type) and set it to run unconditionally (instead of depending on the optimization level and whether the inliner is active).
Implement a global value numbering MIR optimization
The aim of this pass is to avoid repeated computations by reusing past assignments. It is based on an analysis of SSA locals, in order to perform a restricted form of common subexpression elimination.
Opportunistically, this pass allows for some simplifications by combining assignments. For instance, this pass could see through projections of aggregates to directly reuse the aggregate field (not in this PR).
We handle references by assigning a different "provenance" index to each `Ref`/`AddressOf` rvalue. This ensures that we do not spuriously merge borrows that should not be merged. Meanwhile, we consider all the derefs of an immutable reference to a freeze type to give the same value:
```rust
_a = *_b // _b is &Freeze
_c = *_b // replaced by _c = _a
```
Skip MIR pass `UnreachablePropagation` when coverage is enabled
When coverage instrumentation and MIR opts are both enabled, coverage relies on two assumptions:
- MIR opts that would delete `StatementKind::Coverage` statements instead move them into bb0 and change them to `CoverageKind::Unreachable`.
- MIR opts won't delete all `CoverageKind::Counter` statements from an instrumented function.
Most MIR opts naturally satisfy the second assumption, because they won't remove coverage statements from bb0, but `UnreachablePropagation` can do so if it finds that bb0 is unreachable. If this happens, LLVM thinks the function isn't instrumented, and it vanishes from coverage reports.
A proper solution won't be possible until after per-function coverage info lands in #116046, but for now we can avoid the problem by turning off this particular pass when coverage instrumentation is enabled.
---
cc `@cjgillot` since I found this while investigating coverage problems encountered by #113970
`@rustbot` label +A-code-coverage +A-mir-opt
subst -> instantiate
Continues #110793. There are still quite a few uses of `subst` and `substitute`, but changing them all in the same PR was a bit too much, so I've stopped here for now.
interpret: more consistently use ImmTy in operators and casts
The diff in src/tools/miri/src/shims/x86/sse2.rs should hopefully suffice to explain why this is nicer. :)
rename mir::Constant -> mir::ConstOperand, mir::ConstKind -> mir::Const
Also, be more consistent with the `to/eval_bits` methods... we had some that take a type and some that take a size, and then sometimes the one that takes a type is called `bits_for_ty`.
Turns out that `ty::Const`/`mir::ConstKind` carry their type with them, so we don't need to even pass the type to those `eval_bits` functions at all.
However this is not properly consistent yet: in `ty` we have most of the methods on `ty::Const`, but in `mir` we have them on `mir::ConstKind`. And indeed those two types are the ones that correspond to each other. So `mir::ConstantKind` should actually be renamed to `mir::Const`. But what to do with `mir::Constant`? It carries around a span, that's really more like a constant operand that appears as a MIR operand... it's more suited for `syntax.rs` than `consts.rs`, but the bigger question is, which name should it get if we want to align the `mir` and `ty` types? `ConstOperand`? `ConstOp`? `Literal`? It's not a literal but it has a field called `literal` so it would at least be consistently wrong-ish...
`@oli-obk` any ideas?
coverage: Fix an unstable-sort inconsistency in coverage spans
This code was calling `sort_unstable_by`, but failed to impose a total order on the initial spans. That resulted in unpredictable handling of closure spans, producing inconsistencies in the coverage maps and in user-visible coverage reports.
This PR fixes the problem by always sorting closure spans before otherwise-identical non-closure spans, and also switches to a stable sort in case the ordering is still not total.
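A sketch of the tie-break (hypothetical span type; the real comparison works on rustc spans):
```rust
#[derive(Clone, Copy)]
struct Sp { lo: u32, hi: u32, is_closure: bool }

fn sort_spans(spans: &mut [Sp]) {
    // `sort_by` is a stable sort, guarding against any remaining
    // non-totality in the ordering.
    spans.sort_by(|a, b| {
        a.lo.cmp(&b.lo)
            .then_with(|| b.hi.cmp(&a.hi)) // longer (enclosing) span first
            // Otherwise-identical spans: closures sort first.
            .then_with(|| b.is_closure.cmp(&a.is_closure))
    });
}
```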
---
In addition to the fix itself, this PR also contains a cleanup to the comparison function that I was working on when I discovered the bug.
treat host effect params as erased in codegen
This fixes the changes brought to codegen tests when effect params are added to libcore, by not attempting to monomorphize functions that get the host param by being `const fn`.
r? `@oli-obk`
Rollup of 6 pull requests
Successful merges:
- #115736 (Remove `verbose_generic_activity_with_arg`)
- #115771 (cleanup leftovers of const_err lint)
- #115798 (add helper method for finding the one non-1-ZST field)
- #115812 (Merge settings.css into rustdoc.css)
- #115815 (fix: return early when has tainted in mir pass)
- #115816 (Disabled socketpair for Vita)
r? `@ghost`
`@rustbot` modify labels: rollup
Remove `verbose_generic_activity_with_arg`
This removes `verbose_generic_activity_with_arg` and changes users to `generic_activity_with_arg`. This keeps the output of `-Z time` readable while these repeated events are still available with the self profiling mechanism.
fix: return early when the body is tainted in mir-lint
Fixes #115203
`a[..]` is of indeterminate size; an error had already been reported during borrow check, so we skip the MIR lint process.
coverage: Simplify the `coverageinfo` query
The `coverageinfo` query walks through a `mir::Body`'s statements to find the total number of coverage counter IDs and coverage expression IDs that have been used, as this information is needed by coverage codegen.
This PR makes 3 nice simplifications to that query:
- Extract a common iterator over coverage statements, shared by both coverage-related queries
- Simplify the query's visitor from two passes to just one pass
- Explicitly track the highest seen IDs in the visitor, and only convert to a count right at the end
I also updated some related comments. Some had been invalidated by these changes, while others had already been invalidated by previous coverage changes.
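The "track the highest ID, convert to a count at the end" step amounts to something like this (self-contained sketch):
```rust
/// Number of counter IDs in use, given every ID seen in the body's
/// coverage statements: the highest ID plus one, computed once at the
/// end instead of updating a count per statement.
fn num_counters(ids: impl Iterator<Item = u32>) -> u32 {
    ids.max().map_or(0, |max| max + 1)
}
```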
Don't report any errors in `lower_intrinsics`.
Intrinsics should have been type checked earlier.
This is part of moving all mir-opt diagnostics early enough so that they are reliably emitted even in check builds: https://github.com/rust-lang/rust/issues/49292#issuecomment-1692212095
Both of the coverage queries can now use this one helper function to iterate
over all of the `mir::Coverage` payloads in the statements of a `mir::Body`.
Represent MIR composite debuginfo as projections instead of aggregates
Composite debuginfo for MIR is currently represented as
```
debug name => Type { projection1 => place1, projection2 => place2 };
```
ie. a single `VarDebugInfo` object with that name, and its value a `VarDebugInfoContents::Composite`.
This PR proposes to reverse the representation to be
```
debug name.projection1 => place1;
debug name.projection2 => place2;
```
ie. multiple `VarDebugInfo` objects with each their projection.
This simplifies the handling of composite debuginfo by the compiler by avoiding weird nesting.
Based on https://github.com/rust-lang/rust/pull/115139
Use relative positions inside a SourceFile.
This allows removing the normalization of start positions for hashing, and simplifies allocation of the global address space.
cc `@Zoxc`
Rollup of 5 pull requests
Successful merges:
- #115353 (Emit error instead of ICE when optimized MIR is missing)
- #115488 (Take `&mut Results` in `ResultsVisitor`)
- #115492 (Allow `large_assignments` for Box/Arc/Rc initialization)
- #115519 (Don't ICE on associated type projection without feature gate in new solver)
- #115534 (Expose more information with DefId in smir)
r? `@ghost`
`@rustbot` modify labels: rollup
Fix inlining with -Zalways-encode-mir
Only inline functions that are considered eligible for inlining
by the reachability pass.
This constraint was previously indirectly enforced by only exporting MIR
of eligible functions, but that approach doesn't work with
-Zalways-encode-mir enabled.
Don't do intra-pass validation on MIR shims
Fixes #114375
In the test that was committed, we end up generating the drop shim for `struct Foo` that looks like:
```
fn std::ptr::drop_in_place(_1: *mut Foo) -> () {
let mut _0: ();
bb0: {
goto -> bb5;
}
bb1: {
return;
}
bb2 (cleanup): {
resume;
}
bb3: {
goto -> bb1;
}
bb4 (cleanup): {
drop(((*_1).0: foo::WrapperWithDrop<()>)) -> [return: bb2, unwind terminate];
}
bb5: {
drop(((*_1).0: foo::WrapperWithDrop<()>)) -> [return: bb3, unwind: bb2];
}
}
```
In `bb4` and `bb5`, we assert that `(*_1).0` has type `WrapperWithDrop<()>`. However, in a user-facing param env, the type is actually `WrapperWithDrop<Tait>`. These types are not equal in a user-facing param-env (and can't be made equal even if we use `DefiningAnchor::Bubble`, since it's a non-local TAIT).
coverage: Give the instrumentor its own counter type, separate from MIR
Within the MIR representation of coverage data, `CoverageKind` is an important part of `StatementKind::Coverage`, but the `InstrumentCoverage` pass also uses it heavily as an internal data structure. This means that any change to `CoverageKind` also needs to update all of the internal parts of `InstrumentCoverage` that manipulate it directly, making the MIR representation difficult to modify.
---
This change fixes that by giving the instrumentor its own `BcbCounter` type for internal use, which is then converted to a `CoverageKind` when injecting coverage information into MIR.
The main change is mostly mechanical, because the initial `BcbCounter` is drop-in compatible with `CoverageKind`, minus the unnecessary `CoverageKind::Unreachable` variant.
I've then removed the `function_source_hash` field from `BcbCounter::Counter`, as a small example of how the two types can now usefully differ from each other. Every counter in a MIR-level function should have the same source hash, so we can supply the hash during the conversion to `CoverageKind::Counter` instead.
---
*Background:* BCB stands for “basic coverage block”, which is a node in the simplified control-flow graph used by coverage instrumentation. The instrumentor pass uses the function's actual MIR control-flow graph to build a simplified BCB graph, then assigns coverage counters and counter expressions to various nodes/edges in that simplified graph, and then finally injects corresponding coverage information into the underlying MIR.
Add MIR validation for unwind out from nounwind functions + fixes to make validation pass
`@Nilstrieb` This is the MIR validation you asked for in https://github.com/rust-lang/rust/pull/112403#discussion_r1222739722.
Two passes need to be fixed to get the validation to pass:
* `RemoveNoopLandingPads` currently unconditionally introduces a resume block (even when there is none to begin with!); changed to not do that
* The generator state transform introduces an `assert` which may unwind, and its drop elaboration also introduces many new `UnwindAction`s, so in this case run `AbortUnwindingCalls` after the transformation.
I believe this PR should also fix Rust-for-Linux/linux#1016, cc `@ojeda`
r? `@Nilstrieb`
This shows one small benefit of separating `BcbCounter` from `CoverageKind`.
The function source hash will be the same for all counters within a function,
so instead of passing it through `CoverageCounters` and storing it in every
counter, we can just supply it during the final conversion to `CoverageKind`.
Otherwise the file name generated for generator_drop will become
core.ptr-drop_in_place.[generator@<FILEPATH>_<NUMBERS>].generator_drop.0.mir
instead of main-{closure#0}.generator_drop.0.mir, which breaks a mir-opt
test.
Normalize before checking if local is freeze in `deduced_param_attrs`
Not normalizing the local type eagerly results in possibly exponential amounts of normalization happening downstream in `is_freeze_raw`.
Fixes #113372
Storing coverage counter information in `CoverageCounters` has a few advantages
over storing it directly inside BCB graph nodes:
- The graph doesn't need to be mutable when making the counters, making it
easier to see that the graph itself is not modified during this step.
- All of the counter data is clearly visible in one place.
- It becomes possible to use a representation that doesn't correspond 1:1 to
graph nodes, e.g. storing all the edge counters in a single hashmap instead of
several.
feat: `riscv-interrupt-{m,s}` calling conventions
Similar to prior support added for the mips430, avr, and x86 targets this change implements the rough equivalent of clang's [`__attribute__((interrupt))`][clang-attr] for riscv targets, enabling e.g.
```rust
static mut CNT: usize = 0;
pub extern "riscv-interrupt-m" fn isr_m() {
unsafe {
CNT += 1;
}
}
```
to produce highly effective assembly like:
```asm
pub extern "riscv-interrupt-m" fn isr_m() {
420003a0: 1141 addi sp,sp,-16
unsafe {
CNT += 1;
420003a2: c62a sw a0,12(sp)
420003a4: c42e sw a1,8(sp)
420003a6: 3fc80537 lui a0,0x3fc80
420003aa: 63c52583 lw a1,1596(a0) # 3fc8063c <_ZN12esp_riscv_rt3CNT17hcec3e3a214887d53E.0>
420003ae: 0585 addi a1,a1,1
420003b0: 62b52e23 sw a1,1596(a0)
}
}
420003b4: 4532 lw a0,12(sp)
420003b6: 45a2 lw a1,8(sp)
420003b8: 0141 addi sp,sp,16
420003ba: 30200073 mret
```
(disassembly via `riscv64-unknown-elf-objdump -C -S --disassemble ./esp32c3-hal/target/riscv32imc-unknown-none-elf/release/examples/gpio_interrupt`)
This outcome is superior to hand-coded interrupt routines which, lacking visibility into any non-assembly body of the interrupt handler, have to be very conservative and save the [entire CPU state to the stack frame][full-frame-save]. By instead asking LLVM to only save the registers that it uses, we defer the decision to the tool with the best context: it can more accurately account for the cost of spills if it knows that every additional register used is already at the cost of an implicit spill.
At the LLVM level, this is apparently [implemented by] marking every register as "[callee-save]," matching the semantics of an interrupt handler nicely (it has to leave the CPU state just as it found it after its `{m|s}ret`).
This approach is not suitable for every interrupt handler, as it makes no attempt to e.g. save the state in a user-accessible stack frame. For a full discussion of those challenges and tradeoffs, please refer to [the interrupt calling conventions RFC][rfc].
Inside rustc, this implementation differs from prior art because LLVM does not expose the "all-saved" function flavor as a calling convention directly, instead preferring to use an attribute that allows for differentiating between "machine-mode" and "supervisor-mode" interrupts.
Finally, some effort has been made to guide those who may not yet be aware of the differences between machine-mode and supervisor-mode interrupts as to why no `riscv-interrupt` calling convention is exposed through rustc, and similarly for why `riscv-interrupt-u` makes no appearance (as it would complicate future LLVM upgrades).
[clang-attr]: https://clang.llvm.org/docs/AttributeReference.html#interrupt-risc-v
[full-frame-save]: 9281af2ecf/src/lib.rs (L440-L469)
[implemented by]: b7fb2a3fec/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp (L61-L67)
[callee-save]: 973f1fe7a8/llvm/lib/Target/RISCV/RISCVCallingConv.td (L30-L37)
[rfc]: https://github.com/rust-lang/rfcs/pull/3246
Make the module `inner` and the function `run_analysis_to_runtime_passes` in
`rustc_mir_transform` public, to allow re-implementing the query from the
Rust compiler interface.
Make `unconditional_recursion` warning detect recursive drops
Closes#55388
Also closes#50049 unless we want to keep it for the second example which this PR does not solve, but I think it is better to track that work in #57965.
r? `@oli-obk` since you are the mentor for #55388
Unresolved questions:
- [x] There are two false positives that must be fixed before merging (see diff). I suspect the best way to solve them is to perform analysis after drop elaboration instead of before, as now, but I have not explored that any further yet. Could that be an option? **Answer:** Yes, that solved the problem.
`@rustbot` label +T-compiler +C-enhancement +A-lint
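For illustration, a made-up example (not taken from the PR's tests) of the shape of code the extended lint can now flag:
```rust
struct Droppy;

impl Drop for Droppy {
    fn drop(&mut self) {
        // Creating and immediately discarding another `Droppy` runs this
        // same `drop` again, so the drop recurses unconditionally.
        let _ = Droppy;
    }
}

fn main() {
    let _d = Droppy;
}
```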
Steal MIR for CTFE when possible.
Some bodies, like constants, have CTFE MIR but no optimized MIR.
In that case, have `mir_for_ctfe` steal the MIR instead of cloning it.
Add documentation to has_deref
Documentation of `has_deref` needed some polish to be clearer about where it should be used and what its purpose is.
cc https://github.com/rust-lang/rust/issues/114401
r? `@RalfJung`
Do not run ConstProp on mir_for_ctfe.
This pass does not seem to be useful any more. The const-prop lints are now run by `tcx.mir_drops_elaborated_and_const_checked`, and the const-prop opt should never emit any diagnostic.
Forbid old-style `simd_shuffleN` intrinsics
Don't merge before https://github.com/rust-lang/packed_simd/pull/350 has made its way to crates.io
We used to support specifying the lane length of simd_shuffle ops by attaching the lane length to the name of the intrinsic (like `simd_shuffle16`). After this PR, you cannot do that anymore, and need to instead either rely on inference of the `idx` argument type or specify it as `simd_shuffle::<_, [u32; 16], _>`.
r? `@workingjubilee`
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.
As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.
This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
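A minimal sketch (hypothetical names, not the exact rustc definitions) of the three explicitly distinguished operand kinds described above:
```rust
// With a `u32` payload plus a discriminant, the enum is 8 bytes instead of
// the old 4-byte bare ID, and counter and expression IDs can both simply
// count up from 0.
#[derive(Copy, Clone, Debug)]
enum Operand {
    Zero,
    Counter(u32),
    Expression(u32),
}

fn main() {
    // The two ID spaces no longer need to avoid each other, so each can be
    // used directly as an index into its own per-function table.
    let ops = [Operand::Zero, Operand::Counter(0), Operand::Expression(0)];
    assert_eq!(std::mem::size_of::<Operand>(), 8);
    println!("{ops:?}");
}
```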
Substitute types before checking inlining compatibility.
Addresses https://github.com/rust-lang/rust/issues/112332 and https://github.com/rust-lang/rust/issues/113781
I don't have a minimal test, but this seems to remove the ICE locally.
This whole pre-inlining validation mirrors the "real" MIR validation pass to verify that inlined MIR will still pass validation.
The debuginfo loop is added because MIR validation checks projections in debuginfo.
Likewise, MIR validation only checks `is_subtype`, so there is no reason for a stronger check.
The types were not being substituted in `check_equal`, so we were not bailing out of inlining if the substituted MIR callee body would not pass validation.
It makes it sound like `ExprKind` and `Rvalue` are supposed to represent all pointer-related
casts, when in reality they're just used to share some enum variants. Make it clear that these
are coercions only, so it's obvious why only some pointer-related "casts" appear in the enum.
Rewrite `UnDerefer`
Currently, `UnDerefer` is used by drop elaboration to undo the effects of the `Derefer` pass. However, it just recreates the original places with derefs in the middle of the projection. Because `ProjectionElem::Deref` is intended to be removed completely in the future, this will not work forever.
This PR introduces a `deref_chain` method that returns the places behind `DerefTemp` locals in a place and rewrites the move path code to use this. In the process, `UnDerefer` was merged into `MovePathLookup`. Now that move paths use the same places as in the MIR, the other uses of `UnDerefer` no longer require it.
See #98145
cc `@ouz-a`
r? `@oli-obk`
mir opt + codegen: handle subtyping
fixes#107205
the same issue was caused in multiple places:
- mir opts: both copy and destination propagation
- codegen: assigning operands to locals (which also propagates values)
I changed codegen to always update the type in the operands used for locals which should guard against any new occurrences of this bug going forward. I don't know how to make mir optimizations more resilient here. Hopefully the added tests will be enough to detect any trivially wrong optimizations going forward.
Use `CoverageKind::as_operand_id` instead of manually reimplementing it
These two pieces of code are functionally equivalent to the `CoverageKind::as_operand_id` method that already exists, and is already used elsewhere in this file.
This slightly reduces the amount of code that manually pattern-matches on `CoverageKind`.
Migrate `TyCtxt::predicates_of` and `ParamEnv::caller_bounds` to `Clause`
The last big change in the series.
I will follow up with additional filed issues once this PR lands:
- [ ] Investigate making the `TypeFoldable<TyCtxt<'tcx>> for ty::Clause<'tcx>` implementation less weird: 2efe091705/compiler/rustc_middle/src/ty/structural_impls.rs (L672)
- [ ] Clean up the elaborator since it should only be emitting child clauses, not predicates
- [ ] Rename identifiers like `pred` and `predicates` to `clause` if they're actually clauses around the codebase
- [ ] Validate that all of the `ToPredicate` impls are actually still needed, or prune them if they're not
r? `@ghost` until the other branch lands
Add retag in MIR transform: `Adt` for `Unique` may contain a reference
Following #112662 , `may_contain_reference` in `rustc_mir_transform::add_retag` underapproximates too much the types that require retagging.
r? ``@RalfJung``
Add a fully fledged `Clause` type, rename old `Clause` to `ClauseKind`
Does two basic things before I put up a more delicate set of PRs (along the lines of #112714, but hopefully much cleaner) that migrate existing usages of `ty::Predicate` to `ty::Clause` (`predicates_of`/`item_bounds`/`ParamEnv::caller_bounds`).
1. Rename `Clause` to `ClauseKind`, so it's parallel with `PredicateKind`.
2. Add a new `Clause` type which is parallel to `Predicate`.
* This type exposes `Clause::kind(self) -> Binder<'tcx, ClauseKind<'tcx>>` which is parallel to `Predicate::kind` 😸
The new `Clause` type essentially acts as a newtype wrapper around `Predicate` that asserts that it is specifically a `PredicateKind::Clause`. It turns out from experimentation[^1] that this is not negative performance-wise, which is wonderful, since this is a much simpler design than something that requires encoding the discriminant into the alignment bits of a predicate kind, or something else like that...
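To make the design concrete, here is a self-contained toy model (plain owned types instead of rustc's interned ones) of the newtype wrapper and its invariant:
```rust
#[derive(Clone, Debug)]
enum ClauseKind {
    Trait(String),
    RegionOutlives(String),
}

#[derive(Clone, Debug)]
enum PredicateKind {
    Clause(ClauseKind),
    Ambiguous,
}

#[derive(Clone, Debug)]
struct Predicate(PredicateKind);

// Invariant: a `Clause` always wraps a `PredicateKind::Clause`.
#[derive(Clone, Debug)]
struct Clause(Predicate);

impl Clause {
    // The invariant is checked once, at construction.
    fn from_predicate(pred: Predicate) -> Option<Clause> {
        match pred {
            Predicate(PredicateKind::Clause(_)) => Some(Clause(pred)),
            _ => None,
        }
    }

    // Parallel to `Predicate::kind`; the unwrap cannot fail.
    fn kind(&self) -> &ClauseKind {
        match &self.0 {
            Predicate(PredicateKind::Clause(kind)) => kind,
            _ => unreachable!("a Clause always wraps PredicateKind::Clause"),
        }
    }
}

fn main() {
    let pred = Predicate(PredicateKind::Clause(ClauseKind::Trait("Sized".into())));
    let clause = Clause::from_predicate(pred).unwrap();
    println!("{:?}", clause.kind());
}
```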
r? ``@lcnr`` or ``@oli-obk``
[^1]: https://github.com/rust-lang/rust/pull/112714#issuecomment-1595653910
Switch the BB CFG cache from postorder to RPO
The `BasicBlocks` CFG cache is interesting:
- it stores a postorder, but `traversal::postorder` doesn't use it
- `traversal::reverse_postorder` does traverse the postorder cache backwards
- we do more RPO traversals than postorder traversals (around 20x on the perf.rlo benchmarks IIRC) but it's not cached
- a couple of places here and there were manually reversing the non-cached postorder traversal
This PR switches the order of the cache, and makes a bit more use of it. This is a tiny win locally, but it's also for consistency and aesthetics.
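For intuition, a toy sketch (ad-hoc adjacency-list CFG, nothing like the real `BasicBlocks` API) of computing the order once and serving both traversals from it:
```rust
use std::collections::HashSet;

// Depth-first postorder over a toy CFG given as an adjacency list.
fn postorder(cfg: &[Vec<usize>], start: usize) -> Vec<usize> {
    fn visit(cfg: &[Vec<usize>], bb: usize, seen: &mut HashSet<usize>, out: &mut Vec<usize>) {
        if !seen.insert(bb) {
            return;
        }
        for &succ in &cfg[bb] {
            visit(cfg, succ, seen, out);
        }
        out.push(bb);
    }
    let (mut seen, mut out) = (HashSet::new(), Vec::new());
    visit(cfg, start, &mut seen, &mut out);
    out
}

fn main() {
    // 0 -> {1, 2}, 1 -> {2}
    let cfg = vec![vec![1, 2], vec![2], vec![]];
    let po = postorder(&cfg, 0);
    // Caching RPO means the frequent traversal is a plain forward iteration,
    // and the rarer postorder is just the same cache walked backwards.
    let rpo: Vec<usize> = po.iter().rev().copied().collect();
    println!("postorder = {po:?}, rpo = {rpo:?}");
}
```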
r? `@ghost`
Launch a non-unwinding panic for misaligned pointer deref
This panic already never unwinds, but that's only because it always hits the unwind guard that's created by our `UnwindAction::Terminate`. Hitting the unwind guard generates a huge double-panic backtrace. Now we generate a normal-looking panic message when this check is hit.
r? `@thomcc`
Prevent `.eh_frame` from being emitted for `-C panic=abort`
Since the `CheckAlignment` pass runs after the `AbortUnwindingCalls` pass, the `UnwindAction::Terminate` it inserts no longer has a chance to be converted to `UnwindAction::Unreachable`, causing us to emit landing pads that are not necessary. Although these landing pads can themselves be eliminated by LLVM, `.eh_frame` sections are still generated. This has recently caused trouble for the Rust-for-Linux project.
This PR changes it to generate `UnwindAction::Terminate` when we opt for `-Cpanic=unwind`, and `UnwindAction::Unreachable` for `-Cpanic=abort`.
`@ojeda`
Add MVP suggestion for `unsafe_op_in_unsafe_fn`
Rebase of https://github.com/rust-lang/rust/pull/99827
cc tracking issue https://github.com/rust-lang/rust/issues/71668
No real changes since the original PR, just migrated the new suggestion to use fluent messages and added a couple more testcases, AFAICT from the discussion there were no outstanding changes requested.
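A made-up example of the pattern the suggestion targets, shown with the suggested fix already applied:
```rust
#![warn(unsafe_op_in_unsafe_fn)]

unsafe fn read_byte(ptr: *const u8) -> u8 {
    // Without the inner block, the raw-pointer deref fires the lint; the
    // suggestion is to wrap the unsafe operation in an explicit block.
    unsafe { *ptr }
}

fn main() {
    let x = 7u8;
    assert_eq!(unsafe { read_byte(&x) }, 7);
}
```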
Write to stdout if `-` is given as output file
With this PR, if `-o -` or `--emit KIND=-` is provided, output will be written to stdout instead. Binary output (of type `obj`, `llvm-bc`, `link` or `metadata`) written this way will result in an error if stdout is a tty. Multiple output types going to stdout will trigger an error too, as they would all be mixed together.
This implements https://github.com/rust-lang/compiler-team/issues/431
The idea behind the changes is to introduce an `OutFileName` enum that represents the output - be it a real path or stdout - and to use this enum along the code paths that handle different output types.
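Hypothetical invocations to illustrate the interface (file names made up):
```console
$ rustc --emit asm=- main.rs        # textual output streams to stdout
$ rustc -o - main.rs > main.bin     # binary output is fine when redirected
$ rustc -o - main.rs                # error: binary output to a tty
```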
Take MIR dataflow analyses by mutable reference
The main motivation here is any analysis that requires dynamically sized scratch memory to work. One concrete example would be pointer target tracking, where tracking the results of a dereference can yield multiple possible targets. This means that processing multi-level dereferences requires the ability to handle a changing number of potential targets per step. A (simplified) function for this would be `fn apply_deref(potential_targets: &mut Vec<Target>)`, which would use the scratch space contained in the analysis to send arguments and receive the results (see the sketch after the implementation notes below).
The alternative to this would be to wrap everything in a `RefCell`, which is what `MaybeRequiresStorage` currently does. This comes with a small perf cost and loses the compiler's guarantee that we don't try to take multiple borrows at the same time.
For the implementation:
* `AnalysisResults` is an unfortunate requirement to avoid an unconstrained type parameter error.
* `CloneAnalysis` could just be `Clone` instead, but that would result in more work than is required to have multiple cursors over the same result set.
* `ResultsVisitor` now takes the results type in each function, as there's no other way to have access to the analysis without cloning it. This could use an associated type rather than a type parameter, but the current approach makes it easier to not care about the type when it's not necessary.
* `MaybeRequiresStorage` now no longer uses a `RefCell`, but the graphviz formatter now does. It could be removed, but that would require even more changes and doesn't really seem necessary.
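A sketch of the motivating shape (hypothetical analysis and `Target` type, not the real dataflow traits):
```rust
#[derive(Clone, Debug)]
struct Target(usize);

// The analysis owns growable scratch storage; taking the analysis by
// `&mut` lets each step reuse that buffer without a `RefCell`.
struct PointerTracking {
    scratch: Vec<Target>,
}

impl PointerTracking {
    // One dereference step may fan out to several potential targets, so
    // the number of targets per step is not fixed.
    fn apply_deref(&mut self, potential_targets: &mut Vec<Target>) {
        self.scratch.clear();
        for t in potential_targets.drain(..) {
            self.scratch.push(Target(t.0));
            self.scratch.push(Target(t.0 + 1));
        }
        potential_targets.extend(self.scratch.drain(..));
    }
}

fn main() {
    let mut analysis = PointerTracking { scratch: Vec::new() };
    let mut targets = vec![Target(0)];
    analysis.apply_deref(&mut targets);
    println!("{targets:?}");
}
```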
Only check inlining counter after recursing.
This PR aims to reduce the strength of https://github.com/rust-lang/rust/pull/105119 even more.
In the current implementation, we check the inline count before recursing. This means that we never actually reach inlining depth 3.
This PR checks the counter after recursion, to give a chance to inline at depth >= 3.
r? `@scottmcm`
cc `@JakobDegen`
Use translatable diagnostics in `rustc_const_eval`
This PR:
* adds a `no_span` parameter to `note` / `help` attributes when using `Subdiagnostic` to allow adding notes/helps without using a span
* has minor tweaks and changes to error messages
Enable ConstGoto and SeparateConstSwitch passes by default
These two passes implement a limited form of jump threading.
Filing this PR to see if enabling them would be lighter than https://github.com/rust-lang/rust/pull/107009.
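In spirit, at the source level (the passes themselves operate on MIR), the transformation looks like:
```rust
// Before: the second branch re-tests a value fully determined by the first.
fn before(x: bool) -> u32 {
    let y = if x { 1 } else { 2 };
    if y == 1 { 10 } else { 20 }
}

// After limited jump threading, each predecessor jumps straight to its
// known target, skipping the intermediate test.
fn after(x: bool) -> u32 {
    if x { 10 } else { 20 }
}

fn main() {
    assert_eq!(before(true), after(true));
    assert_eq!(before(false), after(false));
}
```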
Enable ScalarReplacementOfAggregates in optimized builds
Like MatchBranchSimplification, this pass is known to produce significant runtime improvements in Cranelift artifacts, and I believe based on the perf runs here that the primary effect of this pass is to empower MatchBranchSimplification. ScalarReplacementOfAggregates on its own has little effect on anything, but when this was rebased up to include https://github.com/rust-lang/rust/pull/112001 we started seeing significant and majority-positive results.
Based on the fact that we see most of the regressions in debug builds (https://github.com/rust-lang/rust/pull/112002#issuecomment-1566270144) and some rather significant ones in cycles and wall time, I'm only enabling this in optimized builds at the moment.
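Roughly what scalar replacement does, as a source-level illustration (the pass actually operates on MIR locals):
```rust
// Before: a non-escaping aggregate whose fields are only read back out.
fn before() -> u32 {
    let pair = (1u32, 2u32);
    pair.0 + pair.1
}

// After replacement, each field becomes its own local, exposing the values
// to later passes such as MatchBranchSimplification.
fn after() -> u32 {
    let a = 1u32;
    let b = 2u32;
    a + b
}

fn main() {
    assert_eq!(before(), after());
}
```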
Only rewrite valtree-constants to patterns and keep other constants opaque
Now that we can reliably fall back to comparing constants with the match scrutinee via `PartialEq::eq`, we can
1. eagerly try to convert constants to valtrees
2. then deeply convert the valtree to a pattern
3. if the to-valtree conversion failed, create an "opaque constant" pattern.
This PR specifically avoids any behavioral changes or major cleanups. What we can now do as follow ups is
* move the two remaining call sites to `destructure_mir_constant` off that query
* make valtree to pattern conversion infallible
* this needs to be done after careful analysis of the effects. There may be user visible changes from that.
based on https://github.com/rust-lang/rust/pull/111768
change `BorrowKind::Unique` to be a mutating `PlaceContext`
fixes#112056
I believe that `BorrowKind::Unique` is a footgun in general, so I added a FIXME and opened https://github.com/rust-lang/rust/issues/112072. This is a bit too involved for this PR though.
MIR: opt-in normalization of `BasicBlock` and `Local` numbering
This doesn't matter at all for actual codegen, but after spending some time reading pre-codegen MIR, I was wishing I didn't have to jump around so much in reading post-inlining code.
So this adds two passes that are off by default for every MIR level, but can be enabled (`-Zmir-enable-passes=+ReorderBasicBlocks,+ReorderLocals`) for humans.
Don't check for misaligned raw pointer derefs inside Rvalue::AddressOf
From https://github.com/rust-lang/rust/pull/112026#issuecomment-1565686697:
rustc 1.70 (stable next week) added a MIR pass that inserts pointer alignment checks in debug mode. Adding these checks caused some crates to break, but that was expected, since they contain broken code (https://github.com/rust-lang/rust/issues/111487 tracks that).
However, the checks added are slightly more aggressive than they should have been. Specifically, they also check the place in an `addr_of!` expression. Whether lack of alignment there is or isn't UB is unclear. This PR modifies the pass to not affect those cases.
I spot checked the crater regressions, and the ones I saw were not instances of the case that this PR is modifying. It still seems good to not land anything overaggressive, though.
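For example, an illustrative case (not from the PR's tests) of a place the pass no longer instruments:
```rust
#[repr(packed)]
struct Packed {
    a: u8,
    b: u32, // stored misaligned because of `repr(packed)`
}

fn main() {
    let p = Packed { a: 0, b: 42 };
    // Taking the raw address of a potentially misaligned place is fine and
    // is no longer checked; only an actual dereference would be.
    let q: *const u32 = std::ptr::addr_of!(p.b);
    // A read must then go through `read_unaligned`, not `*q`.
    let v = unsafe { q.read_unaligned() };
    assert_eq!(v, 42);
}
```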
Enable MatchBranchSimplification
This pass is one of the small number of benefits from `-Zmir-opt-level=3` that has motivated rustc_codegen_cranelift to use it:
19ed0aade6/compiler/rustc_codegen_cranelift/build_system/build_sysroot.rs (L244-L246)
Cranelift's motivation for this is _runtime_ performance improvements in debug builds. Lifting this pass all the way to `-Zmir-opt-level=1` seems to come without significant perf overhead, so that's what I'm suggesting here.
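The shape of code the pass targets, roughly, as a source-level illustration (the transform happens on MIR):
```rust
// Before: a two-armed branch whose arms only assign constants.
fn before(b: bool) -> u8 {
    match b {
        true => 1,
        false => 0,
    }
}

// After simplification, the branch becomes arithmetic on the discriminant.
fn after(b: bool) -> u8 {
    b as u8
}

fn main() {
    assert_eq!(before(true), after(true));
    assert_eq!(before(false), after(false));
}
```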
Rollup of 5 pull requests
Successful merges:
- #111741 (Use `ObligationCtxt` in custom type ops)
- #111840 (Expose more information in `get_body_with_borrowck_facts`)
- #111876 (Roll compiler_builtins to 0.1.92)
- #111912 (Use `Option::is_some_and` and `Result::is_ok_and` in the compiler )
- #111915 (libtest: Improve error when missing `-Zunstable-options`)
r? `@ghost`
`@rustbot` modify labels: rollup
Use `Option::is_some_and` and `Result::is_ok_and` in the compiler
`.is_some_and(..)`/`.is_ok_and(..)` replace `.map_or(false, ..)` and `.map(..).unwrap_or(false)`, making the code more readable.
This PR is a sibling of https://github.com/rust-lang/rust/pull/111873#issuecomment-1561316515
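For example:
```rust
fn main() {
    let opt: Option<u32> = Some(3);
    let res: Result<u32, ()> = Ok(3);

    // Before: the intent is buried in `map_or`'s default argument.
    assert!(opt.map_or(false, |v| v > 2));
    assert!(res.map(|v| v > 2).unwrap_or(false));

    // After: the predicate reads directly.
    assert!(opt.is_some_and(|v| v > 2));
    assert!(res.is_ok_and(|v| v > 2));
}
```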
Work around `rust-analyzer` false-positive type errors
rust-analyzer incorrectly reports two type errors in `debug.rs`:
> expected &dyn Display, found &i32
> expected &dyn Display, found &i32
This is due to a known bug in r-a (https://github.com/rust-lang/rust-analyzer/issues/11847).
In these particular cases, changing `&0` to `&0i32` seems to be enough to avoid the bug.
Preprocess and cache dominator tree
Preprocessing dominators has a very strong effect for https://github.com/rust-lang/rust/pull/111344.
That pass repeatedly checks that assignments dominate their uses. Using the unprocessed dominator tree caused a quadratic runtime (number of bbs x depth of the dominator tree).
This PR also caches the dominator tree and the pre-processed dominators in the MIR cfg cache.
Rebase of https://github.com/rust-lang/rust/pull/107157
cc `@tmiasko`
Stop turning transmutes into discriminant reads in mir-opt
Partially reverts #109612, as after #109993 these aren't actually equivalent any more, and I'm no longer confident this was ever an improvement in the first place.
Having this "simplification" meant that similar-looking code actually did somewhat different things. For example,
```rust
pub unsafe fn demo1(x: std::cmp::Ordering) -> u8 {
std::mem::transmute(x)
}
pub unsafe fn demo2(x: std::cmp::Ordering) -> i8 {
std::mem::transmute(x)
}
```
in nightly today is generating <https://rust.godbolt.org/z/dPK58zW18>
```llvm
define noundef i8 @_ZN7example5demo117h341ef313673d2ee6E(i8 noundef %x) unnamed_addr #0 {
%0 = icmp uge i8 %x, -1
%1 = icmp ule i8 %x, 1
%2 = or i1 %0, %1
call void @llvm.assume(i1 %2)
ret i8 %x
}
define noundef i8 @_ZN7example5demo217h5ad29f361a3f5700E(i8 noundef %0) unnamed_addr #0 {
%x = alloca i8, align 1
store i8 %0, ptr %x, align 1
%1 = load i8, ptr %x, align 1, !range !2, !noundef !3
ret i8 %1
}
```
Which feels too different when the original code is essentially identical.
---
Aside: that example is different *after* optimizations too:
```llvm
define noundef i8 @_ZN7example5demo117h341ef313673d2ee6E(i8 noundef returned %x) unnamed_addr #0 {
%0 = add i8 %x, 1
%1 = icmp ult i8 %0, 3
tail call void @llvm.assume(i1 %1)
ret i8 %x
}
define noundef i8 @_ZN7example5demo217h5ad29f361a3f5700E(i8 noundef returned %0) unnamed_addr #1 {
ret i8 %0
}
```
so turning the `Transmute` into a `Discriminant` was arguably just making things worse; leaving it alone instead -- and thus having less code in rustc -- seems clearly better.
Handle error body in generator layout
Fixes#111468
I feel like making this query return `Option<GeneratorLayout>` might be better, but I had some issues with that approach