Implement jump threading MIR opt
This pass is an attempt to generalize the `ConstGoto` and `SeparateConstSwitch` passes into a more complete jump threading pass.
This pass is rather heavy, as it performs a truncated backwards DFS on MIR starting from each `SwitchInt` terminator. This backwards DFS remains very limited, as it only walks through `Goto` terminators.
It is built to support constants and discriminants, propagating them through a very limited set of operations.
The pass successfully manages to disentangle the `Some(x?)` use case and the DFA use case. It still needs a few tests before being ready.
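As a rough illustration (not one of the actual test cases), this is the kind of source shape the pass can simplify: the second `match` tests a discriminant that is already known along the edge coming from the first one, so that edge can jump straight to the right arm.

```rust
fn thread_me(opt: Option<u32>) -> Option<u32> {
    let tmp = match opt {
        Some(x) => Some(x + 1),
        None => return None,
    };
    // On the edge coming from the `Some` arm above, the discriminant of `tmp`
    // is statically known to be `Some`, so this `SwitchInt` is redundant there.
    match tmp {
        Some(y) => Some(y),
        None => None,
    }
}
```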
coverage: Fix inconsistent handling of function signature spans
While doing some more cleanup of `spans`, I noticed a strange inconsistency in how function signatures are handled. Normally the function signature span is treated as though it were executable as part of the start of the function, but in some cases the signature span disappears entirely from coverage, for no obvious reason.
This is caused by the fact that spans created by `CoverageSpan::for_fn_sig` don't add the span to their `merged_spans` field (unlike normal statement/terminator spans). In cases where the span-processing code looks at those merged spans, it thinks the signature span is no longer visible and deletes it.
Adding the signature span to `merged_spans` resolves the inconsistency.
(Prior to #116409 this wouldn't have been possible, because there was no case in the old `CoverageStatement` enum representing a signature. Now that `merged_spans` is just a list of spans, that's no longer an obstacle.)
Implement rustc part of RFC 3127 trim-paths
This PR implements (or at least tries to) [RFC 3127 trim-paths](https://github.com/rust-lang/rust/issues/111540), the rustc part. That is, `-Zremap-path-scope` with all of its components/scopes.
`@rustbot` label: +F-trim-paths
Even though expression details are now stored in the info structure, we still
need to inject `ExpressionUsed` statements into MIR, because if one is missing
during codegen then we know that it was optimized out and we can remap all of
its associated code regions to zero.
Previously, mappings were attached to individual coverage statements in MIR.
That necessitated special handling in MIR optimizations to avoid deleting those
statements, since otherwise codegen would be unable to reassemble the original
list of mappings.
With this change, a function's list of mappings is now attached to its MIR
body, and survives intact even if individual statements are deleted by
optimizations.
Coverage codegen can now allocate arrays based on the number of
counters/expressions originally used by the instrumentor.
The existing query that inspects coverage statements is still used for
determining the number of counters passed to `llvm.instrprof.increment`. If
some high-numbered counters were removed by MIR optimizations, the instrumented
binary can potentially use less memory and disk space at runtime.
This allows coverage information to be attached to the function as a whole when
appropriate, instead of being smuggled through coverage statements in the
function's basic blocks.
As an example, this patch moves the `function_source_hash` value out of
individual `CoverageKind::Counter` statements and into the per-function info.
When synthesizing unused functions for coverage purposes, the absence of this
info is taken to indicate that a function was not eligible for coverage and
should not be synthesized.
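A hedged sketch (with simplified field types, not the exact rustc definitions) of the per-function info described above, attached to the MIR body so that it survives even when individual coverage statements are optimized out:

```rust
// Simplified stand-ins; the real definitions live in the coverage MIR module.
// The MIR body would hold an `Option` of this struct; `None` indicates the
// function was not eligible for coverage and should not be synthesized.
struct FunctionCoverageInfo {
    function_source_hash: u64,
    // Sizes that codegen can use to allocate its counter/expression arrays.
    num_counters: usize,
    num_expressions: usize,
    // The full list of mappings, no longer smuggled through statements.
    mappings: Vec<Mapping>,
}

struct Mapping {
    // Which counter or expression this code region refers to.
    term: u32,
    code_region: String,
}
```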
Interacting with `basic_coverage_blocks` directly makes it easier to satisfy
the borrow checker when mutating `pending_dups` while reading other fields.
Format all the let-chains in compiler crates
Since rust-lang/rustfmt#5910 has landed, soon we will have support for formatting let-chains (as soon as rustfmt syncs and beta gets bumped).
This PR applies the changes [from master rustfmt to rust-lang/rust eagerly](https://rust-lang.zulipchat.com/#narrow/stream/122651-general/topic/out.20formatting.20of.20prs/near/374997516), so that the next beta bump does not have to deal with a 200+ file diff and can remain concerned with other things like `cfg(bootstrap)` -- #113637 was a pain to land, for example, because of let-else.
I will also add this commit to the ignore list after it has landed.
The commands that were run -- I'm not great at bash-foo, but this applies rustfmt to every compiler crate, and then reverts the two crates that should probably be formatted out-of-tree.
```
~/rustfmt $ ls -1d ~/rust/compiler/* | xargs -I@ cargo run --bin rustfmt -- @/src/lib.rs --config-path ~/rust --edition=2021 # format all of the compiler crates
~/rust $ git checkout HEAD -- compiler/rustc_codegen_{gcc,cranelift} # revert changes to cg-gcc and cg-clif
```
cc `@rust-lang/rustfmt`
r? `@WaffleLapkin` or `@Nilstrieb` who said they may be able to review this purely mechanical PR :>
cc `@Mark-Simulacrum` and `@petrochenkov`, who had some thoughts on the order of operations with big formatting changes in https://github.com/rust-lang/rust/pull/95262#issue-1178993801. I think the situation has changed since then, given that let-chains support now exists on master rustfmt, and I'm fairly confident that this formatting PR should land even if *bootstrap* rustfmt doesn't yet format let-chains, in order to lessen the burden of the next beta bump.
const-eval: make misalignment a hard error
It's been a future-incompat error (showing up in cargo's reports) since https://github.com/rust-lang/rust/pull/104616, Rust 1.68, released in March. That should be long enough.
The question for the lang team is simply -- should we move ahead with this, making const-eval alignment failures a hard error? (It turns out some of them accidentally already were hard errors since #104616. But not all, so this is still a breaking change. Crater found no regression.)
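For illustration (not taken from the PR), this is the kind of const-eval misalignment that is now rejected outright instead of only showing up in the future-incompat report:

```rust
// Rejected during const evaluation: the allocation behind `bytes` is only
// 1-aligned, so reading a `u32` at offset 1 is a misaligned access.
const MISALIGNED: u32 = unsafe {
    let bytes = [0u8; 8];
    let ptr = bytes.as_ptr().add(1).cast::<u32>();
    *ptr
};
```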
Having to keep passing in a graph reference was a holdover from when the graph
was partly mutated during traversal. As of #114354 that is no longer necessary,
so we can simplify the traversal code by storing a graph reference as a field
in `TraverseCoverageGraphWithLoops`.
The previous code was storing the worklist in a vector, and then selectively
adding items to the start or end of the vector. That's a perfect use-case for a
double-ended queue.
This change also reveals that the existing code was a bit confused about which
end of the worklist is the front or back. For now, items are always removed
from the front of the queue (instead of the back), and code that adds items to
the queue has been flipped, to preserve the existing behaviour.
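A minimal, self-contained sketch of the worklist pattern described above (hypothetical graph representation, not the actual traversal code):

```rust
use std::collections::VecDeque;

fn traversal_order(successors: &[Vec<usize>], start: usize) -> Vec<usize> {
    let mut visited = vec![false; successors.len()];
    let mut worklist = VecDeque::new();
    worklist.push_back(start);
    let mut order = Vec::new();

    // Items are always removed from the front of the queue.
    while let Some(node) = worklist.pop_front() {
        if std::mem::replace(&mut visited[node], true) {
            continue;
        }
        order.push(node);
        for &succ in &successors[node] {
            // Selected successors (e.g. loop headers) could instead be queued
            // at the front with `push_front`, without shifting anything else.
            worklist.push_back(succ);
        }
    }
    order
}
```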
Do not check for impossible predicates in const-prop lint.
The enclosing query already checks for them, and replaces the body with a single `unreachable` if they are indeed impossible.
Also consider call and yield as MIR SSA.
The SSA analysis on MIR only considered `Assign` statements as defining a SSA local.
This PR adds assignments that are part of a `Call` or `Yield` terminator to that category.
This mainly allows CopyProp to be performed on a call's return place.
The only subtlety is in the dominance property: the assignment is only complete at the beginning of the target block.
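An illustrative source shape (a hypothetical example, not from the PR): `ret` is assigned exactly once, by the `Call` terminator, so it now qualifies as an SSA local; its definition only becomes complete at the start of the call's return block.

```rust
fn callee() -> u64 {
    42
}

fn caller() -> u64 {
    // In MIR, this assignment is performed by the `Call` terminator.
    let ret = callee();
    // CopyProp can now propagate `ret` into this copy.
    let x = ret;
    x + 1
}
```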
coverage: Unbox and simplify `bcb_filtered_successors`
This is a small cleanup in the coverage instrumentor's graph-building code.
---
This function already has access to the MIR body, so instead of taking a reference to a terminator, it's simpler and easier to pass in a basic block index.
There is no need to box the returned iterator if we instead add appropriate lifetime captures, and make `short_circuit_preorder` generic over the type of iterator it expects.
We can also greatly simplify the function's implementation by observing that the only difference between its two cases is whether we take all of a BB's successors, or just the first one.
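A hedged sketch of the resulting shape, assuming the usual rustc-internal types (`mir::Body`, `TerminatorKind`, `Captures`); the exact filtering rules are simplified here:

```rust
fn bcb_filtered_successors<'a, 'tcx>(
    body: &'a mir::Body<'tcx>,
    bb: BasicBlock,
) -> impl Iterator<Item = BasicBlock> + Captures<'a> + Captures<'tcx> {
    let terminator = body[bb].terminator();
    let take_n = match terminator.kind {
        // Take all successors of a `SwitchInt`...
        TerminatorKind::SwitchInt { .. } => usize::MAX,
        // ...but only the first (normal) successor of anything else,
        // which skips unwind/cleanup edges.
        _ => 1,
    };
    terminator.successors().take(take_n)
}
```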
---
`@rustbot` label +A-code-coverage
This enum was mainly needed to track the precise origin of a span in MIR, for
debug printing purposes. Since the old debug code was removed in #115962, we
can replace it with just the span itself.
Generalize small dominators optimization
* Use small dominators optimization from 640ede7b0a more generally.
* Merge `DefLocation` and `LocationExtended` since they serve the same purpose.
coverage: Allow each coverage statement to have multiple code regions
The original implementation of coverage instrumentation was built around the assumption that a coverage counter/expression would be associated with *up to one* code region. When it was discovered that *multiple* regions would sometimes need to share a counter, a workaround was found: for the remaining regions, the instrumentor would create a fresh expression that adds zero to the existing counter/expression.
That got the job done, but resulted in some awkward code, and produces unnecessarily complicated coverage maps in the final binary.
---
This PR removes that tension by changing `StatementKind::Coverage`'s code region field from `Option<CodeRegion>` to `Vec<CodeRegion>`.
The changes on the codegen side are fairly straightforward. As long as each `CoverageKind::Counter` only injects one `llvm.instrprof.increment`, the rest of coverage codegen is happy to handle multiple regions mapped to the same counter/expression, with only minor option-to-vec adjustments.
On the instrumentor/mir-transform side, we can get rid of the code that creates extra (x + 0) expressions. Instead we gather all of the code regions associated with a single BCB, and inject them all into one coverage statement.
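In simplified form (stand-in types, not the real MIR definitions), the shape of the change is:

```rust
struct CodeRegion { /* file name, start/end positions, ... */ }
enum CoverageKind { Counter { id: u32 }, Expression { id: u32 } }

// Before: at most one region per statement, so extra regions needed
// synthetic `counter + 0` expressions.
struct CoverageOld {
    kind: CoverageKind,
    code_region: Option<CodeRegion>,
}

// After: all regions for the same BCB ride in a single statement.
struct CoverageNew {
    kind: CoverageKind,
    code_regions: Vec<CodeRegion>,
}
```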
---
There are several patches here, but they can be divided into three phases:
- Preparatory work
- Actually switching over to multiple regions per coverage statement
- Cleaning up
So viewing the patches individually may be easier.
When these methods were originally written, I wasn't aware that
`newtype_index!` already supports addition with ordinary numbers, without
needing to unwrap and re-wrap.
If a BCB has more than one code region, those extra regions can now all be
stored in the same coverage statement, instead of being stored in additional
statements.
The concrete type `CoverageSpan` is no longer used outside of the `spans`
module.
This is a separate patch to avoid noise in the preceding patch that actually
encapsulates coverage spans.
By encapsulating the coverage spans in a struct, we can change the internal
representation without disturbing existing call sites. This will be useful for
grouping coverage spans by BCB.
This patch includes some changes that were originally in #115912, which avoid
the need for a particular test to deal with coverage spans at all.
(Comments/logs referring to `CoverageSpan` are updated in a subsequent patch.)
Reveal opaque types before drop elaboration
fixes https://github.com/rust-lang/rust/issues/113594
r? `@cjgillot`
cc `@JakobDegen`
This pass was introduced in https://github.com/rust-lang/rust/pull/110714
I moved it before drop elaboration (which only cares about the hidden types of things, not the opaque TAIT or RPIT type) and set it to run unconditionally (instead of depending on the optimization level and whether the inliner is active).
Implement a global value numbering MIR optimization
The aim of this pass is to avoid repeated computations by reusing past assignments. It is based on an analysis of SSA locals, in order to perform a restricted form of common subexpression elimination.
Opportunistically, this pass also enables some simplifications by combining assignments. For instance, it could see through projections of aggregates to directly reuse the aggregate's field (not in this PR).
We handle references by assigning a different "provenance" index to each `Ref`/`AddressOf` rvalue. This ensures that we do not spuriously merge borrows that should not be merged. Meanwhile, we consider all the derefs of an immutable reference to a freeze type to give the same value:
```rust
_a = *_b // _b is &Freeze
_c = *_b // replaced by _c = _a
```
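Likewise, in the same pseudo-MIR notation (an illustration, not a test from this PR), a repeated computation can simply reuse the earlier SSA local:

```rust
_3 = Add(_1, _2)
_4 = Add(_1, _2) // replaced by _4 = _3
```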
Skip MIR pass `UnreachablePropagation` when coverage is enabled
When coverage instrumentation and MIR opts are both enabled, coverage relies on two assumptions:
- MIR opts that would delete `StatementKind::Coverage` statements instead move them into bb0 and change them to `CoverageKind::Unreachable`.
- MIR opts won't delete all `CoverageKind::Counter` statements from an instrumented function.
Most MIR opts naturally satisfy the second assumption, because they won't remove coverage statements from bb0, but `UnreachablePropagation` can do so if it finds that bb0 is unreachable. If this happens, LLVM thinks the function isn't instrumented, and it vanishes from coverage reports.
A proper solution won't be possible until after per-function coverage info lands in #116046, but for now we can avoid the problem by turning off this particular pass when coverage instrumentation is enabled.
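A hedged sketch of the gating, via the `MirPass::is_enabled` hook (the pass's pre-existing opt-level condition is only shown schematically):

```rust
impl<'tcx> MirPass<'tcx> for UnreachablePropagation {
    fn is_enabled(&self, sess: &Session) -> bool {
        // Skip the pass entirely when coverage instrumentation is enabled,
        // so that the coverage statements in bb0 are left intact.
        sess.mir_opt_level() >= 2 && !sess.instrument_coverage()
    }
}
```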
---
cc `@cjgillot` since I found this while investigating coverage problems encountered by #113970
`@rustbot` label +A-code-coverage +A-mir-opt
subst -> instantiate
Continues #110793. There are still quite a few uses of `subst` and `substitute`, but changing them all in the same PR was a bit too much, so I've stopped here for now.
interpret: more consistently use ImmTy in operators and casts
The diff in src/tools/miri/src/shims/x86/sse2.rs should hopefully suffice to explain why this is nicer. :)
rename mir::Constant -> mir::ConstOperand, mir::ConstKind -> mir::Const
Also, be more consistent with the `to/eval_bits` methods... we had some that take a type and some that take a size, and then sometimes the one that takes a type is called `bits_for_ty`.
Turns out that `ty::Const`/`mir::ConstKind` carry their type with them, so we don't need to even pass the type to those `eval_bits` functions at all.
However this is not properly consistent yet: in `ty` we have most of the methods on `ty::Const`, but in `mir` we have them on `mir::ConstKind`. And indeed those two types are the ones that correspond to each other. So `mir::ConstantKind` should actually be renamed to `mir::Const`. But what to do with `mir::Constant`? It carries around a span, that's really more like a constant operand that appears as a MIR operand... it's more suited for `syntax.rs` than `consts.rs`, but the bigger question is, which name should it get if we want to align the `mir` and `ty` types? `ConstOperand`? `ConstOp`? `Literal`? It's not a literal but it has a field called `literal` so it would at least be consistently wrong-ish...
``@oli-obk`` any ideas?
coverage: Fix an unstable-sort inconsistency in coverage spans
This code was calling `sort_unstable_by`, but failed to impose a total order on the initial spans. That resulted in unpredictable handling of closure spans, producing inconsistencies in the coverage maps and in user-visible coverage reports.
This PR fixes the problem by always sorting closure spans before otherwise-identical non-closure spans, and also switches to a stable sort in case the ordering is still not total.
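A self-contained sketch of the idea, with hypothetical stand-in fields rather than the real coverage-span types:

```rust
use std::cmp::Ordering;

struct InitialSpan {
    lo: u32,
    hi: u32,
    is_closure: bool,
}

// A total order: compare positions first, then sort closure spans before
// otherwise-identical non-closure spans so ties are broken deterministically.
fn compare_spans(a: &InitialSpan, b: &InitialSpan) -> Ordering {
    a.lo.cmp(&b.lo)
        .then_with(|| a.hi.cmp(&b.hi))
        .then_with(|| b.is_closure.cmp(&a.is_closure))
}

fn main() {
    let mut spans = vec![
        InitialSpan { lo: 4, hi: 10, is_closure: false },
        InitialSpan { lo: 4, hi: 10, is_closure: true },
    ];
    // Stable sort + total order => deterministic output.
    spans.sort_by(compare_spans);
    assert!(spans[0].is_closure);
}
```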
---
In addition to the fix itself, this PR also contains a cleanup to the comparison function that I was working on when I discovered the bug.
treat host effect params as erased in codegen
This fixes the churn in codegen tests caused by adding effect params to libcore, by not attempting to monomorphize functions that get the host param by being `const fn`.
r? `@oli-obk`