coverage: Memoize and simplify counter expressions
When creating coverage counter expressions as part of coverage instrumentation, we often end up creating obviously-redundant expressions like `c1 + (c0 - c1)`, which is equivalent to just `c0`.
To avoid doing so, this PR checks when we would create an expression matching one of 5 patterns, and uses the simplified form instead:
- `(a - b) + b` → `a`.
- `(a + b) - b` → `a`.
- `(a + b) - a` → `b`.
- `a + (b - a)` → `b`.
- `a - (a - b)` → `b`.
Of all the different ways to combine 3 operands and 2 operators, these are the patterns that allow simplification.
(Some of those patterns currently don't occur in practice, but are included anyway for completeness, to avoid having to add them later as branch coverage and MC/DC coverage support expands.)
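As a rough sketch (using made-up `CovExpr`/`simplified_*` names rather than rustc's actual counter types), the five rewrites amount to checking the operands before building a new sum or difference:

```rust
// Illustration only: a toy expression type and the five rewrites listed above.
#[derive(Clone, PartialEq)]
enum CovExpr {
    Counter(u32),
    Add(Box<CovExpr>, Box<CovExpr>),
    Sub(Box<CovExpr>, Box<CovExpr>),
}

fn simplified_add(a: CovExpr, b: CovExpr) -> CovExpr {
    // (x - y) + y => x
    if let CovExpr::Sub(x, y) = &a {
        if **y == b {
            return (**x).clone();
        }
    }
    // x + (y - x) => y
    if let CovExpr::Sub(y, x) = &b {
        if **x == a {
            return (**y).clone();
        }
    }
    CovExpr::Add(Box::new(a), Box::new(b))
}

fn simplified_sub(a: CovExpr, b: CovExpr) -> CovExpr {
    // (x + y) - y => x  and  (x + y) - x => y
    if let CovExpr::Add(x, y) = &a {
        if **y == b {
            return (**x).clone();
        }
        if **x == b {
            return (**y).clone();
        }
    }
    // x - (x - y) => y
    if let CovExpr::Sub(x, y) = &b {
        if **x == a {
            return (**y).clone();
        }
    }
    CovExpr::Sub(Box::new(a), Box::new(b))
}
```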
---
This PR also adds memoization for newly-created (or newly-simplified) counter expressions, to avoid creating duplicates.
This currently makes no difference to the final mappings, but is expected to be useful for MC/DC coverage of match expressions, as proposed by https://github.com/rust-lang/rust/pull/124278#issuecomment-2106754753.
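A minimal sketch of the memoization idea, assuming a simple interning table keyed by operator and operand IDs (illustrative names, not rustc's actual types):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Operand(u32);

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum ExprOp {
    Add,
    Sub,
}

#[derive(Default)]
struct ExpressionTable {
    exprs: Vec<(ExprOp, Operand, Operand)>,
    memo: HashMap<(ExprOp, Operand, Operand), u32>,
}

impl ExpressionTable {
    /// Return the ID of an existing identical expression, or create a new one.
    fn intern(&mut self, op: ExprOp, lhs: Operand, rhs: Operand) -> u32 {
        if let Some(&id) = self.memo.get(&(op, lhs, rhs)) {
            return id;
        }
        let id = self.exprs.len() as u32;
        self.exprs.push((op, lhs, rhs));
        self.memo.insert((op, lhs, rhs), id);
        id
    }
}
```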
This code for recalculating `mcdc_bitmap_bytes` doesn't provide any benefit,
because its result won't have changed from the value in `FunctionCoverageInfo`
that was computed during the MIR instrumentation pass.
Some of these cases currently don't occur in practice, but are included for
completeness, and to avoid having to add them later as branch coverage and
MC/DC coverage start building more complex expressions.
Split out `ty::AliasTerm` from `ty::AliasTy`
Splitting out `AliasTerm` (for use in projection and normalizes-to goals) and `AliasTy` (for use in `ty::Alias`).
r? lcnr
coverage: Further simplify extraction of mapping info from MIR
This is another round of rearrangement and simplification that builds on top of the changes made to mapping-extraction by #124603.
The overall theme is to take the computation of `bcb_has_mappings` and `test_vector_bitmap_bytes` out of the main body of `generate_coverage_spans`, which then lets us perform a few other small changes that had previously been held up by the need to work around those computations.
The code in `extract_mcdc_mappings` that allocates these bytes already knows
how many are needed in total, so there's no need to immediately recompute that
value in the calling function.
Now that branch and MC/DC mappings have been split out into separate types and
vectors, this enum is no longer needed, since it only represents ordinary
"code" regions.
(We can revisit this decision if we ever add support for other region kinds,
such as skipped regions or expansion regions. But at that point, we might just
add new structs/vectors for those kinds as well.)
Account for immutably borrowed locals in MIR copy-prop and GVN
For the most part, we consider that immutably borrowed `Freeze` locals still fulfill SSA conditions. As the borrow is immutable, any use of the local will have the value given by the single assignment, and there can be no surprise.
This allows copy-prop to merge a non-borrowed local with a borrowed local. We chose to keep copy-class heads unborrowed, as those may be easier to optimize in later passes.
This also allows GVN of the value behind an immutable borrow. If an SSA local is borrowed, dereferencing that borrow is equivalent to copying the local's value: re-executing the assignment between the borrow and the dereference would be UB.
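A source-level illustration of the idea (a hypothetical example, not taken from this PR's tests):

```rust
fn example(x: u32) -> (u32, u32) {
    let a = x + 1; // single assignment: `a` is SSA-like
    let r = &a;    // immutable borrow of a `Freeze` local
    let b = a;     // copy-prop may merge `b` with `a`
    let c = *r;    // GVN may replace `*r` with the value of `a`,
                   // since `a` cannot change while the borrow is live
    (b, c)
}
```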
r? `@ghost` for perf
coverage: Clean up creation of MC/DC condition bitmaps
This PR improves the code for creating and initializing [MC/DC](https://en.wikipedia.org/wiki/Modified_condition/decision_coverage) condition bitmap variables, as introduced by #123409 and modified by #124255.
- The condition bitmap variables are now created eagerly at the start of per-function codegen, via a new `init_coverage` method in `CoverageInfoBuilderMethods`. This avoids having to retroactively create the bitmaps while doing codegen for an individual coverage statement.
- As a result, we can now create and initialize those bitmaps using existing safe APIs, instead of having to perform our own unsafe call to `llvm::LLVMBuildAlloca`.
- This PR also tweaks the way we count the number of condition bitmaps needed, by tracking the total number of bitmaps needed (max depth + 1), instead of only tracking the maximum depth. This reduces the potential for subtle off-by-one confusion.
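A hedged sketch of that last bookkeeping change, with illustrative names rather than the actual rustc fields:

```rust
// Instead of tracking the maximum decision depth and remembering to add 1
// later, track the number of condition bitmaps needed directly.
#[derive(Default)]
struct MCDCInfo {
    num_condition_bitmaps: usize,
}

impl MCDCInfo {
    fn visit_decision(&mut self, decision_depth: u16) {
        // A decision at depth `d` uses bitmap index `d`, so `d + 1` bitmaps are needed.
        self.num_condition_bitmaps =
            self.num_condition_bitmaps.max(decision_depth as usize + 1);
    }
}
```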
Use `tcx.types.unit` instead of `Ty::new_unit(tcx)`
I don't think there is any need for the function, given that we can just access the `.types`, similarly to all other primitives?
because we are already marking unions `NoPropagation` in
`CanConstProp::check()`. That is enough to prevent any attempts
at const propagating unions and this second check is not needed.
Also improve a comment in `CanConstProp::check()`
Mark unions non-const-propagatable in `KnownPanicsLint` without calling layout
Fixes #123710
The ICE occurs during the layout calculation of the union `InvalidTag` in #123710, because the following assert fails: 5fe8b697e7/compiler/rustc_abi/src/layout.rs (L289-L292)
The layout calculation is invoked by `KnownPanicsLint` when it is trying to figure out which locals it can const-prop. Since `KnownPanicsLint` is never actually going to const-prop unions thanks to PR https://github.com/rust-lang/rust/pull/121628, there's no point calling layout to check whether it can. So in this fix I skip the call to layout and just mark the local as non-const-propagatable if it is a union.
MCDC coverage: support nested decision coverage
#123409 provided the initial MCDC coverage implementation.
As referenced in #124144, it does not currently support "nested" decisions, like the following example:
```rust
fn nested_if_in_condition(a: bool, b: bool, c: bool) {
    if a && if b || c { true } else { false } {
        say("yes");
    } else {
        say("no");
    }
}
```
Note that there is an if-expression (`if b || c ...`) embedded inside a boolean expression in the decision of an outer if-expression.
This PR proposes a workaround for such cases by introducing a decision context stack, and by handling several `temporary condition bitmaps` instead of just one.
When instrumenting boolean expressions, if the current node is a leaf condition (i.e. not a `||`/`&&` logical operator nor a `!` not operator), we insert a new decision context, such that if there are more boolean expressions inside the condition, they are handled as separate expressions.
On the codegen LLVM side, we allocate as many `temp_cond_bitmap`s as necessary to handle the maximum encountered decision depth.
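A minimal sketch of the decision-context-stack idea (names and fields are illustrative, not the actual rustc types):

```rust
struct DecisionCtx {
    depth: u16,
    // ... plus condition IDs, decision span, etc. in the real thing
}

#[derive(Default)]
struct MCDCState {
    decision_stack: Vec<DecisionCtx>,
    max_decision_depth: u16,
}

impl MCDCState {
    /// Entering a (possibly nested) decision pushes a new context; its depth
    /// selects which temporary condition bitmap the decision will use.
    fn enter_decision(&mut self) -> u16 {
        let depth = self.decision_stack.len() as u16;
        self.decision_stack.push(DecisionCtx { depth });
        self.max_decision_depth = self.max_decision_depth.max(depth);
        depth
    }

    fn exit_decision(&mut self) {
        self.decision_stack.pop();
    }
}
```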
Add decision_depth field to TVBitmapUpdate/CondBitmapUpdate statements
Add decision_depth field to BcbMappingKinds MCDCBranch and MCDCDecision
Add decision_depth field to MCDCBranchSpan and MCDCDecisionSpan
Do `check_coroutine_obligations` once per typeck root
We only need to do `check_coroutine_obligations` once per typeck root, especially since the new solver can't really (easily) associate which obligations correspond to which coroutines.
This requires us to move the checks for sized coroutine fields into `mir_coroutine_witnesses`, but that's fine imo.
r? lcnr
deref patterns: lower deref patterns to MIR
This lowers deref patterns to MIR. This is a bit tricky because this is the first kind of pattern that requires storing a value in a temporary. Thanks to https://github.com/rust-lang/rust/pull/123324 false edges are no longer a problem.
The thing I'm not confident about is the handling of fake borrows. This PR ignores any fake borrows inside a deref pattern. We are guaranteed to at least fake borrow the place of the first pointer value, which could be enough, but I'm not certain.
weak lang items are not allowed to be #[track_caller]
For instance, the panic handler will be called via this import:
```rust
extern "Rust" {
    #[lang = "panic_impl"]
    fn panic_impl(pi: &PanicInfo<'_>) -> !;
}
```
A `#[track_caller]` would add an extra argument and thus make this the wrong signature.
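Conceptually (this is only an illustration of the ABI effect, not actual rustc or libstd code), a `#[track_caller]` function behaves as if it took an extra hidden caller-location parameter, which the plain declaration above does not expect:

```rust
use core::panic::{Location, PanicInfo};

// Illustration only: with `#[track_caller]`, callers effectively pass an extra
// hidden `&Location` argument, so the declaration above would have the wrong signature.
fn panic_impl_with_caller(_pi: &PanicInfo<'_>, _caller: &'static Location<'static>) -> ! {
    unimplemented!()
}
```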
The 2nd commit is a consistency rename; based on the docs [here](https://doc.rust-lang.org/unstable-book/language-features/lang-items.html) and [here](https://rustc-dev-guide.rust-lang.org/lang-items.html) I figured "lang item" is more widely used. (In the compiler output, "lang item" and "language item" seem to be pretty even.)
Add simple async drop glue generation
This is a prototype of async drop glue generation for some simple types. Async drop glue is intended to behave very similarly to regular drop glue, except for being asynchronous. Currently it does not execute synchronous drops, but only calls user implementations of the `AsyncDrop::async_drop` associated function and awaits the returned future. It is not complete, as it only recurses into arrays, slices, tuples, and structs, and it does not have the same sensible restrictions as the old `Drop` trait implementation (like having the same bounds as the type definition), even though the code assumes their existence (this requires future work).
This current design uses a workaround as it does not create any custom async destructor state machine types for ADTs, but instead uses types defined in the std library called future combinators (deferred_async_drop, chain, ready_unit).
Also I recommend reading my [explainer](https://zetanumbers.github.io/book/async-drop-design.html).
This is a part of the [MCP: Low level components for async drop](https://github.com/rust-lang/compiler-team/issues/727) work.
Feature completeness:
- [x] `AsyncDrop` trait
- [ ] `async_drop_in_place_raw`/async drop glue generation support for
- [x] Trivially destructible types (integers, bools, floats, string slices, pointers, references, etc.)
- [x] Arrays and slices (array pointer is unsized into slice pointer)
- [x] ADTs (enums, structs, unions)
- [x] tuple-like types (tuples, closures)
- [ ] Dynamic types (`dyn Trait`, see explainer's [proposed design](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#async-drop-glue-for-dyn-trait))
- [ ] coroutines (https://github.com/rust-lang/rust/pull/123948)
- [x] Async drop glue includes sync drop glue code
- [x] Cleanup branch generation for `async_drop_in_place_raw`
- [ ] Union rejects non-trivially async destructible fields
- [ ] `AsyncDrop` implementation requires same bounds as type definition
- [ ] Skip trivially destructible fields (optimization)
- [ ] New [`TyKind::AdtAsyncDestructor`](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#adt-async-destructor-types) and get rid of combinators
- [ ] [Synchronously undroppable types](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#exclusively-async-drop)
- [ ] Automatic async drop at the end of the scope in async context
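A heavily hedged sketch of the user-facing shape implied by the description above; the actual experimental trait in the standard library may well differ:

```rust
use std::future::Future;
use std::pin::Pin;

// Sketch only: an associated `async_drop` function that returns a future,
// which the generated async drop glue awaits instead of running a sync drop.
trait AsyncDrop {
    type Dropper<'a>: Future<Output = ()>
    where
        Self: 'a;

    fn async_drop(self: Pin<&mut Self>) -> Self::Dropper<'_>;
}
```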
This clears the way for larger changes to how branches are handled by the
coverage instrumentor, in order to support branch coverage for more language
constructs.
Implement Modified Condition/Decision Coverage
This is an implementation based on LLVM backend support (>= 18) by `@evodius96` and branch coverage support by `@Zalathar`.
### Major changes:
* Add `-Zcoverage-options=mcdc` as a switch. Coverage options now accept `no-branch`, `branch`, or `mcdc`. `mcdc` also enables `branch`, because branch coverage is essential for MC/DC to work.
* Add coverage mappings for MCDCBranch and MCDCDecision. Note that MCDCParameter evolves from LLVM 18 to LLVM 19. The mapping on the Rust side mainly follows LLVM 19 and is cast to LLVM 18 types in the LLVM wrapper.
* Add wrappers for MC/DC intrinsic functions from LLVM, and inject the associated statements into MIR.
* Add `BcbMappingKind::Decision`; I'm not sure whether it is proper, but I can't find a better way for now.
* Let coverage-dump support parsing MCDCBranch and MCDCDecision from LLVM IR.
* Add simple tests to check whether MC/DC works.
* As with Clang, rustc currently does not instrument decisions with more than 6 conditions, or with only 1 condition, due to resource considerations.
### Implementation Details
1. To get information about conditions and decisions, `MCDCState` in `BranchInfoBuilder` is used during HIR-to-MIR lowering. For expressions with a logical op, we call `Builder::visit_coverage_branch_operation` to record their sub-conditions, generate condition IDs for them, and save their spans (to construct the span of the whole decision). This process mainly follows the implementation in Clang and is described in the comments on `MCDCState::record_conditions`. Also, the true marks and false marks introduced by branch coverage are used to detect where the decision evaluation ends: the next id of the condition == 0.
2. Once `MCDCState::decision_stack` has popped all recorded conditions, we can be sure that the decision is fully checked and push it into `decision_spans`. We do not manually insert the decision span, to avoid complexity from `then_else_break` in nested if scopes.
3. When constructing CoverageSpans, add condition info to `BcbMappingKind::Branch` and decision info to `BcbMappingKind::Decision`. If a branch mapping has a non-zero condition ID, it is transformed into an MCDCBranch mapping, and `CondBitmapUpdate` statements are inserted into its evaluated blocks. Decision BCB mappings insert `TestVectorBitmapUpdate` statements in all of their end blocks.
### Usage
```bash
echo -e "[build]\nprofiler = true" >> config.toml
./x build --stage 1
./x test tests/coverage/mcdc_if.rs
```
to build the compiler and run tests.
```shell
export PATH=path/to/llvm-build:$PATH
rustup toolchain link mcdc build/host/stage1
cargo +mcdc rustc --bin foo -- -Cinstrument-coverage -Zcoverage-options=mcdc
cd target/debug
LLVM_PROFILE_FILE="foo.profraw" ./foo
llvm-profdata merge -sparse foo.profraw -o foo.profdata
llvm-cov show ./foo -instr-profile=foo.profdata --show-mcdc
```
to check "foo" code.
### Problems to solve
For now, decision mappings insert statements into all of their end blocks, which might be optimized by inserting a single final block for the decision. To do this we must also trace the evaluated value at each end of the decision and join them separately.
This implementation is not heavily tested, so there are probably some unrevealed issues. We are going to check our Rust products next. Please let me know if you have any suggestions or comments.
Disable SimplifyToExp in MatchBranchSimplification
Due to the miscompilation mentioned in #124150, we need to temporarily disable SimplifyToExp in MatchBranchSimplification.
To fully resolve this issue, my plan is:
1. Disable SimplifyToExp in MatchBranchSimplification (this PR).
2. Remove all potentially unclear transforms in #124122.
3. Gradually add back the removed transforms (possibly multiple PRs).
r? `@Nilstrieb` or `@oli-obk`
async closure coroutine by move body MirPass refactoring
Unsure about the last commit, but I think the other changes help in simplifying the control flow
Stop making any assumption about the projections applied to the upvars in the `ByMoveBody` pass
So it turns out that because of subtle optimizations like [`truncate_capture_for_optimization`](ab5bda1aa7/compiler/rustc_hir_typeck/src/upvar.rs (L2351)), we simply cannot make any assumptions about the shape of the projections applied to the upvar locals in a coroutine body.
So stop doing that -- the code is resilient to such projections, so the assertion really existed only to "protect against the unknown".
r? oli-obk
Fixes #123650
Re-enable the early otherwise branch optimization
Closes #95162. Fixes #119014.
This is the first part of #121397.
An invalid enum discriminant can come from anywhere, so we have to check that all successors contain the discriminant statement. Ideally, there should be a pass that hoists such instructions.
r? cjgillot
Fix `ByMove` coroutine-closure shim (for 2021 precise closure capturing behavior)
This PR reworks the way that we perform the `ByMove` coroutine-closure shim to account for the fact that the upvars of the outer coroutine-closure and the inner coroutine might not line up due to edition-2021 closure capture rules changes.
Specifically, the number of upvars may differ *and/or* the inner coroutine may have additional projections applied to an upvar. This PR reworks the information we pass into the `ByMoveBody` MIR visitor to account for both of these facts.
I tried to leave comments explaining exactly what everything is doing, but let me know if you have questions.
r? oli-obk
Actually use the inferred `ClosureKind` from signature inference in coroutine-closures
A follow-up to https://github.com/rust-lang/rust/pull/123349, which fixes another subtle bug: We were not taking into account the async closure kind we infer during closure signature inference.
When I pass a closure directly to an arg like `fn(x: impl async FnOnce())`, that should have the side-effect of artificially restricting the kind of the async closure to `ClosureKind::FnOnce`. We weren't doing this -- that's a quick fix; however, it uncovers a second, more subtle bug with the way that `move`, async closures, and `FnOnce` interact.
Specifically, when we have an async closure like:
```rust
let x = Struct;
let c = infer_as_fnonce(async move || {
    println!("{x:?}");
});
```
The outer closure captures `x` by move, but the inner coroutine still immutably borrows `x` from the outer closure. Since we've forced the closure to be `async FnOnce()`, we can't actually *do* a self borrow, since the signature of `AsyncFnOnce::call_once` doesn't have a borrowed lifetime. This means that all `async move` closures that are constrained to `FnOnce` will fail borrowck.
We can fix that by detecting this case specifically, and making the *inner* async closure `move` as well. This is always beneficial to closure analysis, since if we have an `async FnOnce()` that's `move`, there's no reason to ever borrow anything, so `move` isn't artificially restrictive.
Cleanup: Rename `HAS_PROJECTIONS` to `HAS_ALIASES` etc.
The name of the bitflag `HAS_PROJECTIONS` and of its corresponding method `has_projections` is quite historical dating back to a time when projections were the only kind of alias type.
I think it's time to update it to clear up any potential confusion for newcomers and to reduce unnecessary friction during contributor onboarding.
r? types
coverage: Remove useless constants
After #122972 and #123419, these constants don't serve any useful purpose, so get rid of them.
`@rustbot` label +A-code-coverage
CFI: Support function pointers for trait methods
Adds support for both CFI and KCFI for function pointers to trait methods by attaching both concrete and abstract types to functions.
KCFI does this through generation of a `ReifyShim` on any function pointer for a method that could go into a vtable, and keeping this separate from `ReifyShim`s that are *intended* for vtable use by setting a `ReifyReason` on them.
CFI does this by setting both the concrete and abstract type on every instance.
This should land after #123024 or a similar PR, as it diverges the implementation of CFI vs KCFI.
r? `@compiler-errors`
Remove MIR unsafe check
Now that THIR unsafeck is enabled by default in stable I think we can remove MIR unsafeck entirely. This PR also removes safety information from MIR.
Rollup of 4 pull requests
Successful merges:
- #122411 (Provide cabi_realloc on wasm32-wasip2 by default)
- #123349 (Fix capture analysis for by-move closure bodies)
- #123359 (Link against libc++abi and libunwind as well when building LLVM wrappers on AIX)
- #123388 (use a consistent style for links)
r? `@ghost`
`@rustbot` modify labels: rollup
Rename `UninhabitedEnumBranching` to `UnreachableEnumBranching`
Per [#120268](https://github.com/rust-lang/rust/pull/120268#discussion_r1517492060), I rename `UninhabitedEnumBranching` to `UnreachableEnumBranching`.
I also resolved some nits and added some comments.
I adjusted the workaround restrictions. This should be useful for `a <= b` and `if let Some/Ok(v)`. For enums with few variants, `early-tailduplication` should not cause compile-time overhead.
r? RalfJung
Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for where we got to recently that made it ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches. Those differences are irrelevant at the rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
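For reference, a plain Rust version of that last pattern (just an illustration of the source-level shape, not the libcore implementation):

```rust
use std::cmp::Ordering;

// The source-level pattern mentioned above; with `BinOp::Cmp` in MIR, the exact
// lowering of this three-way comparison is left to each backend.
fn cmp_i32(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}
```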
---
r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
KCFI needs to be able to tell which kind of `ReifyShim` it is examining
in order to decide whether to use a concrete type (`FnPtr` case) or an
abstract type (`Vtable` case). You can *almost* tell this from context,
but there is one case where you can't - if a trait has a method which is
*not* `#[track_caller]`, with an impl that *is* `#[track_caller]`, both
the vtable and a function pointer created from that method will be
`ReifyShim(def_id)`.
Currently, the reason is optional to ensure no additional unique
`ReifyShim`s are added without KCFI on. However, the case in which an
extra `ReifyShim` is created is sufficiently rare that this may be worth
revisiting to reduce complexity.
Rollup of 4 pull requests
Successful merges:
- #123176 (Normalize the result of `Fields::ty_with_args`)
- #123186 (copy any file from stage0/lib to stage0-sysroot/lib)
- #123187 (Forward port 1.77.1 release notes)
- #123188 (compiler: fix few unused_peekable and needless_pass_by_ref_mut clippy lints)
r? `@ghost`
`@rustbot` modify labels: rollup
compiler: fix few unused_peekable and needless_pass_by_ref_mut clippy lints
This fixes a few instances of `unused_peekable` and `needless_pass_by_ref_mut`. While I expected to fix more warnings, `needless_pass_by_ref_mut` produced too many for one PR, so I stopped here.
Best reviewed commit by commit, as the fixes are split into chunks.
Simplify trim-paths feature by merging all debuginfo options together
This PR simplifies the trim-paths feature by merging all debuginfo options together, as described in https://github.com/rust-lang/rust/issues/111540#issuecomment-1994010274.
It also includes some correctness fixes found during the review.
cc `@weihanglo`
r? `@michaelwoerister`
warning: this argument is a mutable reference, but not used mutably
--> compiler\rustc_mir_transform\src\coroutine.rs:1229:11
|
1229 | body: &mut Body<'tcx>,
| ^^^^^^^^^^^^^^^ help: consider changing to: `&Body<'tcx>`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
warning: this argument is a mutable reference, but not used mutably
--> compiler\rustc_mir_transform\src\nrvo.rs:123:11
|
123 | body: &mut mir::Body<'_>,
| ^^^^^^^^^^^^^^^^^^ help: consider changing to: `&mir::Body<'_>`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
warning: this argument is a mutable reference, but not used mutably
--> compiler\rustc_mir_transform\src\nrvo.rs:87:34
|
87 | fn local_eligible_for_nrvo(body: &mut mir::Body<'_>) -> Option<Local> {
| ^^^^^^^^^^^^^^^^^^ help: consider changing to: `&mir::Body<'_>`
|
= help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#needless_pass_by_ref_mut
coverage: Re-enable `UnreachablePropagation` for coverage builds
This is a sequence of 3 related changes:
- Clean up the existing code that scans for unused functions
- Detect functions that were instrumented for coverage, but have had all their coverage statements removed by later MIR transforms (e.g. `UnreachablePropagation`)
- Re-enable `UnreachablePropagation` in coverage builds
Because we now detect functions that have lost their coverage statements, and treat them as unused, we don't need to worry about `UnreachablePropagation` removing all of those statements. This is demonstrated by `tests/coverage/unreachable.rs`.
Fixes #116171.
In `ConstructCoroutineInClosureShim`, pass receiver by mut ref, not mut pointer
The receivers were compatible at codegen time, but did not necessarily have the same layouts due to niches, which was caught by miri.
Fixes rust-lang/miri#3400
r? oli-obk
Replace `mir_built` query with a hook and use mir_const everywhere instead
A small perf improvement due to less dep graph handling.
Mostly just a cleanup to get rid of one of our many MIR queries.
Unbox and unwrap the contents of `StatementKind::Coverage`
The payload of coverage statements was historically a structure with several fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace `Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid accidentally bloating it in the future.
``@rustbot`` label +A-code-coverage
Fix validation on substituted callee bodies in MIR inliner
When inlining a coroutine, we will substitute the MIR body with the args of the call. There is code in the MIR validator that attempts to prevent query cycles, and will use the coroutine body directly when it detects that's the body that's being validated. That means that when inlining a coroutine body that has been substituted, it may no longer be parameterized over the original args of the coroutine, which will lead to substitution ICEs.
Fixes #119064
The payload of coverage statements was historically a structure with several
fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace
`Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid
accidentally bloating it in the future.
Rollup of 8 pull requests
Successful merges:
- #114009 (compiler: allow transmute of ZST arrays with generics)
- #122195 (Note that the caller chooses a type for type param)
- #122651 (Suggest `_` for missing generic arguments in turbofish)
- #122784 (Add `tag_for_variant` query)
- #122839 (Split out `PredicatePolarity` from `ImplPolarity`)
- #122873 (Merge my contributor emails into one using mailmap)
- #122885 (Adjust better spastorino membership to triagebot's adhoc_groups)
- #122888 (add a couple more tests)
r? `@ghost`
`@rustbot` modify labels: rollup
Add `tag_for_variant` query
This query allows for sharing code between `rustc_const_eval` and `rustc_transmutability`. It's a precursor to a PR I'm working on to entirely replace the bespoke layout computations in `rustc_transmutability`.
r? `@compiler-errors`
Some of the marker statements used by coverage are added during MIR building
for use by the InstrumentCoverage pass (during analysis), and are not needed
afterwards.
recursively evaluate the constants in everything that is 'mentioned'
This is another attempt at fixing https://github.com/rust-lang/rust/issues/107503. The previous attempt at https://github.com/rust-lang/rust/pull/112879 seems stuck in figuring out where the [perf regression](https://perf.rust-lang.org/compare.html?start=c55d1ee8d4e3162187214692229a63c2cc5e0f31&end=ec8de1ebe0d698b109beeaaac83e60f4ef8bb7d1&stat=instructions:u) comes from. In https://github.com/rust-lang/rust/pull/122258 I learned some things, which informed the approach this PR is taking.
Quoting from the new collector docs, which explain the high-level idea:
```rust
//! One important role of collection is to evaluate all constants that are used by all the items
//! which are being collected. Codegen can then rely on only encountering constants that evaluate
//! successfully, and if a constant fails to evaluate, the collector has much better context to be
//! able to show where this constant comes up.
//!
//! However, the exact set of "used" items (collected as described above), and therefore the exact
//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
//! a function call that uses a failing constant, so an unoptimized build may fail where an
//! optimized build succeeds. This is undesirable.
//!
//! To fix this, the collector has the concept of "mentioned" items. Some time during the MIR
//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
//! dead code and get optimized away (which makes them no longer "used"), they are still
//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
//! whether we are visiting a used item or merely a mentioned item.
//!
//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
//! need to stay in sync in the following sense:
//!
//! - For every item that the collector gathers that could eventually lead to build failure (most
//! likely due to containing a constant that fails to evaluate), a corresponding mentioned item
//! must be added. This should use the exact same strategy as the collector to make sure they are
//! in sync. However, while the collector works on monomorphized types, mentioned items are
//! collected on generic MIR -- so any time the collector checks for a particular type (such as
//! `ty::FnDef`), we have to just unconditionally add this as a mentioned item.
//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
//! would have done during regular MIR visiting. Basically you can think of the collector having
//! two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
//! literally separated by a call to `self.monomorphize`); the pre-monomorphization stage is
//! duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
//! `visit_mentioned_item`.
//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
//! its MIR traversal with exactly what mentioned item gathering would have added in the same
//! situation. This detects mentioned items that have *not* been optimized away and hence don't
//! need a dedicated traversal.
enum CollectionMode {
    /// Collect items that are used, i.e., actually needed for codegen.
    ///
    /// Which items are used can depend on optimization levels, as MIR optimizations can remove
    /// uses.
    UsedItems,
    /// Collect items that are mentioned. The goal of this mode is that it is independent of
    /// optimizations: the set of "mentioned" items is computed before optimizations are run.
    ///
    /// The exact contents of this set are *not* a stable guarantee. (For instance, it is currently
    /// computed after drop-elaboration. If we ever do some optimizations even in debug builds, we
    /// might decide to run them before computing mentioned items.) The key property of this set is
    /// that it is optimization-independent.
    MentionedItems,
}
```
And the `mentioned_items` MIR body field docs:
```rust
/// Further items that were mentioned in this function and hence *may* become monomorphized,
/// depending on optimizations. We use this to avoid optimization-dependent compile errors: the
/// collector recursively traverses all "mentioned" items and evaluates all their
/// `required_consts`.
///
/// This is *not* soundness-critical and the contents of this list are *not* a stable guarantee.
/// All that's relevant is that this set is optimization-level-independent, and that it includes
/// everything that the collector would consider "used". (For example, we currently compute this
/// set after drop elaboration, so some drop calls that can never be reached are not considered
/// "mentioned".) See the documentation of `CollectionMode` in
/// `compiler/rustc_monomorphize/src/collector.rs` for more context.
pub mentioned_items: Vec<Spanned<MentionedItem<'tcx>>>,
```
Fixes #107503
coverage: Remove incorrect assertions from counter allocation
These assertions detect situations where a BCB node (in the coverage graph) would have both a physical counter and one or more in-edge counters/expressions.
For most BCBs that situation would indicate an implementation bug. However, it's perfectly fine in the case of a BCB having an edge that loops back to itself.
Given the complexity and risk involved in fixing the assertions, and the fact that nothing relies on them actually being true, this patch just removes them instead.
Fixes #122738.
`@rustbot` label +A-code-coverage
These assertions detect situations where a BCB node would have both a physical
counter and one or more in-edge counters/expressions.
For most BCBs that situation would indicate an implementation bug. However,
it's perfectly fine in the case of a BCB having an edge that loops back to
itself.
Given the complexity and risk involved in fixing the assertions, and the fact
that nothing relies on them actually being true, this patch just removes them
instead.
Use hir::Node helper methods instead of repeating the same impl multiple times
I wanted to do something entirely different and stumbled upon a bunch of cleanups
add_retag: ensure box-to-raw-ptr casts are preserved for Miri
In https://github.com/rust-lang/rust/pull/122233 I added `retag_box_to_raw` not realizing that we can already do `addr_of_mut!(*bx)` to turn a box into a raw pointer without an intermediate reference. We just need to ensure this information is preserved past the ElaborateBoxDerefs pass.
r? ``@oli-obk``
simplify_cfg: rename some passes so that they make more sense
I was extremely confused by `SimplifyCfg::ElaborateDrops`, since it runs way later than drop elaboration. It is used e.g. in `mir-opt/retag.rs` even though that pass doesn't care about drop elaboration at all.
"Early opt" is also very confusing since that makes it sounds like it runs early during optimizations, i.e. on runtime MIR, but actually it runs way before that.
So I decided to rename
- early-opt -> post-analysis
- elaborate-drops -> pre-optimizations
I am open to other suggestions.
preserve span when evaluating mir::ConstOperand
This lets us show to the user where they were using the faulty const (which can be quite relevant when generics are involved).
I wonder if we should change "erroneous constant encountered" to something like "the above error was encountered while evaluating this constant" or so, to make this more similar to what the collector emits when showing a "backtrace" of where things get monomorphized? It seems a bit strange to rely on the order of emitted diagnostics for that but it seems the collector already [does that](da8a8c9223/compiler/rustc_monomorphize/src/collector.rs (L472-L475)).
coverage: Initial support for branch coverage instrumentation
(This is a review-ready version of the changes that were drafted in #118305.)
This PR adds support for branch coverage instrumentation, gated behind the unstable flag value `-Zcoverage-options=branch`. (Coverage instrumentation must also be enabled with `-Cinstrument-coverage`.)
During THIR-to-MIR lowering (MIR building), if branch coverage is enabled, we collect additional information about branch conditions and their corresponding then/else blocks. We inject special marker statements into those blocks, so that the `InstrumentCoverage` MIR pass can reliably identify them even after the initially-built MIR has been simplified and renumbered.
The rest of the changes are mostly just plumbing needed to gather up the information that was collected during MIR building, and include it in the coverage metadata that we embed in the final binary.
Note that `llvm-cov show` doesn't print branch coverage information in its source views by default; that needs to be explicitly enabled with `--show-branches=count` or similar.
---
The current implementation doesn't have any support for instrumenting `if let` or let-chains. I think it's still useful without that, and adding it would be non-trivial, so I'm happy to leave that for future work.
interpret: ensure that Place is never used for a different frame
We store the address where the stack frame stores its `locals`. The idea is that even if we pop and push, or switch to a different thread with a larger number of frames, then the `locals` address will most likely change so we'll notice that problem. This is made possible by some recent changes by `@WaffleLapkin`, where we no longer use `Place` across things that change the number of stack frames.
I made these debug assertions for now, just to make sure this can't cost us any perf.
The first commit is unrelated but it's a one-line comment change so it didn't warrant a separate PR...
r? `@oli-obk`
This will allow MIR building to check whether a function is eligible for
coverage instrumentation, and avoid collecting branch coverage info if it is
not.
Distinguish between library and lang UB in assert_unsafe_precondition
As described in https://github.com/rust-lang/rust/pull/121583#issuecomment-1963168186, `assert_unsafe_precondition` now explicitly distinguishes between language UB (conditions we explicitly optimize on) and library UB (things we document you shouldn't do, and maybe some library internals assume you don't do).
`debug_assert_nounwind` was originally added to avoid the "only at runtime" aspect of `assert_unsafe_precondition`. Since then the difference between the macros has gotten muddied. This totally revamps the situation.
Now _all_ preconditions shall be checked with `assert_unsafe_precondition`. If you have a precondition that's only checkable at runtime, do a `const_eval_select` hack, as done in this PR.
r? RalfJung
Add asm goto support to `asm!`
Tracking issue: #119364
This PR implements asm-goto support, using the syntax described in "future possibilities" section of [RFC2873](https://rust-lang.github.io/rfcs/2873-inline-asm.html#asm-goto).
Currently I have only implemented the `label` part, not the `fallthrough` part (i.e. fallthrough is implicit). This doesn't reduce the expressiveness though, since you can use label-break to get arbitrary control flow or simply set a value and rely on jump threading optimisation to get the desired control flow. I can add that later if deemed necessary.
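A hedged sketch of what the `label` syntax might look like in use (x86_64, nightly-only, assuming an `asm_goto` feature gate; details are based on the RFC's future-possibilities section and may not match the final implementation exactly):

```rust
#![feature(asm_goto)]

use std::arch::asm;

fn main() {
    unsafe {
        // Jump to the label block; after the block runs, execution continues
        // after the `asm!` statement (fallthrough is implicit, as noted above).
        asm!(
            "jmp {}",
            label {
                println!("reached via the asm label");
            }
        );
    }
}
```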
r? ``@Amanieu``
cc ``@ojeda``
interpret: avoid a long-lived PlaceTy in stack frames
`PlaceTy` uses a representation that's not very stable under changes to the stack. I'd feel better if we didn't have one in the long-term machine state.
r? `@oli-obk`
Fix `async Fn` confirmation for `FnDef`/`FnPtr`/`Closure` types
Fixes three issues:
1. The code in `extract_tupled_inputs_and_output_from_async_callable` was accidentally getting the *future* type and the *output* type (returned by the future) messed up for fnptr/fndef/closure types. :/
2. We have a (class of) bug(s) in the old solver where we don't really support higher ranked built-in `Future` goals for generators. This is not possible to hit on stable code, but [can be hit with `unboxed_closures`](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=e935de7181e37e13515ad01720bcb899) (#121653).
* I'm opting not to fix that in this PR. Instead, I just instantiate placeholders when confirming `async Fn` goals.
3. Fixed a bug when generating `FnPtr` shims for `async Fn` trait goals.
r? oli-obk
Add `#[rustc_no_mir_inline]` for standard library UB checks
should help with #121110 and also with #120848
The MIR inliner cannot know whether the checks are enabled or not, so inlining them is an unnecessary compile-time pessimization when debug assertions are disabled. LLVM knows whether they are enabled or not, so it can optimize accordingly without wasting time.
r? `@saethlin`
coverage: Rename `is_closure` to `is_hole`
Extracted from #121433, since I was having second thoughts about some of the other changes bundled in that PR, but these changes are still fine.
---
When refining covspans, we don't specifically care which ones represent closures; we just want to know which ones represent "holes" that should be carved out of other spans and then discarded.
(Closures are currently the only source of hole spans, but in the future we might want to also create hole spans for nested items and inactive `#[cfg(..)]` regions.)
``@rustbot`` label +A-code-coverage
When refining covspans, we don't specifically care which ones represent
closures; we just want to know which ones represent "holes" that should be
carved out of other spans and then discarded.
(Closures are currently the only source of hole spans, but in the future we
might want to also create hole spans for nested items and inactive `#[cfg(..)]`
regions.)