Now that this code path unconditionally calls `make_branch_counters`, we might
as well make that method responsible for creating the node's counter as well,
since it needs the resulting term anyway.
There were three issues previously:
* The self argument was pinned, despite Iterator::next taking an
unpinned mutable reference.
* A resume argument was passed, despite Iterator::next not having one.
* The return value was CoroutineState<Item, ()> rather than Option<Item>.
While these things just so happened to work with the LLVM backend,
cg_clif does much stricter checks when trying to assign a value to a
place. In addition it can't handle the mismatch between the number of
arguments specified by the FnAbi and the FnSig.
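For reference, here is a hand-written sketch of what the `next` shim has to
do, based on the unstable `Coroutine` trait (the real shim is generated as
MIR, so the free function and its name are illustrative only):
```rust
#![feature(coroutines, coroutine_trait)]

use std::ops::{Coroutine, CoroutineState};
use std::pin::Pin;

// `Iterator::next` takes `&mut self`, passes no resume argument, and returns
// `Option<Item>`; the coroutine side is pinned, resumed with an argument, and
// returns `CoroutineState`. The shim has to bridge all three differences.
fn next_from_coroutine<C>(co: Pin<&mut C>) -> Option<C::Yield>
where
    C: Coroutine<(), Return = ()>,
{
    match co.resume(()) {
        CoroutineState::Yielded(item) => Some(item),
        CoroutineState::Complete(()) => None,
    }
}
```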
`on_all_children_bits` has two arguments that are unused: `tcx` and
`body`. This was not detected by the compiler because it's a recursive
function.
This commit removes them, and removes lots of other arguments and fields
that are no longer necessary.
Fix insertion of statements to be executed along return edge in inlining
Inlining creates additional statements to be executed along the return
edge: an assignment to the destination, storage end for temporaries.
Previously those statements were inserted directly into a call target,
but this is incorrect when the target has other predecessors.
Avoid the issue by creating a new dedicated block for those statements.
When the block happens to be redundant it will be removed by CFG
simplification that follows inlining.
Fixes #117355
Begin to abstract `rustc_type_ir` for rust-analyzer
This adds the "nightly" feature which is used by the compiler, and falls back to simpler implementations when that is not active.
r? `@lcnr` or `@jackh726`
Remove option_payload_ptr; redundant to offset_of
The `option_payload_ptr` intrinsic is no longer required as `offset_of` supports traversing enums (#114208). This PR removes it in order to dogfood offset_of (as suggested at https://github.com/rust-lang/rust/issues/106655#issuecomment-1790907626). However, it will not build until those changes reach beta (which I think is within the next 8 days?) so I've opened it as a draft.
Remove incorrect transformation from RemoveZsts
Partial removal of storage statements for a local is incorrect, so a decision to optimize cannot be made independently for each statement.
Avoid the issue by performing the transformation completely or not at all.
This method is trying to detect macro invocations, so that it can split a span
into two parts just after the `!` of the invocation.
Under some circumstances (probably involving nested macros), it gets confused
and produces a span that is larger than the original span, and possibly extends
outside its enclosing function and even into an adjacent file.
In extreme cases, that can result in malformed coverage mappings that cause
`llvm-cov` to fail. For now, we at least want to detect these egregious cases
and avoid them, so that coverage reports can still be produced.
generator layout: ignore fake borrows
fixes #117059
We emit fake shallow borrows in case the scrutinee place uses a `Deref` and there is a match guard. This is necessary to prevent the match guard from mutating the scrutinee: fab1054e17/compiler/rustc_mir_build/src/build/matches/mod.rs (L1250-L1265)
These fake borrows end up impacting the generator witness computation in `mir_generator_witnesses`, which causes the issue in #117059. This PR now completely ignores fake borrows during this computation. This is sound as these are always removed after analysis and the actual computation of the generator layout happens afterwards.
Only the second commit impacts behavior, and could be backported by itself.
r? types
Update the alignment checks to match rust-lang/reference#1387
Previously, we had a special case to not check `Rvalue::AddressOf` in this pass because we weren't quite sure if pointers needed to be aligned in the Place passed to it: https://github.com/rust-lang/rust/pull/112026
Since https://github.com/rust-lang/reference/pull/1387 merged, this PR updates this pass to match. The behavior of the check is nearly unchanged, except we also avoid inserting a check for creating references. Most of the changes in this PR are cleanup and new tests.
Support enum variants in offset_of!
This PR implements support for navigating through enum variants in `offset_of!`, placing the enum variant name in the second argument to `offset_of!`. The RFC placed it in the first argument, but I think it interacts better with nested field access in the second, as you can then write things like
```rust
offset_of!(Type, field.Variant.field)
```
Alternatively, a syntactic distinction could be made between variants and fields (e.g. `field::Variant.field`) but I'm not convinced this would be helpful.
[RFC 3308 # Enum Support](https://rust-lang.github.io/rfcs/3308-offset_of.html#enum-support-offset_ofsomeenumstructvariant-field_on_variant)
Tracking Issue #106655.
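For example, with a hypothetical enum (the feature gate names are assumed;
enum support in `offset_of!` is unstable):
```rust
#![feature(offset_of, offset_of_enum)] // gate names assumed

use std::mem::offset_of;

enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn main() {
    // The variant name comes first in the second argument, then the field.
    println!("Rect.h is at byte offset {}", offset_of!(Shape, Rect.h));
}
```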
Replace switch to unreachable by assume statements
`UnreachablePropagation` currently keeps some switch terminators alive in order to ensure codegen can infer the inequalities on the discriminants.
This PR proposes to encode those inequalities as `Assume` statements.
This allows MIR to be simplified further by removing some useless terminators.
Historically, these errors existed so that the coverage debug code could dump
additional information before reporting a compiler bug. That debug code was
removed by #115962, so we can now simplify these methods by making them panic
when they detect a bug.
Enable cross-crate-inlining when MIR inlining is enabled
This would make https://github.com/rust-lang/rust/issues/117355 generally less obscure, and also seems like a good idea, even if for some reason someone wants MIR opts but no codegen opts.
See through aggregates in GVN
This PR is extracted from https://github.com/rust-lang/rust/pull/111344
The first 2 commits are cleanups to avoid repeated work. I propose to stop removing useless assignments as part of this pass, and let a later `SimplifyLocals` do it. This makes tests easier to read, among other things.
The next 3 commits add a constant folding mechanism to the GVN pass, presented in https://github.com/rust-lang/rust/pull/116012. ~This pass is designed to only use global allocations, to avoid any risk of accidental modification of the stored state.~
The following commits implement opportunistic simplifications, in particular:
- projections of aggregates: `MyStruct { x: a }.x` gets replaced by `a`, works with enums too;
- projections of arrays: `[a, b][0]` becomes `a`;
- projections of repeat expressions: `[a; N][x]` becomes `a`;
- transform arrays of equal operands into a repeat rvalue.
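Surface-level illustrations of those simplifications (a sketch; GVN operates
on MIR, not on source code like this):
```rust
struct MyStruct { x: u32 }

fn examples(a: u32, b: u32, i: usize) -> (u32, u32, u32) {
    let p = MyStruct { x: a }.x; // projection of an aggregate: just `a`
    let q = [a, b][0];           // projection of an array: just `a`
    let r = [a; 8][i % 8];       // projection of a repeat expression: just `a`
    (p, q, r)
}
```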
Fixes https://github.com/rust-lang/miri/issues/3090
r? `@oli-obk`
Rollup of 5 pull requests
Successful merges:
- #115773 (tvOS simulator support on Apple Silicon for rustc)
- #117162 (Remove `cfg_match` from the prelude)
- #117311 (-Zunpretty help: add missing possible values)
- #117316 (Mark constructor of `BinaryHeap` as const fn)
- #117319 (explain why we don't inline when target features differ)
r? `@ghost`
`@rustbot` modify labels: rollup
Implement `gen` blocks in the 2024 edition
Coroutines tracking issue https://github.com/rust-lang/rust/issues/43122
`gen` block tracking issue https://github.com/rust-lang/rust/issues/117078
This PR implements `gen` blocks that implement `Iterator`. Most of the logic is shared with `async` blocks, and thus I renamed various types that were referring to `async` specifically.
An example usage of `gen` blocks is
```rust
fn foo() -> impl Iterator<Item = i32> {
    gen {
        yield 42;
        for i in 5..18 {
            if i.is_even() { continue }
            yield i * 2;
        }
    }
}
```
The limitations (to be resolved) of the implementation are listed in the tracking issue.
Only call `mir_const_qualif` if absolutely necessary
Pull the perf change out of https://github.com/rust-lang/rust/pull/113617
This should not have any impact on behaviour (if it does, we'll see an ICE)
Require target features to match exactly during inlining
In general it is not correct to inline a callee whose target features
are a subset of the caller's. Require target features to match exactly
during inlining.
The exact match could potentially be relaxed, but this would require
identifying the specific features that are allowed to differ, those that
need to match, and those that can be present in the caller but not in
the callee.
This resolves the MIR part of #116573. For other concerns with respect
to the previous implementation, also see areInlineCompatible in LLVM.
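Illustratively (hypothetical functions; the x86 feature names are just
examples, with `cfg` elided), even a caller whose feature set strictly
contains the callee's no longer qualifies:
```rust
#[target_feature(enable = "avx2")]
unsafe fn callee() {
    // body compiled assuming AVX2
}

#[target_feature(enable = "avx2,bmi2")]
unsafe fn caller() {
    // Feature sets differ (the caller's is a strict superset), so the
    // MIR inliner now refuses to inline `callee` here.
    callee();
}
```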
Separate move path tracking between borrowck and drop elaboration.
The primary goal of this PR is to skip creating a `MovePathIndex` for paths that do not need dropping in drop elaboration.
The first 2 commits are cleanups.
The next 2 commits move `move` errors from move-path builder to borrowck. Move-path builder keeps the same logic, but does not carry error information any more.
The remaining commits allow filtering `MovePathIndex` creation according to types. This is used in drop elaboration, to avoid computing dataflow for paths that do not need dropping.
Implement jump threading MIR opt
This pass is an attempt to generalize `ConstGoto` and `SeparateConstSwitch` passes into a more complete jump threading pass.
This pass is rather heavy, as it performs a truncated backwards DFS on MIR starting from each `SwitchInt` terminator. This backwards DFS remains very limited, as it only walks through `Goto` terminators.
It is built to support constants and discriminants, propagating through a very limited set of operations.
The pass successfully manages to disentangle the `Some(x?)` use case and the DFA use case. It still needs a few tests before being ready.
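The `Some(x?)` shape it can now disentangle, at the source level:
```rust
// After `?` desugars, the discriminant of `x` is switched on once to
// extract the value and again (via a chain of `Goto`s) when the freshly
// built `Option` is matched; jump threading can thread the second
// switch away.
fn plus_one(x: Option<u32>) -> Option<u32> {
    Some(x? + 1)
}
```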
coverage: Fix inconsistent handling of function signature spans
While doing some more cleanup of `spans`, I noticed a strange inconsistency in how function signatures are handled. Normally the function signature span is treated as though it were executable as part of the start of the function, but in some cases the signature span disappears entirely from coverage, for no obvious reason.
This is caused by the fact that spans created by `CoverageSpan::for_fn_sig` don't add the span to their `merged_spans` field (unlike normal statement/terminator spans). In cases where the span-processing code looks at those merged spans, it thinks the signature span is no longer visible and deletes it.
Adding the signature span to `merged_spans` resolves the inconsistency.
(Prior to #116409 this wouldn't have been possible, because there was no case in the old `CoverageStatement` enum representing a signature. Now that `merged_spans` is just a list of spans, that's no longer an obstacle.)
Implement rustc part of RFC 3127 trim-paths
This PR implements (or at least tries to) [RFC 3127 trim-paths](https://github.com/rust-lang/rust/issues/111540), the rustc part. That is, `-Zremap-path-scope` with all of its components/scopes.
`@rustbot` label: +F-trim-paths
Even though expression details are now stored in the info structure, we still
need to inject `ExpressionUsed` statements into MIR, because if one is missing
during codegen then we know that it was optimized out and we can remap all of
its associated code regions to zero.
Previously, mappings were attached to individual coverage statements in MIR.
That necessitated special handling in MIR optimizations to avoid deleting those
statements, since otherwise codegen would be unable to reassemble the original
list of mappings.
With this change, a function's list of mappings is now attached to its MIR
body, and survives intact even if individual statements are deleted by
optimizations.
Coverage codegen can now allocate arrays based on the number of
counters/expressions originally used by the instrumentor.
The existing query that inspects coverage statements is still used for
determining the number of counters passed to `llvm.instrprof.increment`. If
some high-numbered counters were removed by MIR optimizations, the instrumented
binary can potentially use less memory and disk space at runtime.
This allows coverage information to be attached to the function as a whole when
appropriate, instead of being smuggled through coverage statements in the
function's basic blocks.
As an example, this patch moves the `function_source_hash` value out of
individual `CoverageKind::Counter` statements and into the per-function info.
When synthesizing unused functions for coverage purposes, the absence of this
info is taken to indicate that a function was not eligible for coverage and
should not be synthesized.
Interacting with `basic_coverage_blocks` directly makes it easier to satisfy
the borrow checker when mutating `pending_dups` while reading other fields.
Format all the let-chains in compiler crates
Since rust-lang/rustfmt#5910 has landed, soon we will have support for formatting let-chains (as soon as rustfmt syncs and beta gets bumped).
This PR applies the changes [from master rustfmt to rust-lang/rust eagerly](https://rust-lang.zulipchat.com/#narrow/stream/122651-general/topic/out.20formatting.20of.20prs/near/374997516), so that the next beta bump does not have to deal with a 200+ file diff and can remain concerned with other things like `cfg(bootstrap)` -- #113637 was a pain to land, for example, because of let-else.
I will also add this commit to the ignore list after it has landed.
The commands that were run -- I'm not great at bash-foo, but this applies rustfmt to every compiler crate, and then reverts the two crates that should probably be formatted out-of-tree.
```
~/rustfmt $ ls -1d ~/rust/compiler/* | xargs -I@ cargo run --bin rustfmt -- `@/src/lib.rs` --config-path ~/rust --edition=2021 # format all of the compiler crates
~/rust $ git checkout HEAD -- compiler/rustc_codegen_{gcc,cranelift} # revert changes to cg-gcc and cg-clif
```
cc `@rust-lang/rustfmt`
r? `@WaffleLapkin` or `@Nilstrieb` who said they may be able to review this purely mechanical PR :>
cc `@Mark-Simulacrum` and `@petrochenkov,` who had some thoughts on the order of operations with big formatting changes in https://github.com/rust-lang/rust/pull/95262#issue-1178993801. I think the situation has changed since then, given that let-chains support exists on master rustfmt now, and I'm fairly confident that this formatting PR should land even if *bootstrap* rustfmt doesn't yet format let-chains in order to lessen the burden of the next beta bump.
const-eval: make misalignment a hard error
It's been a future-incompat error (showing up in cargo's reports) since https://github.com/rust-lang/rust/pull/104616, Rust 1.68, released in March. That should be long enough.
The question for the lang team is simply -- should we move ahead with this, making const-eval alignment failures a hard error? (It turns out some of them accidentally already were hard errors since #104616. But not all so this is still a breaking change. Crater found no regression.)
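A minimal sketch of the kind of code affected (assuming const raw-pointer
reads, which are stable for `*const`; previously this surfaced as the
future-incompat lint):
```rust
const MISALIGNED: u32 = unsafe {
    let bytes = [0u8; 8];
    // `bytes` has alignment 1, so reading a `u32` (alignment 4) through
    // this pointer is misaligned: const-eval now reports a hard error.
    let ptr = bytes.as_ptr().add(1) as *const u32;
    *ptr
};
```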
Having to keep passing in a graph reference was a holdover from when the graph
was partly mutated during traversal. As of #114354 that is no longer necessary,
so we can simplify the traversal code by storing a graph reference as a field
in `TraverseCoverageGraphWithLoops`.
The previous code was storing the worklist in a vector, and then selectively
adding items to the start or end of the vector. That's a perfect use-case for a
double-ended queue.
This change also reveals that the existing code was a bit confused about which
end of the worklist is the front or back. For now, items are always removed
from the front of the queue (instead of the back), and code that adds items to
the queue has been flipped, to preserve the existing behaviour.
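In outline (a sketch with simplified types, not the actual traversal code):
```rust
use std::collections::{HashSet, VecDeque};

fn traverse(start: usize, successors: impl Fn(usize) -> Vec<usize>) {
    let mut visited = HashSet::new();
    let mut worklist = VecDeque::new();
    worklist.push_back(start);
    // Items are always removed from the front; the code that enqueues
    // items chooses the front or the back depending on loop structure.
    while let Some(node) = worklist.pop_front() {
        if !visited.insert(node) {
            continue;
        }
        for succ in successors(node) {
            worklist.push_back(succ);
        }
    }
}
```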
Do not check for impossible predicates in const-prop lint.
The enclosing query already checks for them, and replaces the body with a single `unreachable` if they are indeed impossible.
Also consider call and yield as MIR SSA.
The SSA analysis on MIR only considered `Assign` statements as defining an SSA local.
This PR adds assignments performed by `Call` and `Yield` terminators to that category.
This mainly allows CopyProp to be performed on a call's return place.
The only subtlety is in the dominance property: the assignment is only complete at the beginning of the target block.
coverage: Unbox and simplify `bcb_filtered_successors`
This is a small cleanup in the coverage instrumentor's graph-building code.
---
This function already has access to the MIR body, so instead of taking a reference to a terminator, it's simpler and easier to pass in a basic block index.
There is no need to box the returned iterator if we instead add appropriate lifetime captures, and make `short_circuit_preorder` generic over the type of iterator it expects.
We can also greatly simplify the function's implementation by observing that the only difference between its two cases is whether we take all of a BB's successors, or just the first one.
---
`@rustbot` label +A-code-coverage
This enum was mainly needed to track the precise origin of a span in MIR, for
debug printing purposes. Since the old debug code was removed in #115962, we
can replace it with just the span itself.
Generalize small dominators optimization
* Use small dominators optimization from 640ede7b0a more generally.
* Merge `DefLocation` and `LocationExtended` since they serve the same purpose.
coverage: Allow each coverage statement to have multiple code regions
The original implementation of coverage instrumentation was built around the assumption that a coverage counter/expression would be associated with *up to one* code region. When it was discovered that *multiple* regions would sometimes need to share a counter, a workaround was found: for the remaining regions, the instrumentor would create a fresh expression that adds zero to the existing counter/expression.
That got the job done, but resulted in some awkward code, and produces unnecessarily complicated coverage maps in the final binary.
---
This PR removes that tension by changing `StatementKind::Coverage`'s code region field from `Option<CodeRegion>` to `Vec<CodeRegion>`.
The changes on the codegen side are fairly straightforward. As long as each `CoverageKind::Counter` only injects one `llvm.instrprof.increment`, the rest of coverage codegen is happy to handle multiple regions mapped to the same counter/expression, with only minor option-to-vec adjustments.
On the instrumentor/mir-transform side, we can get rid of the code that creates extra (x + 0) expressions. Instead we gather all of the code regions associated with a single BCB, and inject them all into one coverage statement.
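The shape of the MIR-side change, roughly (stub types stand in for the
rustc-internal ones):
```rust
struct CoverageKind;
struct CodeRegion;

// Before: `code_region: Option<CodeRegion>` -- at most one region per
// coverage statement. After: all of a BCB's regions share one statement.
struct Coverage {
    kind: CoverageKind,
    code_regions: Vec<CodeRegion>,
}
```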
---
There are several patches here, but they can be divided into three phases:
- Preparatory work
- Actually switching over to multiple regions per coverage statement
- Cleaning up
So viewing the patches individually may be easier.
When these methods were originally written, I wasn't aware that
`newtype_index!` already supports addition with ordinary numbers, without
needing to unwrap and re-wrap.
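In other words (illustrative; the exact method names are approximate):
```rust
// Before: unwrap, add, and re-wrap the underlying integer.
//     let next = CounterId::from_u32(id.as_u32() + 1);
// After: `newtype_index!` types implement addition with plain numbers.
//     let next = id + 1;
```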
If a BCB has more than one code region, those extra regions can now all be
stored in the same coverage statement, instead of being stored in additional
statements.
The concrete type `CoverageSpan` is no longer used outside of the `spans`
module.
This is a separate patch to avoid noise in the preceding patch that actually
encapsulates coverage spans.
By encapsulating the coverage spans in a struct, we can change the internal
representation without disturbing existing call sites. This will be useful for
grouping coverage spans by BCB.
This patch includes some changes that were originally in #115912, which avoid
the need for a particular test to deal with coverage spans at all.
(Comments/logs referring to `CoverageSpan` are updated in a subsequent patch.)
Reveal opaque types before drop elaboration
fixes https://github.com/rust-lang/rust/issues/113594
r? `@cjgillot`
cc `@JakobDegen`
This pass was introduced in https://github.com/rust-lang/rust/pull/110714
I moved it before drop elaboration (which only cares about the hidden types of things, not the opaque TAIT or RPIT type) and set it to run unconditionally (instead of depending on the optimization level and whether the inliner is active)
Implement a global value numbering MIR optimization
The aim of this pass is to avoid repeated computations by reusing past assignments. It is based on an analysis of SSA locals, in order to perform a restricted form of common subexpression elimination.
As an added benefit, this pass allows for some simplifications by combining assignments. For instance, this pass could be able to see through projections of aggregates to directly reuse the aggregate field (not in this PR).
We handle references by assigning a different "provenance" index to each `Ref`/`AddressOf` rvalue. This ensures that we do not spuriously merge borrows that should not be merged. Meanwhile, we consider all the derefs of an immutable reference to a freeze type to give the same value:
```rust
_a = *_b // _b is &Freeze
_c = *_b // replaced by _c = _a
```
Skip MIR pass `UnreachablePropagation` when coverage is enabled
When coverage instrumentation and MIR opts are both enabled, coverage relies on two assumptions:
- MIR opts that would delete `StatementKind::Coverage` statements instead move them into bb0 and change them to `CoverageKind::Unreachable`.
- MIR opts won't delete all `CoverageKind::Counter` statements from an instrumented function.
Most MIR opts naturally satisfy the second assumption, because they won't remove coverage statements from bb0, but `UnreachablePropagation` can do so if it finds that bb0 is unreachable. If this happens, LLVM thinks the function isn't instrumented, and it vanishes from coverage reports.
A proper solution won't be possible until after per-function coverage info lands in #116046, but for now we can avoid the problem by turning off this particular pass when coverage instrumentation is enabled.
---
cc `@cjgillot` since I found this while investigating coverage problems encountered by #113970
`@rustbot` label +A-code-coverage +A-mir-opt
subst -> instantiate
continues #110793, there are still quite a few uses of `subst` and `substitute`, but changing them all in the same PR was a bit too much, so I've stopped here for now.
interpret: more consistently use ImmTy in operators and casts
The diff in src/tools/miri/src/shims/x86/sse2.rs should hopefully suffice to explain why this is nicer. :)
rename mir::Constant -> mir::ConstOperand, mir::ConstKind -> mir::Const
Also, be more consistent with the `to/eval_bits` methods... we had some that take a type and some that take a size, and then sometimes the one that takes a type is called `bits_for_ty`.
Turns out that `ty::Const`/`mir::ConstKind` carry their type with them, so we don't need to even pass the type to those `eval_bits` functions at all.
However this is not properly consistent yet: in `ty` we have most of the methods on `ty::Const`, but in `mir` we have them on `mir::ConstKind`. And indeed those two types are the ones that correspond to each other. So `mir::ConstantKind` should actually be renamed to `mir::Const`. But what to do with `mir::Constant`? It carries around a span, that's really more like a constant operand that appears as a MIR operand... it's more suited for `syntax.rs` than `consts.rs`, but the bigger question is, which name should it get if we want to align the `mir` and `ty` types? `ConstOperand`? `ConstOp`? `Literal`? It's not a literal but it has a field called `literal` so it would at least be consistently wrong-ish...
``@oli-obk`` any ideas?
coverage: Fix an unstable-sort inconsistency in coverage spans
This code was calling `sort_unstable_by`, but failed to impose a total order on the initial spans. That resulted in unpredictable handling of closure spans, producing inconsistencies in the coverage maps and in user-visible coverage reports.
This PR fixes the problem by always sorting closure spans before otherwise-identical non-closure spans, and also switches to a stable sort in case the ordering is still not total.
---
In addition to the fix itself, this PR also contains a cleanup to the comparison function that I was working on when I discovered the bug.
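The fix, in outline (a sketch with assumed field names, not the actual
comparator):
```rust
struct CovSpan {
    lo: u32,
    hi: u32,
    is_closure: bool,
}

fn sort_spans(spans: &mut [CovSpan]) {
    // Stable sort plus an explicit tie-break: closure spans sort before
    // otherwise-identical non-closure spans, making the order total.
    spans.sort_by(|a, b| {
        a.lo.cmp(&b.lo)
            .then_with(|| b.hi.cmp(&a.hi))
            .then_with(|| b.is_closure.cmp(&a.is_closure))
    });
}
```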
treat host effect params as erased in codegen
This fixes the changes brought to codegen tests when effect params are added to libcore, by not attempting to monomorphize functions that get the host param by being `const fn`.
r? `@oli-obk`
Rollup of 6 pull requests
Successful merges:
- #115736 (Remove `verbose_generic_activity_with_arg`)
- #115771 (cleanup leftovers of const_err lint)
- #115798 (add helper method for finding the one non-1-ZST field)
- #115812 (Merge settings.css into rustdoc.css)
- #115815 (fix: return early when has tainted in mir pass)
- #115816 (Disabled socketpair for Vita)
r? `@ghost`
`@rustbot` modify labels: rollup
Remove `verbose_generic_activity_with_arg`
This removes `verbose_generic_activity_with_arg` and changes users to `generic_activity_with_arg`. This keeps the output of `-Z time` readable while these repeated events are still available with the self profiling mechanism.
fix: return early when has tainted in mir-lint
Fixes #115203
`a[..]` is of indeterminate size; an error has already been reported during borrow check, therefore we skip the MIR lint process.
coverage: Simplify the `coverageinfo` query
The `coverageinfo` query walks through a `mir::Body`'s statements to find the total number of coverage counter IDs and coverage expression IDs that have been used, as this information is needed by coverage codegen.
This PR makes 3 nice simplifications to that query:
- Extract a common iterator over coverage statements, shared by both coverage-related queries
- Simplify the query's visitor from two passes to just one pass
- Explicitly track the highest seen IDs in the visitor, and only convert to a count right at the end
I also updated some related comments. Some had been invalidated by these changes, while others had already been invalidated by previous coverage changes.
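The last point amounts to something like this (sketch):
```rust
// Track only the highest ID seen; IDs start at 0, so the number of IDs
// in use is `max + 1`, computed once at the end instead of per statement.
fn num_ids_in_use(ids: impl Iterator<Item = u32>) -> u32 {
    ids.max().map_or(0, |max| max + 1)
}
```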
Don't report any errors in `lower_intrinsics`.
Intrinsics should have been type checked earlier.
This is part of moving all mir-opt diagnostics early enough so that they are reliably emitted even in check builds: https://github.com/rust-lang/rust/issues/49292#issuecomment-1692212095
Both of the coverage queries can now use this one helper function to iterate
over all of the `mir::Coverage` payloads in the statements of a `mir::Body`.
Represent MIR composite debuginfo as projections instead of aggregates
Composite debuginfo for MIR is currently represented as
```
debug name => Type { projection1 => place1, projection2 => place2 };
```
i.e. a single `VarDebugInfo` object with that name, and its value a `VarDebugInfoContents::Composite`.
This PR proposes to reverse the representation to be
```
debug name.projection1 => place1;
debug name.projection2 => place2;
```
i.e. multiple `VarDebugInfo` objects, each with their own projection.
This simplifies the handling of composite debuginfo by the compiler by avoiding weird nesting.
Based on https://github.com/rust-lang/rust/pull/115139
Use relative positions inside a SourceFile.
This allows us to remove the normalization of start positions for hashing, and to simplify allocation of the global address space.
cc `@Zoxc`
Rollup of 5 pull requests
Successful merges:
- #115353 (Emit error instead of ICE when optimized MIR is missing)
- #115488 (Take `&mut Results` in `ResultsVisitor`)
- #115492 (Allow `large_assignments` for Box/Arc/Rc initialization)
- #115519 (Don't ICE on associated type projection without feature gate in new solver)
- #115534 (Expose more information with DefId in smir)
r? `@ghost`
`@rustbot` modify labels: rollup
Fix inlining with -Zalways-encode-mir
Only inline functions that are considered eligible for inlining
by the reachability pass.
This constraint was previously indirectly enforced by only exporting MIR
of eligible functions, but that approach doesn't work with
-Zalways-encode-mir enabled.
Don't do intra-pass validation on MIR shims
Fixes #114375
In the test that was committed, we end up generating the drop shim for `struct Foo` that looks like:
```
fn std::ptr::drop_in_place(_1: *mut Foo) -> () {
    let mut _0: ();

    bb0: {
        goto -> bb5;
    }

    bb1: {
        return;
    }

    bb2 (cleanup): {
        resume;
    }

    bb3: {
        goto -> bb1;
    }

    bb4 (cleanup): {
        drop(((*_1).0: foo::WrapperWithDrop<()>)) -> [return: bb2, unwind terminate];
    }

    bb5: {
        drop(((*_1).0: foo::WrapperWithDrop<()>)) -> [return: bb3, unwind: bb2];
    }
}
```
In `bb4` and `bb5`, we assert that `(*_1).0` has type `WrapperWithDrop<()>`. However, in a user-facing param env, the type is actually `WrapperWithDrop<Tait>`. These types are not equal in a user-facing param-env (and can't be made equal even if we use `DefiningAnchor::Bubble`, since it's a non-local TAIT).
coverage: Give the instrumentor its own counter type, separate from MIR
Within the MIR representation of coverage data, `CoverageKind` is an important part of `StatementKind::Coverage`, but the `InstrumentCoverage` pass also uses it heavily as an internal data structure. This means that any change to `CoverageKind` also needs to update all of the internal parts of `InstrumentCoverage` that manipulate it directly, making the MIR representation difficult to modify.
---
This change fixes that by giving the instrumentor its own `BcbCounter` type for internal use, which is then converted to a `CoverageKind` when injecting coverage information into MIR.
The main change is mostly mechanical, because the initial `BcbCounter` is drop-in compatible with `CoverageKind`, minus the unnecessary `CoverageKind::Unreachable` variant.
I've then removed the `function_source_hash` field from `BcbCounter::Counter`, as a small example of how the two types can now usefully differ from each other. Every counter in a MIR-level function should have the same source hash, so we can supply the hash during the conversion to `CoverageKind::Counter` instead.
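Roughly, the internal type ends up looking like this (a sketch; stub ID types
stand in for the rustc-internal newtype indices):
```rust
struct CounterId(u32);
struct ExpressionId(u32);
enum Op { Add, Subtract }
enum Operand { Zero, Counter(CounterId), Expression(ExpressionId) }

// Like `CoverageKind`, but with no `Unreachable` variant and no
// per-counter `function_source_hash` field.
enum BcbCounter {
    Counter { id: CounterId },
    Expression { id: ExpressionId, lhs: Operand, op: Op, rhs: Operand },
}
```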
---
*Background:* BCB stands for “basic coverage block”, which is a node in the simplified control-flow graph used by coverage instrumentation. The instrumentor pass uses the function's actual MIR control-flow graph to build a simplified BCB graph, then assigns coverage counters and counter expressions to various nodes/edges in that simplified graph, and then finally injects corresponding coverage information into the underlying MIR.
Add MIR validation for unwind out from nounwind functions + fixes to make validation pass
`@Nilstrieb` This is the MIR validation you asked for in https://github.com/rust-lang/rust/pull/112403#discussion_r1222739722.
Two passes need to be fixed to get the validation to pass:
* `RemoveNoopLandingPads` currently unconditionally introduces a resume block (even when there is none to begin with!); changed to not do that.
* The generator state transform introduces an `assert` which may unwind, and its drop elaboration also introduces many new `UnwindAction`s, so in this case run `AbortUnwindingCalls` after the transformation.
I believe this PR should also fix Rust-for-Linux/linux#1016, cc `@ojeda`
r? `@Nilstrieb`
This shows one small benefit of separating `BcbCounter` from `CoverageKind`.
The function source hash will be the same for all counters within a function,
so instead of passing it through `CoverageCounters` and storing it in every
counter, we can just supply it during the final conversion to `CoverageKind`.
Otherwise the file name generated for generator_drop would become
`core.ptr-drop_in_place.[generator@<FILEPATH>_<NUMBERS>].generator_drop.0.mir`
instead of `main-{closure#0}.generator_drop.0.mir`, which breaks a mir-opt
test.
Normalize before checking if local is freeze in `deduced_param_attrs`
Not normalizing the local type eagerly results in possibly exponential amounts of normalization happening downstream in `is_freeze_raw`.
Fixes#113372
Storing coverage counter information in `CoverageCounters` has a few advantages
over storing it directly inside BCB graph nodes:
- The graph doesn't need to be mutable when making the counters, making it
easier to see that the graph itself is not modified during this step.
- All of the counter data is clearly visible in one place.
- It becomes possible to use a representation that doesn't correspond 1:1 to
graph nodes, e.g. storing all the edge counters in a single hashmap instead of
several.
feat: `riscv-interrupt-{m,s}` calling conventions
Similar to prior support added for the msp430, avr, and x86 targets, this change implements the rough equivalent of clang's [`__attribute__((interrupt))`][clang-attr] for riscv targets, enabling e.g.
```rust
static mut CNT: usize = 0;
pub extern "riscv-interrupt-m" fn isr_m() {
    unsafe {
        CNT += 1;
    }
}
```
to produce highly effective assembly like:
```asm
pub extern "riscv-interrupt-m" fn isr_m() {
420003a0: 1141 addi sp,sp,-16
unsafe {
CNT += 1;
420003a2: c62a sw a0,12(sp)
420003a4: c42e sw a1,8(sp)
420003a6: 3fc80537 lui a0,0x3fc80
420003aa: 63c52583 lw a1,1596(a0) # 3fc8063c <_ZN12esp_riscv_rt3CNT17hcec3e3a214887d53E.0>
420003ae: 0585 addi a1,a1,1
420003b0: 62b52e23 sw a1,1596(a0)
}
}
420003b4: 4532 lw a0,12(sp)
420003b6: 45a2 lw a1,8(sp)
420003b8: 0141 addi sp,sp,16
420003ba: 30200073 mret
```
(disassembly via `riscv64-unknown-elf-objdump -C -S --disassemble ./esp32c3-hal/target/riscv32imc-unknown-none-elf/release/examples/gpio_interrupt`)
This outcome is superior to hand-coded interrupt routines which, lacking visibility into any non-assembly body of the interrupt handler, have to be very conservative and save the [entire CPU state to the stack frame][full-frame-save]. By instead asking LLVM to only save the registers that it uses, we defer the decision to the tool with the best context: it can more accurately account for the cost of spills if it knows that every additional register used is already at the cost of an implicit spill.
At the LLVM level, this is apparently [implemented by] marking every register as "[callee-save]," matching the semantics of an interrupt handler nicely (it has to leave the CPU state just as it found it after its `{m|s}ret`).
This approach is not suitable for every interrupt handler, as it makes no attempt to e.g. save the state in a user-accessible stack frame. For a full discussion of those challenges and tradeoffs, please refer to [the interrupt calling conventions RFC][rfc].
Inside rustc, this implementation differs from prior art because LLVM does not expose the "all-saved" function flavor as a calling convention directly, instead preferring to use an attribute that allows for differentiating between "machine-mode" and "supervisor-mode" interrupts.
Finally, some effort has been made to guide those who may not yet be aware of the differences between machine-mode and supervisor-mode interrupts as to why no `riscv-interrupt` calling convention is exposed through rustc, and similarly for why `riscv-interrupt-u` makes no appearance (as it would complicate future LLVM upgrades).
[clang-attr]: https://clang.llvm.org/docs/AttributeReference.html#interrupt-risc-v
[full-frame-save]: 9281af2ecf/src/lib.rs (L440-L469)
[implemented by]: b7fb2a3fec/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp (L61-L67)
[callee-save]: 973f1fe7a8/llvm/lib/Target/RISCV/RISCVCallingConv.td (L30-L37)
[rfc]: https://github.com/rust-lang/rfcs/pull/3246
Make the module `inner` and the function `run_analysis_to_runtime_passes` in
`rustc_mir_transform` public, to allow re-implementing the query from the
Rust compiler interface.
Make `unconditional_recursion` warning detect recursive drops
Closes #55388
Also closes #50049 unless we want to keep it for the second example which this PR does not solve, but I think it is better to track that work in #57965.
r? `@oli-obk` since you are the mentor for #55388
Unresolved questions:
- [x] There are two false positives that must be fixed before merging (see diff). I suspect the best way to solve them is to perform analysis after drop elaboration instead of before, as now, but I have not explored that any further yet. Could that be an option? **Answer:** Yes, that solved the problem.
`@rustbot` label +T-compiler +C-enhancement +A-lint
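The basic pattern the lint can now catch (illustrative):
```rust
struct Guard;

impl Drop for Guard {
    fn drop(&mut self) {
        // `_inner` is dropped at the end of this scope, which calls this
        // same `drop` again, recursing unconditionally.
        let _inner = Guard;
    }
}
```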
Steal MIR for CTFE when possible.
Some bodies, like constants, have CTFE MIR but no optimized MIR.
In that case, have `mir_for_ctfe` steal the MIR instead of cloning it.
Add documentation to has_deref
Documentation of `has_deref` needed some polish to be clearer about where it should be used and what its purpose is.
cc https://github.com/rust-lang/rust/issues/114401
r? `@RalfJung`
Do not run ConstProp on mir_for_ctfe.
This pass does not seem to be useful any more. The const-prop lints are now run by `tcx.mir_drops_elaborated_and_const_checked`, and the const-prop opt should never emit any diagnostic.
Forbid old-style `simd_shuffleN` intrinsics
Don't merge before https://github.com/rust-lang/packed_simd/pull/350 has made its way to crates.io
We used to support specifying the lane length of simd_shuffle ops by attaching the lane length to the name of the intrinsic (like `simd_shuffle16`). After this PR, you cannot do that anymore, and need to instead either rely on inference of the `idx` argument type or specify it as `simd_shuffle::<_, [u32; 16], _>`.
r? `@workingjubilee`
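Illustratively (unstable features; a sketch of the new spelling rather than a
definitive example):
```rust
#![feature(platform_intrinsics, repr_simd)]

#[repr(simd)]
#[derive(Copy, Clone)]
struct U32x4(u32, u32, u32, u32);

extern "platform-intrinsic" {
    // The generic form is the only one accepted now; `simd_shuffle16`
    // and friends are rejected.
    fn simd_shuffle<T, I, U>(a: T, b: T, idx: I) -> U;
}

const IDX: [u32; 4] = [0, 4, 1, 5];

fn interleave_lo(a: U32x4, b: U32x4) -> U32x4 {
    // The lane count comes from the type of `idx`; it can also be given
    // explicitly as `simd_shuffle::<_, [u32; 4], _>(a, b, IDX)`.
    unsafe { simd_shuffle(a, b, IDX) }
}
```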
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.
As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.
This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.