Add decision_depth field to TVBitmapUpdate/CondBitmapUpdate statements
Add decision_depth field to BcbMappingKind variants MCDCBranch and MCDCDecision
Add decision_depth field to MCDCBranchSpan and MCDCDecisionSpan
Rollup of 9 pull requests
Successful merges:
- #121546 (Error out of layout calculation if a non-last struct field is unsized)
- #122448 (Port hir-tree run-make test to ui test)
- #123212 (CFI: Change type transformation to use TypeFolder)
- #123218 (Add test for getting parent HIR for synthetic HIR node)
- #123324 (match lowering: make false edges more precise)
- #123389 (Avoid panicking unnecessarily on startup)
- #123397 (Fix diagnostic for qualifier in extern block)
- #123431 (Stabilize `proc_macro_byte_character` and `proc_macro_c_str_literals`)
- #123439 (coverage: Remove useless constants)
r? `@ghost`
`@rustbot` modify labels: rollup
Use the `Align` type when parsing alignment attributes
Use the `Align` type in `rustc_attr::parse_alignment`, removing the need to call `Align::from_bytes(...).unwrap()` later in the compilation process.
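For illustration only, here is a standalone sketch of the "validate once, carry a typed value" idea (`AlignSketch` and this `parse_alignment` are placeholders, not rustc's actual API):

```rust
// Minimal sketch: validate the alignment once, up front, and hand later code a
// typed value instead of a raw integer that must be re-checked and unwrapped.
#[derive(Debug, Clone, Copy)]
struct AlignSketch {
    pow2: u8,
}

impl AlignSketch {
    fn from_bytes(align: u64) -> Result<Self, String> {
        if align.is_power_of_two() {
            Ok(AlignSketch { pow2: align.trailing_zeros() as u8 })
        } else {
            Err(format!("alignment `{align}` is not a power of two"))
        }
    }

    fn bytes(self) -> u64 {
        1u64 << self.pow2
    }
}

// The attribute parser reports bad values here, so nothing downstream needs to
// call `from_bytes(...).unwrap()` on an already-validated number.
fn parse_alignment(value: u64) -> Result<AlignSketch, String> {
    AlignSketch::from_bytes(value)
}

fn main() {
    println!("{:?}", parse_alignment(16).map(AlignSketch::bytes)); // Ok(16)
    println!("{:?}", parse_alignment(12)); // Err(...)
}
```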
If a function was instrumented for coverage, but all of its coverage statements
have been removed by later MIR transforms, it should be treated as "unused"
even if the compiler generates an unreachable stub for it.
The payload of coverage statements was historically a structure with several
fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single, relatively small enum, we can replace
`Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid
accidentally bloating it in the future.
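As a standalone sketch of the kind of size assertion being added (the enum below is a placeholder, not the real `StatementKind`; in-tree, rustc has a `static_assert_size!` helper for this):

```rust
// Minimal sketch: a compile-time check that fails the build if the statement
// payload ever grows beyond the expected size.
#[allow(dead_code)] // only used for the size assertion in this sketch
enum CoverageKindSketch {
    CounterIncrement { id: u32 },
    ExpressionUsed { id: u32 },
}

// Constant evaluation panics (and the build fails) if a future change bloats
// this sketch enum past 8 bytes.
const _: () = assert!(std::mem::size_of::<CoverageKindSketch>() == 8);

fn main() {
    println!("payload size: {} bytes", std::mem::size_of::<CoverageKindSketch>());
}
```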
Some of the marker statements used by coverage are added during MIR building
for use by the InstrumentCoverage pass (during analysis), and are not needed
afterwards.
When #118865 started enforcing the `rustc::potential_query_instability` lint in
`rustc_codegen_llvm`, it added an exemption for this site, arguing that the
entries are only used to create a list of filenames that is later sorted.
However, the list of entries also gets traversed when creating the function
coverage records in LLVM IR, which may be sensitive to hash-based ordering.
This patch therefore changes `function_coverage_map` to use `FxIndexMap`, which
should avoid hash-based instability by iterating in insertion order.
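For illustration, a standalone sketch of why insertion-order iteration sidesteps the instability (using the `indexmap` crate, which underlies `FxIndexMap`; the map contents here are made up):

```rust
// IndexMap iterates in insertion order, so anything emitted while walking the
// map is deterministic across compilations, unlike a hash-ordered map.
use indexmap::IndexMap;

fn main() {
    let mut function_coverage_map: IndexMap<&str, Vec<u32>> = IndexMap::new();
    function_coverage_map.insert("instance_b", vec![1, 2]);
    function_coverage_map.insert("instance_a", vec![3]);

    // Visits entries in the order they were inserted: instance_b, instance_a.
    for (instance, counter_ids) in &function_coverage_map {
        println!("{instance}: {counter_ids:?}");
    }
}
```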
Coverage marker statements should have no effect on codegen, but in some cases
they could have the side-effect of creating a `func_coverage` entry for their
enclosing function. That can lead to an ICE for functions that don't actually
have any coverage spans.
Stop allowing `rustc::potential_query_instability` on all of
`rustc_codegen_llvm` and instead allow it on a case-by-case basis. In
this case, both instances are safe to allow.
There are cases where coverage instrumentation wants to show a span for some
syntax element, but there is no MIR node that naturally carries that span, so
the instrumentor can't see it.
MIR building can now use this new kind of coverage statement to deliberately
include those spans in MIR, attached to a dummy statement that has no other
effect.
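Purely as an illustration of the shape of the idea (hypothetical names, not the actual MIR types):

```rust
// A statement payload whose only job is to carry a span that coverage wants to
// see; codegen treats it as a no-op.
#[derive(Debug, Clone, Copy)]
struct SpanSketch {
    lo: u32,
    hi: u32,
}

#[derive(Debug)]
enum StatementSketch {
    /// An ordinary statement with runtime semantics.
    Assign { place: u32, value: u32 },
    /// A dummy marker: no effect at runtime, but its span is visible to the
    /// coverage instrumentor.
    CoverageSpanMarker { span: SpanSketch },
}

fn main() {
    let block = [
        StatementSketch::CoverageSpanMarker { span: SpanSketch { lo: 120, hi: 158 } },
        StatementSketch::Assign { place: 0, value: 1 },
    ];
    for statement in &block {
        println!("{statement:?}");
    }
}
```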
By default, `newtype_index!` types get a default `Encodable`/`Decodable`
impl. You can opt out of this with `custom_encodable`. Opting out is the
opposite of how Rust normally works with autogenerated (derived) impls.
This commit inverts the behaviour, replacing `custom_encodable` with
`encodable` which opts into the default `Encodable`/`Decodable` impl.
Only 23 of the 59 `newtype_index!` occurrences need `encodable`.
Even better, there were eight crates with a dependency on
`rustc_serialize` just from unused default `Encodable`/`Decodable`
impls. This commit removes that dependency from those eight crates.
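For illustration only, an in-tree invocation after this change might look roughly like the following (the `#[encodable]` spelling is taken from the description above; `MyIndex` is a placeholder, and this snippet only compiles inside the compiler tree where the `rustc_index` macro is available):

```rust
rustc_index::newtype_index! {
    // Opting in with `#[encodable]` generates the default Encodable/Decodable
    // impls; leaving the attribute off now generates none.
    #[encodable]
    pub struct MyIndex {}
}
```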
Most coverage metadata is encoded into two sections in the final executable.
The `__llvm_covmap` section mostly just contains a list of filenames, while the
`__llvm_covfun` section contains encoded coverage maps for each instrumented
function.
The catch is that each per-function record also needs to contain a hash of the
filenames list that it refers to. Historically this was handled by assembling
most of the per-function data into a temporary list, then assembling the
filenames buffer, then using the filenames hash to emit the per-function data,
and then finally emitting the filenames table itself.
However, now that we build the filenames table up-front (via a separate
traversal of the per-function data), we can hash and emit that part first, and
then emit each of the per-function records immediately after building. This
removes the awkwardness of having to temporarily store nearly-complete
per-function records.
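A standalone sketch of the reordered emission (placeholder types and a stand-in hash; the real code encodes and hashes these buffers quite differently):

```rust
// Build and hash the filenames buffer first; each per-function record can then
// be emitted as soon as it is ready, instead of being parked in a temporary
// list until the filenames hash is known.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct FunctionRecordSketch {
    name: String,
    filenames_hash: u64,
}

fn hash_bytes(bytes: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    bytes.hash(&mut hasher);
    hasher.finish()
}

fn emit_filenames_section(buffer: &str) {
    println!("__llvm_covmap: {buffer:?}");
}

fn emit_function_record(record: &FunctionRecordSketch) {
    println!("__llvm_covfun: {} (filenames hash {:#x})", record.name, record.filenames_hash);
}

fn main() {
    // 1. Assemble, hash, and emit the filenames table up front.
    let filenames = ["lib.rs", "foo.rs"].join("\0");
    let filenames_hash = hash_bytes(filenames.as_bytes());
    emit_filenames_section(&filenames);

    // 2. Emit each per-function record immediately after building it.
    for name in ["f", "g"] {
        let record = FunctionRecordSketch { name: name.to_string(), filenames_hash };
        emit_function_record(&record);
    }
}
```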
The main change here is that `VirtualFileMapping` now uses an internal hashmap
to de-duplicate incoming global file IDs. That removes the need for
`encode_mappings_for_function` to re-sort its mappings by filename in order to
de-duplicate them.
(We still de-duplicate runs of identical filenames to save work, but this is
not load-bearing for correctness, so a sort is not necessary.)
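As a standalone sketch of the de-duplication (placeholder names; the real type lives in coverage codegen):

```rust
// Map each incoming global file ID to a small local ID, allocating local IDs
// in first-seen order so no sorting is required.
use std::collections::HashMap;

#[derive(Default)]
struct VirtualFileMappingSketch {
    local_to_global: Vec<u32>,
    global_to_local: HashMap<u32, usize>,
}

impl VirtualFileMappingSketch {
    /// Returns the local ID for `global_file_id`, allocating one on first use.
    fn local_id_for_global(&mut self, global_file_id: u32) -> usize {
        if let Some(&local) = self.global_to_local.get(&global_file_id) {
            return local;
        }
        let local = self.local_to_global.len();
        self.local_to_global.push(global_file_id);
        self.global_to_local.insert(global_file_id, local);
        local
    }
}

fn main() {
    let mut mapping = VirtualFileMappingSketch::default();
    // Repeated and out-of-order global IDs are de-duplicated on the fly.
    for global in [7, 3, 7, 9, 3] {
        println!("global {global} -> local {}", mapping.local_id_for_global(global));
    }
}
```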
The combined `get_expressions_and_counter_regions` method was an artifact of
having to prepare the expressions and mappings at the same time, to avoid
ownership/lifetime problems with temporary data used by both.
Now that we have an explicit transition from `FunctionCoverageCollector` to the
final `FunctionCoverage`, we can prepare any shared data during that step and
store it in the final struct.
This gives us a clearly-defined place to run code after the instance's MIR has
been traversed by codegen, but before we emit its `__llvm_covfun` record.
This query has a name that sounds general-purpose, but in fact it has
coverage-specific semantics, and (fortunately) is only used by coverage code.
Because it is only ever called once (from one designated CGU), it doesn't need
to be a query, and we can change it to a regular function instead.
Implement rustc part of RFC 3127 trim-paths
This PR implements (or at least tries to) [RFC 3127 trim-paths](https://github.com/rust-lang/rust/issues/111540), the rustc part. That is, `-Zremap-path-scope` with all of its components/scopes.
`@rustbot` label: +F-trim-paths
Even though expression details are now stored in the info structure, we still
need to inject `ExpressionUsed` statements into MIR, because if one is missing
during codegen then we know that it was optimized out and we can remap all of
its associated code regions to zero.
Previously, mappings were attached to individual coverage statements in MIR.
That necessitated special handling in MIR optimizations to avoid deleting those
statements, since otherwise codegen would be unable to reassemble the original
list of mappings.
With this change, a function's list of mappings is now attached to its MIR
body, and survives intact even if individual statements are deleted by
optimizations.
Instead of modifying the accumulated expressions in-place, we now build a set
of expressions that are known to be zero, and then consult that set on the fly
when converting the expression data for FFI.
This will be necessary when moving mappings and expression data into function
coverage info, which can't be mutated during codegen.
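For illustration, a minimal sketch of the lookup (placeholder types; the real operands and FFI types are different):

```rust
// Instead of rewriting the stored expression list in place, keep a separate
// set of expression IDs known to be zero, and consult it while lowering each
// operand for FFI.
use std::collections::HashSet;

#[derive(Debug, Clone, Copy)]
enum Operand {
    Counter(u32),
    Expression(u32),
    Zero,
}

fn operand_for_ffi(operand: Operand, zero_expressions: &HashSet<u32>) -> Operand {
    match operand {
        // An expression known to be zero collapses to the zero operand ...
        Operand::Expression(id) if zero_expressions.contains(&id) => Operand::Zero,
        // ... everything else passes through unchanged.
        other => other,
    }
}

fn main() {
    let zero_expressions: HashSet<u32> = [1].into_iter().collect();
    for operand in [Operand::Counter(0), Operand::Expression(1), Operand::Expression(2)] {
        println!("{:?} -> {:?}", operand, operand_for_ffi(operand, &zero_expressions));
    }
}
```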
Coverage codegen can now allocate arrays based on the number of
counters/expressions originally used by the instrumentor.
The existing query that inspects coverage statements is still used for
determining the number of counters passed to `llvm.instrprof.increment`, so if
some high-numbered counters were removed by MIR optimizations, the instrumented
binary can potentially use less memory and disk space at runtime.
This allows coverage information to be attached to the function as a whole when
appropriate, instead of being smuggled through coverage statements in the
function's basic blocks.
As an example, this patch moves the `function_source_hash` value out of
individual `CoverageKind::Counter` statements and into the per-function info.
When synthesizing unused functions for coverage purposes, the absence of this
info is taken to indicate that a function was not eligible for coverage and
should not be synthesized.
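A standalone sketch of the overall shape (hypothetical names, not the real `mir::Body` fields):

```rust
// Per-function coverage info hangs off the function as a whole; its absence
// means the function was never eligible for coverage instrumentation.
struct FunctionCoverageInfoSketch {
    function_source_hash: u64,
    num_counters: usize,
}

struct BodySketch {
    coverage_info: Option<FunctionCoverageInfoSketch>,
}

fn main() {
    let instrumented = BodySketch {
        coverage_info: Some(FunctionCoverageInfoSketch {
            function_source_hash: 0xDEAD_BEEF,
            num_counters: 4,
        }),
    };
    let never_instrumented = BodySketch { coverage_info: None };

    for (name, body) in [("f", &instrumented), ("g", &never_instrumented)] {
        match &body.coverage_info {
            Some(info) => println!(
                "{name}: source hash {:#x}, {} counters",
                info.function_source_hash, info.num_counters
            ),
            // No info: don't synthesize an "unused function" record for it.
            None => println!("{name}: not eligible for coverage"),
        }
    }
}
```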