Commit Graph

142 Commits

Author SHA1 Message Date
Matthias Krüger
f4de82c2b3
Rollup merge of #116589 - Zalathar:successors, r=oli-obk
coverage: Unbox and simplify `bcb_filtered_successors`

This is a small cleanup in the coverage instrumentor's graph-building code.

---
This function already has access to the MIR body, so instead of taking a reference to a terminator, it's simpler and easier to pass in a basic block index.

There is no need to box the returned iterator if we instead add appropriate lifetime captures, and make `short_circuit_preorder` generic over the type of iterator it expects.

We can also greatly simplify the function's implementation by observing that the only difference between its two cases is whether we take all of a BB's successors, or just the first one.

---

`@rustbot` label +A-code-coverage
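
A rough sketch of the two changes described above, using simplified stand-in types rather than the real rustc MIR types (`Body`, `take_only_first`, and the helper signatures here are assumptions for illustration only):

```rust
use std::collections::HashSet;

// Stand-in for the MIR body: per-block successor lists, plus a flag
// saying whether only the first successor should be kept for that block.
struct Body {
    successors: Vec<Vec<usize>>,
    take_only_first: Vec<bool>,
}

impl Body {
    // Returning `impl Iterator + '_` captures the borrow of `self`, so the
    // old `Box<dyn Iterator>` is no longer needed. Taking a block index
    // instead of a `&Terminator` works because we already have the body.
    fn filtered_successors(&self, bb: usize) -> impl Iterator<Item = usize> + '_ {
        let succs = &self.successors[bb];
        // The two cases differ only in how many successors we take:
        // all of them, or just the first one.
        let count = if self.take_only_first[bb] { 1 } else { succs.len() };
        succs.iter().copied().take(count)
    }
}

// Making the traversal generic over the iterator type (instead of
// demanding a boxed trait object) lets the unboxed iterator flow through.
// The real function also short-circuits on certain nodes; omitted here.
fn short_circuit_preorder<I: Iterator<Item = usize>>(
    start: usize,
    mut successors_of: impl FnMut(usize) -> I,
) -> Vec<usize> {
    let mut visited = HashSet::new();
    let mut stack = vec![start];
    let mut order = Vec::new();
    while let Some(bb) = stack.pop() {
        if visited.insert(bb) {
            order.push(bb);
            stack.extend(successors_of(bb));
        }
    }
    order
}
```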
2023-10-10 14:07:47 +02:00
Zalathar
5d629457fd coverage: Unbox and simplify bcb_filtered_successors
This function already has access to the MIR body, so instead of taking a
reference to a terminator, it's simpler and easier to pass in a basic block
index.

There is no need to box the returned iterator if we instead add appropriate
lifetime captures, since `short_circuit_preorder` is now generic over the type
of iterator it expects.

We can also greatly simplify the function's implementation by observing that
the only difference between its two cases is whether we take all of a BB's
successors, or just the first one.
2023-10-10 18:45:29 +11:00
Zalathar
f214497d22 coverage: Replace ShortCircuitPreorder with a single function
Instead of defining a named struct, we can use `std::iter::from_fn` and store
intermediate state in a closure.
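
A minimal sketch of the `std::iter::from_fn` pattern this commit describes; the traversal below is a generic depth-first preorder with stand-in types, not the exact rustc implementation:

```rust
// The worklist and visited set live in the closure's captured state, so no
// named iterator struct with a hand-written `Iterator` impl is needed.
fn preorder(start: usize, successors: &[Vec<usize>]) -> impl Iterator<Item = usize> + '_ {
    let mut visited = vec![false; successors.len()];
    let mut stack = vec![start];
    std::iter::from_fn(move || {
        while let Some(bb) = stack.pop() {
            if !std::mem::replace(&mut visited[bb], true) {
                stack.extend(successors[bb].iter().copied());
                return Some(bb);
            }
        }
        None
    })
}
```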
2023-10-10 18:44:16 +11:00
Zalathar
6c44425e98 coverage: Remove enum CoverageStatement
This enum was mainly needed to track the precise origin of a span in MIR, for
debug printing purposes. Since the old debug code was removed in #115962, we
can replace it with just the span itself.
2023-10-10 13:39:23 +11:00
Zalathar
4b471df25d coverage: Disconnect span extraction from CoverageSpansGenerator
By performing initial span extraction in a separate free function, we can remove
some accidental complexity from the main generator code.
2023-10-10 13:39:23 +11:00
Zalathar
972ab8863d coverage: Move initial MIR span extraction into a submodule 2023-10-10 13:39:23 +11:00
Zalathar
053c4f94a0 coverage: Remove next_id methods from counter/expression IDs
When these methods were originally written, I wasn't aware that
`newtype_index!` already supports addition with ordinary numbers, without
needing to unwrap and re-wrap.
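
Schematically, the cleanup looks like this; since `newtype_index!` only exists inside rustc, the sketch below hand-rolls an equivalent index type to show the before/after:

```rust
use std::ops::Add;

// Hand-rolled stand-in for a `newtype_index!`-generated ID type.
#[derive(Copy, Clone, Debug, PartialEq)]
struct CounterId(u32);

impl CounterId {
    fn from_u32(x: u32) -> Self { CounterId(x) }
    fn as_u32(self) -> u32 { self.0 }
}

// `newtype_index!` types already support addition with ordinary numbers,
// roughly like this impl.
impl Add<u32> for CounterId {
    type Output = CounterId;
    fn add(self, rhs: u32) -> CounterId { CounterId(self.0 + rhs) }
}

fn main() {
    let id = CounterId::from_u32(7);
    // Before: unwrap and re-wrap by hand.
    let next_manual = CounterId::from_u32(id.as_u32() + 1);
    // After: add directly, no `next_id` helper needed.
    assert_eq!(id + 1, next_manual);
}
```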
2023-10-03 13:03:40 +11:00
Zalathar
b1cf0c8f1b coverage: Remove code for making expression copies of BCB counters
Now that coverage statements can have multiple code regions attached to them,
this code is never used.
2023-10-03 13:03:39 +11:00
Zalathar
86a66c8171 coverage: Store each BCB's code regions in one coverage statement
If a BCB has more than one code region, those extra regions can now all be
stored in the same coverage statement, instead of being stored in additional
statements.
2023-10-03 13:03:39 +11:00
Zalathar
ee9d00f6b8 coverage: Let each coverage statement hold a vector of code regions
This makes it possible for a `StatementKind::Coverage` to hold more than one
code region, but that capability is not yet used.
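
A hedged sketch of the data-structure change, with simplified stand-ins for the real MIR types:

```rust
// Simplified stand-ins; the real types carry much more information.
struct CodeRegion {
    start_line: u32,
    end_line: u32,
}

enum CoverageKind {
    Counter { id: u32 },
    // ... expressions, etc.
}

struct Coverage {
    kind: CoverageKind,
    // Previously effectively a single optional region; a `Vec` lets one
    // `StatementKind::Coverage` carry several regions for the same counter.
    code_regions: Vec<CodeRegion>,
}
```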
2023-10-03 13:03:39 +11:00
Zalathar
1355e1fc74 coverage: Update comments/logs that referred to CoverageSpan
The concrete type `CoverageSpan` is no longer used outside of the `spans`
module.

This is a separate patch to avoid noise in the preceding patch that actually
encapsulates coverage spans.
2023-10-03 13:03:39 +11:00
Zalathar
e29db47176 coverage: Encapsulate coverage spans
By encapsulating the coverage spans in a struct, we can change the internal
representation without disturbing existing call sites. This will be useful for
grouping coverage spans by BCB.

This patch includes some changes that were originally in #115912, which avoid
the need for a particular test to deal with coverage spans at all.

(Comments/logs referring to `CoverageSpan` are updated in a subsequent patch.)
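
A small sketch of the encapsulation idea (types simplified; the method shown is illustrative, not the real API):

```rust
// Callers only see the public surface, so the private representation can
// later change (e.g. grouping spans by BCB) without touching call sites.
pub struct CoverageSpans {
    // Stand-in representation: (BCB index, span as a line range).
    bcb_and_span: Vec<(usize, (u32, u32))>,
}

impl CoverageSpans {
    pub fn bcb_has_coverage_spans(&self, bcb: usize) -> bool {
        self.bcb_and_span.iter().any(|&(b, _)| b == bcb)
    }
}
```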
2023-10-03 13:03:39 +11:00
Matthias Krüger
fd95627134 fix clippy::{redundant_guards, useless_format} 2023-09-27 23:49:15 +02:00
Zalathar
e015f5a370 coverage: Remove vestigial counter/expression debug labels 2023-09-20 17:24:10 +10:00
Zalathar
bbd347409f coverage: Remove vestigial format_counter methods 2023-09-20 17:24:10 +10:00
Zalathar
3d66513fe4 coverage: Remove debug code from the instrumentor 2023-09-20 17:24:10 +10:00
Zalathar
01b67f4b26 coverage: Simplify sorting of coverage spans extracted from MIR
Switching to `Ordering::then_with` makes control-flow less complicated, and
there is no need to use `partial_cmp` here.
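
The shape of the change, sketched with plain tuples:

```rust
use std::cmp::Ordering;

// `then_with` chains tie-breakers linearly instead of nesting branches,
// and `cmp` (a total order) replaces the unnecessary `partial_cmp`.
fn cmp_pairs(a: (u32, u32), b: (u32, u32)) -> Ordering {
    // Equivalent to: if a.0 != b.0 { a.0.cmp(&b.0) } else { a.1.cmp(&b.1) }
    a.0.cmp(&b.0).then_with(|| a.1.cmp(&b.1))
}
```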
2023-09-18 23:15:25 +10:00
Zalathar
4690f97099 coverage: Fix an unstable-sort inconsistency in coverage spans
This code was calling `sort_unstable_by`, but failed to impose a total order on
the initial spans. That resulted in unpredictable handling of closure spans,
producing inconsistencies in the coverage maps and in user-visible coverage
reports.

This patch fixes the problem by always sorting closure spans before
otherwise-identical non-closure spans, and also switches to a stable sort in
case the ordering is still not total.
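
A hedged sketch of the fix, with a simplified span type (the field names are assumptions):

```rust
use std::cmp::Ordering;

struct InitialSpan {
    lo: u32,
    hi: u32,
    is_closure: bool,
}

fn sort_spans(spans: &mut [InitialSpan]) {
    // Stable sort, so any spans the comparator still considers equal
    // keep a deterministic order.
    spans.sort_by(|a, b| {
        a.lo.cmp(&b.lo)
            .then_with(|| a.hi.cmp(&b.hi))
            // `true` sorts after `false`, so comparing b-to-a places
            // closure spans before otherwise-identical non-closure spans,
            // making the order total where it previously was not.
            .then_with(|| b.is_closure.cmp(&a.is_closure))
    });
}
```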
2023-09-18 21:28:56 +10:00
Zalathar
91e0b46f76 coverage: Replace an unnecessary map with a set
This hashmap's values were never used.
2023-09-16 12:07:35 +10:00
Zalathar
3b5b1aa2a5 coverage: Simplify internal representation of debug types 2023-09-16 12:07:35 +10:00
Zalathar
d0d1187ebb coverage: Update log module names in debug docs 2023-09-16 12:07:35 +10:00
Zalathar
d79cd17199 coverage: Arrange imports in rustc_mir_transform::coverage::debug 2023-09-16 12:07:35 +10:00
Zalathar
e54204c8e9 coverage: In the visitor, track max counter/expression IDs without +1
This makes the visitor track the highest seen counter/expression IDs directly,
and only add +1 (to convert to a vector length) at the very end.
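
Sketched with plain `u32` IDs (the real visitor tracks counters and expressions separately):

```rust
// Track the highest ID seen; convert to a vector length only at the end.
struct CoverageVisitor {
    max_counter_id: u32,
}

impl CoverageVisitor {
    fn visit_counter(&mut self, id: u32) {
        // No +1 here, unlike the previous code.
        self.max_counter_id = self.max_counter_id.max(id);
    }

    // The +1 (ID-to-length conversion) happens exactly once.
    fn num_counters(&self) -> usize {
        (self.max_counter_id + 1) as usize
    }
}
```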
2023-09-07 18:06:13 +10:00
Zalathar
f191b1c2fc coverage: Simplify the coverageinfo query to a single pass 2023-09-07 18:06:13 +10:00
Zalathar
3f549466a8 coverage: Extract a common iterator over a function's coverage statements
Both of the coverage queries can now use this one helper function to iterate
over all of the `mir::Coverage` payloads in the statements of a `mir::Body`.
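
A rough sketch of such a helper, with stand-in types for the MIR structures:

```rust
// Stand-ins for the real MIR types.
struct Coverage; // the counter/expression payload

enum StatementKind {
    Coverage(Box<Coverage>),
    Other,
}

struct BasicBlockData {
    statements: Vec<StatementKind>,
}

// One shared iterator over every coverage payload in a body's blocks,
// for both coverage queries to reuse.
fn all_coverage(blocks: &[BasicBlockData]) -> impl Iterator<Item = &Coverage> {
    blocks.iter().flat_map(|bb| {
        bb.statements.iter().filter_map(|stmt| match stmt {
            StatementKind::Coverage(cov) => Some(&**cov),
            _ => None,
        })
    })
}
```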
2023-09-07 18:06:13 +10:00
Camille GILLOT
258ace613d Use relative positions inside a SourceFile. 2023-09-03 12:56:10 +00:00
Ralf Jung
4c53783f3c when terminating during unwinding, show the reason why 2023-08-24 13:28:26 +02:00
bors
5c6a7e71cd Auto merge of #114993 - RalfJung:panic-nounwind, r=fee1-dead
interpret/miri: call the panic_nounwind machinery the same way codegen does
2023-08-20 22:01:18 +00:00
Ralf Jung
818ec8e23a give some unwind-related terminators a more clear name 2023-08-20 15:52:38 +02:00
Zalathar
72f4c78dc6 coverage: Don't store function_source_hash in BcbCounter::Counter
This shows one small benefit of separating `BcbCounter` from `CoverageKind`.
The function source hash will be the same for all counters within a function,
so instead of passing it through `CoverageCounters` and storing it in every
counter, we can just supply it during the final conversion to `CoverageKind`.
2023-08-20 12:02:40 +10:00
Zalathar
fbab055e77 coverage: Give the instrumentor its own counter type, separate from MIR
This splits off `BcbCounter` from MIR's `CoverageKind`, allowing the two types
to evolve in different directions as necessary.
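
A sketch of the split described in this commit and the one above it (the `Expression` variants are simplified; the real ones carry operands):

```rust
// Simplified stand-ins for the real ID types.
type CounterId = u32;
type ExpressionId = u32;

// Instrumentor-local counter type: no function source hash stored.
enum BcbCounter {
    Counter { id: CounterId },
    Expression { id: ExpressionId },
}

// MIR-side type: the hash appears only here.
enum CoverageKind {
    Counter { function_source_hash: u64, id: CounterId },
    Expression { id: ExpressionId },
}

impl BcbCounter {
    // The hash is identical for every counter in a function, so it is
    // supplied once at conversion time instead of stored per counter.
    fn into_coverage_kind(self, function_source_hash: u64) -> CoverageKind {
        match self {
            BcbCounter::Counter { id } => {
                CoverageKind::Counter { function_source_hash, id }
            }
            BcbCounter::Expression { id } => CoverageKind::Expression { id },
        }
    }
}
```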
2023-08-20 12:02:40 +10:00
Zalathar
629437eec7 coverage: Move a debug print into make_code_region 2023-08-20 12:02:40 +10:00
Zalathar
cad50f40e5 coverage: Remove a useless let () = 2023-08-20 12:02:40 +10:00
Matthias Krüger
4cd3b0b71c use static arrays instead of vectors 2023-08-19 18:49:58 +02:00
Zalathar
5ca30c4646 Store BCB counters externally, not directly in the BCB graph
Storing coverage counter information in `CoverageCounters` has a few advantages
over storing it directly inside BCB graph nodes:

- The graph doesn't need to be mutable when making the counters, making it
easier to see that the graph itself is not modified during this step.

- All of the counter data is clearly visible in one place.

- It becomes possible to use a representation that doesn't correspond 1:1 to
graph nodes, e.g. storing all the edge counters in a single hashmap instead of
several.
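
A sketch of the arrangement the bullets above describe, with plain collections standing in for rustc's `IndexVec` and graph types:

```rust
use std::collections::HashMap;

type BcbId = usize;

enum BcbCounter {
    Counter { id: u32 },
    Expression { id: u32 },
}

// Counters live beside the (immutable) BCB graph, not inside its nodes.
struct CoverageCounters {
    // At most one counter per BCB node, indexed by node.
    node_counters: Vec<Option<BcbCounter>>,
    // Edge counters need not mirror the node structure: one map keyed by
    // (from, to) replaces scattered per-node edge storage.
    edge_counters: HashMap<(BcbId, BcbId), BcbCounter>,
}
```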
2023-08-13 12:18:06 +10:00
Zalathar
5302c9d451 Accumulate intermediate expressions into CoverageCounters
This avoids the need to pass around a separate vector to accumulate into, and
avoids the need to create a fake empty vector when failure occurs.
2023-08-13 12:18:06 +10:00
Zalathar
c74db79c3b Rename helper struct BcbCounters to MakeBcbCounters
This avoids confusion with data structures that actually hold BCB counter
information.
2023-08-13 12:18:06 +10:00
Zalathar
3920e07f0b Make coverage counter IDs count up from 0, not 1
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.

As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
2023-08-01 11:29:55 +10:00
Zalathar
f103db894f Make coverage expression IDs count up from 0, not down from u32::MAX
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
2023-08-01 11:29:55 +10:00
Zalathar
1a014d42f4 Replace ExpressionOperandId with enum Operand
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.

This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
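
The shape of the new operand type, sketched with simplified ID newtypes:

```rust
#[derive(Copy, Clone, Debug)]
struct CounterId(u32);

#[derive(Copy, Clone, Debug)]
struct ExpressionId(u32);

// The three kinds are distinguished by the enum itself, so no counting
// trick is needed to tell counter IDs from expression IDs.
#[derive(Copy, Clone, Debug)]
enum Operand {
    Zero,
    Counter(CounterId),
    Expression(ExpressionId),
}

fn main() {
    // Tag plus 4-byte payload: 8 bytes on typical layouts (not a language
    // guarantee), matching the size increase mentioned above.
    assert_eq!(std::mem::size_of::<Operand>(), 8);
}
```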
2023-08-01 11:29:55 +10:00
Matthias Krüger
23815467a2 inline format!() args up to and including rustc_middle 2023-07-30 13:18:33 +02:00
Zalathar
8745fdc448 Replace a lazy RefCell<Option<T>> with OnceCell<T> 2023-07-28 12:55:13 +10:00
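
In miniature, the pattern behind that one-liner:

```rust
use std::cell::OnceCell;

fn main() {
    // `OnceCell<T>` says "computed lazily, written once" directly, where
    // `RefCell<Option<T>>` needed manual borrow-and-fill bookkeeping.
    let cached: OnceCell<String> = OnceCell::new();
    // First access computes and stores; later accesses reuse the value.
    let value = cached.get_or_init(|| "expensive result".to_string());
    assert_eq!(value, "expensive result");
    assert!(cached.get().is_some());
}
```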
Matthias Krüger
c64ef5e070 inline format!() args from rustc_codegen_llvm to the end (4)
r? @WaffleLapkin
2023-07-25 23:20:28 +02:00
bors
6b46c996e1 Auto merge of #113105 - matthiaskrgr:rollup-rci0uym, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #112207 (Add trustzone and virtualization target features for aarch32.)
 - #112454 (Make compiletest aware of targets without dynamic linking)
 - #112628 (Allow comparing `Box`es with different allocators)
 - #112692 (Provide more context for `rustc +nightly -Zunstable-options` on stable)
 - #112972 (Make `UnwindAction::Continue` explicit in MIR dump)
 - #113020 (Add tests impl via obj unless denied)
 - #113084 (Simplify some conditions)
 - #113103 (Normalize types when applying uninhabited predicate.)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-06-27 21:31:47 +00:00
Maybe Waffle
ef05533c39 Simplify some conditions 2023-06-27 07:40:47 +00:00
Zalathar
fbb2079a24 Use CoverageKind::as_operand_id instead of manually reimplementing it 2023-06-27 12:51:42 +10:00
Deadbeef
89c24af133 Better error for non const PartialEq call generated by match 2023-06-18 05:24:38 +00:00
Matthias Krüger
09489b9137
Rollup merge of #111121 - Zalathar:ra-false-positive, r=jackh726
Work around `rust-analyzer` false-positive type errors

rust-analyzer incorrectly reports two type errors in `debug.rs`:

> expected &dyn Display, found &i32
> expected &dyn Display, found &i32

This is due to a known bug in rust-analyzer (https://github.com/rust-lang/rust-analyzer/issues/11847).

In these particular cases, changing `&0` to `&0i32` seems to be enough to avoid the bug.
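
The workaround in miniature (a sketch, not the actual `debug.rs` code):

```rust
use std::fmt::Display;

fn takes_display(x: &dyn Display) {
    println!("{x}");
}

fn main() {
    // Writing `&0` here was enough to trip the rust-analyzer bug;
    // spelling out the literal's type avoids the false positive.
    takes_display(&0i32);
}
```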
2023-05-24 21:36:56 +02:00
bors
97d328012b Auto merge of #111673 - cjgillot:dominator-preprocess, r=cjgillot,tmiasko
Preprocess and cache dominator tree

Preprocessing dominators has a very strong effect on https://github.com/rust-lang/rust/pull/111344.
That pass repeatedly checks that assignments dominate their uses. Using the unprocessed dominator tree caused quadratic runtime (number of basic blocks × depth of the dominator tree).

This PR also caches the dominator tree and the pre-processed dominators in the MIR cfg cache.

Rebase of https://github.com/rust-lang/rust/pull/107157
cc `@tmiasko`
2023-05-24 16:18:21 +00:00
Maybe Waffle
f542778533 Drive-by cleanup: debug::term_type => TerminatorKind::name 2023-05-17 11:27:37 +00:00