Take MIR dataflow analyses by mutable reference
The main motivation here is analyses that need dynamically sized scratch memory to do their work. One concrete example is pointer target tracking, where resolving a dereference can produce multiple possible targets, so processing a multi-level dereference requires handling a changing number of potential targets at each step. A (simplified) function for this would be `fn apply_deref(potential_targets: &mut Vec<Target>)`, which would use the scratch space contained in the analysis to pass arguments and receive results.
The alternative to this would be to wrap everything in a `RefCell`, which is what `MaybeRequiresStorage` currently does. This comes with a small perf cost and loses the compiler's guarantee that no conflicting borrows are taken at the same time.
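As a minimal self-contained sketch of the scratch-space pattern, assuming a hypothetical `Target` type and points-to map (not the actual rustc types):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct Target(u32);

struct PointerAnalysis {
    // Scratch buffer reused across statements; this is why the analysis
    // wants to be accessed through `&mut`.
    potential_targets: Vec<Target>,
    // Which targets each target may point to.
    points_to: HashMap<Target, Vec<Target>>,
}

impl PointerAnalysis {
    // One dereference step: each current target is replaced by everything
    // it may point to, so the number of targets changes in place.
    fn apply_deref(&mut self) {
        let old = std::mem::take(&mut self.potential_targets);
        for target in old {
            if let Some(succs) = self.points_to.get(&target) {
                self.potential_targets.extend(succs.iter().copied());
            }
        }
    }
}
```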
For the implementation:
* `AnalysisResults` is an unfortunate requirement to avoid an unconstrained type parameter error.
* `CloneAnalysis` could just be `Clone` instead, but that would do more work than is needed to support multiple cursors over the same result set (see the sketch after this list).
* `ResultsVisitor` now takes the results type in each function, as there's no other way to have access to the analysis without cloning it. This could use an associated type rather than a type parameter, but the current approach makes it easier to ignore the type when it's not needed.
* `MaybeRequiresStorage` no longer uses a `RefCell`, but the graphviz formatter now does. That one could be removed as well, but it would require even more changes and doesn't really seem necessary.
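To illustrate the `CloneAnalysis` point, a dedicated trait lets the results hand out extra cursors by cloning only the analysis rather than the whole result set. A hypothetical sketch, not the actual `rustc_mir_dataflow` API:

```rust
// Cloning just the analysis is cheap; the per-block entry states stay shared.
trait CloneAnalysis: Sized {
    fn clone_analysis(&self) -> Self;
}

struct Results<A> {
    analysis: A,
    // Per-block entry states, elided to a placeholder here.
    entry_sets: Vec<Vec<bool>>,
}

impl<A: CloneAnalysis> Results<A> {
    // A plain `Clone` bound would force `entry_sets` to be cloned too,
    // even though a second cursor only needs its own analysis.
    fn clone_analysis(&self) -> A {
        self.analysis.clone_analysis()
    }
}
```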
Unify the terminology used in unwind actions and terminators, and reflect
the fact that a nounwind panic, rather than an immediate abort, is
triggered for this terminator.
As a part of drop elaboration, we identify dead unwinds, i.e., unwind
edges on drop terminators which are known to be unreachable because
there is no need to drop anything.
Previously, the dataflow framework was informed about the dead unwinds,
and it assumed those edges were absent from MIR. Unfortunately, the
dataflow framework wasn't consistent in maintaining this assumption.
In particular, if a block was reachable only through a dead unwind edge,
its state was still propagated to other blocks. This became an issue in
the context of the change that removes the `DropAndReplace` terminator,
since that change introduces initialization into cleanup blocks.
To avoid this issue, remove unreachable unwind edges before drop
elaboration, and elaborate only the blocks that remain reachable.
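A rough sketch of that approach on a toy control-flow graph (hypothetical types, not the actual MIR API):

```rust
#[derive(Default)]
struct Block {
    successors: Vec<usize>,
    // At most one unwind edge per block in this toy model.
    unwind: Option<usize>,
}

// Drop the unwind edges that drop elaboration determined to be dead.
fn remove_dead_unwinds(blocks: &mut [Block], dead: &[(usize, usize)]) {
    for &(bb, target) in dead {
        if blocks[bb].unwind == Some(target) {
            blocks[bb].unwind = None;
        }
    }
}

// Compute reachability afterwards, so blocks that were only reachable
// through a dead unwind edge are excluded from elaboration.
fn reachable_from_entry(blocks: &[Block]) -> Vec<bool> {
    let mut seen = vec![false; blocks.len()];
    // Block 0 is assumed to be the entry block.
    let mut stack: Vec<usize> = if blocks.is_empty() { vec![] } else { vec![0] };
    while let Some(bb) = stack.pop() {
        if std::mem::replace(&mut seen[bb], true) {
            continue;
        }
        for &succ in blocks[bb].successors.iter().chain(blocks[bb].unwind.iter()) {
            stack.push(succ);
        }
    }
    seen
}
```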
Convert all the crates that have had their diagnostic migration
completed (except `save_analysis`, because that will be deleted soon, and
`apfloat`, because of the licensing problem).
compiler: remove unnecessary imports and qualified paths
Some of these imports were only necessary before Edition 2021; others were already in the prelude.
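For instance (an illustrative example, not one taken from this PR), `TryFrom` and `TryInto` joined the prelude in Edition 2021, so imports like this one are now redundant:

```rust
// Redundant on edition 2021: `TryFrom` is already in the prelude.
// use std::convert::TryFrom;

fn to_byte(n: u64) -> Option<u8> {
    u8::try_from(n).ok()
}
```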
I hope it's fine that this PR is so spread out across files :/
Replace `Body::basic_blocks()` with field access
Since the refactoring in #98930, it is possible to borrow the basic blocks
independently from other parts of MIR by accessing the `basic_blocks` field
directly.
Replace the unnecessary `Body::basic_blocks()` method with a direct field access,
which has the additional benefit of borrowing only the basic blocks.
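A toy illustration of the borrowing benefit (simplified stand-in types, not rustc's actual `Body`):

```rust
struct Body {
    basic_blocks: Vec<u32>,
    local_decls: Vec<String>,
}

fn example(body: &mut Body) {
    let blocks = &body.basic_blocks;          // borrows only this field
    body.local_decls.push("tmp".to_string()); // other fields stay usable
    // With a `fn basic_blocks(&self)` accessor, the first borrow would
    // cover all of `body`, and the `push` above would not compile.
    println!("{} blocks", blocks.len());
}
```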
Make it explicit that the analysis direction is constant.
This also makes the value immediately available for optimizations.
Previously those functions were neither inline nor generic, so their
definitions were unavailable when using the dataflow framework from
other crates.
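In the spirit of the change, a simplified sketch (not the exact rustc trait):

```rust
// An associated constant instead of a method makes the direction a
// compile-time value that is visible to optimizations in any crate.
trait Direction {
    const IS_FORWARD: bool;
}

struct Forward;
impl Direction for Forward {
    const IS_FORWARD: bool = true;
}

struct Backward;
impl Direction for Backward {
    const IS_FORWARD: bool = false;
}

fn apply_effects<D: Direction>() {
    // This branch is resolved at monomorphization time.
    if D::IS_FORWARD {
        // propagate along successor edges...
    } else {
        // ...or along predecessor edges.
    }
}
```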
Switch sources are used by backward analyses with custom `SwitchInt`
edge effects, but are otherwise unnecessarily computed.
Delay the computation until we know that switch sources are indeed
required, and avoid the computation otherwise.
This reduces peak memory usage significantly for some programs with very
large functions, such as:
- `keccak`, `unicode_normalization`, and `match-stress-enum`, from
the `rustc-perf` benchmark suite;
- `http-0.2.6` from crates.io.
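The on-demand computation described above follows a familiar pattern; a minimal sketch, with hypothetical types:

```rust
use std::cell::OnceCell;

struct SwitchSourceCache {
    // (predecessor block, switch value) pairs, flattened for the sketch.
    cache: OnceCell<Vec<(usize, u128)>>,
}

impl SwitchSourceCache {
    // Computed on first use; analyses without custom `SwitchInt` edge
    // effects never pay for it.
    fn get(&self, compute: impl FnOnce() -> Vec<(usize, u128)>) -> &[(usize, u128)] {
        self.cache.get_or_init(compute)
    }
}
```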
The new type is used in the analyses where the bitsets can get huge
(e.g. tens of thousands of bits): `MaybeInitializedPlaces`,
`MaybeUninitializedPlaces`, and `EverInitializedPlaces`.
Some refactoring was required in `rustc_mir_dataflow`. All existing
analysis domains are either `BitSet` or a trivial wrapper around
`BitSet`, and access in a few places is done via `Borrow<BitSet>` or
`BorrowMut<BitSet>`. Now that some of these domains are `ClusterBitSet`,
that no longer works. So this commit replaces the `Borrow`/`BorrowMut`
usage with a new trait `BitSetExt` containing the needed bitset
operations. The impls just forward these to the underlying bitset type.
This required fiddling with trait bounds in a few places.
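A condensed sketch of the forwarding pattern (toy types; the real trait has more operations, and `LivenessDomain` is a made-up wrapper):

```rust
trait BitSetExt {
    fn contains(&self, bit: usize) -> bool;
    fn insert(&mut self, bit: usize);
}

struct BitSet {
    words: Vec<u64>,
}

impl BitSetExt for BitSet {
    fn contains(&self, bit: usize) -> bool {
        self.words[bit / 64] & (1 << (bit % 64)) != 0
    }
    fn insert(&mut self, bit: usize) {
        self.words[bit / 64] |= 1 << (bit % 64);
    }
}

// A trivial wrapper domain implements the trait by forwarding to the
// underlying bitset.
struct LivenessDomain(BitSet);

impl BitSetExt for LivenessDomain {
    fn contains(&self, bit: usize) -> bool {
        self.0.contains(bit)
    }
    fn insert(&mut self, bit: usize) {
        self.0.insert(bit)
    }
}
```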
The commit also:
- Moves `static_assert_size` from `rustc_data_structures` to
`rustc_index` so it can be used in the latter; the former now
re-exports it so existing users are unaffected.
- Factors out some common "clear excess bits in the final word"
  functionality in `bit_set.rs` (see the sketch after this list).
- Uses `fill` in a few places instead of loops.
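The factored-out excess-bit clearing might look roughly like this (a sketch; the helper's actual name and signature are assumptions):

```rust
const WORD_BITS: usize = 64;

// Zero any bits in the final word beyond `domain_size`, so whole-word
// operations (union, fill, ...) cannot leave stray bits set.
fn clear_excess_bits_in_final_word(domain_size: usize, words: &mut [u64]) {
    let used_bits = domain_size % WORD_BITS;
    if used_bits > 0 {
        let mask = (1u64 << used_bits) - 1;
        if let Some(last) = words.last_mut() {
            *last &= mask;
        }
    }
}
```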