Reduce false positives of tail-expr-drop-order from consumed values (attempt #2)
r? `@nikomatsakis`
Tracked by #123739.
Related to #129864 but does not replace it yet.
Related to #130836.
This is an implementation of the approach suggested in the [Zulip stream](https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/temporary.20drop.20order.20changes). A new MIR statement `BackwardsIncompatibleDrop` is added to the MIR syntax. The lint now works by inspecting the possibly live move paths at the `BackwardsIncompatibleDrop` location and at the actual drop under the current edition, which in practice should be an edition before 2024.
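As a hedged illustration (my own example, not code from the PR), this is the kind of pattern the lint is meant to flag: a tail-expression temporary with a significant `Drop` whose order relative to a local changes in Edition 2024.

```rs
struct Guard(&'static str);

impl Drop for Guard {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn len_of_tail_temp() -> usize {
    let _local = Guard("local");
    // The temporary `Guard("tail temp")` lives in the tail expression.
    // In Edition 2021 it is dropped *after* `_local`; in Edition 2024 it is
    // dropped *before* `_local`, which is the order change the lint warns about.
    Guard("tail temp").0.len()
}

fn main() {
    println!("len = {}", len_of_tail_temp());
}
```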
take 2
open up coroutines
tweak the wording
the lint works up until 2021
We were missing one case, for ADTs, which was
causing `Result` to yield incorrect results.
only include field spans with significant types
deduplicate and eliminate field spans
switch to emitting spans that point to the `impl Drop`s
Co-authored-by: Niko Matsakis <nikomat@amazon.com>
collect drops instead of taking liveness diff
apply some suggestions and add explanatory notes
small fix on the cache
let the query recurse through coroutines
new suggestion format with extracted variable name
fine-tune the drop span and messages
bugfix on runtime borrows
tweak message wording
filter out ecosystem types earlier
apply suggestions
clippy
check lint level at session level
further restrict applicability of the lint
translate bid into nop for stable mir
detect cycle in type structure
The behavior of the type system depends not only on the current
assumptions, but also on the current phase of the compiler. This is
mostly necessary as we need to decide whether and how to reveal
opaque types. We track this via the `TypingMode`.
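As a minimal illustration of the opaque-type point (my own example, not part of this change): during analysis the return type below is known only through its `Fn(u32) -> u32` bound, while later phases reveal the hidden closure type, and `TypingMode` is what records which of those views applies.

```rs
// Illustration only: an opaque return type whose treatment is phase-dependent.
fn adder() -> impl Fn(u32) -> u32 {
    |x| x + 1
}

fn main() {
    // Callers see only the `Fn` bound; the concrete closure type is revealed
    // internally once the compiler moves past analysis.
    println!("{}", adder()(41));
}
```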
There are a handful of tier 2 and tier 3 targets that cause a LLVM crash
or linker error when generating code that contains `f16` or `f128`. The
cranelift backend also does not support these types. To work around
this, every function in `std` or `core` that contains these types must
be marked `#[inline]` in order to avoid sending any code to the backend
unless specifically requested.
However, this is inconvenient and easy to forget. Introduce a check for
these types in the frontend that automatically inlines any functions
whose signatures take or return `f16` or `f128`.
Note that this is not a perfect fix because it does not account for the
types being passed by reference or as members of aggregate types, but
this is sufficient for what is currently needed in the standard library.
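As a hedged sketch of the kind of function the check targets (my own example, not from the PR; nightly-only because `f16` is still unstable), before this change a function like this in `core`/`std` would have needed a manual `#[inline]`:

```rs
// Illustration only: requires a nightly compiler with the unstable `f16` type.
#![feature(f16)]

// Takes and returns `f16`, so the frontend check treats it as `#[inline]`
// automatically instead of relying on the annotation being remembered.
pub fn halve(x: f16) -> f16 {
    x / 2.0
}

fn main() {
    let y = halve(3.0);
    // Cast up to `f64` for printing.
    println!("{}", y as f64);
}
```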
Fixes: https://github.com/rust-lang/rust/issues/133035
Closes: https://github.com/rust-lang/rust/pull/133037
coverage: Restrict empty-span expansion to only cover `{` and `}`
Coverage instrumentation has some tricky code for converting a coverage-relevant `Span` into a set of start/end line/byte-column coordinates that will be embedded in the CGU's coverage metadata.
A big part of this complexity is special code for handling empty spans, which are expanded into non-empty spans (if possible) because LLVM's coverage reporter does not handle empty spans well.
This PR simplifies that code by restricting it to only apply in two specific situations: when the character after the empty span is `{`, or the character before the empty span is `}`.
(As an added benefit, this means that the expanded spans no longer extend awkwardly beyond the end of a physical line, which was common under the previous implementation.)
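A hedged illustration of where such empty spans come from (my own example, not from the PR); with the new restriction, the expanded region stays on the adjacent `{` or `}` instead of spilling past the end of the physical line:

```rs
// Illustration only: empty bodies and arms like these yield empty coverage
// spans that sit right next to a `{` or `}`.
fn noop() {} // empty function body

fn main() {
    if std::env::args().count() > 100 {
        noop();
    } else {
        // empty `else` arm: another empty span adjacent to `{` / `}`
    }
}
```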
Along the way, this PR also removes some unhelpful code for dealing with function source code spread across multiple files. Functions currently can't have coverage spans in multiple files, and if that ever changes (e.g. to properly support expansion regions) then this code will need to be completely overhauled anyway.
coverage: Simplify parts of coverage graph creation
This is a combination of three semi-related simplifications to how coverage graphs are created, grouped into one PR to avoid conflicts.
There are no observable changes to the output of any of the coverage tests.
Now that `Results` is the only impl of `ResultsVisitable`, the trait can
be removed. This simplifies things by removing unnecessary layers of
indirection and abstraction.
- `ResultsVisitor` is simpler.
- Its type parameter changes from `R` (an analysis result) to the
simpler `A` (an analysis).
- It no longer needs the `Domain` associated type, because it can use
`A::Domain`.
- Occurrences of `R` become `Results<'tcx, A>`, because there is now
only one kind of analysis results.
- `save_as_intervals` also changes type parameter from `R` to `A`.
- The `results.reconstruct_*` method calls are replaced with
`results.analysis.apply_*` method calls, which are equivalent.
- `Direction::visit_results_in_block` is simpler, with a single generic
param (`A`) instead of two (`D` and `R`/`F`, with a bound connecting
them). Likewise for `visit_results`.
- The `ResultsVisitor` impls for `MirBorrowCtxt` and
`StorageConflictVisitor` are now specific about the type of the
analysis results they work with. They both used to have a type param
`R` but they weren't genuinely generic. In both cases there was only a
single results type that made sense to instantiate them with.
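To make the shape of that change concrete, here is a toy sketch with invented stand-in types (not the real `rustc_mir_dataflow` items): once `Results<A>` is the only results type, a visitor can be generic over the analysis `A` alone and borrow `A::Domain` directly.

```rs
// Stand-in types only; the names mirror the description above, not rustc.
trait Analysis {
    type Domain;
    fn bottom(&self) -> Self::Domain;
}

// The single concrete results type: no `ResultsVisitable` indirection.
struct Results<A: Analysis> {
    analysis: A,
    entry_states: Vec<A::Domain>,
}

// The visitor is generic over `A` and reuses `A::Domain`; it needs neither a
// separate `Domain` associated type nor a second type parameter `R`.
trait ResultsVisitor<A: Analysis> {
    fn visit_block_start(&mut self, state: &A::Domain, results: &Results<A>);
}

fn main() {
    // Nothing to run; the sketch only shows the generic structure.
}
```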
We only need to take action when the next block cannot be added to the current
chain, but the logic is much simpler if we express it in terms of when the
block _can_ be added.
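A small hedged sketch of phrasing the check positively (invented names, not the actual coverage-graph code):

```rs
// Toy chain builder: the branch is written in terms of when the next block
// *can* be appended, with "start a new chain" as the fallback.
fn can_append(chain_end: u32, next: u32) -> bool {
    next == chain_end + 1
}

fn build_chains(blocks: &[u32]) -> Vec<Vec<u32>> {
    let mut chains: Vec<Vec<u32>> = Vec::new();
    for &block in blocks {
        let can_extend = chains
            .last()
            .is_some_and(|chain| can_append(*chain.last().unwrap(), block));
        if can_extend {
            chains.last_mut().unwrap().push(block);
        } else {
            chains.push(vec![block]);
        }
    }
    chains
}

fn main() {
    // Prints [[1, 2], [4, 5, 6], [9]].
    println!("{:?}", build_chains(&[1, 2, 4, 5, 6, 9]));
}
```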
continue `TypingMode` refactor
There are still quite a few places which (indirectly) rely on the `Reveal` of a `ParamEnv`, but we're slowly getting there.
r? `@compiler-errors`
Mark `simplify_aggregate_to_copy` mir-opt as unsound
Mark the `simplify_aggregate_to_copy` mir-opt added in #128299 as unsound as it seems to miscompile the MCVE reported in https://github.com/rust-lang/rust/issues/132353. The mir-opt can be re-enabled once this case is fixed.
```rs
fn pop_min(mut score2head: Vec<Option<usize>>) -> Option<usize> {
    loop {
        if let Some(col) = score2head[0] {
            score2head[0] = None;
            return Some(col);
        }
    }
}

fn main() {
    let min = pop_min(vec![Some(1)]);
    println!("min: {:?}", min);
    // panic happens here on beta in release mode
    // but not in debug mode
    min.unwrap();
}
```
This MCVE is included as a `run-pass` ui regression test in the first commit. I built the ui test manually with a nightly and can reproduce the behavioral difference between `-C opt-level=0` and `-C opt-level=1`. Locally, this ui test fails unless it is run on a compiler built with the second commit, which marks the mir-opt as unsound and thus disables it by default.
This PR **partially reverts** commit e7386b3, reversing changes made to 02b1be1. The mir-opt implementation is just marked as unsound but **not** reverted to make reland reviews easier. Test changes are **reverted if they were not pure additions**. Tests added by the original PR received `-Z unsound-mir-opts` compile-flags.
cc `@DianQK` `@cjgillot` (PR author and reviewer of #128299)
They represent a lot of abstraction and indirection, but they're only
used for `ConstAnalysis`, and apparently won't be used for any other
analyses in the future. This commit inlines and removes them, which
makes `ConstAnalysis` easier to read and understand.
Rename `rustc_abi::Abi` to `BackendRepr`
Remove the confabulation of `rustc_abi::Abi` with what "ABI" actually means by renaming it to `BackendRepr`, and rename `Abi::Aggregate` to `BackendRepr::Memory`. The type never actually represented how things are passed, as that has to have `PassMode` considered, at minimum; rather, it is just how we represented some things to the backend. This conflation arose because LLVM, the primary backend at the time, would lower certain IR forms using certain ABIs. Even that was only somewhat true, as it broke down when one ventured significantly afield of what is described by the System V AMD64 ABI, either by using different architectures, ABI-modifying IR annotations, the same architecture **with different ISA extensions enabled**, or other... unexpected delights.
Unfortunately both names are still somewhat of a misnomer right now, as people have written code for years based on this misunderstanding. Still, their original names are even more so, and for better or worse, this backend code hasn't received as much maintenance as the rest of the compiler lately. Actually arriving at a correct end state will require us to disentangle a lot of code in order to fix it, much of which is pointlessly repeated in several places. Thus this is not an "actual fix", just a way to deflect further misunderstandings.
This is a standard pattern:
```
MyAnalysis.into_engine(tcx, body).iterate_to_fixpoint()
```
`into_engine` and `iterate_to_fixpoint` are always called in pairs, but
sometimes with a builder-style `pass_name` call between them. But a
builder-style interface is overkill here. This has been bugging me for
a while.
This commit:
- Merges `Engine::new` and `Engine::iterate_to_fixpoint`. This removes
the need for `Engine` to have fields, leaving it as a trivial type
that the next commit will remove.
- Renames `Analysis::into_engine` as `Analysis::iterate_to_fixpoint`,
gives it an extra argument for the optional pass name, and makes it
call `Engine::iterate_to_fixpoint` instead of `Engine::new`.
This turns the pattern from above into this:
```
MyAnalysis.iterate_to_fixpoint(tcx, body, None)
```
which is shorter at every call site, and there's less plumbing required
to support it.
The initial naming of "Abi" was an awful mistake, conveying wrong ideas
about how psABIs worked and even more about what the enum meant.
It was only meant to represent the way the value would be described to
a codegen backend as it was lowered to that intermediate representation.
It was never meant to mean anything about the actual psABI handling!
The conflation is because LLVM typically will associate a certain form
with a certain ABI, but even that does not hold when the special cases
that actually exist arise, plus the IR annotations that modify the ABI.
Reframe `rustc_abi::Abi` as the `BackendRepr` of the type, and rename
`BackendRepr::Aggregate` as `BackendRepr::Memory`. Unfortunately, due to
the persistent misunderstandings, this too is now incorrect:
- Scattered ABI-relevant code is entangled with BackendRepr
- We do not always pre-compute a correct BackendRepr that reflects how
we "actually" want this value to be handled, so we leave the backend
interface to also inject various special-cases here
- In some cases `BackendRepr::Memory` is a "real" aggregate, but in
others it is in fact using memory, and in some cases it is a scalar!
Our rustc-to-backend lowering code handles this sort of thing right now.
That will eventually be addressed by lifting duplicated lowering code
to either rustc_codegen_ssa or rustc_target as appropriate.
coverage: Don't rely on the custom traversal to find enclosing loops
This opens up the possibility of modifying or removing the custom graph traversal used in coverage counter creation, without losing access to the heuristics that care about a node's enclosing loops.
Actually changing the traversal is left for future work, because this PR on its own doesn't change the emitted coverage mappings at all.
Effects cleanup
- removed extra bits from predicates queries that are no longer needed in the new system
- removed the need for `non_erasable_generics` to take in tcx and DefId, removed unused arguments in callers
r? compiler-errors
Then we can rename the `_raw` functions to drop their suffix, and instead
explicitly use `is_stable_const_fn` for the few cases where that is really what
you want.
Move `cmp_in_dominator_order` out of graph dominator computation
Dominator-order information is only needed for coverage graphs, and is easy enough to collect by just traversing the graph again.
This avoids wasted work when computing graph dominators for any other purpose.
coverage: Make counter creation handle node/edge counters more uniformly
Similar to #130380, this is another round of small improvements informed by my ongoing attempts to overhaul coverage counter creation.
One of the big benefits is getting rid of the awkward special-case that would sometimes attach an edge counter to a node instead. That was needed by the code that chooses which out-edge should be given a counter expression, but we can avoid that by making the corresponding check a little smarter.
I've also renamed several things to be simpler and more consistent, which should help with future changes.