Move code into `rustc_mir_transform`
I found two modules in other crates that are better placed in `rustc_mir_transform`, because that's the only crate that uses them.
r? ``@matthewjasper``
It's only used in `rustc_mir_transform`. (Presumably it currently lives
in `rustc_middle` because lots of other MIR-related stuff does, but
that's not a hard requirement.) And `rustc_middle` is huge, so it's
always good to make it smaller.
`rustc_mir_dataflow/src/elaborate_drops.rs` contains some infrastructure
used by a few MIR passes: the `elaborate_drop` function, the
`DropElaborator` trait, etc.
`rustc_mir_transform/src/elaborate_drops.rs` (same file name, different
crate) contains the `ElaborateDrops` pass. It relies on a lot of the
infrastructure from `rustc_mir_dataflow/src/elaborate_drops.rs`.
It turns out that the drop infrastructure is only used in
`rustc_mir_transform`, so this commit moves it there. (The only
exception is the small `DropFlagState` type, which is moved to the
existing `rustc_mir_dataflow/src/drop_flag_effects.rs`.) The file is
renamed from `rustc_mir_dataflow/src/elaborate_drops.rs` to
`rustc_mir_transform/src/elaborate_drop.rs` (with no trailing `s`)
because (a) the `elaborate_drop` function is the most important export,
and (b) `rustc_mir_transform/src/elaborate_drops.rs` already exists.
All the infrastructure pieces that used to be `pub` are now
`pub(crate)`, because they are now only used within
`rustc_mir_transform`.
coverage: Eliminate more counters by giving them to unreachable nodes
When preparing a function's coverage counters and metadata during codegen, any part of the original coverage graph that was removed by MIR optimizations can be treated as having an execution count of zero.
Somewhat counter-intuitively, if we give those unreachable nodes a _higher_ priority for receiving physical counters (instead of counter expressions), that ends up reducing the total number of physical counters needed.
This works because if a node is unreachable, we don't actually create a physical counter for it. Instead that node gets a fixed zero counter, and any other node that would have relied on that physical counter in its counter expression can just ignore that term completely.
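As a minimal, self-contained sketch (toy types, not rustc's actual coverage data structures), here is that zero-substitution step in isolation:
```rust
// Toy model: a node's count is either a physical counter (a real
// instrumented increment) or a fixed zero (the node was removed by MIR opts).
// A node without a physical counter gets its count as a sum of other
// nodes' terms.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Term {
    Physical(u32),
    Zero,
}

/// Terms that are known to be zero contribute nothing, so they can be
/// dropped without changing the expression's value.
fn simplify(expr: &[Term]) -> Vec<Term> {
    expr.iter().copied().filter(|t| *t != Term::Zero).collect()
}

fn main() {
    // Node A kept a physical counter; node B turned out to be unreachable,
    // so the "counter" it was given never materializes, and its term simply
    // vanishes from the expression of any node that referenced it.
    let expr = [Term::Physical(0), Term::Zero];
    assert_eq!(simplify(&expr), vec![Term::Physical(0)]);
}
```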
Rename rustc_middle::Ty::is_unsafe_ptr to is_raw_ptr
The wording "unsafe pointer" is less common and not mentioned in many places; this is usually called a "raw pointer" instead. For the sake of uniformity, we rename this method.
This came up during the review of
https://github.com/rust-lang/rust/pull/134424.
r? `@Noratrieb`
coverage: Defer part of counter-creation until codegen
Follow-up to #135481 and #135873.
One of the pleasant properties of the new counter-assignment algorithm is that we can stop partway through the process, store the intermediate state in MIR, and then resume the rest of the algorithm during codegen. This lets it take into account which parts of the control-flow graph were eliminated by MIR opts, resulting in fewer physical counters and simpler counter expressions.
Those improvements end up completely obsoleting much larger chunks of code that were previously responsible for cleaning up the coverage metadata after MIR opts, while also doing a more thorough cleanup job.
(That change also unlocks some further simplifications that I've kept out of this PR to limit its scope.)
Visit all debug info in MIR Visitor
I've been experimenting with simplifying debug info in the MIR inliner, and discovered that the MIR visitor doesn't reliably visit all spans. This PR adds the missing visitor calls.
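To illustrate the bug class with a self-contained toy (not the real `rustc_middle::mir::visit` types): if the visitor's walk skips the spans stored in debug info, a pass that uses the visitor to find or remap spans silently misses them, and the fix is adding the missing visit calls to the walk.
```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Span(u32);

struct VarDebugInfo {
    span: Span,
}

struct Body {
    statement_spans: Vec<Span>,
    var_debug_info: Vec<VarDebugInfo>,
}

trait Visitor {
    fn visit_span(&mut self, span: &Span);

    fn visit_body(&mut self, body: &Body) {
        for span in &body.statement_spans {
            self.visit_span(span);
        }
        // The missing calls: without this loop, debug-info spans are
        // never reported to the visitor.
        for info in &body.var_debug_info {
            self.visit_span(&info.span);
        }
    }
}

struct CollectSpans(Vec<Span>);

impl Visitor for CollectSpans {
    fn visit_span(&mut self, span: &Span) {
        self.0.push(*span);
    }
}

fn main() {
    let body = Body {
        statement_spans: vec![Span(1)],
        var_debug_info: vec![VarDebugInfo { span: Span(2) }],
    };
    let mut collector = CollectSpans(Vec::new());
    collector.visit_body(&body);
    assert_eq!(collector.0, vec![Span(1), Span(2)]);
}
```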
Update bootstrap compiler and rustfmt
The rustfmt version we previously used formats things differently from what the latest nightly rustfmt does. This causes issues for subtrees that get formatted both in-tree and in their own repo. Updating the rustfmt used in-tree solves those issues. Also bumped the bootstrap compiler, as the stage0 update command always updates both at the same time.
MIR validation: add comment explaining the limitations of CfgChecker
I hope this is right, but I am not sure.^^
Cc `@compiler-errors` `@lcnr` `@cjgillot` `@oli-obk`
Remove hook calling via `TyCtxtAt`.
All hooks receive a `TyCtxtAt` argument.
Currently hooks can be called through `TyCtxtAt` or `TyCtxt`. In the latter case, a `TyCtxtAt` is constructed with a dummy span and passed to the hook.
However, in practice hooks are never called through `TyCtxtAt`, and always receive a dummy span. (I confirmed this via code inspection, and double-checked it by temporarily making the `TyCtxtAt` code path panic and running all the tests.)
This commit removes all the `TyCtxtAt` machinery for hooks. All hooks now receive `TyCtxt` instead of `TyCtxtAt`. There are two existing hooks that use `TyCtxtAt::span`: `const_caller_location_provider` and `try_destructure_mir_constant_for_user_output`. For both hooks the span is always a dummy span, probably unintentionally. This dummy span use is now explicit. If a non-dummy span is needed for these two hooks it would be easy to add it as an extra argument because hooks are less constrained than queries.
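As a toy before/after model (simplified stand-ins, not the real `TyCtxt`/`TyCtxtAt`): the span-carrying wrapper disappears from hook signatures, and the dummy span that used to be smuggled in implicitly is now written out where it's actually needed.
```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Span(u32);
const DUMMY_SP: Span = Span(0);

#[derive(Clone, Copy)]
struct TyCtxt;

// Before: hooks were declared to take a context-plus-span pair, even though
// in practice the span was always the dummy span.
#[allow(dead_code)]
struct TyCtxtAt {
    tcx: TyCtxt,
    span: Span,
}

fn hook_before(icx: TyCtxtAt) -> Span {
    icx.span // always turned out to be DUMMY_SP
}

// After: hooks take the plain context. If one of them ever needs a real
// span, it can be passed as an explicit extra argument instead.
fn hook_after(_tcx: TyCtxt) -> Span {
    DUMMY_SP // the formerly implicit dummy span is now spelled out
}

fn main() {
    let tcx = TyCtxt;
    let before = hook_before(TyCtxtAt { tcx, span: DUMMY_SP });
    assert_eq!(before, hook_after(tcx));
}
```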
r? `@oli-obk`
Rename `tcx.ensure()` to `tcx.ensure_ok()`, and improve the associated docs
This is all based on my archaeology for https://rust-lang.zulipchat.com/#narrow/channel/182449-t-compiler.2Fhelp/topic/.60TyCtxtEnsure.60.
The main renamings are:
- `tcx.ensure()` → `tcx.ensure_ok()`
- `tcx.ensure_with_value()` → `tcx.ensure_done()`
- Query modifier `ensure_forwards_result_if_red` → `return_result_from_ensure_ok`
Hopefully these new names are a better fit for the *actual* function and purpose of these query call modes.
Implement MIR lowering for unsafe binders
This is the final bit of the unsafe binders puzzle. It implements MIR lowering, CTFE, and codegen for unsafe binders, and enforces that (for now) they are `Copy`. Later on, I'll introduce a new trait that relaxes this requirement to "is `Copy` or `ManuallyDrop<T>`", which more closely models how we treat union fields.
Concretely, wrapping an unsafe binder is now `Rvalue::WrapUnsafeBinder`, which acts much like an `Rvalue::Aggregate`. Unwrapping an unsafe binder is implemented as a MIR projection, `ProjectionElem::UnwrapUnsafeBinder`, which acts much like `ProjectionElem::Field`.
Tracking:
- https://github.com/rust-lang/rust/issues/130516
Similar to how alignment is already checked, this adds a check for null
pointer dereferences in debug mode. It is implemented as a MirPass,
similarly to the alignment check.
This is related to a 2025H1 project goal for better UB checks in debug
mode: https://github.com/rust-lang/rust-project-goals/pull/177.
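Roughly, the inserted check has the effect sketched below in source form (a hedged illustration only; the pass injects equivalent MIR before pointer accesses rather than rewriting source, and the exact condition and panic message may differ):
```rust
// Sketch of what the inserted guard amounts to: in debug builds, panic
// before dereferencing a null pointer instead of hitting UB.
unsafe fn read_checked<T>(ptr: *const T) -> T {
    if cfg!(debug_assertions) && ptr.is_null() {
        panic!("null pointer dereference");
    }
    unsafe { ptr.read() }
}

fn main() {
    let x = 7u32;
    // SAFETY: `&x` is a valid, non-null, aligned pointer for reads of u32.
    let v = unsafe { read_checked(&x as *const u32) };
    assert_eq!(v, 7);
}
```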
It's a function that does stuff with MIR and yet it weirdly has its own
module in `rustc_middle::util`. This commit moves it into
`rustc_middle::mir`, a more sensible home.
This also parameterizes the "excluded pointee types" and exposes a
general method for inserting checks on pointers.
This is a preparation for adding a NullCheck that makes use of the same
code.
Lower index bounds checking to `PtrMetadata`, this time with the right fake borrow semantics 😸
Change `Rvalue::RawRef` to take a `RawRefKind` instead of just a `Mutability`. Then introduce `RawRefKind::FakeForPtrMetadata` and use that for lowering index bounds checking to a `PtrMetadata`. This new `RawRefKind::FakeForPtrMetadata` acts like a shallow fake borrow in borrowck, which mimics the semantics of the old `Rvalue::Len` operation we're replacing.
We can then use this `RawRefKind` instead of using a span desugaring hack in CTFE.
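For illustration, a rough sketch of the payload change using only the names mentioned above (toy definitions, not the actual `rustc_middle` items):
```rust
// Before: the raw-borrow rvalue only recorded whether it produced *const or *mut.
#[allow(dead_code)]
enum Mutability {
    Not,
    Mut,
}

// After: it records a RawRefKind, with one extra variant used only when
// lowering index bounds checks to `PtrMetadata`.
#[allow(dead_code)]
enum RawRefKind {
    Const,
    Mut,
    FakeForPtrMetadata,
}

/// Only the fake kind is treated by borrowck as a shallow fake borrow
/// (the behaviour the old `Rvalue::Len` effectively had); the other two
/// behave like ordinary raw borrows.
fn is_shallow_fake_borrow(kind: &RawRefKind) -> bool {
    matches!(kind, RawRefKind::FakeForPtrMetadata)
}

fn main() {
    assert!(is_shallow_fake_borrow(&RawRefKind::FakeForPtrMetadata));
    assert!(!is_shallow_fake_borrow(&RawRefKind::Mut));
}
```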
cc ``@scottmcm`` ``@RalfJung``
Use identifiers more in diagnostics code
This should make the diagnostics code slightly more correct when rendering idents in mixed crate edition situations. Kinda a no-op, but a cleanup regardless.
r? oli-obk or reassign
Incorporate `iter_nodes` into `graph::DirectedGraph`
This helper method iterates over all node IDs in the dense range `0..num_nodes`.
In practice, we have a lot of graph-algorithm code that already assumes that nodes are densely numbered, by using `num_nodes` to allocate per-node indexed data structures. So I don't think this is actually a substantial change to the de-facto semantics of `graph::DirectedGraph`.
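As a sketch, the helper amounts to a default method like the one below (simplified trait; in the real `DirectedGraph` the node type is an index type rather than `usize`, and the `From<usize>` bound here is just an assumption to keep the example self-contained):
```rust
// Simplified stand-in for the graph trait; the shape of the helper is
// what matters, not the exact bounds.
trait DirectedGraph {
    type Node: From<usize> + Copy;

    fn num_nodes(&self) -> usize;

    /// Iterate over all node IDs, relying on nodes being densely numbered
    /// in the range `0..num_nodes`.
    fn iter_nodes(&self) -> impl Iterator<Item = Self::Node> {
        (0..self.num_nodes()).map(<Self::Node as From<usize>>::from)
    }
}

// A trivial graph whose nodes are just 0..len.
struct Line {
    len: usize,
}

impl DirectedGraph for Line {
    type Node = usize;
    fn num_nodes(&self) -> usize {
        self.len
    }
}

fn main() {
    let g = Line { len: 3 };
    assert_eq!(g.iter_nodes().collect::<Vec<_>>(), vec![0, 1, 2]);
}
```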
---
Resolves a FIXME from #135481.
Get rid of `mir::Const::from_ty_const`
This function is strange, because it turns valtrees into `mir::Const::Value`, but the rest of the const variants stay as type system consts.
All of the callsites except for one in `instsimplify` (array length simplification of `ptr_metadata` call) just go through the valtree arm of the function, so it's easier to just create a `mir::Const` directly for those.
For the instsimplify case, if we have a type system const we should *keep* having a type system const, rather than turning it into a `mir::Const::Value`. It doesn't really matter in practice, because `usize` has no padding, but it feels more principled.
Add `#[optimize(none)]`
cc #54882
This extends the `optimize` attribute to add `none`, which corresponds to the LLVM `OptimizeNone` attribute.
Not sure if an MCP is required for this, happy to file one if so.
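For context, usage would look roughly like this on nightly (assuming the new `none` option sits behind the existing `optimize_attribute` feature gate):
```rust
// Nightly-only: the `optimize` attribute is feature-gated.
#![feature(optimize_attribute)]

// Ask the backend not to optimize this function at all (LLVM `OptimizeNone`),
// e.g. to keep it easy to step through in a debugger or inspect in assembly.
#[optimize(none)]
fn keep_unoptimized(x: u32) -> u32 {
    x + 1
}

fn main() {
    assert_eq!(keep_unoptimized(1), 2);
}
```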