Remove both StorageLive and StorageDead in CopyProp.
Fixes https://github.com/rust-lang/rust/issues/107511.
https://github.com/rust-lang/rust/pull/106908 removed StorageDead without the accompanying StorageLive. In loops, execution would see repeated StorageLive, without any StorageDead, which is UB.
So when removing storage statements, we have to remove both StorageLive and StorageDead.
~I also added a MIR validation pass for StorageLive. It may be a bit overzealous.~
Currently, deriving on packed structs has some non-trivial limitations,
related to the fact that taking references to unaligned fields is UB.
The current approach to field accesses in derived code:
- Normal case: `&self.0`
- In a packed struct that derives `Copy`: `&{self.0}`
- In a packed struct that doesn't derive `Copy`: `&self.0`
Plus, we disallow deriving any builtin traits other than `Default` for any
packed generic type, because it's possible that there might be
misaligned fields. This is a fairly broad restriction.
Plus, we disallow deriving any builtin traits other than `Default` for most
packed types that don't derive `Copy`. (The exceptions are those where the
alignments inherently satisfy the packing, e.g. in a type with
`repr(packed(N))` where all the fields have alignments of `N` or less
anyway. Such types are pretty strange, because the `packed` attribute
has no effect on them.)
This commit introduces a new, simpler approach to field accesses:
- Normal case: `&self.0`
- In a packed struct: `&{self.0}`
In the latter case, this requires that all fields impl `Copy`, which is
a new restriction. This means that the following example compiles under
the old approach and doesn't compile under the new approach.
```
#[derive(Debug)]
struct NonCopy(u8);
#[derive(Debug)]
#[repr(packed)]
struct MyType(NonCopy);
```
(Note that the old approach's support for cases like this was brittle.
Changing the `u8` to a `u16` would be enough to stop it working. So not
much capability is lost here.)
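To make the new field-access pattern concrete, here is a minimal standalone sketch (hand-written, not derive-generated code, with a made-up `Packed` type):
```rust
#[repr(packed)]
struct Packed(u32, u8);

fn show(p: &Packed) -> u32 {
    // `&p.0` would be a reference to a possibly unaligned field and is
    // rejected by the compiler. `{p.0}` copies the field (an unaligned read
    // the compiler knows how to do), so the reference points at a properly
    // aligned temporary instead. The copy is what requires `Copy`.
    let r: &u32 = &{ p.0 };
    *r
}

fn main() {
    println!("{}", show(&Packed(42, 7)));
}
```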
However, the other constraints from the old rules are removed. We can now
derive builtin traits for packed generic structs like this:
```
trait Trait { type A; }
#[derive(Hash)]
#[repr(packed)]
pub struct Foo<T: Trait>(T, T::A);
```
To allow this, we add a `T: Copy` bound in the derived impl and a `T::A:
Copy` bound in where clauses. So `T` and `T::A` must impl `Copy`.
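A hand-written sketch of roughly what the derived impl for the snippet above looks like under the new approach (illustrative only; the exact bounds and paths in the real macro expansion may differ):
```rust
use std::hash::{Hash, Hasher};

trait Trait { type A; }

#[repr(packed)]
struct Foo<T: Trait>(T, T::A);

impl<T: Trait + Hash + Copy> Hash for Foo<T>
where
    T::A: Hash + Copy,
{
    fn hash<H: Hasher>(&self, state: &mut H) {
        // `&{ ... }` copies each field into an aligned temporary before
        // hashing it by reference; the `Copy` bounds make those copies legal.
        Hash::hash(&{ self.0 }, state);
        Hash::hash(&{ self.1 }, state);
    }
}
```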
We can now also derive builtin traits for packed structs that don't derive
`Copy`, so long as the fields impl `Copy`:
```
#[derive(Hash)]
#[repr(packed)]
pub struct Foo(u32);
```
This includes types that hand-impl `Copy` rather than deriving it, such as the
following, which shows up in winapi-0.2:
```
#[derive(Clone)]
#[repr(packed)]
struct MyType(i32);
impl Copy for MyType {}
```
The new approach is simpler to understand and implement, and it avoids
the need for the `unsafe_derive_on_repr_packed` check.
One exception is required for backwards-compatibility: we allow `[u8]`
fields for now. There is a new lint for this,
`byte_slice_in_packed_struct_with_derive`.
Previously, a Drop terminator was considered a move in MIR.
This commit changes the behavior to only treat Drop as a mutable
access to the dropped place.
In order for this change to be correct, we need to guarantee that
a) A dropped value won't be used again
b) Places that appear in a drop won't be used again before a
subsequent initialization.
We can ensure this is correct at MIR construction, because Drop
is only emitted when a variable goes out of scope. This gives us:
(a), since there is no way of reaching the old value; drop elaboration
will also remove any uninitialized drops.
(b), since the place can't be named after the end of the scope.
However, the initialization status, previously tracked by moves,
should also be tied to the execution of a Drop, hence the
additional logic in the dataflow analyses.
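A small hypothetical example of guarantees (a) and (b) in action:
```rust
fn example() {
    {
        let s = String::from("hello");
        println!("{s}");
    } // The Drop terminator for `s` runs here; the value is never used again
      // (a), and the binding cannot be named past the end of the scope (b).
}

fn main() {
    example();
}
```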
Implement simple CopyPropagation based on SSA analysis
This PR extracts the "copy propagation" logic from https://github.com/rust-lang/rust/pull/106285.
MIR may produce chains of assignment between locals, like `_x = move? _y`.
This PR attempts to remove such chains by unifying locals.
The current implementation is a bit overzealous in turning moves into copies, and in removing storage statements.
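As a hypothetical illustration, the (unoptimized) MIR for a function like the one below contains a chain of local-to-local assignments, roughly `_2 = _1; _3 = _2; _0 = _3`, which this pass can collapse by unifying the locals:
```rust
fn chain(x: u32) -> u32 {
    let a = x; // roughly `_2 = _1`
    let b = a; // roughly `_3 = _2`
    b          // roughly `_0 = _3`
}

fn main() {
    assert_eq!(chain(5), 5);
}
```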
Use stable metric for const eval limit instead of current terminator-based logic
This patch adds a `MirPass` that inserts a new MIR instruction `ConstEvalCounter` to any loops and function calls in the CFG. This instruction is used during Const Eval to count against the `const_eval_limit`, and emit the `StepLimitReached` error, replacing the current logic which uses Terminators only.
The new method of counting loops and function calls should be more stable across compiler versions (i.e., not cause crates that compiled successfully before, to no longer compile when changes to the MIR generation/optimization are made).
Also see: #103877
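Purely as an illustration (a hypothetical const, not from the test suite), the loop below is the kind of thing the counter measures: every back-edge taken during const evaluation bumps the counter, independent of how the loop body's MIR is optimized.
```rust
const SUM: u64 = {
    let mut i = 0u64;
    let mut acc = 0u64;
    // Each iteration takes the loop's back-edge once, so the counter advances
    // by a stable amount per iteration.
    while i < 100_000 {
        acc += i;
        i += 1;
    }
    acc
};

fn main() {
    println!("{SUM}");
}
```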
InstCombine away intrinsic validity assertions
This optimization (currently) fires 246 times on the standard library. It seems to fire hardly at all on the big crates in the benchmark suite. Interesting.
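For context, a hedged example of where such an assertion shows up (illustrative; the exact MIR shape is an assumption): `mem::zeroed::<u32>()` is guarded by a validity assertion that all-zero bytes are a valid `u32`, which trivially holds and can be optimized away.
```rust
// The zero-validity assertion guarding this call is trivially satisfied for
// `u32`, so the pass can remove it.
fn zeroed_u32() -> u32 {
    unsafe { std::mem::zeroed() }
}

fn main() {
    assert_eq!(zeroed_u32(), 0);
}
```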
This patch adds a `MirPass` that tracks the number of back-edges and
function calls in the CFG, adds a new MIR instruction to increment a
counter every time they are encountered during Const Eval, and emits a
warning if a configured limit is breached.
There are a number of APIs that answer dominance queries. Previously they
were named either "dominates" or "is_dominated_by". Consistently use the
"dominates" form.
No functional changes.
- Eliminates all the `get_context` calls that async lowering created.
- Replaces all `Local` `ResumeTy` types with `&mut Context<'_>`.
The `Local`s that have their types replaced are:
- The `resume` argument itself.
- The argument to `get_context`.
- The yielded value of a `yield`.
The `ResumeTy` hides a `&mut Context<'_>` behind an unsafe raw pointer, and the
`get_context` function is being used to convert that back to a `&mut Context<'_>`.
Ideally the async lowering would not use the `ResumeTy`/`get_context` indirection,
but rather directly use `&mut Context<'_>`, however that would currently
lead to higher-kinded lifetime errors.
See <https://github.com/rust-lang/rust/issues/105501>.
The async lowering step and the type / lifetime inference / checking still
use the `ResumeTy` indirection for the time being; this transform then removes
that indirection, so afterwards the generator body only knows about `&mut Context<'_>`.
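For readers unfamiliar with the indirection, here is a rough, simplified sketch of the two pieces involved (assumptions about their shape, not the actual standard library source):
```rust
use std::ptr::NonNull;
use std::task::Context;

// `ResumeTy` smuggles a `&mut Context<'_>` through a raw pointer with an
// erased lifetime...
struct ResumeTy(NonNull<Context<'static>>);

// ...and `get_context` turns it back into a real mutable reference. The
// transform described above removes the need for this round trip.
unsafe fn get_context<'a, 'b>(cx: ResumeTy) -> &'a mut Context<'b> {
    &mut *cx.0.as_ptr().cast()
}
```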
Always permit ConstProp to exploit arithmetic identities
Fixes https://github.com/rust-lang/rust/issues/72751
Initially, I thought I would need to enable operand propagation then do something else, but actually https://github.com/rust-lang/rust/pull/74491 already has the fix for the issue in question! It looks like this optimization was put under MIR opt level 3 due to possible soundness/stability implications, then demoted further to MIR opt level 4 when MIR opt level 2 became associated with `--release`.
Perhaps in the past we were doing CTFE on optimized MIR? We aren't anymore, so this optimization has no stability implications.
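For concreteness, a hypothetical illustration of the kind of identity involved (not taken from the pass itself): even when `x` is unknown, `x * 0` is always `0` and `x & 0` is always `0`, so the result can be propagated as a constant.
```rust
// Hypothetical example: both results are constant regardless of `x`.
fn identities(x: u32) -> (u32, u32) {
    (x * 0, x & 0)
}

fn main() {
    assert_eq!(identities(123), (0, 0));
}
```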
r? `@oli-obk`
Improve syntax of `newtype_index`
This makes it more like proper Rust and also makes the implementation a lot simpler.
Mostly just turns weird flags in the body into proper attributes.
It should probably also be converted to an attribute macro instead of function-like, but that can be done in a future PR.
Remove the `..` from the body; only a few invocations used it, and it's
inconsistent with Rust syntax.
Use `;` instead of `,` between consts, as the Rust syntax gods intended.
This removes the `custom` format functionality as its only user was
trivially migrated to using a normal format.
If a new use case for a custom formatting impl pops up, you can add it
back.
Custom MIR: Many more improvements
Each commit is an atomic change, best reviewed one at a time, with the exception that the last commit includes all the documentation.
### First commit
Unsafetyck was not correctly disabled before for `dialect = "built"` custom MIR. This is fixed and a regression test is added.
### Second commit
Implements `Discriminant`, `SetDiscriminant`, and `SwitchInt`.
### Third commit
Implements indexing, field, and variant projections.
### Fourth commit
Documents the previous commits and everything else.
There is some amount of weirdness here due to having to beat Rust syntax into cooperating with MIR concepts, but it hopefully should not be too much. All of it is documented.
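For orientation, a minimal custom MIR function looks roughly like this (nightly-only, and the precise `mir!` syntax is the part most likely to shift between versions, so treat it as a sketch):
```rust
#![feature(custom_mir, core_intrinsics)]

use core::intrinsics::mir::*;

// A single basic block: write the argument into the return place, then return.
#[custom_mir(dialect = "built")]
fn identity(x: i32) -> i32 {
    mir!(
        {
            RET = x;
            Return()
        }
    )
}

fn main() {
    assert_eq!(identity(7), 7);
}
```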
r? `@oli-obk`
always check alignment during CTFE
We originally disabled alignment checks because they got in the way -- there are some things we do with the interpreter during CTFE which do not correspond to actually running user-written code but are purely administrative, and we didn't want alignment checks there, so we just disabled them entirely. But with `-Zextra-const-ub-checks` we had to figure out how to disable those alignment checks only for that administrative work while still checking regular code. So now it is easy to enable CTFE alignment checking by default. Let's see what the perf consequences of that are.
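A hypothetical sketch (not from the test suite) of the kind of access this catches: a `u32` read through a pointer that is only 1-aligned, which alignment checking reports as UB during const evaluation instead of silently reading the bytes.
```rust
const MISALIGNED: u32 = unsafe {
    let bytes = [0u8; 8];
    // Offset 1 in a 1-aligned allocation: not suitably aligned for a `u32`.
    let p = &bytes[1] as *const u8 as *const u32;
    *p
};
```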
r? `@oli-obk`
Various cleanups to dest prop
This makes fixing the issues identified in #105577 easier. A couple changes
- Use an enum with names instead of a bool
- Only call `remove_candidates_if` from one place instead of two. Doing it from two places is far too fragile, since any divergence in the behavior between those callsites is likely to be unsound.
- Remove `is_constant`. Right now we only merge locals, so this doesn't do anything, and the logic would be wrong if it did.
r? `@tmiasko`
Combine `ty::Projection` and `ty::Opaque` into `ty::Alias`
Implements https://github.com/rust-lang/types-team/issues/79.
This PR consolidates `ty::Projection` and `ty::Opaque` into a single `ty::Alias`, with an `AliasKind` and `AliasTy` type (renamed from `ty::ProjectionTy`, which is the inner data of `ty::Projection`) defined as so:
```
enum AliasKind {
    Projection,
    Opaque,
}

struct AliasTy<'tcx> {
    def_id: DefId,
    substs: SubstsRef<'tcx>,
}
```
Since we don't have access to `TyCtxt` in type flags computation, and because repeatedly looking up the `DefKind` of the def-id is expensive, the two kinds of alias are distinguished with `ty::AliasKind`, whose variants are conveniently glob-imported as `ty::{Projection, Opaque}`. For example:
```diff
match ty.kind() {
-    ty::Opaque(..) => {}
+    ty::Alias(ty::Opaque, ..) => {}
    _ => {}
}
```
This PR also consolidates match arms that treated `ty::Opaque` and `ty::Projection` identically.
r? `@ghost`
Don't require owned data in `MaybeStorageLive`
Small improvement that avoids a clone. I don't expect this to have any noticeable perf effects, but better to have it than not to.
r? ``@tmiasko``
compiler: remove unnecessary imports and qualified paths
Some of these imports were necessary before Edition 2021; others were already in the prelude.
I hope it's fine that this PR is so spread-out across files :/
make retagging work even with 'unstable' places
This is based on top of https://github.com/rust-lang/rust/pull/105301. Only the last two commits are new.
While investigating https://github.com/rust-lang/unsafe-code-guidelines/issues/381 I realized that we would have caught this issue much earlier if the add_retag pass didn't bail out on assignments of the form `*ptr = ...`.
So this PR changes our retag strategy:
- When a new reference is created via `Rvalue::Ref` (or a raw ptr via `Rvalue::AddressOf`), we do the retagging as part of just executing that address-taking operation.
- For everything else, we still insert retags -- these retags basically serve to ensure that references stored in local variables (and their fields) are always freshly tagged, so skipping this for assignments like `*ptr = ...` is less egregious.
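As a hypothetical illustration of the two cases (made-up example, not from the test suite):
```rust
fn example(ptr: *mut i32, x: &mut i32) -> &mut i32 {
    // Assignment through an "unstable" place: no retag statement is inserted
    // for `*ptr`.
    unsafe { *ptr = 1 };
    // Creating a new reference (`Rvalue::Ref`): retagging happens as part of
    // executing this address-taking operation.
    let r = &mut *x;
    r
}

fn main() {
    let mut a = 0;
    let mut b = 0;
    *example(&mut a, &mut b) += 1;
    assert_eq!((a, b), (1, 1));
}
```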
r? ```@oli-obk```
Re-enable removal of ZST writes to unions
This was previously disabled because Miri was lazily allocating unsized locals. But we aren't doing that anymore since https://github.com/rust-lang/rust/pull/98831, so we can have this optimization back.
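A small illustrative example (hypothetical) of the statement being removed: a zero-sized store into a union field cannot change any bytes of the union, so it can be dropped.
```rust
union U {
    unit: (),
    byte: u8,
}

fn write_zst(u: &mut U) {
    // Zero-sized write: a candidate for removal.
    u.unit = ();
}

fn main() {
    let mut u = U { byte: 1 };
    write_zst(&mut u);
    // The write above touched zero bytes, so `byte` is unchanged.
    assert_eq!(unsafe { u.byte }, 1);
}
```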
feed resolver_for_lowering instead of storing it in a field
r? `@cjgillot`
opening this as
* a discussion for `no_hash` + `feedable` queries. I think we'll want those, but I don't quite understand why they are rejected beyond a double check of the stable hashes for situations where the query is fed but also read from incremental caches.
* and a discussion on removing all untracked fields from TyCtxt and setting it up so that they are fed queries instead
Disable top down MIR inlining
The current MIR inliner has exponential behavior in some cases: <https://godbolt.org/z/7jnWah4fE>. The cause of this is top-down inlining, where we repeatedly do inlining like `call_a() => { call_b(); call_b(); }`. Each decision on its own seems to make sense, but the result is exponential.
Disabling top-down inlining fundamentally prevents this. Each call site in the original, unoptimized source code is now considered for inlining exactly one time, which means that the total growth in MIR size is limited to the number of call sites times the inlining threshold.
Top down inlining may be worth re-introducing at some point, but it needs to be accompanied with a principled way to prevent this kind of behavior.
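A sketch of the exponential pattern (hypothetical code, the same shape as the godbolt link): inlining `b` into `a` duplicates the calls to `c`, and considering those duplicated calls again doubles the inlined code at each level of such a chain.
```rust
fn c() -> u32 { 1 }
fn b() -> u32 { c() + c() }
fn a() -> u32 { b() + b() }

fn main() {
    println!("{}", a());
}
```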
This fixes a number of correctness issues from the previous version. Additionally, we use a new
strategy which has much better performance characteristics and also finds more opportunities to
apply the optimization.
Refine `instruction_set` MIR inline rules
Previously an exact match of the `instruction_set` attribute was required for a MIR inline to be considered. This change checks for an exact match *only* if the callee sets an `instruction_set` in the first place. When the callee does not declare an instruction set, it is considered to be platform-agnostic code and is allowed to be inlined into the caller.
cc ``@oli-obk``
[Edit] Zulip Context: https://rust-lang.zulipchat.com/#narrow/stream/189540-t-compiler.2Fwg-mir-opt/topic/What.20exactly.20does.20the.20MIR.20optimizer.20do.3F
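A hypothetical illustration (this only builds when targeting 32-bit Arm, where the attribute is meaningful): `callee` declares no instruction set, so under the refined rule it counts as platform-agnostic and may be inlined into `caller`.
```rust
#[instruction_set(arm::a32)]
fn caller() -> u32 {
    // `callee` has no `instruction_set` attribute, so it is eligible for
    // inlining here; a callee pinned to a *different* instruction set would
    // still be rejected.
    callee()
}

fn callee() -> u32 {
    42
}

fn main() {
    let _ = caller();
}
```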
Make rustc_target usable outside of rustc
I'm working on showing type sizes in rust-analyzer (https://github.com/rust-lang/rust-analyzer/pull/13490) and I currently copied rustc code into rust-analyzer, which works, but is bad. With this change, I would be able to use `rustc_target` and `rustc_index` directly in r-a, reducing the amount of copying needed.
This PR puts some nightly-only features behind feature flags to make the crates buildable on the stable compiler, makes layout-related types generic over the index type, and removes interning of nested layouts.
Previously, async constructs would be lowered to "normal" generators,
with an additional `from_generator` / `GenFuture` shim in between to
convert from `Generator` to `Future`.
The compiler will now special-case these generators internally so that
async constructs will *directly* implement `Future` without the need
to go through the `from_generator` / `GenFuture` shim.
The primary motivation for this change was hiding this implementation
detail in stack traces and debuginfo, but it can in theory also help
the optimizer, as there are fewer abstractions to see through.
nll: correctly deal with bivariance
Fixes #104409.
When in a bivariant context, relating types should always trivially succeed. This also changes the MIR validator to correctly deal with higher-ranked regions.
r? types cc ``@RalfJung``
Record `LocalDefId` in HIR nodes instead of a side table
This is part of an attempt to remove the `HirId -> LocalDefId` table from HIR.
This attempt is a prerequisite to creation of `LocalDefId` after HIR lowering (https://github.com/rust-lang/rust/pull/96840), by controlling how `def_id` information is accessed.
This first part adds the information to HIR nodes themselves instead of a table.
The second part is https://github.com/rust-lang/rust/pull/103902
The third part will be to make `hir::Visitor::visit_fn` take a `LocalDefId` as last parameter.
The fourth part will be to completely remove the side table.
Perform simple scalar replacement of aggregates (SROA) MIR opt
This is a re-open of https://github.com/rust-lang/rust/pull/85796
I copied the debuginfo implementation (first commit) from `@eddyb's` own SROA PR.
This pass replaces plain field accesses by simple locals when possible.
To be eligible, the replaced locals:
- must not be enums or unions;
- must not be used whole;
- must not have their address taken.
The storage and deinit statements are duplicated on each created local.
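A rough illustration (hypothetical; whether this exact case is transformed depends on the pass's precise rules) of the kind of local SROA targets: `pair` is a plain struct that is only read field by field, is not an enum or union, and never has its address taken, so its fields can be replaced by independent locals.
```rust
struct Pair {
    a: u32,
    b: u32,
}

fn sum(x: u32, y: u32) -> u32 {
    let pair = Pair { a: x, b: y };
    pair.a + pair.b
}

fn main() {
    assert_eq!(sum(2, 3), 5);
}
```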
cc `@tmiasko` who reviewed the former version of this PR.
interpret: support for per-byte provenance
Also factors the provenance map into its own module.
The third commit does the same for the init mask. I can move it into a separate PR if you prefer.
Fixes https://github.com/rust-lang/miri/issues/2181
r? `@oli-obk`
Add new MIR constant propagation based on dataflow analysis
The current constant propagation in `rustc_mir_transform/src/const_prop.rs` fails to handle many cases that would be expected from a constant propagation optimization. For example:
```rust
let x = if true { 0 } else { 0 };
```
This pull request adds a new constant propagation MIR optimization pass based on the existing dataflow analysis framework. Since most of the analysis is not unique to constant propagation, a generic framework has been extracted. It works on top of the existing framework and could be reused for other optimizations.
Closes #80038. Closes #81605.
## Todo
### Essential
- [x] [Writes to inactive enum variants](https://github.com/rust-lang/rust/pull/101168#pullrequestreview-1089493974). Resolved by rejecting the registration of places with downcast projections for now. Could be improved by flooding other variants if mutable access to a variant is observed.
- [X] Handle [`StatementKind::CopyNonOverlapping`](https://github.com/rust-lang/rust/pull/101168#discussion_r957774914). Resolved by flooding the destination.
- [x] Handle `UnsafeCell` / `!Freeze` correctly.
- [X] Overflow propagation of `CheckedBinaryOp`: Decided to not propagate if overflow flag is `true` (`false` will still be propagated)
- [x] More documentation in general.
- [x] Arguments for correctness, documentation of necessary assumptions.
- [x] Better performance, or alternatively, require `-Zmir-opt-level=3` for now.
### Extra
- [x] Add explicit unreachability, i.e. upgrading the lattice from $\mathbb{P} \to \mathbb{V}$ to $\{\bot\} \cup (\mathbb{P} \to \mathbb{V})$.
- [x] Use storage statements to improve precision.
- [ ] Consider opening issue for duplicate diagnostics: https://github.com/rust-lang/rust/pull/101168#issuecomment-1276609950
- [ ] Flood moved-from places with $\bot$ (requires some changes for places with tracked projections).
- [ ] Add downcast projections back in.
- [ ] [Algebraic simplifications](https://github.com/rust-lang/rust/pull/101168#discussion_r957967878) (possibly with a shared API; done by old const prop).
- [ ] Propagation through slices / arrays.
- [ ] Find other optimizations that are done by old `const_prop.rs`, but not by this one.
Add mir building test directory
The first commit renames `mir-map.0` mir dumps to `built.after` dumps. I am happy to drop this commit if someone can explain the origin of the name.
The second commit moves a bunch of mir building tests into their own directory. I did my best to make sure that all of these tests are actually testing mir building, and not just incidentally using `built.after`
r? ``@oli-obk``