Box AssertKind
r? `@nnethercote` this feels like your kind of thing
I want to add a new variant to `AssertKind` that needs 3 operands, and that ends up breaking a bunch of size assertions. So... what if we go the opposite direction first; shrinking `AssertKind` by boxing it?
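For a sense of the payoff, a minimal sketch with toy types (nothing here is rustc's real `AssertKind`): boxing the message keeps the type that embeds it small, no matter how wide new variants get.

```rust
use std::mem::size_of;

// Toy stand-ins, not rustc's actual types.
#[allow(dead_code)]
enum AssertKind {
    // A hypothetical variant carrying three wide operands inflates the enum.
    NewThreeOperandCheck { a: u128, b: u128, c: u128 },
    OverflowNeg(u128),
}

// Embedding the message inline makes the containing type grow with it...
#[allow(dead_code)]
struct AssertInline {
    msg: AssertKind,
    target: u32,
}

// ...while boxing it keeps the containing type small regardless.
#[allow(dead_code)]
struct AssertBoxed {
    msg: Box<AssertKind>,
    target: u32,
}

fn main() {
    println!("AssertKind:     {} bytes", size_of::<AssertKind>());
    println!("inline message: {} bytes", size_of::<AssertInline>());
    println!("boxed message:  {} bytes", size_of::<AssertBoxed>()); // 16 on x86_64
}
```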
Don't validate constants in const propagation
Validation is neither necessary nor desirable.
The constant validation is already omitted at mir-opt-level >= 3, so there are no changes in MIR test output (the propagation of invalid constants is covered by an existing test in tests/mir-opt/const_prop/invalid_constant.rs).
Previously, when borrowck failed it would taint all promoteds within the MIR
body. An attempt to evaluate the promoteds would subsequently fail with
spurious "note: erroneous constant used". For example:
```console
...
note: erroneous constant used
 --> tests/ui/borrowck/tainted-promoteds.rs:7:9
  |
7 |     a = &0 * &1 * &2 * &3;
  |         ^^
note: erroneous constant used
 --> tests/ui/borrowck/tainted-promoteds.rs:7:14
  |
7 |     a = &0 * &1 * &2 * &3;
  |              ^^
note: erroneous constant used
 --> tests/ui/borrowck/tainted-promoteds.rs:7:19
  |
7 |     a = &0 * &1 * &2 * &3;
  |                   ^^
note: erroneous constant used
 --> tests/ui/borrowck/tainted-promoteds.rs:7:24
  |
7 |     a = &0 * &1 * &2 * &3;
  |                        ^^
```
Borrowck failure doesn't indicate that there is anything wrong with
promoteds. Leave them untainted.
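For reference, a minimal program in the spirit of that test (a hedged reconstruction, not the verbatim test file):

```rust
// Hedged reconstruction of tests/ui/borrowck/tainted-promoteds.rs:
// the assignment fails borrowck (cannot assign twice to immutable `a`),
// yet the four promoted constants created for `&0`, `&1`, `&2`, `&3`
// are valid and should not produce "erroneous constant used" notes.
fn main() {
    let a = 0;
    a = &0 * &1 * &2 * &3;
}
```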
coverage: Don't underflow column number
I noticed this when running coverage on a debug build of rustc. There
may be other places that do this but I'm just fixing the one I hit.
r? `@wesleywiser` `@richkadel`
They're semantically the same, so the backends no longer need to handle the intrinsic, and pointer arithmetic code needs fewer MIR basic blocks.
Switch to `EarlyBinder` for `explicit_item_bounds`
Part of the work to finish https://github.com/rust-lang/rust/issues/105779.
This PR adds `EarlyBinder` to the return type of the `explicit_item_bounds` query and removes `bound_explicit_item_bounds`.
r? `@compiler-errors` (hope it's okay to request you, since you reviewed #110299 and #110498 😃)
Normalize types and consts in MIR opts.
Some passes were using a non-RevealAll param_env, which is needlessly restrictive in mir-opts.
As a drive-by, we normalize all constants, since just normalizing their types is not enough.
Add `intrinsics::transmute_unchecked`
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added with exactly these semantics, before the lang meeting (which I wasn't in) independently expressed interest in it.
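For a feel of the difference, a nightly-only sketch (my own example, not from the PR): a generic, size-dependent conversion that `mem::transmute` rejects at type-check time but `transmute_unchecked` accepts, with the size obligation left to the caller.

```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]
use std::intrinsics::transmute_unchecked;

// mem::transmute would reject this generically ("cannot transmute between
// types of different sizes, or dependently-sized types"); the unchecked
// variant compiles, and the caller must guarantee size_of::<T>() == 1.
unsafe fn into_bytes<T: Copy, const N: usize>(x: [T; N]) -> [u8; N] {
    unsafe { transmute_unchecked(x) }
}

fn main() {
    // SAFETY: i8 and u8 have the same size (1 byte).
    let bytes: [u8; 3] = unsafe { into_bytes([1i8, -2, 3]) };
    println!("{bytes:?}"); // [1, 254, 3]
}
```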
Remove the size of locals heuristic in MIR inlining
This heuristic doesn't necessarily correlate to complexity of the MIR Body. In particular, a lot of straight-line code in MIR tends to never reuse a local, even though any optimizer would effectively reuse the storage or just put everything in registers. So it doesn't even necessarily make sense that this would be a stack size heuristic.
So... what happens if we just delete the heuristic? The benchmark suite improves significantly. Fewer heuristics, better?
r? `@cjgillot`
Run various queries from other queries instead of explicitly in phases
These are just legacy leftovers from when rustc didn't have a query system. While there are more cleanups of this sort that can be done here, I want to land them in smaller steps.
This phased order of query invocations was already a lie, as any query that looks at types (e.g. the wf checks run before) can invoke e.g. const eval which invokes borrowck, which invokes typeck, ...
Evaluate place expression in `PlaceMention`
https://github.com/rust-lang/rust/pull/102256 introduces a `PlaceMention(place)` MIR statement which keeps track of `let _ = place` statements from surface rust, but without semantics.
This PR proposes to change the behaviour of `let _ =` patterns with respect to the borrow-checker to verify that the bound place is live.
Specifically, consider this code:
```rust
let _ = {
    let a = 5;
    &a
};
```
This passes borrowck without error on stable. Meanwhile, replacing `_` by `_: _` or `_p` errors with "error[E0597]: `a` does not live long enough", [see playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=c448d25a7c205dc95a0967fe96bccce8).
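For contrast, the named-binding version mentioned above, which already fails on stable:

```rust
// Does not compile: binding to `_p` extends the borrow past `a`'s scope.
fn main() {
    let _p = {
        let a = 5;
        &a
    }; // error[E0597]: `a` does not live long enough
}
```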
This PR *does not* change how `_` patterns behave with respect to initializedness: it remains ok to bind a moved-from place to `_`.
The relevant test is `tests/ui/borrowck/let_underscore_temporary.rs`. Crater check found no regression.
For consistency, this PR changes Miri to evaluate the place found in `PlaceMention`, and to report any dangling pointers found within it.
r? `@RalfJung`
Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built-in macro `core::mem::offset_of`. Two of the future possibilities are also implemented:
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
cc `@thomcc` (RFC author)
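A usage sketch (my example; depending on the toolchain, `offset_of` and nested field access may still be feature-gated):

```rust
use core::mem::offset_of;

#[allow(dead_code)]
#[repr(C)]
struct Inner {
    a: u8,
    b: u32,
}

#[allow(dead_code)]
#[repr(C)]
struct Outer {
    x: u16,
    inner: Inner,
}

fn main() {
    assert_eq!(offset_of!(Inner, b), 4);       // u8 + 3 bytes of padding
    assert_eq!(offset_of!(Outer, inner.b), 8); // nested field access
    println!("offsets check out");
}
```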
Deduplicate unreachable blocks, for real this time
In https://github.com/rust-lang/rust/pull/106428 (in particular 41eda69516) we noticed that inlining `unreachable_unchecked` can produce duplicate unreachable blocks. So we improved two MIR optimizations: `SimplifyCfg` was given a simplify to deduplicate unreachable blocks, then `InstCombine` was given a combiner to deduplicate switch targets that point at the same block. The problem is that change doesn't actually work.
Our current pass order is
```
SimplifyCfg (does nothing relevant to this situation)
Inline (produces multiple unreachable blocks)
InstCombine (doesn't do anything here, oops)
SimplifyCfg (produces the duplicate SwitchTargets that InstCombine is looking for)
```
So in here, I have factored out the specific function from `InstCombine` and placed it inside the simplify that produces the case it is looking for. This should ensure that it runs in the scenario it was designed for.
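As a toy model of the combiner in question (hypothetical types, nothing rustc-internal): an arm whose target is the same block as `otherwise` is redundant and can be dropped, which is what lets the duplicate unreachable targets collapse.

```rust
// Toy model, not rustc's SwitchTargets: drop switch arms whose target block
// is the same as the `otherwise` target.
#[derive(Debug)]
struct SwitchTargets {
    arms: Vec<(u128, usize)>, // (matched value, target basic block index)
    otherwise: usize,
}

fn dedup_switch_targets(targets: &mut SwitchTargets) {
    let otherwise = targets.otherwise;
    targets.arms.retain(|&(_, bb)| bb != otherwise);
}

fn main() {
    // After unreachable blocks are merged, several arms can end up pointing
    // at the same block as `otherwise`.
    let mut t = SwitchTargets { arms: vec![(0, 2), (1, 5), (2, 5)], otherwise: 5 };
    dedup_switch_targets(&mut t);
    println!("{t:?}"); // arms (1, 5) and (2, 5) are gone
}
```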
Fixes https://github.com/rust-lang/rust/issues/110551
r? `@cjgillot`
Don't allocate on SimplifyCfg/Locals/Const on every MIR pass
Hey! 👋🏾 This is a first PR attempt to see if I could speed up some rustc internals.
Thought process:
```rust
pub struct SimplifyCfg {
    label: String,
}
```
in [compiler/rustc_mir_transform/src/simplify.rs](7908a1d654/compiler/rustc_mir_transform/src/simplify.rs (L39)) fires multiple times per MIR analysis. This means that a string allocation is likely happening on each of these runs, which may add up, as the labels are not lazily allocated or cached between the different passes.
...yes, I know that adding a global static array is probably not the future-proof solution, but I wanted to lob this now as a proof of concept to see if it's worth shaving off a few cycles and then making more robust.
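One allocation-free shape this could take, as a sketch (variant names are illustrative, and the PR itself floats a global static array rather than an enum):

```rust
// Sketch: a fieldless enum naming each call site, with a &'static str lookup,
// so constructing the pass never allocates.
#[allow(dead_code)]
pub enum SimplifyCfg {
    Initial,
    PromoteConsts,
    ElaborateDrops,
    Final,
}

impl SimplifyCfg {
    pub fn name(&self) -> &'static str {
        match self {
            SimplifyCfg::Initial => "SimplifyCfg-initial",
            SimplifyCfg::PromoteConsts => "SimplifyCfg-promote-consts",
            SimplifyCfg::ElaborateDrops => "SimplifyCfg-elaborate-drops",
            SimplifyCfg::Final => "SimplifyCfg-final",
        }
    }
}

fn main() {
    println!("{}", SimplifyCfg::Initial.name());
}
```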
Encode hashes as bytes, not varint
In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than try to apply some varint encoding. This PR adds two new types `Hash64` and `Hash128` which are produced by `StableHasher` and replace every use of storing a `u64` or `u128` that represents a hash.
Distribution of the byte lengths of leb128 encodings, from `x build --stage 2` with `incremental = true`
Before:
```
( 1) 373418203 (53.7%, 53.7%): 1
( 2) 196240113 (28.2%, 81.9%): 3
( 3) 108157958 (15.6%, 97.5%): 2
( 4) 17213120 ( 2.5%, 99.9%): 4
( 5) 223614 ( 0.0%,100.0%): 9
( 6) 216262 ( 0.0%,100.0%): 10
( 7) 15447 ( 0.0%,100.0%): 5
( 8) 3633 ( 0.0%,100.0%): 19
( 9) 3030 ( 0.0%,100.0%): 8
( 10) 1167 ( 0.0%,100.0%): 18
( 11) 1032 ( 0.0%,100.0%): 7
( 12) 1003 ( 0.0%,100.0%): 6
( 13) 10 ( 0.0%,100.0%): 16
( 14) 10 ( 0.0%,100.0%): 17
( 15) 5 ( 0.0%,100.0%): 12
( 16) 4 ( 0.0%,100.0%): 14
```
After:
```
( 1) 372939136 (53.7%, 53.7%): 1
( 2) 196240140 (28.3%, 82.0%): 3
( 3) 108014969 (15.6%, 97.5%): 2
( 4) 17192375 ( 2.5%,100.0%): 4
( 5) 435 ( 0.0%,100.0%): 5
( 6) 83 ( 0.0%,100.0%): 18
( 7) 79 ( 0.0%,100.0%): 10
( 8) 50 ( 0.0%,100.0%): 9
( 9) 6 ( 0.0%,100.0%): 19
```
The remaining 9- and 10-byte encodings are `u64`s, and the 18- and 19-byte ones `u128`s, with the high bits set. As far as I can tell these come primarily from `SwitchTargets`.
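To make the varint-vs-fixed tradeoff concrete, a small standalone sketch (my illustration, not code from the PR): hash values have uniformly distributed bits, so their LEB128 encodings almost always hit the maximum length, while a fixed encoding is always 8 bytes for a `u64`.

```rust
// Length of the unsigned LEB128 encoding of a value: one byte per 7 bits.
fn leb128_len(mut v: u64) -> usize {
    let mut n = 1;
    while v >= 0x80 {
        v >>= 7;
        n += 1;
    }
    n
}

fn main() {
    let small: u64 = 42;                       // typical small integer
    let hash: u64 = 0xDEAD_BEEF_F00D_CAFE;     // typical hash-like value
    println!("leb128({small:#x}) = {} bytes", leb128_len(small)); // 1
    println!("leb128({hash:#x}) = {} bytes", leb128_len(hash));   // 10
    println!("fixed u64 = 8 bytes");
}
```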
Check freeze with right param-env in `deduced_param_attrs`
We're checking if a trait (`Freeze`) holds in a polymorphic function, but not using that function's own (reveal-all) param-env. This causes us to try to eagerly normalize a specializable projection type that has no default value, which causes an ICE.
Fixes #110171
Permit MIR inlining without #[inline]
I noticed that there are at least a handful of portable-simd functions that have no `#[inline]` but compile to an assign + return.
I locally benchmarked inlining thresholds between 0 and 50 in increments of 5, and 50 seems to be the best. Interesting. That didn't include check builds though, ~maybe perf will have something to say about that~.
Perf has little useful to say about this. We generally regress all the check builds, as best as I can tell, due to a number of small codegen changes in a particular hot function in the compiler. Probably this is because we've nudged the inlining outcomes all over, and uses of `#[inline(always)]`/`#[inline(never)]` might need to be adjusted.
This allows us to get rid of the `rustc_const_eval->rustc_borrowck`
dependency edge which was delaying the compilation of borrowck.
The added utils in `rustc_middle` are small and should not affect
compile times there.
Preserve argument indexes when inlining MIR
We store argument indexes on `VarDebugInfo`. Unlike the previous method of relying on the variable index to know whether a variable is an argument, this survives MIR inlining.
We also no longer check if `var.source_info.scope` is the outermost scope. When a function gets inlined, the arguments to the inner function will no longer be in the outermost scope. What we care about, though, is whether they were in the outermost scope prior to inlining, which we know by whether we assigned an argument index.
Fixes #83217
I considered using `Option<NonZeroU16>` instead of `Option<u16>` to store the index. I didn't because `TypeFoldable` isn't implemented for `NonZeroU16` and because it looks like, due to padding, it currently wouldn't make any difference. But I indexed from 1 anyway because (a) it'll make it easier if it later becomes worthwhile to use a `NonZeroU16`, and (b) the arguments were previously indexed from 1, so it made for a smaller change.
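To illustrate that size reasoning (field names made up, not the real `VarDebugInfo`): the niche makes `Option<NonZeroU16>` half the size of `Option<u16>`, but next to a wider field the difference disappears into padding.

```rust
use std::mem::size_of;
use std::num::NonZeroU16;

#[allow(dead_code)]
struct WithU16 {
    arg_index: Option<u16>,
    other: u64,
}

#[allow(dead_code)]
struct WithNonZero {
    arg_index: Option<NonZeroU16>,
    other: u64,
}

fn main() {
    println!("Option<u16>        = {}", size_of::<Option<u16>>());        // 4
    println!("Option<NonZeroU16> = {}", size_of::<Option<NonZeroU16>>()); // 2
    println!("WithU16            = {}", size_of::<WithU16>());            // 16
    println!("WithNonZero        = {}", size_of::<WithNonZero>());        // 16
}
```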
This is my first PR on rust-lang/rust, so apologies if I've gotten anything not quite right.
Make elaboration generic over input
Combines all the `elaborate_*` family of functions into just one, which is an iterator over the same type that you pass in (e.g. elaborating `Predicate` gives `Predicate`s, elaborating `Obligation`s gives `Obligation`s, etc.)
Refactor unwind in MIR
This changes the representation of unwinding from the current `Option<BasicBlock>` into
```rust
enum UnwindAction {
    Continue,
    Cleanup(BasicBlock),
    Unreachable,
    Terminate,
}
```
cc `@JakobDegen` `@RalfJung` `@Amanieu`
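To make the variants concrete, a self-contained sketch with a stand-in `BasicBlock` (meanings paraphrased; not rustc code):

```rust
// Stand-in type; rustc's BasicBlock is an index newtype.
#[derive(Clone, Copy, Debug)]
struct BasicBlock(u32);

#[derive(Clone, Copy, Debug)]
#[allow(dead_code)]
enum UnwindAction {
    Continue,
    Cleanup(BasicBlock),
    Unreachable,
    Terminate,
}

fn on_unwind(action: UnwindAction) -> String {
    match action {
        // Keep unwinding out of this function, into the caller.
        UnwindAction::Continue => "continue unwinding into the caller".into(),
        // Run cleanup code (drops) starting at the given block.
        UnwindAction::Cleanup(bb) => format!("jump to cleanup block {bb:?}"),
        // Unwinding cannot happen here; reaching this is undefined behaviour.
        UnwindAction::Unreachable => "unreachable: unwinding is impossible".into(),
        // Do not unwind further: trigger a nounwind panic (terminate).
        UnwindAction::Terminate => "terminate via a nounwind panic".into(),
    }
}

fn main() {
    println!("{}", on_unwind(UnwindAction::Cleanup(BasicBlock(3))));
    println!("{}", on_unwind(UnwindAction::Terminate));
}
```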
Unify terminology used in unwind action and terminator, and reflect
the fact that a nounwind panic, rather than an immediate abort, is
triggered for this terminator.
Move a const-prop-lint specific hack from mir interpret to const-prop-lint and make it fallible
Fixes #109743
This hack didn't need to live in the mir interpreter. For const-prop-lint it is entirely correct to avoid doing any const prop if normalization fails at this stage. Most likely we couldn't const propagate anything anyway, and if revealing was needed (so opaque types were involved), we wouldn't want to be too smart and leak the hidden type anyway.
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block, using `MirPatch`. (if nobody else backs up this point I'm checking this as "not actually a good idea" because the code looks pretty clean to me after rearranging it a bit)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
r? `@oli-obk`
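For reference, a rough stand-alone sketch of the condition the inserted check encodes, written as ordinary Rust (my illustration, not the generated MIR; the real check only runs with debug assertions enabled, and the panic message wording here is approximate):

```rust
use std::mem::align_of;

// Hand-written version of the condition inserted before a pointer
// dereference: the address must be a multiple of the pointee's alignment.
fn check_aligned<T>(ptr: *const T) {
    let addr = ptr as usize;
    let align = align_of::<T>();
    if addr & (align - 1) != 0 {
        panic!("misaligned pointer dereference: address {addr:#x} must be a multiple of {align:#x}");
    }
}

fn main() {
    let data = [0u32; 2];                         // 4-aligned storage
    let base = data.as_ptr() as *const u8;
    let bad = base.wrapping_add(1) as *const u32; // off by one: misaligned for u32
    check_aligned(data.as_ptr());                 // fine
    check_aligned(bad);                           // panics
}
```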