coverage: Don't underflow column number
I noticed this when running coverage on a debug build of rustc. There
may be other places that do this but I'm just fixing the one I hit.
r? `@wesleywiser` `@richkadel`
They're semantically the same, so this means the backends don't need to handle the intrinsic and means fewer MIR basic blocks in pointer arithmetic code.
Switch to `EarlyBinder` for `explicit_item_bounds`
Part of the work to finish https://github.com/rust-lang/rust/issues/105779.
This PR adds `EarlyBinder` to the return type of the `explicit_item_bounds` query and removes `bound_explicit_item_bounds`.
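As a rough, self-contained sketch of the pattern (these are not rustc's actual types or signatures): wrapping the query result in `EarlyBinder` forces callers to explicitly instantiate the bounds instead of silently using the early-bound parameters.

```rust
// A simplified stand-in for rustc's EarlyBinder; the real type lives in rustc_middle.
struct EarlyBinder<T>(T);

impl<T> EarlyBinder<T> {
    // Instantiate with the identity substitution, i.e. keep the generic params as-is.
    fn subst_identity(self) -> T {
        self.0
    }
}

// Hypothetical stand-in for the query: it now hands back a binder,
// so callers must decide how to instantiate it.
fn explicit_item_bounds() -> EarlyBinder<Vec<&'static str>> {
    EarlyBinder(vec!["Iterator<Item = T>"])
}

fn main() {
    // Callers can no longer forget the binder: they must choose how to instantiate it.
    let bounds = explicit_item_bounds().subst_identity();
    println!("{bounds:?}");
}
```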
r? `@compiler-errors` (hope it's okay to request you, since you reviewed #110299 and #110498 😃)
Normalize types and consts in MIR opts.
Some passes were using a non-RevealAll param_env, which is needlessly restrictive in mir-opts.
As a drive-by, we normalize all constants, since just normalizing their types is not enough.
Add `intrinsics::transmute_unchecked`
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added having exactly these semantics before the lang meeting (which I wasn't in) independently expressed interest.
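For a concrete flavour, here is a minimal sketch (nightly-only, and assuming the intrinsic keeps the same generic signature as `transmute`) of the kind of dependently-sized conversion that `hir_typeck` rejects for `mem::transmute` but that this intrinsic permits, with the size check deferred to monomorphization:

```rust
#![feature(core_intrinsics)]
use core::intrinsics::transmute_unchecked;
use core::mem::MaybeUninit;

/// The sizes of `[MaybeUninit<T>; N]` and `[T; N]` only become comparable once `N`
/// is known, so `mem::transmute` rejects this in `hir_typeck`; `transmute_unchecked`
/// defers the check.
unsafe fn assume_init_array<T, const N: usize>(arr: [MaybeUninit<T>; N]) -> [T; N] {
    // SAFETY: the caller guarantees every element is initialized, and the two
    // array types have identical size and alignment.
    unsafe { transmute_unchecked(arr) }
}

fn main() {
    let arr = [MaybeUninit::new(1u8), MaybeUninit::new(2), MaybeUninit::new(3)];
    let init: [u8; 3] = unsafe { assume_init_array(arr) };
    assert_eq!(init, [1, 2, 3]);
}
```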
Remove the size of locals heuristic in MIR inlining
This heuristic doesn't necessarily correlate with the complexity of the MIR body. In particular, a lot of straight-line code in MIR tends to never reuse a local, even though any optimizer would effectively reuse the storage or just put everything in registers, so it doesn't even necessarily make sense as a stack size heuristic.
So... what happens if we just delete the heuristic? The benchmark suite improves significantly. Fewer heuristics, better?
r? `@cjgillot`
Run various queries from other queries instead of explicitly in phases
These are just legacy leftovers from when rustc didn't have a query system. While there are more cleanups of this sort that can be done here, I want to land them in smaller steps.
This phased order of query invocations was already a lie: any query that looks at types (e.g. the wf checks that run before) can invoke const eval, which invokes borrowck, which invokes typeck, ...
Evaluate place expression in `PlaceMention`
https://github.com/rust-lang/rust/pull/102256 introduces a `PlaceMention(place)` MIR statement which keeps track of `let _ = place` statements from surface Rust, but without any semantics.
This PR proposes to change the behaviour of `let _ =` patterns with respect to the borrow-checker, so that it verifies that the bound place is live.
Specifically, consider this code:
```rust
let _ = {
    let a = 5;
    &a
};
```
This passes borrowck without error on stable. Meanwhile, replacing `_` by `_: _` or `_p` errors with "error[E0597]: `a` does not live long enough", [see playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=c448d25a7c205dc95a0967fe96bccce8).
This PR *does not* change how `_` patterns behave with respect to initializedness: it remains ok to bind a moved-from place to `_`.
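For the initializedness half, a small hedged illustration (not taken from the PR's tests) of code that keeps compiling:

```rust
fn main() {
    let s = String::from("hi");
    let _moved = s; // `s` is moved out here
    let _ = s;      // still accepted: `_` never binds or reads the place,
                    // so no "use of moved value" error is produced
}
```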
The relevant test is `tests/ui/borrowck/let_underscore_temporary.rs`. Crater check found no regression.
For consistency, this PR changes miri to evaluate the place found in `PlaceMention`, and to report any dangling pointers found within it.
r? `@RalfJung`
Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built-in macro `core::mem::offset_of`. Two of the future possibilities are also implemented:
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
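A minimal usage sketch (nightly, under the `offset_of` feature gate; the dot-separated nested syntax is assumed from the RFC, and `repr(C)` is used so the exact offsets are predictable):

```rust
#![feature(offset_of)]
use core::mem::offset_of;

#[repr(C)]
struct Inner {
    a: u8,
    b: u32,
}

#[repr(C)]
struct Outer {
    x: u16,
    inner: Inner,
}

fn main() {
    assert_eq!(offset_of!(Outer, x), 0);
    // Nested field access, one of the implemented future possibilities.
    assert_eq!(offset_of!(Outer, inner.b), 8);
}
```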
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
cc `@thomcc` (RFC author)
Deduplicate unreachable blocks, for real this time
In https://github.com/rust-lang/rust/pull/106428 (in particular 41eda69516) we noticed that inlining `unreachable_unchecked` can produce duplicate unreachable blocks. So we improved two MIR optimizations: `SimplifyCfg` was given a simplify to deduplicate unreachable blocks, then `InstCombine` was given a combiner to deduplicate switch targets that point at the same block. The problem is that change doesn't actually work.
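As a hedged sketch of the kind of source that triggers this (assuming the shape of the code in the linked issue): after `unreachable_unchecked` is inlined, each arm becomes its own `Unreachable` block, and the `SwitchInt` ends up with several targets pointing at what should be a single block.

```rust
pub fn demo(x: u32) -> u32 {
    match x {
        0 => 10,
        1 => 11,
        // Each call inlines to its own unreachable block; after SimplifyCfg merges
        // them, the switch still lists the duplicates until its targets are deduplicated.
        2 => unsafe { std::hint::unreachable_unchecked() },
        _ => unsafe { std::hint::unreachable_unchecked() },
    }
}
```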
Our current pass order is
```
SimplifyCfg (does nothing relevant to this situation)
Inline (produces multiple unreachable blocks)
InstCombine (doesn't do anything here, oops)
SimplifyCfg (produces the duplicate SwitchTargets that InstCombine is looking for)
```
So here, I have factored the relevant function out of `InstCombine` and placed it inside the simplify pass that produces the case it is looking for. This should ensure that it runs in the scenario it was designed for.
Fixes https://github.com/rust-lang/rust/issues/110551
r? `@cjgillot`
Don't allocate on SimplifyCfg/Locals/Const on every MIR pass
Hey! 👋🏾 This is a first PR attempt to see if I could speed up some rustc internals.
Thought process:
```rust
pub struct SimplifyCfg {
    label: String,
}
```
in [compiler/rustc_mir_transform/src/simplify.rs](7908a1d654/compiler/rustc_mir_transform/src/simplify.rs (L39)) fires multiple times per MIR analysis. This means a string allocation likely happens on each of these runs, which can add up, since the labels are neither lazily allocated nor cached between the different passes.
...yes, I know that adding a global static array is probably not the future-proof solution, but I wanted to lob this now as a proof of concept to see if it's worth shaving off a few cycles and then making more robust.
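One possible shape of a fix, sketched here as an assumption rather than what this PR ultimately does: make the pass an enum whose label is a `&'static str`, so constructing it never allocates.

```rust
// Hypothetical variants; the real pass has more of them.
#[allow(dead_code)]
pub enum SimplifyCfg {
    Initial,
    PromoteConsts,
    RemoveFalseEdges,
}

impl SimplifyCfg {
    // Static strings instead of a fresh String allocation on every construction.
    pub fn name(&self) -> &'static str {
        match self {
            SimplifyCfg::Initial => "SimplifyCfg-initial",
            SimplifyCfg::PromoteConsts => "SimplifyCfg-promote-consts",
            SimplifyCfg::RemoveFalseEdges => "SimplifyCfg-remove-false-edges",
        }
    }
}

fn main() {
    println!("{}", SimplifyCfg::PromoteConsts.name());
}
```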
Encode hashes as bytes, not varint
In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than try to apply some varint encoding. This PR adds two new types `Hash64` and `Hash128` which are produced by `StableHasher` and replace every use of storing a `u64` or `u128` that represents a hash.
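A hedged sketch of the idea (heavily simplified; the real types plug into rustc's `Encodable`/`Decodable` machinery): a hash is uniformly distributed, so LEB128 almost always needs the maximum byte count plus continuation overhead, whereas a dedicated newtype can just emit the raw fixed-width bytes.

```rust
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
pub struct Hash64 {
    inner: u64,
}

impl Hash64 {
    // Fixed-width encoding: always exactly 8 bytes, no varint continuation bits.
    pub fn encode(self, out: &mut Vec<u8>) {
        out.extend_from_slice(&self.inner.to_le_bytes());
    }

    pub fn decode(bytes: [u8; 8]) -> Self {
        Hash64 { inner: u64::from_le_bytes(bytes) }
    }
}

fn main() {
    let h = Hash64 { inner: 0xDEAD_BEEF_DEAD_BEEF };
    let mut buf = Vec::new();
    h.encode(&mut buf);
    assert_eq!(buf.len(), 8); // LEB128 would need 10 bytes for this value
    assert_eq!(Hash64::decode(buf.try_into().unwrap()), h);
}
```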
Distribution of the byte lengths of leb128 encodings, from `x build --stage 2` with `incremental = true`
Before:
```
( 1) 373418203 (53.7%, 53.7%): 1
( 2) 196240113 (28.2%, 81.9%): 3
( 3) 108157958 (15.6%, 97.5%): 2
( 4) 17213120 ( 2.5%, 99.9%): 4
( 5) 223614 ( 0.0%,100.0%): 9
( 6) 216262 ( 0.0%,100.0%): 10
( 7) 15447 ( 0.0%,100.0%): 5
( 8) 3633 ( 0.0%,100.0%): 19
( 9) 3030 ( 0.0%,100.0%): 8
( 10) 1167 ( 0.0%,100.0%): 18
( 11) 1032 ( 0.0%,100.0%): 7
( 12) 1003 ( 0.0%,100.0%): 6
( 13) 10 ( 0.0%,100.0%): 16
( 14) 10 ( 0.0%,100.0%): 17
( 15) 5 ( 0.0%,100.0%): 12
( 16) 4 ( 0.0%,100.0%): 14
```
After:
```
( 1) 372939136 (53.7%, 53.7%): 1
( 2) 196240140 (28.3%, 82.0%): 3
( 3) 108014969 (15.6%, 97.5%): 2
( 4) 17192375 ( 2.5%,100.0%): 4
( 5) 435 ( 0.0%,100.0%): 5
( 6) 83 ( 0.0%,100.0%): 18
( 7) 79 ( 0.0%,100.0%): 10
( 8) 50 ( 0.0%,100.0%): 9
( 9) 6 ( 0.0%,100.0%): 19
```
The remaining 9- or 10-byte and 18- or 19-byte encodings are `u64` and `u128` values, respectively, that have their high bits set. As far as I can tell these are coming primarily from `SwitchTargets`.
Check freeze with right param-env in `deduced_param_attrs`
We're checking if a trait (`Freeze`) holds in a polymorphic function, but not using that function's own (reveal-all) param-env. This causes us to try to eagerly normalize a specializable projection type that has no default value, which causes an ICE.
Fixes #110171
Permit MIR inlining without #[inline]
I noticed that there are at least a handful of portable-simd functions that have no `#[inline]` but compile to an assign + return.
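The sort of function meant here, shown as an assumed illustration rather than a specific portable-simd example: no `#[inline]`, yet the MIR body is essentially one assignment plus a return, so it is a cheap and obvious candidate for the MIR inliner.

```rust
pub struct Pixel {
    pub r: u8,
    pub g: u8,
    pub b: u8,
}

// The MIR body is essentially `_0 = (_1.0: u8); return;`, just an assignment and a return.
pub fn red(p: Pixel) -> u8 {
    p.r
}

fn main() {
    assert_eq!(red(Pixel { r: 7, g: 0, b: 0 }), 7);
}
```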
I locally benchmarked inlining thresholds between 0 and 50 in increments of 5, and 50 seems to be the best. Interesting. That didn't include check builds though, ~maybe perf will have something to say about that~.
Perf has little useful to say about this. We generally regress all the check builds, as best as I can tell, due to a number of small codegen changes in a particular hot function in the compiler. Probably this is because we've nudged the inlining outcomes all over, and uses of `#[inline(always)]`/`#[inline(never)]` might need to be adjusted.