Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.
As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.
This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
The actual motivation here is to prevent `rustfmt` from suddenly reformatting
these enum variants onto a single line, when they become slightly shorter in
the future.
But there's no harm in adding some helpful documentation at the same time.
Normalize the RHS of an `Unsize` goal in the new solver
`Unsize` goals are... tricky. Not only do they structurally match on their self type, but they're also structural on their other type parameter. I'm pretty certain that it is both incomplete and also just plain undesirable to not consider normalizing the RHS of an unsize goal. More practically, I'd like for this code to work:
```rust
trait A {}
trait B: A {}
impl A for usize {}
impl B for usize {}
trait Mirror {
    type Assoc: ?Sized;
}
impl<T: ?Sized> Mirror for T {
    type Assoc = T;
}
fn main() {
    // usize: Unsize<dyn B>
    let x = Box::new(1usize) as Box<<dyn B as Mirror>::Assoc>;
    // dyn B: Unsize<dyn A>
    let y = x as Box<<dyn A as Mirror>::Assoc>;
}
```
---
In order to achieve this, we add `EvalCtxt::normalize_non_self_ty` (naming modulo bikeshedding), which *must* be used for all non-self type arguments that are structurally matched in candidate assembly. Currently this is only necessary for `Unsize`'s argument, but I could see other traits requiring it (hopefully rarely) in the future. It uses `repeat_while_none` to limit infinite looping, and normalizes the non-self type until it is no longer an alias.
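As a self-contained sketch of that normalization loop (toy `Ty` type and helper names are mine, not the actual `EvalCtxt` machinery): keep stripping alias layers while a step limit holds, so a non-terminating projection becomes "no result" instead of an infinite loop.
```rust
// Toy model only: `Ty` and the helper here are illustrative, not rustc's types.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Concrete(&'static str),
    // e.g. `<T as Mirror>::Assoc`, modeled as wrapping the type it normalizes to
    Alias(Box<Ty>),
}

// Normalize until the type is no longer an alias, with bounded fuel
// (playing the role of `repeat_while_none`'s loop limit).
fn normalize_until_non_alias(mut ty: Ty, mut fuel: u32) -> Option<Ty> {
    loop {
        match ty {
            Ty::Alias(inner) if fuel > 0 => {
                fuel -= 1;
                ty = *inner; // one normalization step
            }
            Ty::Alias(_) => return None, // out of fuel: treat as ambiguity/overflow
            concrete => return Some(concrete),
        }
    }
}

fn main() {
    let rhs = Ty::Alias(Box::new(Ty::Concrete("dyn B")));
    assert_eq!(normalize_until_non_alias(rhs, 8), Some(Ty::Concrete("dyn B")));
}
```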
Also, we need to fix feature gate detection for `trait_upcasting` and `unsized_tuple_coercion` when HIR typeck has unnormalized types. We can do that by checking the `ImplSource` returned by selection, which necessitates adding a new impl source for tuple upcasting.
Use u64 for incr comp allocation offsets
Fixes https://github.com/rust-lang/rust/issues/76037
Fixes https://github.com/rust-lang/rust/issues/95780
Fixes https://github.com/rust-lang/rust/issues/111613
These issues are all reporting ICEs caused by using `u32` to store offsets to allocations in the incremental compilation cache. This PR aims to lift that limitation by changing the offset type in question to `u64`.
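For a sense of the arithmetic (a hand-written illustration, not code from the PR): a 2^30-element `i32` array like the one in the regression test below occupies 4 GiB, so byte offsets into it no longer fit in a `u32`:
```rust
fn main() {
    // 2^30 elements of i32 is 4 GiB of allocation bytes.
    let bytes: u64 = (1u64 << 30) * std::mem::size_of::<i32>() as u64;
    assert_eq!(bytes, 1 << 32);       // 4 GiB in total
    assert!(bytes > u32::MAX as u64); // so offsets near the end overflow a u32
    // ...whereas a u64 offset has plenty of headroom.
}
```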
There are two perf runs in this PR. The first reports a regression, and the second does not. The changes are the same in both. I rebased the PR and then did the second perf run because I noticed that the primary regression in the first run is one that very commonly shows up in spurious regression reports.
I do not know what the perf run will report when this is merged. I would not be surprised to see a regression or a neutral result, but the cachegrind diffs for the regression point at `try_mark_previous_green`, which is a common source of inexplicable regressions and which I don't think should be perturbed by this PR.
I'm not opposed to adding a regression test such as
```rust
fn main() {
    println!("{}", [37; 1 << 30].len());
}
```
But that program takes 1 minute to compile, consumes 4.6 GB of memory, and then writes that much to disk. Is that a concerning amount of resource use for a test?
r? `@nnethercote`
This means we call `MonoItem::size_estimate` (which involves a query)
less often: just once per mono item, and then once more per inline item
placement. After that we can reuse the stored value as necessary. This
means `CodegenUnit::compute_size_estimate` is cheaper.
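A rough sketch of the caching pattern this describes (illustrative stand-ins, not rustc's actual `MonoItem`/`CodegenUnit` definitions): the per-item estimate is computed once via the query, stored, and then summed cheaply whenever the CGU size needs recomputing:
```rust
// Illustrative only: stand-ins for the real MonoItem / CodegenUnit types.
struct MonoItemData {
    size_estimate: usize, // computed once (the expensive, query-backed part)
}

struct CodegenUnit {
    items: Vec<MonoItemData>,
    size_estimate: usize, // plain usize rather than Option<usize>
}

impl CodegenUnit {
    // Cheap: reuses the stored per-item estimates instead of re-querying.
    fn compute_size_estimate(&mut self) {
        self.size_estimate = self.items.iter().map(|i| i.size_estimate).sum();
    }
}

fn main() {
    let mut cgu = CodegenUnit {
        items: vec![MonoItemData { size_estimate: 5 }, MonoItemData { size_estimate: 7 }],
        size_estimate: 0,
    };
    cgu.compute_size_estimate();
    assert_eq!(cgu.size_estimate, 12);
}
```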
It makes it sound like the `ExprKind` and `Rvalue` are supposed to represent all pointer-related
casts, when in reality they're just used to share some enum variants. Make it clear that these
are only coercions, to explain why only some pointer-related "casts" are in the enum.
Move `TyCtxt::mk_x` to `Ty::new_x` where applicable
Part of rust-lang/compiler-team#616
turns out there are a lot of places where we construct `Ty`, so this is a ridiculously huge PR :S
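To illustrate the shape of the change (placeholder `x` and toy types, not the actual rustc signatures), a constructor that used to be a method on the context becomes an associated function on `Ty` that takes the context explicitly:
```rust
// Toy stand-ins for TyCtxt and Ty, just to show the before/after call shape.
struct TyCtxt;
struct Ty;

impl TyCtxt {
    // Before: constructors hang off the context, `tcx.mk_x(...)`.
    fn mk_x(&self) -> Ty {
        Ty
    }
}

impl Ty {
    // After: constructors are associated functions, `Ty::new_x(tcx, ...)`.
    fn new_x(_tcx: &TyCtxt) -> Ty {
        Ty
    }
}

fn main() {
    let tcx = TyCtxt;
    let _old_style = tcx.mk_x();
    let _new_style = Ty::new_x(&tcx);
}
```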
r? `@oli-obk`
Specialize `try_destructure_mir_constant` for its sole user (pretty printing)
We can't remove the query, as we need to invoke it from rustc_middle, but can only implement it in mir interpretation/const eval.
r? `@RalfJung` for a first round.
While we could move all the logic into pretty printing, that would end up duplicating a bit of code with const eval, which doesn't seem great either.
Make simd_shuffle_indices use valtrees
This removes the second-to-last user of the `destructure_mir_constant` query. So in a follow-up we can remove the query and just move the query provider function directly into pretty printing (which is the last user).
cc `@rust-lang/clippy`: there's a small functional change, but I think it is correct?
Make `UnwindAction::Continue` explicit in MIR dump
Makes it easier to spot unwinding-related issues in MIR by making `UnwindAction::Continue` explicit, just like all other `UnwindAction`s.
- Rename `create_size_estimate` as `compute_size_estimate`, because that
makes more sense for the second and subsequent calls for each CGU.
- Change `CodegenUnit::size_estimate` from `Option<usize>` to `usize`.
We can still assert that `compute_size_estimate` is called first.
- Move the size estimation for `place_mono_items` inside the function,
for consistency with `merge_codegen_units`.
CGU merging relies on CGU sizes, but the CGU sizes before
inlining aren't accurate.
This requires tweaking how the sizes are updated during merging: if CGU
A and B both have an inlined function F, then `size(A + B)` will be a
little less than `size(A) + size(B)`, because `A + B` will only have one
copy of F. Also, the minimum CGU size is increased because it now has to
account for inlined functions.
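A toy illustration of that bookkeeping (invented names and sizes, assuming a CGU's estimate is just the sum of its items' sizes): an inlined item shared by both CGUs is counted once in the merged unit, so the merged estimate comes out a little below the plain sum:
```rust
use std::collections::HashMap;

// Merge two CGUs' item-size maps; an inlined item present in both only
// contributes its size once to the merged estimate.
fn merged_size(a: &HashMap<&str, usize>, b: &HashMap<&str, usize>) -> usize {
    let mut items = a.clone();
    for (name, size) in b {
        items.entry(*name).or_insert(*size);
    }
    items.values().sum()
}

fn main() {
    // Both CGUs carry an inlined copy of `f` (size 10).
    let a = HashMap::from([("root_a", 100), ("f", 10)]);
    let b = HashMap::from([("root_b", 50), ("f", 10)]);
    // size(A + B) = 160, a little less than size(A) + size(B) = 170.
    assert_eq!(merged_size(&a, &b), 160);
}
```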
This change doesn't have much effect on compile perf, but it makes
follow-on changes that involve more sophisticated reasoning about CGU
sizes much easier.