Refactor query system to maintain a global job id counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches the backing storage from a plain
`u64` to a `Cell<u64>`, but the former was already inside a `RefCell`, so it's
not really a significant change). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, if this becomes worrisome, we can imagine changing
the counter to have a thread-local component or some similar structure.
The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.
A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
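As a rough sketch of the new shape (names and details here are illustrative, not the actual rustc types), the job id is now just a wrapper around a `u64` handed out by a single counter:
```
use std::cell::Cell;

/// Illustrative stand-in for the simplified job id: just a u64.
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct QueryJobId(u64);

struct JobIdCounter {
    // A `Cell<u64>` is enough for the non-parallel compiler; a parallel
    // build would want an atomic (e.g. `AtomicU64`) or a thread-local
    // scheme instead.
    next: Cell<u64>,
}

impl JobIdCounter {
    fn new() -> Self {
        JobIdCounter { next: Cell::new(1) }
    }

    fn next_job_id(&self) -> QueryJobId {
        let id = self.next.get();
        self.next.set(id + 1);
        QueryJobId(id)
    }
}
```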
Lazy type-alias-impl-trait
Previously opaque types were processed by
1. replacing all mentions of them with inference variables
2. recording these inference variables in a side table
3. at the end of typeck, resolving the inference variables in the side table and using the resolved type as the hidden type of the opaque type
This worked okayish for `impl Trait` in return position, but required lots of roundabout type inference hacks and processing.
This PR instead stops this process of replacing opaque types with inference variables, and just keeps the opaque types around.
Whenever an opaque type `O` is compared with another type `T`, we make the comparison succeed and record `T` as the hidden type. If `O` is compared to `U` while there is a recorded hidden type for it, we grab the recorded type (`T`) and compare that against `U`. This makes implementing
* https://github.com/rust-lang/rfcs/pull/2515
much simpler (previous attempts with the inference-based scheme were very prone to ICEs and general misbehaviour that could only be explained by random implementation-defined oddities).
r? `@nikomatsakis`
Fixes #93411, fixes #88236 by using an opaque type obligation to bubble up comparisons between opaque types and other types.
Also uses proper obligation causes so that the body id works, because for some reason NLL uses body ids for logic instead of just diagnostics.
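A heavily simplified sketch of the comparison logic described above (the types and names here are hypothetical, not the actual trait-solver code):
```
use std::collections::HashMap;

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct OpaqueTypeKey(u32);

#[derive(Clone, PartialEq, Debug)]
enum Ty {
    Opaque(OpaqueTypeKey),
    Int,
    Bool,
}

#[derive(Default)]
struct InferCtxt {
    hidden_types: HashMap<OpaqueTypeKey, Ty>,
}

impl InferCtxt {
    // Comparing opaque type `O` (identified by `key`) with another type `t`:
    // the first comparison records `t` as the hidden type; later comparisons
    // are checked against the recorded type.
    fn eq_opaque(&mut self, key: OpaqueTypeKey, t: Ty) -> Result<(), String> {
        match self.hidden_types.get(&key) {
            None => {
                self.hidden_types.insert(key, t);
                Ok(())
            }
            Some(prev) if *prev == t => Ok(()),
            Some(prev) => Err(format!("hidden type mismatch: {:?} vs {:?}", prev, t)),
        }
    }
}
```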
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
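The gist of the interface change, as a hedged before/after sketch (the method name and signatures are illustrative rather than the exact `rustc_serialize` API):
```
// Before: every read threads a Result through the caller.
trait FallibleDecoder {
    type Error;
    fn read_u32(&mut self) -> Result<u32, Self::Error>;
}

// After: reads return the value directly; the opaque impl panics on
// malformed input instead of returning an error.
trait InfallibleDecoder {
    fn read_u32(&mut self) -> u32;
}
```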
Update rayon and rustc-rayon
This updates rayon for various tools and rustc-rayon for the compiler's parallel mode.
- rayon v1.3.1 -> v1.5.1
- rayon-core v1.7.1 -> v1.9.1
- rustc-rayon v0.3.1 -> v0.3.2
- rustc-rayon-core v0.3.1 -> v0.3.2
... and indirectly, this updates all of crossbeam-* to their latest versions.
Fixes #92677 by removing crossbeam-queue, but there's still a lingering question about how tidy discovers "runtime" dependencies. None of this is truly in the standard library's dependency tree at all.
Ensure that `Fingerprint` caching respects hashing configuration
Fixes #92266
In some `HashStable` impls, we use a cache to avoid re-computing
the same `Fingerprint` from the same structure (e.g. an `AdtDef`).
However, the `StableHashingContext` used can be configured to
perform hashing in different ways (e.g. skipping `Span`s). This
configuration information is not included in the cache key,
which will cause an incorrect `Fingerprint` to be used if
we hash the same structure with different `StableHashingContext`
settings.
To fix this, the configuration settings of `StableHashingContext`
are split out into a separate `HashingControls` struct. This
struct is used as part of the cache key, ensuring that our caches
always produce the correct result for the given settings.
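Schematically, the fix looks something like the following (a simplified sketch, not the real rustc types): the hashing configuration travels with the cache key, so the same value hashed under different settings gets its own cached fingerprint.
```
use std::collections::HashMap;

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct HashingControls {
    hash_spans: bool,
}

// Stand-in for rustc's 128-bit Fingerprint.
type Fingerprint = u128;

#[derive(Default)]
struct FingerprintCache {
    // Keyed by (item, hashing configuration) rather than item alone.
    cache: HashMap<(u64, HashingControls), Fingerprint>,
}

impl FingerprintCache {
    fn get_or_compute(
        &mut self,
        item: u64,
        controls: HashingControls,
        compute: impl FnOnce() -> Fingerprint,
    ) -> Fingerprint {
        *self.cache.entry((item, controls)).or_insert_with(compute)
    }
}
```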
With this in place, we now turn off `Span` hashing during the
entire process of computing the hash included in legacy symbols.
This currently has no effect, but will matter when a future PR
starts hashing more `Span`s that we currently skip.
Don't perform any new queries while reading a query result on disk
In addition to being very confusing, this can cause us to add dep node edges between two queries that would not otherwise have an edge.
We now panic if any new dep node edges are created during the deserialization of a query result. This requires serializing the full `AdtDef` to disk, instead of just serializing the `DefId` and invoking the `adt_def` query during deserialization.
I'll probably split this up into several smaller PRs for perf runs.
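Conceptually, the check behaves like the sketch below (the real implementation asserts on dep-graph edge creation rather than using a flag like this; all names here are hypothetical):
```
use std::cell::Cell;

thread_local! {
    static DECODING_QUERY_RESULT: Cell<bool> = Cell::new(false);
}

/// Run `f` as "decoding a query result from disk"; any query started
/// inside would trip the assertion below.
fn with_query_result_decoding<R>(f: impl FnOnce() -> R) -> R {
    DECODING_QUERY_RESULT.with(|flag| {
        let prev = flag.replace(true);
        let result = f();
        flag.set(prev);
        result
    })
}

/// Called (conceptually) at the start of every query execution.
fn assert_not_decoding() {
    DECODING_QUERY_RESULT.with(|flag| {
        assert!(
            !flag.get(),
            "started a new query while deserializing a query result from disk"
        );
    });
}
```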
Remove special-cased stable hashing for HIR module
All other 'containers' (e.g. `impl` blocks) hashed their contents
in the normal, order-dependent way. However, `Mod` was hashing
its contents in a (sort-of) order-independent way. The exact order
is nevertheless exposed to consumers through `Mod.item_ids`
and through query results like `hir_module_items`. Therefore,
stable hashing needs to take the order of items into account
to avoid fingerprint ICEs.
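A toy illustration of why the order matters (this uses a plain commutative combine, not rustc's `Fingerprint::combine_commutative`): reordering items changes an order-dependent hash but not an order-independent one, even though consumers can observe the order.
```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn item_hash(item: &str) -> u64 {
    let mut h = DefaultHasher::new();
    item.hash(&mut h);
    h.finish()
}

// Hash the items in the order they appear in the module.
fn order_dependent(items: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    for item in items {
        item.hash(&mut h);
    }
    h.finish()
}

// Combine per-item hashes with a commutative operation: reordering the
// items leaves the result unchanged.
fn order_independent(items: &[&str]) -> u64 {
    items.iter().fold(0u64, |acc, item| acc.wrapping_add(item_hash(item)))
}

fn main() {
    let a = ["foo", "bar"];
    let b = ["bar", "foo"];
    assert_ne!(order_dependent(&a), order_dependent(&b));
    assert_eq!(order_independent(&a), order_independent(&b));
}
```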
Unfortunately, I was unable to directly build a reproducer
for the ICE, due to the behavior of `Fingerprint::combine_commutative`.
This operation swaps the upper and lower `u64` when constructing the
result, which makes the function non-associative. Since we start
the hashing of module items by combining `Fingerprint::ZERO` with
the first item, it's difficult to actually build an example where
changing the order of module items leaves the final hash unchanged.
However, this appears to have been hit in practice in #92218.
While we're not able to reproduce it, the fact that proc-macros
are involved (which can give an entire module the same span, preventing
any span-related invalidations) makes me confident that the root
cause of that issue is our method of hashing module items.
This PR removes all of the special handling for `Mod`, instead deriving
a `HashStable` implementation. This makes `Mod` consistent with other
'containers' like `Impl`, which hash their contents through the typical
derive of `HashStable`.
Currently, you can use `#[rustc_clean]` to assert that a particular
query (technically, a `DepNode`) is green or red. However, a green
`DepNode` does not mean that the query result was actually deserialized
from disk - we might have never re-run a query that needed the result.
Some incremental tests are written as regression tests for ICEs that
occurred during query result decoding. Using
`#[rustc_clean(loaded_from_disk="typeck")]`, you can now assert
that the result of a particular query (e.g. `typeck`) was actually
loaded from disk, in addition to being green.
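For example, an incremental test might assert this roughly as follows (this snippet only makes sense under the incremental test harness; the revision name and item are illustrative):
```
#[rustc_clean(cfg = "cfail2", loaded_from_disk = "typeck")]
pub fn check_body() {
    // Between revisions, `typeck` for this function should stay green
    // *and* its cached result should actually be decoded from disk.
}
```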
This commit is intended to follow the stabilization disposition of the
FCP that has now finished in #84223. This stabilizes the ability to flag
thread-local initializers as `const` expressions, which enables the macro
to generate more efficient code for accessing them, notably removing
runtime checks for initialization.
More information can also be found in #84223 as well as the tests where
the feature usage was removed in this PR.
Closes #84223
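A small example of the stabilized form, where the initializer is wrapped in a `const` block:
```
use std::cell::Cell;

thread_local! {
    // The `const` initializer lets the macro avoid the lazy-initialization
    // check on each access.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| println!("counter = {}", c.get()));
}
```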
Enable verification for 1/32th of queries loaded from disk
This is a limited enabling of incremental verification for query results loaded from disk, which previously did not run without -Zincremental-verify-ich. If enabled for all queries, we see a probably unacceptable hit of ~50% in the worst case, so this pares back the verification to a more limited set based on the hash key.
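In spirit, the selection is just a cheap test on the key's hash (a hedged sketch, not the exact rustc code):
```
// Verify roughly 1 in 32 loaded results, chosen deterministically from
// the stable hash of the query key.
fn should_verify(stable_hash_of_key: u64) -> bool {
    stable_hash_of_key % 32 == 0
}
```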
Per collected [perf results](https://github.com/rust-lang/rust/pull/84227#issuecomment-953350582), this is a regression of at most 7% on coercions opt incr-unchanged, and typically less than 0.5% on other benchmarks (largely limited to incr-unchanged). I believe this is acceptable performance to land, and we can either ratchet it up or down fairly easily.
We have no real sense of whether this will lead to a large amount of assertions in the wild, but since those assertions may lead to miscompilations today, it seems potentially warranted. We have a good bit of lead time until the next stable release, though the holiday season will also start soon; we may wish to discuss the timing of enabling this and weigh the desire to prevent (possible) miscompilations against assertions.
cc `@rust-lang/wg-incr-comp`
Revert "Add rustc lint, warning when iterating over hashmaps"
Fixes perf regressions introduced in https://github.com/rust-lang/rust/pull/90235 by temporarily reverting the relevant PR.
Build the query vtable directly.
Continuation of https://github.com/rust-lang/rust/pull/89978.
This shrinks the query interface and attempts to reduce the number of function pointer calls.
Add support for artifact size profiling
This adds support for profiling artifact file sizes (incremental compilation artifacts and query cache to begin with).
Eventually we want to track this in perf.rlo so we can ensure that file sizes do not change dramatically on each pull request.
This relies on support in measureme: https://github.com/rust-lang/measureme/pull/169. Once that lands we can update this PR to not point to a git dependency.
This was worked on together with `@michaelwoerister`.
r? `@wesleywiser`
Adopt let_else across the compiler
This performs a substitution of code following the pattern:
```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```
To simplify it to:
```
let <pat> = ... else { ... : ! };
```
By adopting the `let_else` feature (cc #87335).
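For instance (a made-up example; inside the compiler this required the then-unstable `let_else` feature gate):
```
struct Config {
    name: Option<String>,
}

// Before: an `if let` whose success arm only returns the binding.
fn greet_old(config: &Config) -> String {
    let name = if let Some(name) = &config.name { name } else { return String::new() };
    format!("hello, {}", name)
}

// After: the same logic with let-else.
fn greet_new(config: &Config) -> String {
    let Some(name) = &config.name else { return String::new() };
    format!("hello, {}", name)
}
```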
The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.
Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.