Don't store laziness in `DefKind::TyAlias`
1. Don't store the laziness of a type alias in its `DefKind`; compute it via a query instead.
2. This allows us to treat a type alias as lazy if `#![feature(lazy_type_alias)]` is enabled *OR* if the alias contains a TAIT, rather than having checks for both in separate parts of the codebase (see the sketch below).
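A minimal standalone sketch of the idea, with toy types standing in for rustc's (this is not the actual API): laziness becomes a lookup computed from the defining crate's state, rather than data carried in `DefKind`.

```rust
// Toy model, not rustc's actual types: laziness is computed on demand
// from the *defining* crate's state instead of being stored in `DefKind`.
struct AliasDef {
    // Is `#![feature(lazy_type_alias)]` enabled in the defining crate?
    crate_has_lazy_type_alias: bool,
    // Does the alias body mention a type-alias impl trait (TAIT)?
    contains_tait: bool,
}

// Models the query: the single place that decides laziness.
fn type_alias_is_lazy(def: &AliasDef) -> bool {
    def.crate_has_lazy_type_alias || def.contains_tait
}

fn main() {
    let def = AliasDef { crate_has_lazy_type_alias: false, contains_tait: true };
    assert!(type_alias_is_lazy(&def));
}
```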
r? `@oli-obk` cc `@fmease`
Simplify/Optimize FileEncoder
FileEncoder is basically a BufWriter, except that it exposes access to the not-yet-written region of the buffer so that some users can write directly into it. This strategy is great because it lets us avoid calling memcpy for small copies, but the previous interface handed the writer a `&mut [MaybeUninit<u8>; N]` and expected a `&[u8]` back, an API that currently mandates unsafe code and is therefore not very appealing.
So this PR cleans up the FileEncoder implementation and builds on that general idea of direct buffer access to prevent `memcpy` calls in a few key places when encoding the dep graph and rmeta tables. The interface used here is now 100% safe, with the caveat that internally we need to avoid trusting the number of bytes that the provided function claims to have written.
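A rough standalone sketch of that safe pattern (names and shapes are illustrative, not the actual FileEncoder API): the caller writes into an initialized spare region of the buffer and reports a byte count, which the encoder clamps rather than trusts.

```rust
// Illustrative only: a BufWriter-like encoder exposing its spare buffer.
struct Encoder {
    buf: Box<[u8]>,
    buffered: usize,
}

impl Encoder {
    fn new(cap: usize) -> Self {
        Encoder { buf: vec![0u8; cap].into_boxed_slice(), buffered: 0 }
    }

    /// Let `f` write up to `max` bytes directly into the buffer, keeping
    /// only as many bytes as it claims, clamped so a buggy `f` can't lie
    /// past the region it was given. (A real encoder would flush when full.)
    fn write_with(&mut self, max: usize, f: impl FnOnce(&mut [u8]) -> usize) {
        let start = self.buffered;
        let spare = &mut self.buf[start..start + max];
        let written = f(spare).min(max); // do not trust the claimed count
        self.buffered += written;
    }
}

fn main() {
    let mut enc = Encoder::new(64);
    // Write a u32 directly into the buffer: no intermediate copy, no unsafe.
    enc.write_with(4, |spare| {
        spare[..4].copy_from_slice(&0xDEAD_BEEF_u32.to_le_bytes());
        4
    });
    assert_eq!(enc.buffered, 4);
}
```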
The original primary objective of this PR was to clean up the FileEncoder implementation so that the fix for the following issues would be easy to implement. That fix is to correctly update `self.buffered` even when writes fail, and since all the FileEncoder methods are now small, it's easy to verify manually that this is done.
Fixes https://github.com/rust-lang/rust/issues/115298
Fixes https://github.com/rust-lang/rust/issues/114671
Fixes https://github.com/rust-lang/rust/issues/114045
Fixes https://github.com/rust-lang/rust/issues/108100
Fixes https://github.com/rust-lang/rust/issues/106787
Add optimized lock methods for `Sharded` and refactor `Lock`
This adds methods to `Sharded` which pick a shard and also locks it. These branch on parallelism just once instead of twice, improving performance.
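Roughly, the change looks like the following standalone model (illustrative, not rustc's actual code), where shard selection and locking previously each branched on the threading mode:

```rust
use std::sync::{Mutex, MutexGuard};

const SHARD_COUNT: usize = 4;

struct Sharded<T> {
    single_thread: bool,
    // One shard when single-threaded, SHARD_COUNT otherwise.
    shards: Vec<Mutex<T>>,
}

impl<T> Sharded<T> {
    // Old style: `get_shard(hash)` branched on `single_thread`, and the
    // lock call branched again. New style: one fused method, one branch.
    fn lock_shard_by_hash(&self, hash: u64) -> MutexGuard<'_, T> {
        let idx = if self.single_thread {
            0
        } else {
            (hash as usize) % SHARD_COUNT
        };
        self.shards[idx].lock().unwrap()
    }
}

fn main() {
    let s = Sharded { single_thread: true, shards: vec![Mutex::new(0u32)] };
    *s.lock_shard_by_hash(42) += 1;
    assert_eq!(*s.lock_shard_by_hash(7), 1);
}
```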
Benchmark for `cfg(parallel_compiler)` and 1 thread:
| Benchmark | Before | After | % |
|:---|---:|---:|---:|
| 🟣 **clap**:check | 1.6461s | 1.6345s | -0.70% |
| 🟣 **hyper**:check | 0.2414s | 0.2394s | -0.83% |
| 🟣 **regex**:check | 0.9205s | 0.9143s | -0.67% |
| 🟣 **syn**:check | 1.4981s | 1.4869s | -0.75% |
| 🟣 **syntex_syntax**:check | 5.7629s | 5.7256s | -0.65% |
| Total | 10.0690s | 10.0008s | -0.68% |
| Summary | 1.0000s | 0.9928s | -0.72% |
cc `@SparrowLii`
Use a specialized varint + bitpacking scheme for DepGraph encoding
The previous scheme here uses leb128 to encode the edge tables that represent the incr comp dependency graph. The problem with that scheme is that leb128 has overhead for larger values and generally relies on the distribution of encoded values being heavily skewed toward smaller values. That is definitely not the case for dep node indices: they are handed out sequentially and the whole range is covered, so the distribution is actually biased in the opposite direction, with most dep node indices being large.
This PR implements a different varint encoding scheme. Instead of applying varint encoding to individual dep node indices (which is extremely branchy) we now apply it per node.
While being built, each node now stores its edges in a `SmallVec`, with a bit of extra logic to track the maximum value of any edge; then we varint-encode the whole batch. This is a gamble: we save space by claiming only 2 bits per node instead of ~3 bits per edge, which is a nice saving, but it has to balance against the overhead that a single large index in a node with many edges forces unnecessary bytes into each of that node's edge indices.
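As a standalone sketch of the encode side (the real encoder differs in details, e.g. how the width is stored, see the bit-packing below):

```rust
// Width in bytes (1..=4) needed to represent the largest edge index.
fn byte_width(max: u32) -> usize {
    let bits = 32 - max.max(1).leading_zeros();
    (bits as usize + 7) / 8
}

// Encode all of a node's edges with one shared width instead of
// per-edge leb128. The width byte here stands in for the 2 packed bits.
fn encode_node_edges(edges: &[u32], out: &mut Vec<u8>) {
    let max = edges.iter().copied().max().unwrap_or(0);
    let width = byte_width(max);
    out.push(width as u8);
    for &edge in edges {
        out.extend_from_slice(&edge.to_le_bytes()[..width]);
    }
}

fn main() {
    let mut out = Vec::new();
    encode_node_edges(&[3, 700, 12], &mut out);
    // 700 needs two bytes, so: 1 header byte + 3 edges * 2 bytes each.
    assert_eq!(out.len(), 1 + 3 * 2);
}
```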
Then, to keep the runtime overhead of this encoding scheme down, we deserialize our indices by loading 4 bytes for each and masking off the bytes that aren't ours. This is much less code and far fewer branches than leb128, but it relies on having some readable bytes past the end of each edge list, so we explicitly add such padding to the in-memory data during decoding. We also do this decoding lazily, turning a dense on-disk encoding into a peak memory reduction.
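The matching decode sketch (again illustrative): always load 4 bytes and mask, which only works because the in-memory copy is padded past the last edge list.

```rust
// Decode one edge of `width` bytes at `pos`. Requires at least 3 bytes
// of readable padding after the final edge, added during decoding.
fn decode_edge(data: &[u8], pos: usize, width: usize) -> u32 {
    let raw = u32::from_le_bytes(data[pos..pos + 4].try_into().unwrap());
    if width == 4 { raw } else { raw & ((1u32 << (8 * width)) - 1) }
}

fn main() {
    // Width-2 edges [3, 700, 12] as little-endian bytes, plus padding.
    let data: [u8; 9] = [3, 0, 188, 2, 12, 0, 0, 0, 0];
    assert_eq!(decode_edge(&data, 0, 2), 3);
    assert_eq!(decode_edge(&data, 2, 2), 700);
    assert_eq!(decode_edge(&data, 4, 2), 12);
}
```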
Then we apply a bit-packing scheme: https://github.com/rust-lang/rust/pull/115391 left us with unused bits on `DepKind` (currently there are 7!), so we use 2 of them to store the byte width of the edges in each node, then use the remaining bits to store the length of the edge list, if it fits.
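A sketch of that packing with made-up field sizes (rustc's actual layout may differ): 2 bits for the byte width, the kind in its own bits, and whatever remains for the edge count when it fits.

```rust
const KIND_BITS: u32 = 9;  // illustrative; enough bits for the kind
const WIDTH_BITS: u32 = 2; // byte width minus one (1..=4 bytes)

fn pack_header(kind: u32, width: u32, len: u32) -> u32 {
    assert!(kind < (1 << KIND_BITS) && (1..=4).contains(&width));
    kind | ((width - 1) << KIND_BITS) | (len << (KIND_BITS + WIDTH_BITS))
}

fn unpack_header(h: u32) -> (u32, u32, u32) {
    let kind = h & ((1 << KIND_BITS) - 1);
    let width = ((h >> KIND_BITS) & ((1 << WIDTH_BITS) - 1)) + 1;
    let len = h >> (KIND_BITS + WIDTH_BITS);
    (kind, width, len) // a len of 0 could signal "stored out of line"
}

fn main() {
    assert_eq!(unpack_header(pack_header(17, 2, 5)), (17, 2, 5));
}
```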
r? `@nnethercote`
Remove conditional use of `Sharded` from query state
`Sharded` is already a zero-cost abstraction, so this shouldn't affect the performance of the single-threaded compiler if LLVM does its job.
r? `@cjgillot`
Make `Sharded` an enum and specialize it for the single thread case
This changes `Sharded` into an enum with a dedicated single-shard variant, reducing the size of `Sharded` for greater cache efficiency.
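A rough standalone model of why this shrinks the type (not rustc's real definitions, which use its own `Lock` and cache-aligned shards): boxing the multi-shard case means the single-shard case pays for one lock plus a discriminant.

```rust
use std::mem::size_of;
use std::sync::Mutex;

const SHARDS: usize = 32;

// Old shape: every shard inline, even when only one is ever used.
struct ShardedInline<T>([Mutex<T>; SHARDS]);

// New shape: the multi-shard case lives behind a pointer-sized Box.
enum ShardedEnum<T> {
    Single(Mutex<T>),
    Shards(Box<[Mutex<T>; SHARDS]>),
}

fn main() {
    println!("inline: {} bytes", size_of::<ShardedInline<u64>>());
    println!("enum:   {} bytes", size_of::<ShardedEnum<u64>>());
}
```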
Performance improvement with 1 thread and `cfg(parallel_compiler)`:
| Benchmark | Before | After | % |
|:---|---:|---:|---:|
| 🟣 **clap**:check | 1.7009s | 1.6748s | 💚 -1.53% |
| 🟣 **hyper**:check | 0.2525s | 0.2451s | 💚 -2.90% |
| 🟣 **regex**:check | 0.9519s | 0.9353s | 💚 -1.74% |
| 🟣 **syn**:check | 1.5504s | 1.5280s | 💚 -1.45% |
| 🟣 **syntex_syntax**:check | 5.9536s | 5.8873s | 💚 -1.11% |
| Total | 10.4092s | 10.2706s | 💚 -1.33% |
| Summary | 1.0000s | 0.9825s | 💚 -1.75% |
I did see an unexpected 0.23% change for the serial compiler, so this could use a perf run to see if that reproduces.
cc `@SparrowLii`
Store the laziness of type aliases in their `DefKind`
Previously, we would treat paths referring to type aliases as *lazy* type aliases if the current crate had lazy type aliases enabled, regardless of whether the crate the alias was defined in had the feature enabled or not.
With this PR, the laziness of a type alias depends on the crate it is defined in. This generally makes more sense to me, especially if / once lazy type aliases become the default in a new edition and we need to think about *edition interoperability*:
Consider the hypothetical case where the dependency crate has an older edition (and thus eager type aliases) and exports a type alias with bounds & a where-clause (which are void but technically valid), while the dependent crate has the latest edition (and thus lazy type aliases) and uses that type alias (sketched below). Arguably, the bounds should *not* be checked, since the dependency crate should be allowed to change the bounds at will, at any time, with a *non*-major version bump & without negatively affecting downstream crates.
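For illustration, collapsing the two hypothetical crates into one snippet (this compiles today because bounds on an eager type alias are void; rustc merely lints that they are not enforced):

```rust
// "Upstream" (older edition, eager aliases): the bound parses but is
// not enforced at use sites; rustc warns that it is ignored.
pub type OnlyClone<T> where T: Clone = Vec<T>;

struct NotClone;

fn main() {
    // Fine with eager aliases. Under lazy type aliases this use would
    // be an error, since `NotClone: Clone` does not hold.
    let _v: OnlyClone<NotClone> = Vec::new();
}
```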
As for the reverse case (dependency: lazy type aliases, dependent: eager type aliases), I guess it rules out anything from slight confusion to mild annoyance on the part of upstream crate authors that would otherwise be caused by the compiler ignoring the bounds of their type aliases in downstream crates with older editions.
---
This fixes #114468 since before, my assumption that the type alias associated with a given weak projection was lazy (and therefore had its variances computed) did not necessarily hold in cross-crate scenarios (which [I kinda had a hunch about](https://github.com/rust-lang/rust/pull/114253#discussion_r1278608099)), as outlined above. Now it does hold.
`@rustbot` label F-lazy_type_alias
r? `@oli-obk`
Implement rust-lang/compiler-team#578.
When an ICE is encountered on nightly releases, the new rustc panic
handler will also write the contents of the backtrace to disk. If any
`delay_span_bug`s are encountered, their backtrace is also added to the
file. The platform and rustc version will also be collected.
Don't hold the active queries lock while calling `make_query`
This moves the call to `make_query` outside the part of `try_collect_active_jobs` that holds the active queries lock. This should help remove the deadlock and borrow panic that have been observed when printing the query stack during an ICE.
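Schematically, the fix follows the usual pattern of shrinking a lock's scope (the names below are made up for illustration):

```rust
use std::sync::Mutex;

fn make_query(id: u32) -> String {
    // Stand-in for the real `make_query`, which may take other locks.
    format!("query {id}")
}

fn try_collect_active_jobs(active: &Mutex<Vec<u32>>) -> Vec<String> {
    // Take a snapshot while holding the lock...
    let ids: Vec<u32> = active.lock().unwrap().clone();
    // ...then call `make_query` only after the guard is dropped, so it
    // can't deadlock or conflict with borrows of the active-queries map.
    ids.into_iter().map(make_query).collect()
}

fn main() {
    let active = Mutex::new(vec![1, 2]);
    assert_eq!(try_collect_active_jobs(&active).len(), 2);
}
```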
cc `@SparrowLii`
r? `@cjgillot`