Don't store laziness in `DefKind::TyAlias`
1. Don't store the laziness of a type alias in its `DefKind`; expose it via a query instead.
2. This allows us to treat type aliases as lazy if `#![feature(lazy_type_alias)]` is enabled *or* if the alias contains a TAIT, rather than having checks for both in separate parts of the codebase.
r? `@oli-obk` cc `@fmease`
Simplify/Optimize FileEncoder
FileEncoder is basically a BufWriter, except that it exposes access to the not-yet-written-to region of the buffer so that some users can write directly into it. This strategy is great because it lets us avoid calling memcpy for small copies, but the previous interface was based on the writer accessing a `&mut [MaybeUninit<u8>; N]` and returning a `&[u8]`, an API that currently mandates unsafe code and is therefore not very appealing.
So this PR cleans up the FileEncoder implementation and builds on that general idea of direct buffer access in order to avoid `memcpy` calls in a few key places when encoding the dep graph and rmeta tables. The interface used here is now 100% safe, with the caveat that internally we need to avoid trusting the number of bytes that the provided function claims to have written.
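As a very rough sketch of that shape (made-up names, not the actual `FileEncoder` code): the caller gets a plain `&mut [u8]` into the unused portion of the buffer, and the encoder clamps whatever byte count the caller claims to have written.

```rust
struct Encoder {
    buf: Vec<u8>,
    buffered: usize,
}

impl Encoder {
    fn with_capacity(cap: usize) -> Self {
        Encoder { buf: vec![0; cap], buffered: 0 }
    }

    /// Hands the caller `max` writable bytes; the closure returns how many it used.
    fn write_with(&mut self, max: usize, write: impl FnOnce(&mut [u8]) -> usize) {
        assert!(max <= self.buf.len());
        if self.buf.len() - self.buffered < max {
            self.flush();
        }
        let dest = &mut self.buf[self.buffered..self.buffered + max];
        let claimed = write(dest);
        // Don't trust the claimed count: clamp it to the space we actually handed out.
        self.buffered += claimed.min(max);
    }

    fn flush(&mut self) {
        // The real encoder writes `self.buf[..self.buffered]` to the file here.
        self.buffered = 0;
    }
}

fn main() {
    let mut enc = Encoder::with_capacity(8192);
    enc.write_with(4, |dest| {
        dest.copy_from_slice(&42u32.to_le_bytes());
        4
    });
    assert_eq!(enc.buffered, 4);
}
```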
The original primary objective of this PR was to clean up the FileEncoder implementation so that the fix for the following issues would be easy to implement. That fix is to correctly update `self.buffered` even when writes fail, and since all the FileEncoder methods are now small, it is easy to verify manually that this is done.
Fixes https://github.com/rust-lang/rust/issues/115298
Fixes https://github.com/rust-lang/rust/issues/114671
Fixes https://github.com/rust-lang/rust/issues/114045
Fixes https://github.com/rust-lang/rust/issues/108100
Fixes https://github.com/rust-lang/rust/issues/106787
Add optimized lock methods for `Sharded` and refactor `Lock`
This adds methods to `Sharded` which pick a shard and also lock it. These branch on parallelism just once instead of twice, improving performance.
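A minimal sketch of the idea, using `std` locks and a plain flag instead of rustc's actual `Sharded`/`Lock` types: shard selection and locking happen in one method, so the single-thread check is evaluated only once per access.

```rust
use std::sync::{Mutex, MutexGuard};

const SHARD_COUNT: usize = 32;

struct Sharded<T> {
    single_thread: bool,
    // One shard when single-threaded, SHARD_COUNT shards otherwise.
    shards: Vec<Mutex<T>>,
}

impl<T: Default> Sharded<T> {
    fn new(single_thread: bool) -> Self {
        let n = if single_thread { 1 } else { SHARD_COUNT };
        Sharded {
            single_thread,
            shards: (0..n).map(|_| Mutex::new(T::default())).collect(),
        }
    }

    /// Picks the shard for `hash` and returns it locked; the parallelism
    /// branch is taken exactly once.
    fn lock_shard_by_hash(&self, hash: u64) -> MutexGuard<'_, T> {
        let i = if self.single_thread {
            0
        } else {
            (hash as usize) % SHARD_COUNT
        };
        self.shards[i].lock().unwrap()
    }
}

fn main() {
    let cache: Sharded<Vec<u32>> = Sharded::new(true);
    cache.lock_shard_by_hash(0xdead_beef).push(1);
}
```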
Benchmark for `cfg(parallel_compiler)` and 1 thread:
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.6461s</td><td align="right">1.6345s</td><td align="right"> -0.70%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2414s</td><td align="right">0.2394s</td><td align="right"> -0.83%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9205s</td><td align="right">0.9143s</td><td align="right"> -0.67%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.4981s</td><td align="right">1.4869s</td><td align="right"> -0.75%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">5.7629s</td><td align="right">5.7256s</td><td align="right"> -0.65%</td></tr><tr><td>Total</td><td align="right">10.0690s</td><td align="right">10.0008s</td><td align="right"> -0.68%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9928s</td><td align="right"> -0.72%</td></tr></table>
cc `@SparrowLii`
Use a specialized varint + bitpacking scheme for DepGraph encoding
The previous scheme here uses leb128 to encode the edge tables that represent the incr comp dependency graph. The problem with that scheme is that leb128 has overhead for larger values and generally relies on the distribution of encoded values being heavily skewed towards smaller values. That is definitely not the case for a dep node index: since indices are handed out sequentially and the whole range is covered, the distribution is actually biased in the opposite direction, and most dep node indices are large.
This PR implements a different varint encoding scheme. Instead of applying varint encoding to individual dep node indices (which is extremely branchy) we now apply it per node.
While being built, each node now stores its edges in a `SmallVec` with a bit of extra logic to track the max value of each edge. Then we varint encode the whole batch. This is a gamble: we save space by claiming only 2 bits per node instead of ~3 bits per edge, which is a nice savings, but it has to balance out against the overhead that a single large index in a node with a lot of edges forces unnecessary bytes into each of that node's edge indices.
Then, to keep the runtime overhead of this encoding scheme down, we deserialize our indices by loading 4 bytes for each and masking off the bytes that aren't ours. This is much less code and far fewer branches than leb128, but relies on having some readable bytes past the end of each edge list. We explicitly add such padding to the in-memory data during decoding. We also do this decoding lazily, turning a dense on-disk encoding into a peak memory reduction.
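A sketch of that masked read (assumed shape, not the exact rustc code), where each index occupies `width` bytes and the decoder relies on padding bytes after the edge list so that a 4-byte load never runs off the end:

```rust
fn read_index(bytes: &[u8], pos: usize, width: usize) -> u32 {
    debug_assert!((1..=4).contains(&width));
    // Guaranteed by the padding added after the edge list during decoding.
    debug_assert!(pos + 4 <= bytes.len());
    let raw = u32::from_le_bytes(bytes[pos..pos + 4].try_into().unwrap());
    // Keep only the low `width` bytes; the rest belong to the next index.
    let mask = if width == 4 { u32::MAX } else { (1u32 << (width * 8)) - 1 };
    raw & mask
}

fn main() {
    // Two indices encoded with width = 2, plus padding for the 4-byte loads.
    let encoded = [0x34, 0x12, 0x78, 0x56, 0x00, 0x00];
    assert_eq!(read_index(&encoded, 0, 2), 0x1234);
    assert_eq!(read_index(&encoded, 2, 2), 0x5678);
}
```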
Then we apply a bit-packing scheme; since in https://github.com/rust-lang/rust/pull/115391 we now have unused bits on `DepKind`, we use those unused bits (currently there are 7!) to store the 2 bits that we need for the byte width of the edges in each node, then use the remaining bits to store the length of the edge list, if it fits.
r? `@nnethercote`
Remove conditional use of `Sharded` from query state
`Sharded` is already a zero-cost abstraction, so it shouldn't affect the performance of the single-threaded compiler if LLVM does its job.
r? `@cjgillot`
Make `Sharded` an enum and specialize it for the single thread case
This changes `Sharded` into an enum with a dedicated single-shard variant, reducing the size of `Sharded` for greater cache efficiency.
Performance improvement with 1 thread and `cfg(parallel_compiler)`:
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.7009s</td><td align="right">1.6748s</td><td align="right">💚 -1.53%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2525s</td><td align="right">0.2451s</td><td align="right">💚 -2.90%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9519s</td><td align="right">0.9353s</td><td align="right">💚 -1.74%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.5504s</td><td align="right">1.5280s</td><td align="right">💚 -1.45%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">5.9536s</td><td align="right">5.8873s</td><td align="right">💚 -1.11%</td></tr><tr><td>Total</td><td align="right">10.4092s</td><td align="right">10.2706s</td><td align="right">💚 -1.33%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9825s</td><td align="right">💚 -1.75%</td></tr></table>
I did see an unexpected 0.23% change for the serial compiler, so this could use a perf run to see if that reproduces.
cc `@SparrowLii`
Store the laziness of type aliases in their `DefKind`
Previously, we would treat paths referring to type aliases as *lazy* type aliases if the current crate had lazy type aliases enabled, independently of whether the crate in which the alias was defined had the feature enabled.
With this PR, the laziness of a type alias depends on the crate it is defined in. This generally makes more sense to me especially if / once lazy type aliases become the default in a new edition and we need to think about *edition interoperability*:
Consider the hypothetical case where the dependency crate has an older edition (and thus eager type aliases) and exports a type alias with bounds & a where-clause (which are void but technically valid), while the dependent crate has the latest edition (and thus lazy type aliases) and uses that type alias. Arguably, the bounds should *not* be checked, since the dependency crate should be allowed to change the bounds at will at any time with a *non*-major version bump & without negatively affecting downstream crates.
As for the reverse case (dependency: lazy type aliases, dependent: eager type aliases), I guess it rules out anything from slight confusion to mild annoyance from upstream crate authors that would be caused by the compiler ignoring the bounds of their type aliases in downstream crates with older editions.
---
This fixes #114468: before, my assumption that the type alias associated with a given weak projection was lazy (and therefore had its variances computed) did not necessarily hold in cross-crate scenarios (which [I kinda had a hunch about](https://github.com/rust-lang/rust/pull/114253#discussion_r1278608099)), as outlined above. Now it does hold.
`@rustbot` label F-lazy_type_alias
r? `@oli-obk`
Implement rust-lang/compiler-team#578.
When an ICE is encountered on nightly releases, the new rustc panic
handler will also write the contents of the backtrace to disk. If any
`delay_span_bug`s are encountered, their backtraces are also added to the
file. The platform and rustc version will also be collected.
Don't hold the active queries lock while calling `make_query`
This moves the call to `make_query` outside the parts of `try_collect_active_jobs` that hold the active queries lock. This should help remove the deadlock and borrow panic that have been observed when printing the query stack during an ICE.
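Schematically, the pattern is the usual one of doing the potentially re-entrant work before taking the lock (a generic illustration, not the actual query-system code):

```rust
use std::sync::Mutex;

struct ActiveJobs {
    jobs: Mutex<Vec<String>>,
}

impl ActiveJobs {
    fn collect(&self, keys: &[u32]) -> Vec<String> {
        // Do the per-key work first, without holding the lock; in rustc this is
        // the `make_query` call, which must not run under the lock.
        let descriptions: Vec<String> =
            keys.iter().map(|k| format!("query #{k}")).collect();

        // Take the lock only for the cheap, non-reentrant bookkeeping.
        self.jobs.lock().unwrap().extend(descriptions.iter().cloned());
        descriptions
    }
}

fn main() {
    let active = ActiveJobs { jobs: Mutex::new(Vec::new()) };
    assert_eq!(active.collect(&[1, 2, 3]).len(), 3);
}
```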
cc `@SparrowLii`
r? `@cjgillot`
Don't leak the function that is called on drop
It probably wasn't causing problems anyway, but still, a `// this leaks, please don't pass anything that owns memory` comment is not sustainable.
I could implement a version which does not require `Option`, but it would require `unsafe`, at which point it's probably not worth it.
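For reference, a minimal sketch of the `Option`-based approach (illustrative names): taking the closure out of the `Option` in `drop` lets it be called by value and then dropped normally, so anything it owns is freed rather than leaked.

```rust
struct OnDrop<F: FnOnce()>(Option<F>);

impl<F: FnOnce()> OnDrop<F> {
    fn new(f: F) -> Self {
        OnDrop(Some(f))
    }
}

impl<F: FnOnce()> Drop for OnDrop<F> {
    fn drop(&mut self) {
        // Taking the closure out of the Option lets us call it by value
        // from `&mut self` without any unsafe code.
        if let Some(f) = self.0.take() {
            f();
        }
    }
}

fn main() {
    let owned = String::from("owns memory, but is no longer leaked");
    let _guard = OnDrop::new(move || println!("cleanup: {owned}"));
}
```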
Use dynamic dispatch for queries
This replaces most concrete query values `V` with `MaybeUninit<[u8; { size_of::<V>() }]>`, reducing the code instantiated by queries. The compile time of `rustc_query_impl` is reduced by 27%. It is an alternative to https://github.com/rust-lang/rust/pull/107937, which uses unstable const generics, while this uses an `EraseType` trait which maps query values to their erased variant.
This is achieved by introducing an `Erased` type which does a sanity check under `cfg(debug_assertions)`. The query caches get instantiated with these erased types, leaving the code in `rustc_query_system` unaware of them. `rustc_query_system` is changed to use instances of `QueryConfig` so that `rustc_query_impl` can pass in a `DynamicConfig` which holds a pointer to a virtual table.
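A rough sketch of the erasure idea, hand-written here for a single type (the compiler generates these impls and does more checking; names are illustrative): the value is reinterpreted as a same-sized byte array so the generic query plumbing only ever sees the erased type.

```rust
use std::mem::{size_of, transmute_copy};

trait EraseType: Copy {
    /// The erased representation: a byte array of the same size as `Self`.
    type Erased: Copy;
}

impl EraseType for u32 {
    type Erased = [u8; 4];
}

fn erase<T: EraseType>(value: T) -> T::Erased {
    // Sanity check (the real code does this under cfg(debug_assertions)).
    debug_assert_eq!(size_of::<T>(), size_of::<T::Erased>());
    unsafe { transmute_copy(&value) }
}

fn restore<T: EraseType>(erased: T::Erased) -> T {
    debug_assert_eq!(size_of::<T>(), size_of::<T::Erased>());
    unsafe { transmute_copy(&erased) }
}

fn main() {
    let erased = erase(7u32);
    let value: u32 = restore(erased);
    assert_eq!(value, 7);
}
```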
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.7055s</td><td align="right">1.6949s</td><td align="right"> -0.62%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2547s</td><td align="right">0.2528s</td><td align="right"> -0.73%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9590s</td><td align="right">0.9553s</td><td align="right"> -0.39%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.5457s</td><td align="right">1.5440s</td><td align="right"> -0.11%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">5.9092s</td><td align="right">5.9009s</td><td align="right"> -0.14%</td></tr><tr><td>Total</td><td align="right">10.3741s</td><td align="right">10.3479s</td><td align="right"> -0.25%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9960s</td><td align="right"> -0.40%</td></tr></table>
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check:initial</td><td align="right">2.0605s</td><td align="right">2.0575s</td><td align="right"> -0.15%</td></tr><tr><td>🟣 <b>hyper</b>:check:initial</td><td align="right">0.3218s</td><td align="right">0.3216s</td><td align="right"> -0.07%</td></tr><tr><td>🟣 <b>regex</b>:check:initial</td><td align="right">1.1848s</td><td align="right">1.1839s</td><td align="right"> -0.07%</td></tr><tr><td>🟣 <b>syn</b>:check:initial</td><td align="right">1.9409s</td><td align="right">1.9376s</td><td align="right"> -0.17%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check:initial</td><td align="right">7.3105s</td><td align="right">7.2928s</td><td align="right"> -0.24%</td></tr><tr><td>Total</td><td align="right">12.8185s</td><td align="right">12.7935s</td><td align="right"> -0.20%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9986s</td><td align="right"> -0.14%</td></tr></table>
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check:unchanged</td><td align="right">0.4606s</td><td align="right">0.4617s</td><td align="right"> 0.24%</td></tr><tr><td>🟣 <b>hyper</b>:check:unchanged</td><td align="right">0.1335s</td><td align="right">0.1336s</td><td align="right"> 0.08%</td></tr><tr><td>🟣 <b>regex</b>:check:unchanged</td><td align="right">0.3324s</td><td align="right">0.3346s</td><td align="right"> 0.65%</td></tr><tr><td>🟣 <b>syn</b>:check:unchanged</td><td align="right">0.6268s</td><td align="right">0.6307s</td><td align="right"> 0.64%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check:unchanged</td><td align="right">1.8248s</td><td align="right">1.8508s</td><td align="right">💔 1.43%</td></tr><tr><td>Total</td><td align="right">3.3779s</td><td align="right">3.4113s</td><td align="right"> 0.99%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">1.0061s</td><td align="right"> 0.61%</td></tr></table>
It's based on https://github.com/rust-lang/rust/pull/108167.
r? `@cjgillot`
Currently a `{D,Subd}iagnosticMessage` can be created from any type that
impls `Into<String>`. That includes `&str`, `String`, and `Cow<'static,
str>`, which are reasonable. It also includes `&String`, which is pretty
weird, and results in many places making unnecessary allocations for
patterns like this:
```
self.fatal(&format!(...))
```
This creates a string with `format!`, takes a reference, passes the
reference to `fatal`, which does an `into()`, which clones the
reference, doing a second allocation. Two allocations for a single
string, bleh.
This commit changes the `From` impls so that you can only create a
`{D,Subd}iagnosticMessage` from `&str`, `String`, or `Cow<'static,
str>`. This requires changing all the places that currently create one
from a `&String`. Most of these are of the `&format!(...)` form
described above; each one removes an unnecessary static `&`, plus an
allocation when executed. There are also a few places where the existing
use of `&String` was more reasonable; these now just use `clone()` at
the call site.
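Schematically, the conversions after this change look something like the following (a simplified sketch, not the exact rustc impls); the important part is the absence of a `From<&String>` impl, so callers must hand over the owned `String` from `format!` rather than a reference to it.

```rust
use std::borrow::Cow;

enum DiagnosticMessage {
    Str(String),
    // ... the real type has more variants (e.g. for Fluent identifiers).
}

impl From<String> for DiagnosticMessage {
    fn from(s: String) -> Self {
        DiagnosticMessage::Str(s)
    }
}

impl<'a> From<&'a str> for DiagnosticMessage {
    fn from(s: &'a str) -> Self {
        DiagnosticMessage::Str(s.to_string())
    }
}

impl From<Cow<'static, str>> for DiagnosticMessage {
    fn from(s: Cow<'static, str>) -> Self {
        DiagnosticMessage::Str(s.into_owned())
    }
}

fn fatal(msg: impl Into<DiagnosticMessage>) {
    let _msg = msg.into();
    // ...
}

fn main() {
    // One allocation: the String from format! is moved straight in.
    fatal(format!("expected {}, found {}", 1, 2));
    // `fatal(&format!(...))` no longer compiles, which is the point.
}
```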
As well as making the code nicer and more efficient, this is a step
towards possibly using `Cow<'static, str>` in
`{D,Subd}iagnosticMessage::{Str,Eager}`. That would require changing
the `From<&'a str>` impls to `From<&'static str>`, which is doable, but
I'm not yet sure if it's worthwhile.
Remove `QueryEngine` trait
This removes the `QueryEngine` trait and `Queries` from `rustc_query_impl` and replaces them with function pointers and fields in `QuerySystem`. As a side effect, `OnDiskCache` is moved back into `rustc_middle` and the `OnDiskCache` trait is also removed.
This has a couple of benefits.
- `TyCtxt` is used in the query system instead of the removed `QueryCtxt`, which was larger.
- Function pointers are more flexible to work with. A variant of https://github.com/rust-lang/rust/pull/107802 is included which avoids the double indirection. For https://github.com/rust-lang/rust/pull/108938 we can name the entry point `__rust_end_short_backtrace` to avoid some overhead. For https://github.com/rust-lang/rust/pull/108062 it avoids the duplicate `QueryEngine` structs.
- `QueryContext` now implements `DepContext` which avoids many `dep_context()` calls in `rustc_query_system`.
- The `rustc_driver` size is reduced by 0.33%; hopefully that means some bootstrap improvements.
- This avoids the unsafe code around the `QueryEngine` trait.
r? `@cjgillot`
Currently it creates an `Option` and then does `map`/`unwrap_or` and
`map_or_else` on it, which is hard to read.
This commit simplifies things by moving more code into the two arms of
the if/else.
Rewrite MemDecoder around pointers not a slice
This is basically https://github.com/rust-lang/rust/pull/109910 but I'm being a lot more aggressive. The pointer-based structure means that it makes a lot more sense to absorb more complexity into `MemDecoder`; most of the diff is just complexity moving from one place to another.
The primary argument for this structure is that we only incur a single bounds check when doing multi-byte reads from a `MemDecoder`. With the slice-based implementation we need to do those with `data[position..position + len]`, which needs to account for `position + len` wrapping. It would be possible to dodge the first bounds check if we stored a slice that starts at `position`, but that would require updating the pointer and length on every read.
This PR also moves the failure path into a separate function, which means that this PR should subsume all the perf wins observed in https://github.com/rust-lang/rust/pull/109867.
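To illustrate the pointer-based shape (a simplified sketch, not the actual `MemDecoder`): a multi-byte read only compares the requested length against the remaining bytes, whereas the slice-indexing form also has to worry about `position + len` overflowing.

```rust
use std::marker::PhantomData;

struct PtrDecoder<'a> {
    current: *const u8,
    end: *const u8,
    _marker: PhantomData<&'a [u8]>,
}

impl<'a> PtrDecoder<'a> {
    fn new(data: &'a [u8]) -> Self {
        PtrDecoder {
            current: data.as_ptr(),
            end: data.as_ptr().wrapping_add(data.len()),
            _marker: PhantomData,
        }
    }

    fn read_bytes(&mut self, len: usize) -> &'a [u8] {
        // Single comparison: bytes remaining vs. bytes wanted.
        let remaining = self.end as usize - self.current as usize;
        assert!(len <= remaining, "unexpected end of data");
        unsafe {
            let out = std::slice::from_raw_parts(self.current, len);
            self.current = self.current.add(len);
            out
        }
    }
}

fn main() {
    let data = [1u8, 2, 3, 4, 5];
    let mut dec = PtrDecoder::new(&data);
    assert_eq!(dec.read_bytes(2), &[1u8, 2][..]);
    assert_eq!(dec.read_bytes(3), &[3u8, 4, 5][..]);
}
```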
Add `rustc_fluent_macro` to decouple fluent from `rustc_macros`
Fluent, with all the icu4x it brings in, takes quite some time to compile. `fluent_messages!` is only needed in further downstream rustc crates, but is blocking more upstream crates like `rustc_index`. By splitting it out, we allow `rustc_macros` to be compiled earlier, which speeds up `x check compiler` by about 5 seconds (and even more after the needless dependency on `serde_json` is removed from `rustc_data_structures`).
Encode hashes as bytes, not varint
In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than try to apply some varint encoding. This PR adds two new types `Hash64` and `Hash128` which are produced by `StableHasher` and replace every use of storing a `u64` or `u128` that represents a hash.
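A sketch of the difference (illustrative helpers, not rustc's actual `Encodable` impls): the hash is always written as 8 raw bytes, whereas leb128 spends up to 10 bytes on a `u64` whose high bits are set.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Hash64(u64);

fn encode_hash64(buf: &mut Vec<u8>, h: Hash64) {
    // Always exactly 8 bytes, regardless of the value.
    buf.extend_from_slice(&h.0.to_le_bytes());
}

fn decode_hash64(buf: &[u8], pos: &mut usize) -> Hash64 {
    let bytes: [u8; 8] = buf[*pos..*pos + 8].try_into().unwrap();
    *pos += 8;
    Hash64(u64::from_le_bytes(bytes))
}

fn main() {
    let h = Hash64(0xdead_beef_dead_beef);
    let mut buf = Vec::new();
    encode_hash64(&mut buf, h);
    assert_eq!(buf.len(), 8); // leb128 would need 10 bytes for this value
    let mut pos = 0;
    assert_eq!(decode_hash64(&buf, &mut pos), h);
}
```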
Distribution of the byte lengths of leb128 encodings, from `x build --stage 2` with `incremental = true`
Before:
```
( 1) 373418203 (53.7%, 53.7%): 1
( 2) 196240113 (28.2%, 81.9%): 3
( 3) 108157958 (15.6%, 97.5%): 2
( 4) 17213120 ( 2.5%, 99.9%): 4
( 5) 223614 ( 0.0%,100.0%): 9
( 6) 216262 ( 0.0%,100.0%): 10
( 7) 15447 ( 0.0%,100.0%): 5
( 8) 3633 ( 0.0%,100.0%): 19
( 9) 3030 ( 0.0%,100.0%): 8
( 10) 1167 ( 0.0%,100.0%): 18
( 11) 1032 ( 0.0%,100.0%): 7
( 12) 1003 ( 0.0%,100.0%): 6
( 13) 10 ( 0.0%,100.0%): 16
( 14) 10 ( 0.0%,100.0%): 17
( 15) 5 ( 0.0%,100.0%): 12
( 16) 4 ( 0.0%,100.0%): 14
```
After:
```
( 1) 372939136 (53.7%, 53.7%): 1
( 2) 196240140 (28.3%, 82.0%): 3
( 3) 108014969 (15.6%, 97.5%): 2
( 4) 17192375 ( 2.5%,100.0%): 4
( 5) 435 ( 0.0%,100.0%): 5
( 6) 83 ( 0.0%,100.0%): 18
( 7) 79 ( 0.0%,100.0%): 10
( 8) 50 ( 0.0%,100.0%): 9
( 9) 6 ( 0.0%,100.0%): 19
```
The remaining 9- or 10-byte and 18- or 19-byte encodings are `u64` and `u128` values respectively that have their high bits set. As far as I can tell these are coming primarily from `SwitchTargets`.
Fluent, with all the icu4x it brings in, takes quite some time to
compile. `fluent_messages!` is only needed in further downstream rustc
crates, but is blocking more upstream crates like `rustc_index`. By
splitting it out, we allow `rustc_macros` to be compiled earlier, which
speeds up `x check compiler` by about 5 seconds (and even more after the
needless dependency on `serde_json` is removed from
`rustc_data_structures`).
Various minor Idx-related tweaks
Nothing particularly exciting here, but a couple of things I noticed as I was looking for more index conversions to simplify.
cc https://github.com/rust-lang/compiler-team/issues/606
r? `@WaffleLapkin`
incr.comp.: Make sure dependencies are recorded when feeding queries during eval-always queries.
This PR makes sure we don't drop dependency edges when feeding queries during an eval-always query.
Background: During eval-always queries, no dependencies are recorded because the system knows to unconditionally re-evaluate them regardless of any actual dependencies. This works fine for these queries themselves but leads to a problem when feeding other queries: When queries are fed, we set up their dependency edges by copying the current set of dependencies of the feeding query. But because this set is empty for eval-always queries, we record no edges at all -- which has the effect that the fed query instances always look "green" to the system, although they should always be "red".
The fix is to explicitly add a dependency on the artificial "always red" dep-node when feeding during eval-always queries.
Fixes https://github.com/rust-lang/rust/issues/108481
Maybe also fixes issue https://github.com/rust-lang/rust/issues/88488.
cc `@jyn514`
r? `@cjgillot` or `@oli-obk`
Use Rayon's TLV directly
This accesses Rayon's `TLV` thread local directly, avoiding wrapper functions. This makes rustc work with https://github.com/rust-lang/rustc-rayon/pull/10.
r? `@cuviper`
Refactor `try_execute_query`
This merges `JobOwner::try_start` into `try_execute_query`, removing `TryGetJob` in the process. 3 new functions are extracted from `try_execute_query`: `execute_job`, `cycle_error` and `wait_for_query`. This makes the control flow a bit clearer and improves performance.
Based on https://github.com/rust-lang/rust/pull/109046.
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.7134s</td><td align="right">1.7061s</td><td align="right"> -0.43%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2519s</td><td align="right">0.2510s</td><td align="right"> -0.35%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9517s</td><td align="right">0.9481s</td><td align="right"> -0.38%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.5389s</td><td align="right">1.5338s</td><td align="right"> -0.33%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">5.9488s</td><td align="right">5.9258s</td><td align="right"> -0.39%</td></tr><tr><td>Total</td><td align="right">10.4048s</td><td align="right">10.3647s</td><td align="right"> -0.38%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9962s</td><td align="right"> -0.38%</td></tr></table>
r? `@cjgillot`
Split `execute_job` into `execute_job_incr` and `execute_job_non_incr`
`execute_job` was a bit large, so this splits it in 2. Performance was neutral locally, but this may affect bootstrap times.
Optimize dep node backtrace and ignore fatal errors
This attempts to optimize https://github.com/rust-lang/rust/pull/91742 while also passing through fatal errors.
r? `@cjgillot`
Check that a query has not completed and is not executing before starting it
This fixes a race in the query system where we only checked if the query was currently executing, but not if it was already completed, causing queries to re-execute.
r? `@cjgillot`
Ensure value is on the on-disk cache before returning from `ensure()`.
The current logic for `ensure()`-ing a query just checks that the node is green in the dependency graph.
However, a lot of places use `ensure()` to prevent the query from being called later. This is the case before stealing a query result.
If the query is actually green but the value is not available in the on-disk cache, `ensure` would return, but a subsequent call to the full query would run the code, and attempt to read from a stolen value.
This PR conforms the query system to the usage by checking whether the queried value is loadable from disk before returning.
Sadly, I can't manage to craft a proper test...
Should fix all instances of "attempted to read from stolen value".
Simplify message paths
This makes it easier to open the messages file. Right now I have to first click on the `locales` dir to open it, and then on the `en-US.ftl` file. `Cargo.toml` and `build.rs` files are also in the top level, and I think there should not be more than one messages file, so a directory isn't really needed. The [chosen strategy for pontoon adoption](https://rust-lang.zulipchat.com/#narrow/stream/336883-i18n/topic/pontoon.20and.20next.20steps) is out of tree. Even if this decision is changed in the future, the `messages.ftl` approach is also compatible with non-English translations living in-tree, as long as the non-English translations don't live in the `compiler/rustc_foo/` directories but in different ones. That would also be helpful for grepability purposes.
The commit was the result of automated changes:
```
for p in compiler/rustc_*; do mv $p/locales/en-US.ftl $p/messages.ftl; rmdir $p/locales; done
for p in compiler/rustc_*; do sed -i "s#\.\./locales/en-US.ftl#../messages.ftl#" $p/src/lib.rs; done
```
r? `@davidtwco`
This makes it easier to open the messages file while developing on features.
The commit was the result of automated changes:
for p in compiler/rustc_*; do mv $p/locales/en-US.ftl $p/messages.ftl; rmdir $p/locales; done
for p in compiler/rustc_*; do sed -i "s#\.\./locales/en-US.ftl#../messages.ftl#" $p/src/lib.rs; done
Make `rustc_query_system` take `QueryConfig` by instance.
This allows for easy switching between virtual tables and specialized instances for queries. It also has the benefit of less turbofish. `QueryStorage` has also been merged with `QueryCache`.
Split out from https://github.com/rust-lang/rust/pull/107937.
r? `@cjgillot`
errors: generate typed identifiers in each crate
Instead of loading the Fluent resources for every crate in `rustc_error_messages`, each crate generates typed identifiers for its own diagnostics and creates a static; these statics are pulled together in the `rustc_driver` crate and provided to the diagnostic emitter.
There are advantages and disadvantages to this change.
#### Advantages
- Changing a diagnostic now only recompiles the crate for that diagnostic and those crates that depend on it, rather than `rustc_error_messages` and all crates thereafter.
- This approach can be used to support first-party crates that want to supply translatable diagnostics (e.g. `rust-lang/thorin` in https://github.com/rust-lang/rust/pull/102612#discussion_r985372582, cc `@JhonnyBillM`)
- We can extend this a little so that tools built using rustc internals (like clippy or rustdoc) can add their own diagnostic resources (much more easily than those resources needing to be available to `rustc_error_messages`)
#### Disadvantages
- Crates can only refer to the diagnostic messages defined in the current crate (or those from dependencies), rather than all diagnostic messages.
- `rustc_driver` (or some other crate we create for this purpose) has to directly depend on *everything* that has error messages.
- It already transitively depended on all these crates.
#### Pending work
- [x] I don't know how to make `rustc_codegen_gcc`'s translated diagnostics work with this approach - because `rustc_driver` can't depend on that crate and so can't get its resources to provide to the diagnostic emission. I don't really know how the alternative codegen backends are actually wired up to the compiler at all.
- [x] Update `triagebot.toml` to track the moved FTL files.
r? `@compiler-errors`
cc #100717
Instead of loading the Fluent resources for every crate in
`rustc_error_messages`, each crate generates typed identifiers for its
own diagnostics and creates a static; these statics are pulled together in the
`rustc_driver` crate and provided to the diagnostic emitter.
Signed-off-by: David Wood <david.wood@huawei.com>
Use a lock-free datastructure for source_span
Follow-up to the perf regression in https://github.com/rust-lang/rust/pull/105462.
The main regression is likely the `CStore`, but let's evaluate the perf impact of this on its own.
Don't call `with_reveal_all_normalized` in const-eval when `param_env` has inference vars in it
**what:** This slightly shifts the order of operations from an existing hack:
5b6ed253c4/compiler/rustc_middle/src/ty/consts/kind.rs (L225-L230)
in order to avoid calling a tcx query (`TyCtxt::reveal_opaque_types_in_bounds`, via `ParamEnv::with_reveal_all_normalized`) when a param-env has inference variables in it.
**why:** This allows us to enable fingerprinting of query keys/values outside of incr-comp in debug mode, to make sure we catch other places where we're passing infer vars and other bad things into query keys. Currently that (bbf33836b9) crashes because we introduce inference vars into a param-env in the blanket-impl finder in rustdoc 😓 5b6ed253c4/src/librustdoc/clean/blanket_impl.rs (L43)
See the CI failure here: https://github.com/rust-lang/rust/actions/runs/4058194838/jobs/6984834619
Factor query arena allocation out from query caches
This moves the logic for arena allocation out from the query caches into conditional code in the query system. The specialized arena caches are removed. A new `QuerySystem` type is added in `rustc_middle` which contains the arenas, providers and query caches.
Performance seems to be slightly regressed:
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.8053s</td><td align="right">1.8109s</td><td align="right"> 0.31%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2600s</td><td align="right">0.2597s</td><td align="right"> -0.10%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9973s</td><td align="right">1.0006s</td><td align="right"> 0.34%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.6048s</td><td align="right">1.6051s</td><td align="right"> 0.02%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">6.2992s</td><td align="right">6.3159s</td><td align="right"> 0.26%</td></tr><tr><td>Total</td><td align="right">10.9664s</td><td align="right">10.9922s</td><td align="right"> 0.23%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">1.0017s</td><td align="right"> 0.17%</td></tr></table>
Incremental performance is a bit worse:
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check:initial</td><td align="right">2.2103s</td><td align="right">2.2247s</td><td align="right"> 0.65%</td></tr><tr><td>🟣 <b>hyper</b>:check:initial</td><td align="right">0.3335s</td><td align="right">0.3349s</td><td align="right"> 0.41%</td></tr><tr><td>🟣 <b>regex</b>:check:initial</td><td align="right">1.2597s</td><td align="right">1.2650s</td><td align="right"> 0.42%</td></tr><tr><td>🟣 <b>syn</b>:check:initial</td><td align="right">2.0521s</td><td align="right">2.0613s</td><td align="right"> 0.45%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check:initial</td><td align="right">7.8275s</td><td align="right">7.8583s</td><td align="right"> 0.39%</td></tr><tr><td>Total</td><td align="right">13.6832s</td><td align="right">13.7442s</td><td align="right"> 0.45%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">1.0046s</td><td align="right"> 0.46%</td></tr></table>
It does seem like LLVM optimizers struggle a bit with the current state of the query system.
Based on top of https://github.com/rust-lang/rust/pull/107782 and https://github.com/rust-lang/rust/pull/107802.
r? `@cjgillot`