Begin fixing all the broken doctests in `compiler/`
Begins to fix #95994.
All of them pass now, but I've marked 24 of them with `ignore HELP (<explanation>)` (asking for help), as I'm unsure how to get them to work or whether we should leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring in parentheses after it, with
- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.
Those reason notes are a bit inconsistently applied and messy though. If that's important I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions other people have used for this sort of thing. I could try unifying them all if that would be helpful.
I'm not sure if there was a better existing way to do this but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
Enable tracing for all queries
This allows you to log everything within a specific query, e.g.
```
env RUSTC_LOG=[mir_borrowck]
```
Dumping all borrowck queries may be a bit verbose, so you can also restrict it to just an item of your choice:
```
env RUSTC_LOG=[mir_borrowck{key=\.\*name_of_item\.\*}]
```
The `.*` regexes in the key name are needed because the key is a debug-printed `DefId`, so you'd get all kinds of things like hashes in there. The tracing logs will show you the key, so you can restrict it further if you want.
Cached stable hash cleanups
r? `@nnethercote`
Add a sanity assertion in debug mode to check that the cached hashes are actually the ones we get if we compute the hash each time.
Add a new data structure that bundles all the hash-caching work to make it easier to reuse it for different interned data structures.
This commit updates the signatures of all diagnostic functions to accept
types that can be converted into a `DiagnosticMessage`. This enables
existing diagnostic calls to continue to work as before and Fluent
identifiers to be provided. The `SessionDiagnostic` derive just
generates normal diagnostic calls, so these APIs had to be modified to
accept Fluent identifiers.
In addition, loading of the "fallback" Fluent bundle, which contains the
built-in English messages, has been implemented.
Each diagnostic now has "arguments" which correspond to variables in the
Fluent messages (necessary to render a Fluent message) but no API for
adding arguments has been added yet. Therefore, diagnostics (that do not
require interpolation) can be converted to use Fluent identifiers and
will be output as before.
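As a rough illustration of the signature pattern described above (the type and method names here are simplified stand-ins, not the actual `rustc_errors` API):
```rust
// Simplified stand-ins; the real rustc types differ.
enum DiagnosticMessage {
    /// A plain, pre-translated string.
    Str(String),
    /// An identifier referring to a message in a Fluent bundle.
    FluentIdentifier(String),
}

impl From<&str> for DiagnosticMessage {
    fn from(s: &str) -> Self {
        DiagnosticMessage::Str(s.to_owned())
    }
}

struct Handler;

impl Handler {
    // Accepting `impl Into<DiagnosticMessage>` keeps existing `&str` call
    // sites working while also allowing Fluent identifiers to be passed.
    fn struct_err(&self, msg: impl Into<DiagnosticMessage>) -> DiagnosticMessage {
        msg.into()
    }
}
```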
Lazy type-alias-impl-trait take two
### user visible change 1: RPIT inference from recursive call sites
Lazy TAIT has an insta-stable change. The following snippet now compiles, because opaque types can now have their hidden type set from wherever the opaque type is mentioned.
```rust
fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return 42
    }
    let x: u32 = bar(false); // this errors on stable
    99
}
```
The return type of `bar` stays opaque; you can't do `bar(false) + 42`, you need to actually mention the hidden type.
### user visible change 2: divergence between RPIT and TAIT in return statements
Note that `return` statements and the trailing return expression are special with RPIT (but not TAIT). So
```rust
#![feature(type_alias_impl_trait)]
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![42];
    }
    std::iter::empty().collect() //~ ERROR `Foo` cannot be built from an iterator
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![42]
    }
    std::iter::empty().collect() // Works, magic (accidentally stabilized, not intended)
}
```
But when we are working with the return value of a recursive call, the behavior of RPIT and TAIT is the same:
```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![];
    }
    let mut x = foo(false);
    x = std::iter::empty().collect(); //~ ERROR `Foo` cannot be built from an iterator
    vec![]
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![];
    }
    let mut x = bar(false);
    x = std::iter::empty().collect(); //~ ERROR `impl Debug` cannot be built from an iterator
    vec![]
}
```
### user visible change 3: TAIT does not merge types across branches
In contrast to RPIT, TAIT does not merge types across branches, so the following does not compile.
```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        vec![42_i32]
    } else {
        std::iter::empty().collect()
        //~^ ERROR `Foo` cannot be built from an iterator over elements of type `_`
    }
}
```
It is easy to support, but we should make an explicit decision to include the additional complexity in the implementation (it's not much, see a721052457cf513487fb4266e3ade65c29b272d2 which needs to be reverted to enable this).
### PR formalities
previous attempt: #92007
This PR also includes #92306 and #93783, as they were reverted along with #92007 in #93893.
fixes #93411 fixes #88236 fixes #89312 fixes #87340 fixes #86800 fixes #86719 fixes #84073 fixes #83919 fixes #82139 fixes #77987 fixes #74282 fixes #67830 fixes #62742 fixes #54895
Avoid query cache sharding code in single-threaded mode
In non-parallel compilers, this is just adding needless overhead at compilation time (since there is only one shard statically anyway). This amounts to roughly a 10 second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.
Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
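A rough sketch of the idea (hypothetical types; the real sharded-cache abstraction in `rustc_data_structures` differs in detail):
```rust
// In a non-parallel build there is statically a single shard, so no key
// hashing is needed to pick one; in a parallel build the key hash selects
// one of several lock-protected shards.
#[cfg(not(parallel_compiler))]
struct Sharded<T> {
    single: std::cell::RefCell<T>,
}

#[cfg(parallel_compiler)]
struct Sharded<T> {
    shards: Vec<std::sync::Mutex<T>>,
}
```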
Simplify rustc_serialize by dropping support for decoding into JSON
This PR currently bundles two (somewhat separate) tasks.
First, it removes the JSON `Decoder` trait impl, which permitted going from JSON to Rust structs. For now, we keep supporting JSON deserialization, but only to `Json` (an equivalent of serde_json::Value). The main user there that is hard to remove is custom targets -- which need some form of JSON deserialization -- but they already have a custom ad-hoc pass for moving from `Json` to a Rust struct.
A [comment](e7aca89598/compiler/rustc_target/src/spec/mod.rs (L1653)) there suggests that it would be impractical to move them to a Decodable-based impl, at least without backwards compatibility concerns. I suspect that if we were widely breaking compat there, it would make sense to use serde_json at this point which would produce better error messages; the types in rustc_target are relatively isolated so we would not particularly suffer from using serde_derive.
The second part of the PR (all but the first commit) is to simplify the Decoder API by removing the non-primitive `read_*` functions. These primarily add indirection (through a closure), which doesn't directly cause a performance issue (the unique closure types essentially guarantee monomorphization), but does increase the amount of work rustc and LLVM need to do. This could be split out to a separate PR, but is included here in part to help motivate the first part.
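For illustration, a before/after sketch of the kind of change involved (hypothetical trait shapes, not the actual `rustc_serialize` definitions):
```rust
// Before: non-primitive reads were routed through closures, adding a layer of
// indirection and extra monomorphized code for each call site.
trait DecoderBefore {
    fn read_usize(&mut self) -> usize;
    fn read_seq<T>(&mut self, f: impl FnOnce(&mut Self, usize) -> T) -> T;
}

// After: only primitive reads remain; callers read the length and loop
// themselves, with no closure layer in between.
trait DecoderAfter {
    fn read_usize(&mut self) -> usize;
    fn read_u8(&mut self) -> u8;
}
```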
Future work might consist of:
* Specializing enum discriminant encoding to avoid leb128 for small enums (since we know the variant count, we can directly use read/write u8 in almost all cases)
* Adding new methods to support faster deserialization (e.g., access to the underlying byte stream)
* Currently these are somewhat ad-hoc supported by specializations for e.g. `Vec<u8>`, but other types which could benefit don't today.
* Removing the Decoder trait entirely in favor of a concrete type -- today, we only really have one impl of it modulo wrappers used for specialization-based dispatch.
Highly recommend review with whitespace changes off, as the removal of closures frequently causes things to be de-indented.
This was largely just caching the shard value at this point, which is not
particularly useful -- in the use sites the key was being hashed nearby anyway.
Refactor query system to maintain a global job id counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.
The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.
A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
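A minimal sketch of the non-parallel counter described above (illustrative only; the real `JobId` plumbing is more involved):
```rust
use std::cell::Cell;

// A single, session-global counter; `Cell<u64>` suffices in the non-parallel
// case, and a u64 will not realistically overflow in this context.
struct JobIdCounter {
    next: Cell<u64>,
}

impl JobIdCounter {
    fn next_job_id(&self) -> u64 {
        let id = self.next.get();
        self.next.set(id + 1);
        id
    }
}
```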
Lazy type-alias-impl-trait
Previously opaque types were processed by
1. replacing all mentions of them with inference variables
2. memorizing these inference variables in a side-table
3. at the end of typeck, resolve the inference variables in the side table and use the resolved type as the hidden type of the opaque type
This worked okayish for `impl Trait` in return position, but required lots of roundabout type inference hacks and processing.
This PR instead stops this process of replacing opaque types with inference variables, and just keeps the opaque types around.
Whenever an opaque type `O` is compared with another type `T`, we make the comparison succeed and record `T` as the hidden type. If `O` is compared to `U` while there is a recorded hidden type for it, we grab the recorded type (`T`) and compare that against `U`. This makes implementing
* https://github.com/rust-lang/rfcs/pull/2515
much simpler (previous attempts on the inference-based scheme were very prone to ICEs and general misbehaviour that was not explainable except by random implementation-defined oddities).
r? `@nikomatsakis`
fixes #93411 fixes #88236
by using an opaque type obligation to bubble up comparisons between opaque types and other types
Also uses proper obligation causes so that the body id works, because for some reason NLL uses body ids for logic instead of just diagnostics.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
Update rayon and rustc-rayon
This updates rayon for various tools and rustc-rayon for the compiler's parallel mode.
- rayon v1.3.1 -> v1.5.1
- rayon-core v1.7.1 -> v1.9.1
- rustc-rayon v0.3.1 -> v0.3.2
- rustc-rayon-core v0.3.1 -> v0.3.2
... and indirectly, this updates all of crossbeam-* to their latest versions.
Fixes #92677 by removing crossbeam-queue, but there's still a lingering question about how tidy discovers "runtime" dependencies. None of this is truly in the standard library's dependency tree at all.
Ensure that `Fingerprint` caching respects hashing configuration
Fixes #92266
In some `HashStable` impls, we use a cache to avoid re-computing
the same `Fingerprint` from the same structure (e.g. an `AdtDef`).
However, the `StableHashingContext` used can be configured to
perform hashing in different ways (e.g. skipping `Span`s). This
configuration information is not included in the cache key,
which will cause an incorrect `Fingerprint` to be used if
we hash the same structure with different `StableHashingContext`
settings.
To fix this, the configuration settings of `StableHashingContext`
are split out into a separate `HashingControls` struct. This
struct is used as part of the cache key, ensuring that our caches
always produce the correct result for the given settings.
With this in place, we now turn off `Span` hashing during the
entire process of computing the hash included in legacy symbols.
This currently has no effect, but will matter when a future PR
starts hashing more `Span`s that we currently skip.
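A sketch of the shape of the fix (stand-in types; the actual cache lives inside the `HashStable` machinery):
```rust
use std::collections::HashMap;

// Stand-in for the real 128-bit fingerprint.
type Fingerprint = (u64, u64);

// The hashing configuration that used to be implicit in the context.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct HashingControls {
    hash_spans: bool,
}

// Keying the cache on (item, controls) means hashing the same structure under
// different settings can never return a stale fingerprint.
struct FingerprintCache {
    cached: HashMap<(u64, HashingControls), Fingerprint>,
}
```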
Don't perform any new queries while reading a query result on disk
In addition to being very confusing, this can cause us to add dep node edges between two queries that would not otherwise have an edge.
We now panic if any new dep node edges are created during the deserialization of a query result. This requires serializing the full `AdtDef` to disk, instead of just serializing the `DefId` and invoking the `adt_def` query during deserialization.
I'll probably split this up into several smaller PRs for perf runs.
Remove special-cased stable hashing for HIR module
All other 'containers' (e.g. `impl` blocks) hashed their contents
in the normal, order-dependent way. However, `Mod` was hashing
its contents in a (sort-of) order-independent way. However, the
exact order is exposed to consumers through `Mod.item_ids`,
and through query results like `hir_module_items`. Therefore,
stable hashing needs to take the order of items into account,
to avoid fingerprint ICEs.
Unfortunately, I was unable to directly build a reproducer
for the ICE, due to the behavior of `Fingerprint::combine_commutative`.
This operation swaps the upper and lower `u64` when constructing the
result, which makes the function non-associative. Since we start
the hashing of module items by combining `Fingerprint::ZERO` with
the first item, it's difficult to actually build an example where
changing the order of module items leaves the final hash unchanged.
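To illustrate why that swap breaks associativity, here is a toy version of such an operation (purely illustrative; the real `Fingerprint::combine_commutative` differs in detail):
```rust
// Treat a fingerprint as (hi, lo). Adding component-wise keeps the operation
// commutative, but swapping the halves of the result makes it non-associative:
// combine(combine(a, b), c) != combine(a, combine(b, c)) in general.
fn combine_commutative(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let hi = a.0.wrapping_add(b.0);
    let lo = a.1.wrapping_add(b.1);
    (lo, hi) // the swap
}
```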
However, this appears to have been hit in practice in #92218.
While we're not able to reproduce it, the fact that proc-macros
are involved (which can give an entire module the same span, preventing
any span-related invalidations) makes me confident that the root
cause of that issue is our method of hashing module items.
This PR removes all of the special handling for `Mod`, instead deriving
a `HashStable` implementation. This makes `Mod` consistent with other
'containers' like `Impl`, which hash their contents through the typical
derive of `HashStable`.
Currently, you can use `#[rustc_clean]` to assert that a particular
query (technically, a `DepNode`) is green or red. However, a green
`DepNode` does not mean that the query result was actually deserialized
from disk - we might have never re-run a query that needed the result.
Some incremental tests are written as regression tests for ICEs that
occurred during query result decoding. Using
`#[rustc_clean(loaded_from_disk="typeck")]`, you can now assert
that the result of a particular query (e.g. `typeck`) was actually
loaded from disk, in addition to being green.
This commit is intended to follow the stabilization disposition of the
FCP that has now finished in #84223. This stabilizes the ability to flag
thread local initializers as `const` expressions which enables the macro
to generate more efficient code for accessing it, notably removing
runtime checks for initialization.
More information can also be found in #84223 as well as the tests where
the feature usage was removed in this PR.
Closes #84223
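For example, the now-stable form looks like this:
```rust
use std::cell::Cell;

thread_local! {
    // `const { ... }` marks the initializer as a constant expression, letting
    // the macro skip the lazy-initialization check on each access.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
}
```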
Enable verification for 1/32nd of queries loaded from disk
This is a limited enabling of incremental verification for query results loaded from disk, which previously did not run without -Zincremental-verify-ich. If enabled for all queries, we see a probably unacceptable hit of ~50% in the worst case, so this pares back the verification to a more limited set based on the hash key.
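The selection can be pictured roughly like this (an illustrative sketch, not the actual verification code):
```rust
// Only keys whose stable hash lands in 1 of 32 buckets get the extra
// verification pass when their result is loaded from disk.
fn should_verify(key_hash: u64) -> bool {
    key_hash % 32 == 0
}
```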
Per collected [perf results](https://github.com/rust-lang/rust/pull/84227#issuecomment-953350582), this is a regression of at most 7% on coercions opt incr-unchanged, and typically less than 0.5% on other benchmarks (largely limited to incr-unchanged). I believe this is acceptable performance to land, and we can either ratchet it up or down fairly easily.
We have no real sense of whether this will lead to a large amount of assertions in the wild, but since those assertions may lead to miscompilations today, it seems potentially warranted. We have a good bit of lead time until the next stable release, though the holiday season will also start soon; we may wish to discuss the timing of enabling this and weigh the desire to prevent (possible) miscompilations against assertions.
cc `@rust-lang/wg-incr-comp`
Revert "Add rustc lint, warning when iterating over hashmaps"
Fixes perf regressions introduced in https://github.com/rust-lang/rust/pull/90235 by temporarily reverting the relevant PR.
Build the query vtable directly.
Continuation of https://github.com/rust-lang/rust/pull/89978.
This shrinks the query interface and attempts to reduce the amount of function pointer calls.
Add support for artifact size profiling
This adds support for profiling artifact file sizes (incremental compilation artifacts and query cache to begin with).
Eventually we want to track this in perf.rlo so we can ensure that file sizes do not change dramatically on each pull request.
This relies on support in measureme: https://github.com/rust-lang/measureme/pull/169. Once that lands we can update this PR to not point to a git dependency.
This was worked on together with `@michaelwoerister`.
r? `@wesleywiser`
Adopt let_else across the compiler
This performs a substitution of code following the pattern:
```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```
To simplify it to:
```
let <pat> = ... else { ... : ! };
```
By adopting the `let_else` feature (cc #87335).
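A hypothetical instance of the rewrite, for illustration (not code taken from the compiler; with the `let_else` feature enabled):
```rust
fn first_or_zero(values: &[u32]) -> u32 {
    // Before: bind through `if let`, diverging in the else arm.
    // let v = if let Some(v) = values.first() { v } else { return 0 };

    // After: the same binding expressed with let-else.
    let Some(v) = values.first() else { return 0 };
    *v
}
```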
The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.
Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.
Index and hash HIR as part of lowering
Part of https://github.com/rust-lang/rust/pull/88186
~Based on https://github.com/rust-lang/rust/pull/88880 (see merge commit).~
Once HIR is lowered, it is later indexed by the `index_hir` query and hashed for `crate_hash`. This PR moves those post-processing steps to lowering itself. As a side objective, the HIR crate data structure is refactored as an `IndexVec<LocalDefId, Option<OwnerInfo<'hir>>>` where `OwnerInfo` stores all the relevant information for an HIR owner.
r? `@michaelwoerister`
cc `@petrochenkov`
This was already only enabled in debug_assertions builds. Generally, it seems
like most use cases that would use this could also use the -Zself-profile flag
which also tracks cache hits (in all builds), and so the extra cfgs and such
are not really necessary.
This is largely just a small cleanup though, which primarily is intended to make
other changes easier by avoiding the need to deal with this field.
Simplify lazy DefPathHash decoding by using an on-disk hash table.
This PR simplifies the logic around mapping `DefPathHash` values encountered during incremental compilation to valid `DefId`s in the current session. It is able to do so by using an on-disk hash table encoding that allows for looking up values directly, i.e. without deserializing the entire table.
The main simplification comes from not having to keep track of `DefPathHashes` being used during the compilation session.
Specify a log level in tracing instrument macro explicitly.
Additionally, reduce the log level used from the default info level to
debug (all of these appear to be developer-oriented logs, so there
should be no need to include them in release builds).
Refactor query forcing
The control flow in those functions was very complex, with several layers of continuations.
I tried to simplify the implementation, while keeping essentially the same logic.
Now, all code paths go through `try_execute_query` for the actual query execution.
Communication with the `dep_graph` and the live caches are the only difference between query getting/ensuring/forcing.
Previously, `QueryJobInfo` was composed of two parts: a `QueryInfo` and
a `QueryJob`. However, both `QueryInfo` and `QueryJob` have a `span`
field, which appears to hold the same value. So, the `span` was recorded twice.
Now, `QueryJobInfo` is composed of a `QueryStackFrame` (the other field
of `QueryInfo`) and a `QueryJob`. So, now, the `span` is only recorded
once.
try_execute_query is now able to centralize the path for query
get/ensure/force.
try_execute_query now takes the dep_node as a parameter, so it can
accommodate `force`. This dep_node is an Option to avoid computing it in
the `get` fast path.
try_execute_query now returns both the result and the dep_node_index to
allow the caller to handle the dep graph.
The caller is responsible for marking the dependency.
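Roughly, the centralized entry point has this shape (a hypothetical signature with stand-in types, not the real `rustc_query_system` code):
```rust
struct DepNode;
struct DepNodeIndex(u32);

// get/ensure/force all funnel through here; `dep_node` is `None` on the `get`
// fast path and `Some` when forcing a known node. The caller uses the returned
// DepNodeIndex to record the dependency in the dep graph.
fn try_execute_query<K, V>(
    _key: K,
    dep_node: Option<DepNode>,
    compute: impl FnOnce() -> V,
) -> (V, DepNodeIndex) {
    let _is_force = dep_node.is_some();
    (compute(), DepNodeIndex(0))
}
```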
`with_task_impl` is only called from `with_eval_always_task` and
`with_task`. The former is only used in query invocation, while the
latter is also used to start the `tcx` and to trigger codegen.
This move should not significantly change the number of calls to this
assertion.
When an incremental fingerprint mismatch occurs, we debug-print
our `DepNode` and query result. Unfortunately, the debug printing
process may cause us to run additional queries, which can result
in a re-entrant fingerprint mismatch error.
To avoid a double panic, this commit adds a thread-local variable
to detect re-entrant calls.
rfc3052 followup: Remove authors field from Cargo manifests
Since RFC 3052 soft deprecated the authors field, hiding it from
crates.io, docs.rs, and making Cargo not add it by default, and it is
not generally up to date/useful information for contributors, we may as well
remove it from crates in this repo.
Remove unused feature gates
The first commit removes a usage of a feature gate, but I don't expect it to be controversial as the feature gate was only used to workaround a limitation of rust in the past. (closures never being `Clone`)
The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.
The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
Make `Step` trait safe to implement
This PR makes a few modifications to the `Step` trait that I believe better position it for stabilization in the short term. In particular,
1. `unsafe trait TrustedStep` is introduced, indicating that the implementation of `Step` for a given type upholds all stated invariants (which have remained unchanged). This is gated behind a new `trusted_step` feature, as stabilization is realistically blocked on min_specialization.
2. The `Step` trait is internally specialized on the `TrustedStep` trait, which avoids a serious performance regression.
3. `TrustedLen` is implemented for `T: TrustedStep` as the latter's invariants subsume the former's.
4. The `Step` trait is no longer `unsafe`, as the invariants must not be relied upon by unsafe code (unless the type implements `TrustedStep`).
5. `TrustedStep` is implemented for all types that implement `Step` in the standard library and compiler.
6. The `step_trait_ext` feature is merged into the `step_trait` feature. I was unable to find any reasoning for the features being split; the `_unchecked` methods need not necessarily be stabilized at the same time, but I think it is useful to have them under the same feature flag.
All existing implementations of `Step` will be broken, as it is not possible to `unsafe impl` a safe trait. Given this trait only exists on nightly, I feel this breakage is acceptable. The blanket `impl<T: Step> TrustedLen for T` will likely cause some minor breakage, but this should be covered by the equivalent impl for `TrustedStep`.
Hopefully these changes are sufficient to place `Step` in decent position for stabilization, which would allow user-defined types to be used with `a..b` syntax.
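To summarize the resulting shape (a simplified sketch, not the actual standard-library definitions):
```rust
// `Step` itself is now a safe trait: unsafe code must not rely on its
// invariants unless the type also implements `TrustedStep`.
pub trait Step: Clone + PartialOrd + Sized {
    fn steps_between(start: &Self, end: &Self) -> Option<usize>;
    fn forward_checked(start: Self, count: usize) -> Option<Self>;
    fn backward_checked(start: Self, count: usize) -> Option<Self>;
}

// The unsafe marker asserting that a `Step` impl upholds the documented
// invariants; `Step`'s internals and `TrustedLen` can specialize on it.
pub unsafe trait TrustedStep: Step {}
```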
This means that we're no longer generating the iteration/locking code for each
invocation site of iter_results, rather just once per query.
This is a 15% win in instruction counts when compiling the rustc_query_impl crate.
Found with https://github.com/est31/warnalyzer.
Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
wants to use it in the future?
- Don't change rustc_serialize
I plan to scrap most of the json module in the near future (see
https://github.com/rust-lang/compiler-team/issues/418) and fixing the
tests needed more work than I expected.
TODO: check if any of the comments on the deleted code should be kept.
Update to rustc-rayon 0.3.1
This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t `@ammaraskar`)
That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
Issue #82920 showed that the kind of bugs caught by this flag have
soundness implications.
This causes performance regressions of up to 15.2% during incremental
compilation, but this is necessary to catch miscompilations caused by
bugs in query implementations.
Check the result cache before the DepGraph when ensuring queries
Split out of https://github.com/rust-lang/rust/pull/70951
Calling `ensure` on already forced queries is a common operation.
Looking at the results cache first is faster than checking the DepGraph for a green node.
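A sketch of the reordered fast path (illustrative stand-ins for the query result cache and the dep graph):
```rust
use std::collections::HashMap;
use std::hash::Hash;

// `ensure` can return as soon as the result cache already holds the value;
// only on a miss do we ask the dep graph whether the node is green.
fn ensure_is_cheap<K: Eq + Hash, V>(
    result_cache: &HashMap<K, V>,
    key: &K,
    dep_node_is_green: impl Fn(&K) -> bool,
) -> bool {
    if result_cache.contains_key(key) {
        return true;
    }
    dep_node_is_green(key)
}
```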
[experiment] remove `#[inline]` from rustc_query_system::plumbing
These functions have a ton of generic parameters and are instantiated
over and over again. Hopefully this will reduce binary bloat and speed
up bootstrapping times.
r? `@cjgillot`
Enforce that query results implement Debug
Currently, we require that query keys implement `Debug`, but we do not do the same for query values. This can make incremental compilation bugs difficult to debug - there isn't a good place to print out the result loaded from disk.
This PR adds `Debug` bounds to several query-related functions, allowing us to debug-print the query value when an 'unstable fingerprint' error occurs. This required adding `#[derive(Debug)]` to a fairly large number of types - hopefully, this doesn't have much of an impact on compiler bootstrapping times.
Serialize dependency graph directly from DepGraph
Reduce memory usage by serializing dep graph directly from `DepGraph`,
rather than copying it into `SerializedDepGraph` and serializing that.
Reduce memory consumption by sharing the previous dependency graph's
edges with the current graph when it is known to be valid to do so. It
is known to be valid whenever we mark a node green because all of its
dependencies were green. It is *not* known to be valid when we mark a
node green because we re-executed its query and its result was the same
as in the previous compilation session. In that case, the dependency set
might have changed (we don't try to determine whether or not it changed
and whether or not we can share).
Reduce memory consumption by taking advantage of red/green algorithm
properties to share the previous dependency graph's node data with the
current graph instead of storing node data redundantly. Red nodes can
share the `DepNode`, and green nodes can share the `DepNode` and
`Fingerprint`. Edges will be shared when possible in a later change.
Register nodes that we've reused from the previous session explicitly
with `OnDiskCache`. Previously, we relied on this happening as a side
effect of accessing the nodes in the `PreviousDepGraph`. For the sake of
performance and avoiding unintended side effects, register explicitly.
Fixes #79890
Previously, we just copied a `RawDefId` from the 'old' map to the 'new'
map. However, the `RawDefId` for a given `DefPathHash` may be different
in the current compilation session. Using `def_path_hash_to_def_id`
ensures that the `RawDefId` we use is valid in the current session.
Fixes #79661
In incremental compilation mode, we update a `DefPathHash -> DefId`
mapping every time we create a `DepNode` for a foreign `DefId`.
This mapping is written out to the on-disk incremental cache, and is
read by the next compilation session to allow us to lazily decode
`DefId`s.
When we decode a `DepNode` from the current incremental cache, we need
to ensure that any previously-recorded `DefPathHash -> DefId` mapping
gets recorded in the new mapping that we write out. However, PR #74967
didn't do this in all cases, leading to us being unable to decode a
`DefPathHash` in certain circumstances.
This PR refactors some of the code around `DepNode` deserialization to
prevent this kind of mistake from happening again.
Make fewer types generic over QueryContext
While trying to refactor `rustc_query_system::query::QueryContext` to make it dyn-safe, I noticed some smaller things:
* QueryConfig doesn't need to be generic over QueryContext
* ~~The `kind` field on QueryJobId is unused~~
* Some unnecessary where clauses
* Many types in `job.rs` were generic over `QueryContext` but only needed `QueryContext::Query`.
If handle_cycle_error() could be refactored to not take `error: CycleError<CTX::Query>`, all those bounds could be removed as well.
Changing `find_cycle_in_stack()` in job.rs to not take a `tcx` argument is the only functional change here. Everything else is just updating type signatures. (aka compile-error driven development ^^)
~~Currently there is a weird bug where memory usage suddenly skyrockets when running UI tests. I'll investigate that tomorrow.
A perf run probably won't make sense before that is fixed.~~
EDIT: `kind` actually is used by `Eq`, and re-adding it fixed the memory issue.