This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots of qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).
(This was previously merged as commit 5 in #94732 and then was reverted
in #97905 because of a perf regression caused by commit 4 in #94732.)
There are two impls of the `Encoder` trait: `opaque::Encoder` and
`opaque::FileEncoder`. The former encodes into memory and is infallible; the
latter writes to a file and is fallible.
Currently, standard `Result`/`?`/`unwrap` error handling is used, but this is a
bit verbose and has non-trivial cost, which is annoying given how rare failures
are (especially in the infallible `opaque::Encoder` case).
This commit changes how `Encoder` fallibility is handled. All the `emit_*`
methods are now infallible. `opaque::Encoder` requires no great changes for
this. `opaque::FileEncoder` now implements a delayed error handling strategy.
If a failure occurs, it records this via the `res` field, and all subsequent
encoding operations are skipped if `res` indicates an error has occurred. Once
encoding is complete, the new `finish` method is called, which returns a
`Result`. In other words, there is now a single `Result`-producing method
instead of many of them.
This has very little effect on how file errors are reported if
`opaque::FileEncoder` encounters a failure.
Much of this commit is boring mechanical changes, removing `Result` return
values and `?` or `unwrap` from expressions. The more interesting parts are as
follows.
- serialize.rs: The `Encoder` trait gains an `Ok` associated type. The
`into_inner` method is changed into `finish`, which returns
`Result<Vec<u8>, !>`.
- opaque.rs: The `FileEncoder` adopts the delayed error handling
strategy. Its `Ok` type is a `usize`, returning the number of bytes
written, replacing previous uses of `FileEncoder::position`.
- Various methods that take an encoder now consume it, rather than being
passed a mutable reference, e.g. `serialize_query_result_cache`.
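To make the delayed-error strategy concrete, here is a minimal sketch of the
shape it takes (the `res` field and `finish` method follow the description
above; everything else is simplified and illustrative rather than the
compiler's actual implementation):
```
use std::fs::File;
use std::io::{self, Write};

struct FileEncoder {
    file: File,
    written: usize,
    // Records the first failure; subsequent emit_* calls become no-ops.
    res: Result<(), io::Error>,
}

impl FileEncoder {
    // The emit_* methods are infallible from the caller's point of view.
    fn emit_bytes(&mut self, bytes: &[u8]) {
        if self.res.is_ok() {
            match self.file.write_all(bytes) {
                Ok(()) => self.written += bytes.len(),
                Err(e) => self.res = Err(e),
            }
        }
    }

    // The single Result-producing method, called once encoding is complete.
    // Its Ok value is the number of bytes written.
    fn finish(self) -> Result<usize, io::Error> {
        let written = self.written;
        self.res.map(|()| written)
    }
}
```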
`SourceFile::lines` is a big part of metadata. It's stored in a compressed form
(a difference list) to save disk space. Decoding it is a big fraction of
compile time for very small crates/programs.
This commit introduces a new type `SourceFileLines` which has a `Lines`
form and a `Diffs` form. The latter is used when the metadata is first
read, and it is only decoded into the `Lines` form when line data is
actually needed. This avoids the decoding cost for many files,
especially in `std`. It's a performance win of up to 15% for tiny
crates/programs, where metadata decoding accounts for a large share of
compilation costs.
A `Lock` is needed because the methods that access lines data (which can
trigger decoding) take `&self` rather than `&mut self`. To allow for this,
`SourceFile::lines` now takes a `FnMut` that operates on the lines slice rather
than returning the lines slice.
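A rough sketch of the two-form representation and the callback-based accessor
(types, field names, and the diff encoding here are illustrative, not the
compiler's exact definitions; `Mutex` stands in for the compiler's `Lock`):
```
use std::sync::Mutex;

enum SourceFileLines {
    // Fully decoded: the absolute positions of line starts.
    Lines(Vec<u32>),
    // Still in the compressed on-disk form: a difference list plus the
    // starting position needed to expand it on first use.
    Diffs { start: u32, diffs: Vec<u8> },
}

struct SourceFile {
    lines: Mutex<SourceFileLines>,
}

impl SourceFile {
    // Takes a callback instead of returning the slice, so decoding can happen
    // behind `&self` the first time line data is actually needed.
    fn lines<F: FnMut(&[u32])>(&self, mut f: F) {
        let mut guard = self.lines.lock().unwrap();
        if let SourceFileLines::Diffs { start, diffs } = &*guard {
            let mut pos = *start;
            let mut lines = vec![pos];
            for &d in diffs {
                pos += u32::from(d);
                lines.push(pos);
            }
            *guard = SourceFileLines::Lines(lines);
        }
        if let SourceFileLines::Lines(lines) = &*guard {
            f(lines);
        }
    }
}
```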
Remove `crate` visibility modifier
FCP to remove this syntax is just about complete in #53120. Once it completes, this should be merged ASAP to avoid merge conflicts.
The first two commits remove usage of the feature in this repository, while the last removes the feature itself.
Avoid query cache sharding code in single-threaded mode
In non-parallel compilers, this is just adding needless overhead at compilation time (since there is only one shard statically anyway). This amounts to roughly ~10 seconds reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.
Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
This was largely just caching the shard value at this point, which is not
particularly useful -- at the use sites the key was being hashed nearby anyway.
Specifically, rename the `Const` struct as `ConstS` and re-introduce `Const` as
this:
```
pub struct Const<'tcx>(Interned<'tcx, ConstS<'tcx>>);
```
This now matches `Ty` and `Predicate` more closely, including using
pointer-based `eq` and `hash`.
Notable changes:
- `mk_const` now takes a `ConstS`.
- `Const` was `Copy`, despite being 48 bytes. `ConstS` is not, so we need a
separate arena for it, because we can't use the `Dropless` one any more.
- Many `&'tcx Const<'tcx>`/`&Const<'tcx>` to `Const<'tcx>` changes.
- Many `ct.ty` to `ct.ty()` and `ct.val` to `ct.val()` changes.
- Lots of tedious sigil fiddling.
Specifically, change `Ty` from this:
```
pub type Ty<'tcx> = &'tcx TyS<'tcx>;
```
to this:
```
pub struct Ty<'tcx>(Interned<'tcx, TyS<'tcx>>);
```
There are two benefits to this.
- It's now a first class type, so we can define methods on it. This
means we can move a lot of methods away from `TyS`, leaving `TyS` as a
barely-used type, which is appropriate given that it's not meant to
be used directly.
- The uniqueness requirement is now explicit, via the `Interned` type.
E.g. the pointer-based `Eq` and `Hash` comes from `Interned`, rather
than via `TyS`, which wasn't obvious at all.
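For reference, a minimal, self-contained sketch of the `Interned` idea (not
the compiler's actual type): because every value is interned exactly once,
equality and hashing can be done on the pointer rather than the contents.
```
use std::hash::{Hash, Hasher};

struct Interned<'a, T>(&'a T);

impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        // Pointer equality is sound because each value is interned exactly once.
        std::ptr::eq(self.0, other.0)
    }
}

impl<'a, T> Eq for Interned<'a, T> {}

impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Hash the address, consistent with the pointer-based equality above.
        (self.0 as *const T).hash(state);
    }
}
```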
Much of this commit is boring churn. The interesting changes are in
these files:
- compiler/rustc_middle/src/arena.rs
- compiler/rustc_middle/src/mir/visit.rs
- compiler/rustc_middle/src/ty/context.rs
- compiler/rustc_middle/src/ty/mod.rs
Specifically:
- Most mentions of `TyS` are removed. It's very much a dumb struct now;
`Ty` has all the smarts.
- `TyS` now has `crate` visibility instead of `pub`.
- `TyS::make_for_test` is removed in favour of the static `BOOL_TY`,
which just works better with the new structure.
- The `Eq`/`Ord`/`Hash` impls are removed from `TyS`. `Interned`'s impls
of `Eq`/`Hash` now suffice. `Ord` is now partly on `Interned`
(pointer-based, for the `Equal` case) and partly on `TyS`
(contents-based, for the other cases).
- There are many tedious sigil adjustments, i.e. adding or removing `*`
or `&`. They seem to be unavoidable.
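Building on the `Interned` sketch above, the split `Ord` described here could
look roughly like this (illustrative only, not the exact impls):
```
use std::cmp::Ordering;

impl<'a, T: Ord> PartialOrd for Interned<'a, T> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl<'a, T: Ord> Ord for Interned<'a, T> {
    fn cmp(&self, other: &Self) -> Ordering {
        if std::ptr::eq(self.0, other.0) {
            // The `Equal` case is decided on the wrapper, by pointer.
            Ordering::Equal
        } else {
            // All other cases are decided by the contents (on `TyS` in the
            // real code).
            self.0.cmp(other.0)
        }
    }
}
```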
Refactor query system to maintain a global job id counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component, or some similar structure, if it becomes worrisome.
The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.
A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
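A simplified sketch of the single global counter in the non-parallel case
(names are illustrative): `Cell<u64>` provides interior mutability without
locking, and a u64 cannot realistically overflow here.
```
use std::cell::Cell;

struct QueryJobId(u64);

struct JobIdCounter {
    next: Cell<u64>,
}

impl JobIdCounter {
    fn new() -> Self {
        JobIdCounter { next: Cell::new(0) }
    }

    fn next_job_id(&self) -> QueryJobId {
        let id = self.next.get();
        self.next.set(id + 1);
        QueryJobId(id)
    }
}
```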
Delete -Zquery-stats infrastructure
These statistics are computable from the self-profile data and/or ad-hoc collectable as needed, and in the meantime contribute to rustc bootstrap times -- locally, this PR shaves ~2.5% from rustc_query_impl builds in instruction counts.
If this does lose some functionality we want to keep, I think we should migrate it to self-profile (or a similar interface) rather than this ad-hoc reporting.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- `Decodable` impls for `SmallVec`, `LinkedList`, and `VecDeque` now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
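As an illustration of the `collect`-based shape (the decoder trait and its
method names here are simplified stand-ins, not the compiler's actual
interfaces):
```
use std::collections::VecDeque;

trait SimpleDecoder {
    fn read_usize(&mut self) -> usize;
    fn read_u8(&mut self) -> u8;
}

// Decoding a sequence becomes: read the length, then collect that many decoded
// elements. No Result plumbing is needed once decoding is infallible.
fn decode_vec_deque<D: SimpleDecoder>(d: &mut D) -> VecDeque<u8> {
    let len = d.read_usize();
    (0..len).map(|_| d.read_u8()).collect()
}
```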
Fixes #92163
Fixes #92014
When writing to the incremental cache, we encode all `Span`s
we encounter, regardless of whether their `SourceFile` comes from the
local crate or from a foreign crate.
When we decode a `Span`, we use the `StableSourceFileId` we encoded
to locate the matching `SourceFile` in the current session. If this
id corresponds to a `SourceFile` from another crate, then we need to
have already imported that `SourceFile` into our current session.
This usually happens automatically during resolution / macro expansion,
when we try to resolve definitions from other crates. In certain cases,
however, we may try to load a `Span` from a transitive dependency
without having ever imported the `SourceFile`s from that crate, leading
to an ICE.
This PR fixes the issue by calling `imported_source_files()`
when we encounter a `SourceFile` with a foreign `CrateNum`.
This ensures that all `SourceFile`s from that crate are imported
into the current session.
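A self-contained analogue of the fix (all names here are illustrative, not the
compiler's API): before resolving a file id that belongs to another crate,
force that crate's source files to be imported.
```
use std::collections::HashSet;

const LOCAL_CRATE: u32 = 0;

struct Session {
    imported_crates: HashSet<u32>,
}

impl Session {
    // Idempotent: a no-op if `cnum`'s files were already imported.
    fn imported_source_files(&mut self, cnum: u32) {
        if self.imported_crates.insert(cnum) {
            // ... translate and register every SourceFile of `cnum` ...
        }
    }

    fn resolve_source_file(&mut self, cnum: u32, _stable_id: u64) {
        if cnum != LOCAL_CRATE {
            // The fix: import the foreign crate's files before the lookup,
            // instead of assuming resolution / macro expansion already did it.
            self.imported_source_files(cnum);
        }
        // ... look up the SourceFile by its stable id in the current session ...
    }
}
```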
Remove `SymbolStr`
This was originally proposed in https://github.com/rust-lang/rust/pull/74554#discussion_r466203544. As well as removing the icky `SymbolStr` type, it allows the removal of a lot of `&` and `*` occurrences.
Best reviewed one commit at a time.
r? `@oli-obk`
By changing `as_str()` to take `&self` instead of `self`, we can just
return `&str`. We're still lying about lifetimes, but it's a smaller lie
than before, where `SymbolStr` contained a (fake) `&'static str`!
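A minimal illustration of the API-shape change (this is not the compiler's
actual interner, just the signature shift):
```
struct Symbol(u32);

impl Symbol {
    // Before: `fn as_str(self) -> SymbolStr`, where `SymbolStr` pretended to
    // hold a `&'static str`.
    // After: borrowing `&self` lets us return a plain `&str` directly.
    fn as_str(&self) -> &str {
        // A real implementation would index into the interner's string table.
        "example"
    }
}
```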
Build the query vtable directly.
Continuation of https://github.com/rust-lang/rust/pull/89978.
This shrinks the query interface and attempts to reduce the number of function pointer calls.
Add support for artifact size profiling
This adds support for profiling artifact file sizes (incremental compilation artifacts and query cache to begin with).
Eventually we want to track this in perf.rlo so we can ensure that file sizes do not change dramatically on each pull request.
This relies on support in measureme: https://github.com/rust-lang/measureme/pull/169. Once that lands we can update this PR to not point to a git dependency.
This was worked on together with `@michaelwoerister`.
r? `@wesleywiser`
This was already only enabled in debug_assertions builds. Generally, it seems
like most use cases that would use this could also use the -Zself-profile flag
which also tracks cache hits (in all builds), and so the extra cfg's and such
are not really necessary.
This is largely just a small cleanup though, which primarily is intended to make
other changes easier by avoiding the need to deal with this field.
Refactor fingerprint reconstruction
This PR replaces `can_reconstruct_query_key` with `fingerprint_style`, which returns the style of the fingerprint for that query. This allows us to avoid trying to extract a `DefId` (or equivalent) from keys which *are* reconstructible because they're `()`, but not as `DefId`s.
This is done with the goal of fixing `-Zdump-dep-graph`, which seems to have broken a while ago (I didn't try to bisect). Currently even on a `fn main() {}` file it'll ICE (you need to also pass `-Zquery-dep-graph` for it to work at all), and this patch indirectly fixes the cause of that ICE. This also adds a test for it continuing to work.
Turn vtable_allocation() into a query
This PR removes the untracked vtable-const-allocation cache from the `tcx` and turns the `vtable_allocation()` method into a query.
The change is pretty straightforward and should be backportable without too much effort.
Fixes https://github.com/rust-lang/rust/issues/89598.
The previous `macro_rules!` parsers failed with ambiguity errors when an additional
modifier was added. The error is pretty unclear as to what exactly the cause
here is, but this change simplifies the argument parsing code such that the
error is avoided.
Querify `FnAbi::of_{fn_ptr,instance}` as `fn_abi_of_{fn_ptr,instance}`.
*Note: opening this PR as draft because it's based on #88499*
This more or less replicates the `LayoutOf::layout_of` setup from #88499, to replace `FnAbi::of_{fn_ptr,instance}` with `FnAbiOf::fn_abi_of_{fn_ptr,instance}`, and also route them through queries (which `layout_of` has used for a while).
The two changes at the use sites (other than the names) are:
* return type is now wrapped in `&'tcx`
  * the value *is* interned, which may affect performance
* the `extra_args` list is now an interned `&'tcx ty::List<Ty<'tcx>>`
  * should be cheap (it's empty for anything other than C variadics)
Theoretically, a `FnAbiOfHelpers` implementer could choose to keep the `Result<...>` instead of eagerly erroring, but the only existing users of these APIs are codegen backends, so they don't (want to) take advantage of this.
At least miri could make use of this, since it prefers propagating errors (it "just" doesn't use `FnAbi` yet - cc `@RalfJung`).
The way this is done is probably less efficient than what is possible, because the queries handle the correctness-oriented API (i.e. the split into `fn` pointers vs instances), whereas a lower-level query could end up with more reuse between different instances with identical signatures.
r? `@nagisa` cc `@oli-obk` `@bjorn3`
This encoding allows for random access without an expensive upfront decoding
step, which in turn allows simplifying the `DefPathIndex` lookup logic without
regressing performance.
generic_const_exprs: use thir for abstract consts instead of mir
Changes `AbstractConst` building to use `thir` instead of `mir` so that there's less chance of consts unifying when they shouldn't because lowering to mir dropped information (see `abstract-consts-as-cast-5.rs` test)
r? `@lcnr`
Mmap the incremental data instead of reading it.
Instead of reading the full incremental state using `fs::read`, we memmap it using a private read-only file-backed map.
This allows the system to reclaim any memory we are not using, while ensuring we are not polluted by
outside modifications to the file.
Suggested in https://github.com/rust-lang/rust/pull/83036#issuecomment-800458082 by `@bjorn3`
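A minimal sketch of the approach, assuming the `memmap2` crate (error handling
and the exact mapping flags are simplified):
```
use std::fs::File;
use std::path::Path;

use memmap2::Mmap;

fn load_incremental_data(path: &Path) -> std::io::Result<Mmap> {
    let file = File::open(path)?;
    // Safety: we rely on the file not being modified while it is mapped; the
    // mapping is read-only, so we never write through it ourselves.
    let mmap = unsafe { Mmap::map(&file)? };
    Ok(mmap)
}
```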
`tcx.def_kind()` could theoretically invoke another query, which could
cause an infinite query loop. Accessing the HIR map directly makes that
less likely to happen.
I also changed it to use `as_local()` (`tcx.def_kind()` seems to
implicitly call `expect_local()`) and `opt_def_kind()` to reduce the
chance of panicking on valid code.
Trait upcasting coercion (part2)
This is the second part of trait upcasting coercion implementation.
Currently this is blocked on #86264 .
The third part might be implemented using unsafety checking.
r? `@bjorn3`
Since RFC 3052 soft-deprecated the authors field anyway (hiding it from
crates.io and docs.rs, and making Cargo not add it by default), and since it is
not generally up-to-date or useful information, we should remove it from
crates in this repo.
During function type-checking, we normalize any associated types in
the function signature (argument types + return type), and then
create WF obligations for each of the normalized types. The HIR wf code
does not currently support this case, so any errors that we get have
imprecise spans.
This commit extends `ObligationCauseCode::WellFormed` to support
recording a function parameter, allowing us to get the corresponding
HIR type if an error occurs. Function typechecking is modified to
pass this information during signature normalization and WF checking.
The resulting code is fairly verbose, due to the fact that we can
no longer normalize the entire signature with a single function call.
As part of the refactoring, we now perform HIR-based WF checking
for several other 'typed items' (statics, consts, and inherent impls).
As a result, WF and projection errors in a function signature now
have a precise span, which points directly at the responsible type.
If a function signature is constructed via a macro, this will allow
the error message to point at the code 'most responsible' for the error
(e.g. a user-supplied macro argument).
During well-formed checking, we walk through all types 'nested' in
generic arguments. For example, WF-checking `Option<MyStruct<u8>>`
will cause us to check `MyStruct<u8>` and `u8`. However, this is done
on a `rustc_middle::ty::Ty`, which has no span information. As a result,
any errors that occur will have a very general span (e.g. the
definition of an associated item).
This becomes a problem when macros are involved. In general, an
associated type like `type MyType = Option<MyStruct<u8>>;` may
have completely different spans for each nested type in the HIR. Using
the span of the entire associated item might end up pointing to a macro
invocation, even though a user-provided span is available in one of the
nested types.
This PR adds a framework for HIR-based well formed checking. This check
is only run during error reporting, and is used to obtain a more precise
span for an existing error. This is accomplished by individually
checking each 'nested' type in the HIR for the type, allowing us to
find the most-specific type (and span) that produces a given error.
The majority of the changes are to the error-reporting code. However,
some of the general trait code is modified to pass through more
information.
Since this has no soundness implications, I've implemented a minimal
version to begin with, which can be extended over time. In particular,
this only works for HIR items with a corresponding `DefId` (e.g. it will
not work for WF-checking performed within function bodies).
Remove unused feature gates
The first commit removes a usage of a feature gate, but I don't expect it to be controversial as the feature gate was only used to work around a limitation of Rust in the past (closures never being `Clone`).
The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.
The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
Reduce the amount of untracked state in TyCtxt
Access to untracked global state may generate instances of #84970.
The GlobalCtxt contains the lowered HIR, the resolver outputs and interners.
By wrapping the resolver inside a query, we make sure those accesses are properly tracked.
As a no_hash query, all dependent queries essentially become `eval_always`,
which is what they should have been from the beginning.
This means that we're no longer generating the iteration/locking code for each
invocation site of `iter_results`, but rather just once per query.
This is a 15% win in instruction counts when compiling the rustc_query_impl crate.
normalize mir::Constant differently from ty::Const in preparation for valtrees
Valtrees are unable to represent many kinds of constant values (this is on purpose). For constants that are used at runtime, we do not need a valtree representation and can thus use a different form of evaluation. In order to make this explicit and less fragile, I added a `fold_constant` method to `TypeFolder` and implemented it for normalization. Normalization can now, when it wants to eagerly evaluate a constant, normalize `mir::Constant` directly into a `mir::ConstantKind::Val` instead of relying on the `ty::Const` evaluation.
In the future we can get rid of the `ty::Const` in there entirely and add our own `Unevaluated` variant to `mir::ConstantKind`. This would allow us to remove the `promoted` field from `ty::ConstKind::Unevaluated`, as promoteds can never occur in the type system.
cc `@rust-lang/wg-const-eval`
r? `@lcnr`
Implement (but don't use) valtree and refactor in preparation of use
This PR does not cause any functional change. It refactors various things that are needed to make valtrees possible. This refactoring got big enough that I decided I'd want it reviewed as a PR instead of trying to make one huge PR with all the changes.
cc `@rust-lang/wg-const-eval` on the following commits:
* 2027184 implement valtree
* eeecea9 fallible Scalar -> ScalarInt
* 042f663 ScalarInt convenience methods
cc `@eddyb` on ef04a6d
cc `@rust-lang/wg-mir-opt` for cf1700c (`mir::Constant` can now represent either a `ConstValue` or a `ty::Const`, and it is totally possible to have two different representations for the same value)
A valtree is a representation of constants that is inherently safe to use within types.
This is in contrast to `ty::Const`, which can have different representations of the same value. These representation differences can show up in hashing or equality comparisons, breaking type equality of otherwise equal types.
Valtrees do not have this problem.
This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t @ammaraskar)
That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
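A simplified, self-contained sketch of that idea (not the exact compiler
code): demanding `T: Send` before erasure is what makes the `unsafe impl Send`
defensible.
```
struct DropType {
    drop_fn: unsafe fn(*mut u8),
    obj: *mut u8,
}

unsafe fn drop_in_place_erased<T>(ptr: *mut u8) {
    std::ptr::drop_in_place(ptr as *mut T);
}

impl DropType {
    // Every erased pointer must originate from a `Send` type.
    fn new<T: Send>(ptr: *mut T) -> Self {
        DropType {
            drop_fn: drop_in_place_erased::<T>,
            obj: ptr as *mut u8,
        }
    }
}

// Sound only because `new` refuses non-`Send` payloads (and, in the compiler,
// this impl would be gated behind `cfg(parallel_compiler)`).
unsafe impl Send for DropType {}
```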