Don't call `with_reveal_all_normalized` in const-eval when `param_env` has inference vars in it
**what:** This slightly shifts the order of operations from an existing hack:
5b6ed253c4/compiler/rustc_middle/src/ty/consts/kind.rs (L225-L230)
in order to avoid calling a tcx query (`TyCtxt::reveal_opaque_types_in_bounds`, via `ParamEnv::with_reveal_all_normalized`) when a param-env has inference variables in it.
**why:** This allows us to enable fingerprinting of query keys/values outside of incr-comp in debug mode, to make sure we catch other places where we're passing inference vars and other bad things into query keys. Currently that (bbf33836b9) crashes because we introduce inference vars into a param-env in the blanket-impl finder in rustdoc 😓
5b6ed253c4/src/librustdoc/clean/blanket_impl.rs (L43)
See the CI failure here: https://github.com/rust-lang/rust/actions/runs/4058194838/jobs/6984834619
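To make the reordering concrete, here is a toy, self-contained sketch; the stub `ParamEnv` type, its fields, and `reveal_if_fully_inferred` are invented for illustration and only approximate the real code in `rustc_middle`:
```
// Toy stand-in for rustc_middle's ParamEnv; the `has_infer_vars` and
// `revealed` fields are invented for this illustration only.
struct ParamEnv {
    has_infer_vars: bool,
    revealed: bool,
}

impl ParamEnv {
    // Stand-in for `ParamEnv::with_reveal_all_normalized`, which in rustc
    // goes through a tcx query and therefore must never see inference vars.
    fn with_reveal_all_normalized(mut self) -> ParamEnv {
        assert!(!self.has_infer_vars, "query keys must not contain inference variables");
        self.revealed = true;
        self
    }
}

// The reordering: check for inference variables *before* reaching for the
// query, and skip revealing entirely when they are present.
fn reveal_if_fully_inferred(param_env: ParamEnv) -> ParamEnv {
    if param_env.has_infer_vars {
        param_env
    } else {
        param_env.with_reveal_all_normalized()
    }
}

fn main() {
    let env = ParamEnv { has_infer_vars: true, revealed: false };
    // No query call happens here; the param-env is returned unchanged.
    assert!(!reveal_if_fully_inferred(env).revealed);
}
```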
Convert all the crates whose diagnostic migration has been completed
(except save_analysis, because it will be deleted soon, and apfloat,
because of the licensing problem).
This may introduce additional monomorphization, _but_ it may help
const-fold things better and, in particular, may avoid constructing a
`QueryVTable` at all, which is cheap but not free.
- Add a `HandleCycleError` enum to rustc_query_system, along with a `handle_cycle_error` function
- Move `Value` to rustc_query_system, so `handle_cycle_error` can use it
- Move the `Value` impls from rustc_query_impl to rustc_middle. This is necessary due to orphan rules.
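For orientation, a heavily simplified sketch of the shapes these bullets describe; the variant names, the `from_cycle_error` signature, and the `SomeQueryResult` type are approximations invented here, not the exact rustc definitions:
```
// Approximate shapes only; the real definitions live in rustc_query_system
// (with the impls in rustc_middle) and may differ in names and generics.
enum HandleCycleError {
    Error,
    Fatal,
    DelayBug,
}

// A `Value` impl gives a query result type a placeholder to return once a
// query cycle has been reported, which is what `handle_cycle_error` needs.
trait Value<Tcx> {
    fn from_cycle_error(tcx: Tcx) -> Self;
}

// Hypothetical query result type standing in for the real result types
// defined in rustc_middle. Its `Value` impl has to live next to either the
// trait or the type (orphan rules), which is why the impls moved crates.
struct SomeQueryResult {
    poisoned: bool,
}

impl<Tcx> Value<Tcx> for SomeQueryResult {
    fn from_cycle_error(_tcx: Tcx) -> Self {
        SomeQueryResult { poisoned: true }
    }
}

fn main() {
    // Conceptually what cycle recovery does: report the cycle, then
    // continue with the placeholder value.
    let recovered = SomeQueryResult::from_cycle_error(());
    assert!(recovered.poisoned);
}
```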
Replace `rustc_data_structures::thin_vec::ThinVec` with `thin_vec::ThinVec`
`rustc_data_structures::thin_vec::ThinVec` looks like this:
```
pub struct ThinVec<T>(Option<Box<Vec<T>>>);
```
It's just a zero word if the vector is empty, but requires two
allocations if it is non-empty. So it's only usable in cases where the
vector is empty most of the time.
This commit removes it in favour of `thin_vec::ThinVec`, which is also
word-sized, but stores the length and capacity in the same allocation as
the elements. It's good in a wider variety of situations, e.g. in enum
variants where the vector is usually/always non-empty.
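For comparison, a small self-contained example using the real `thin_vec` crate (it needs `thin_vec` added as a dependency in `Cargo.toml` to run):
```
use std::mem::size_of;
use thin_vec::{thin_vec, ThinVec};

fn main() {
    // Both the old wrapper and `thin_vec::ThinVec` are a single word...
    assert_eq!(size_of::<ThinVec<u32>>(), size_of::<usize>());
    assert_eq!(size_of::<Option<Box<Vec<u32>>>>(), size_of::<usize>());

    // ...but a non-empty `thin_vec::ThinVec` keeps length, capacity and
    // elements in one allocation, whereas `Option<Box<Vec<T>>>` needs two
    // (one for the boxed `Vec`, one for the `Vec`'s own buffer).
    let v: ThinVec<u32> = thin_vec![1, 2, 3];
    assert_eq!(v.len(), 3);
}
```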
The commit also:
- Sorts some `Cargo.toml` dependency lists, to make additions easier.
- Sorts some `use` item lists, to make additions easier.
- Changes `clean_trait_ref_with_bindings` to take a
`ThinVec<TypeBinding>` rather than a `&[TypeBinding]`, because this
avoids some unnecessary allocations.
r? `@spastorino`
This commit updates the signatures of all diagnostic functions to accept
types that can be converted into a `DiagnosticMessage`. This enables
existing diagnostic calls to continue to work as before and allows
Fluent identifiers to be provided. The `SessionDiagnostic` derive just
generates normal diagnostic calls, so these APIs had to be modified to
accept Fluent identifiers.
In addition, loading of the "fallback" Fluent bundle, which contains the
built-in English messages, has been implemented.
Each diagnostic now has "arguments" which correspond to variables in the
Fluent messages (necessary to render a Fluent message), but no API for
adding arguments has been added yet. Therefore, diagnostics (that do not
require interpolation) can be converted to use Fluent identifiers and
will be output as before.
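As a rough illustration of the pattern described here, with toy types rather than rustc's actual definitions (`FluentId`, `struct_err`, and the message names are made up):
```
// Toy version of the idea: diagnostic functions take anything convertible
// into a DiagnosticMessage, so existing string callers keep working while
// migrated callers pass a Fluent identifier instead.
enum DiagnosticMessage {
    Str(String),
    FluentIdentifier(String),
}

impl From<&str> for DiagnosticMessage {
    fn from(s: &str) -> Self {
        DiagnosticMessage::Str(s.to_string())
    }
}

// Hypothetical wrapper for a Fluent message identifier.
struct FluentId(&'static str);

impl From<FluentId> for DiagnosticMessage {
    fn from(id: FluentId) -> Self {
        DiagnosticMessage::FluentIdentifier(id.0.to_string())
    }
}

// Stand-in for a diagnostic function whose signature was widened.
fn struct_err(msg: impl Into<DiagnosticMessage>) {
    match msg.into() {
        DiagnosticMessage::Str(s) => println!("error: {s}"),
        DiagnosticMessage::FluentIdentifier(id) => println!("error (fluent message `{id}`)"),
    }
}

fn main() {
    // Existing callers keep passing string literals...
    struct_err("mismatched types");
    // ...while migrated callers pass a Fluent identifier.
    struct_err(FluentId("typeck_mismatched_types"));
}
```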
Avoid query cache sharding code in single-threaded mode
In non-parallel compilers, sharding just adds needless overhead at compilation time (since there is statically only one shard anyway). Removing it amounts to roughly a 10-second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.
Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
This was largely just caching the shard value at this point, which is not
particularly useful; at the use sites the key was being hashed nearby anyway.
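A self-contained sketch of the idea (not rustc's actual `Sharded` type): with a single shard, deriving a shard index from the key's hash is pure overhead, so the non-parallel build can hard-code shard 0.
```
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// One shard in the non-parallel build, several in the parallel one. The
// feature name here is illustrative; rustc uses its own cfg for this.
const SHARDS: usize = if cfg!(feature = "parallel_compiler") { 32 } else { 1 };

struct ShardedCache<K, V> {
    shards: Vec<HashMap<K, V>>,
}

impl<K: Hash + Eq, V> ShardedCache<K, V> {
    fn new() -> Self {
        ShardedCache { shards: (0..SHARDS).map(|_| HashMap::new()).collect() }
    }

    fn shard_index(key: &K) -> usize {
        if SHARDS == 1 {
            // Single-threaded: no hashing needed just to pick a shard.
            0
        } else {
            let mut hasher = DefaultHasher::new();
            key.hash(&mut hasher);
            hasher.finish() as usize % SHARDS
        }
    }

    fn insert(&mut self, key: K, value: V) {
        let idx = Self::shard_index(&key);
        self.shards[idx].insert(key, value);
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.shards[Self::shard_index(key)].get(key)
    }
}

fn main() {
    let mut cache: ShardedCache<&str, u32> = ShardedCache::new();
    cache.insert("typeck(foo)", 1);
    assert_eq!(cache.get(&"typeck(foo)"), Some(&1));
}
```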