Stashed errors used to be counted as errors, but could then be
cancelled, leading to `ErrorGuaranteed` soundness holes. #120828 changed
that, closing the soundness hole. But it introduced other difficulties
because you sometimes have to account for pending stashed errors when
making decisions about whether errors have occurred/will occur, and it's
easy to overlook these.
This commit aims for a middle ground.
- Stashed errors (not warnings) are counted immediately as emitted
errors, avoiding the possibility of forgetting to consider them.
- The ability to cancel (or downgrade) stashed errors is eliminated, by
disallowing the use of `steal_diagnostic` with errors, and introducing
the more restrictive methods `try_steal_{modify,replace}_and_emit_err`
that can be used instead.
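A toy model may help make the new semantics concrete. This is a sketch only, assuming a much-simplified `DiagCtxt`: the method names mirror the ones mentioned above, but the types and signatures here are illustrative, not rustc's real ones.
```
use std::collections::HashMap;

#[derive(Clone, Copy, Debug)]
struct ErrorGuaranteed;

struct Diagnostic {
    is_error: bool,
    msg: String,
}

#[derive(Default)]
struct DiagCtxt {
    err_count: usize,
    stashed: HashMap<u32, (Diagnostic, Option<ErrorGuaranteed>)>,
}

impl DiagCtxt {
    // Stashing an error bumps the error count right away, so later `has_errors`
    // checks can't overlook it; warnings are stashed without a guarantee.
    fn stash_diagnostic(&mut self, key: u32, diag: Diagnostic) -> Option<ErrorGuaranteed> {
        let guar = if diag.is_error {
            self.err_count += 1;
            Some(ErrorGuaranteed)
        } else {
            None
        };
        self.stashed.insert(key, (diag, guar));
        guar
    }

    // A stashed error can be modified and emitted, but not cancelled: the
    // guarantee (and the count) sticks.
    fn try_steal_modify_and_emit_err(
        &mut self,
        key: u32,
        modify: impl FnOnce(&mut Diagnostic),
    ) -> Option<ErrorGuaranteed> {
        let (mut diag, guar) = self.stashed.remove(&key)?;
        modify(&mut diag);
        println!("error: {}", diag.msg);
        guar
    }

    fn has_errors(&self) -> bool {
        self.err_count > 0
    }
}

fn main() {
    let mut dcx = DiagCtxt::default();
    let guar = dcx.stash_diagnostic(1, Diagnostic { is_error: true, msg: "bad type".into() });
    assert!(guar.is_some());
    assert!(dcx.has_errors()); // counted even while still stashed
    dcx.try_steal_modify_and_emit_err(1, |d| d.msg.push_str(" (with more context)"));
}
```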
Other things:
- `DiagnosticBuilder::stash` and `DiagCtxt::stash_diagnostic` now both
return `Option<ErrorGuaranteed>`, which enables the removal of two
`delayed_bug` calls and one `Ty::new_error_with_message` call. This is
possible because we store error guarantees in
`DiagCtxt::stashed_diagnostics`.
- Storing the guarantees also saves us having to maintain a counter.
- Calls to the `stashed_err_count` method are no longer necessary
alongside calls to `has_errors`, which is a nice simplification, and
eliminates two more `span_delayed_bug` calls and one FIXME comment.
- Tests are added for three of the four fixed PRs mentioned below.
- `issue-121108.rs`'s output improved slightly, omitting a non-useful
error message.
Fixes #121451.
Fixes #121477.
Fixes #121504.
Fixes #121508.
Currently `has_errors` excludes lint errors. This commit changes it to
include lint errors.
The motivation for this is that for most places it doesn't matter
whether lint errors are included or not. But there are multiple places
where they must be included, and only one place where they must not be
included. So it makes sense for `has_errors` to do the thing that fits
the most situations, and to use the new
`has_errors_excluding_lint_errors` method in the one exceptional place.
The same change is made for `err_count`. Annoyingly, this requires the
introduction of `err_count_excluding_lint_errs` for one place, to
preserve existing error printing behaviour. But I still think the change
is worthwhile overall.
For some cases where it's clear that an error has already occurred,
e.g.:
- there's a comment stating exactly that, or
- things like HIR lowering, where we are lowering an error kind.
The commit also tweaks some comments around delayed bug sites.
There are a couple of places where we call
`inner.emitter.emit_diagnostic` directly rather than going through
`inner.emit_diagnostic`, to guarantee the diagnostic is printed. This
feels dubious to me, particularly the bypassing of `TRACK_DIAGNOSTIC`.
This commit removes those.
- In `print_error_count`, it uses `ForceWarning` instead of `Warning`.
- It removes `DiagCtxtInner::failure_note`, because it only has three
uses and direct use of `emit_diagnostic` is consistent with other
similar locations.
- It removes `force_print_diagnostic`, adds `struct_failure_note`, and
  updates `print_query_stack` accordingly, which makes it more normal.
  That location doesn't seem to need forced printing anyway.
Invert diagnostic lints.
That is, change `diagnostic_outside_of_impl` and `untranslatable_diagnostic` from `allow` to `deny`, because more than half of the compiler has been converted to use translated diagnostics.
This commit removes more `deny` attributes than it adds `allow` attributes, which proves that this change is warranted.
r? `@davidtwco`
Error codes are integers, but `String` is used everywhere to represent
them. Gross!
This commit introduces `ErrCode`, an integral newtype for error codes,
replacing `String`. It also introduces a constant for every error code,
e.g. `E0123`, and removes the `error_code!` macro. The constants are
imported wherever used with `use rustc_errors::codes::*`.
With the old code, we have three different ways to specify an error code
at a use point:
```
error_code!(E0123) // macro call
struct_span_code_err!(dcx, span, E0123, "msg"); // bare ident arg to macro call
#[diag(name, code = "E0123")] // string
struct Diag;
```
With the new code, they all use the `E0123` constant.
```
E0123 // constant
struct_span_code_err!(dcx, span, E0123, "msg"); // constant
#[diag(name, code = E0123)] // constant
struct Diag;
```
The commit also changes the structure of the error code definitions:
- `rustc_error_codes` now just defines a higher-order macro listing the
used error codes and nothing else.
- Because that's now the only thing in the `rustc_error_codes` crate, I
moved it into the `lib.rs` file and removed the `error_codes.rs` file.
- `rustc_errors` uses that macro to define everything, e.g. the error
code constants and the `DIAGNOSTIC_TABLES`. This is in its new
`codes.rs` file.
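As a rough illustration of that structure (a sketch with assumed names and example codes, not the real `rustc_error_codes` macro), a higher-order macro can list the codes once and let the consuming crate decide what to generate from the list:
```
// Listing side: enumerate the codes once, forward them to a caller-supplied macro.
macro_rules! error_codes {
    ($callback:ident) => {
        $callback! { E0123: 123, E0425: 425 }
    };
}

// An integral newtype standing in for rustc_errors::codes::ErrCode.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct ErrCode(pub u16);

// Consuming side: expand the shared list into one constant per error code.
macro_rules! define_code_constants {
    ($($name:ident: $num:literal),* $(,)?) => {
        $(pub const $name: ErrCode = ErrCode($num);)*
    };
}

error_codes!(define_code_constants);

fn main() {
    // Use sites now just name the constants.
    println!("{:?} {:?}", E0123, E0425);
}
```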
We have several methods indicating the presence of errors, lint errors,
and delayed bugs. I find it frustrating that it's very unclear which one
you should use in any particular spot. This commit attempts to instill a
basic principle of "use the least general one possible", because that
reflects reality in practice -- `has_errors` is the least general one
and has by far the most uses (esp. via `abort_if_errors`).
Specifics:
- Add some comments giving some usage guidelines.
- Prefer `has_errors` to comparing `err_count` to zero.
- Remove `has_errors_or_span_delayed_bugs` because it's a weird one: in
the cases where we need to count delayed bugs, we should really be
counting lint errors as well.
- Rename `is_compilation_going_to_fail` as
`has_errors_or_lint_errors_or_span_delayed_bugs`, for consistency with
`has_errors` and `has_errors_or_lint_errors`.
- Change a few other `has_errors_or_lint_errors` calls to `has_errors`,
as per the "least general" principle.
This didn't turn out to be as neat as I hoped when I started, but I
think it's still an improvement.
Avoid code generation for ThinVec<Diagnostic>'s destructor in the query system
This prevents 2 instances of the destructor of `ThinVec<Diagnostic>` from being included in `execute_job`. It also outlines the cold branch in `store_side_effects` / `store_side_effects_for_anon_node`.
Get rid of the hir_owner query.
This query was meant as a firewall between `hir_owner_nodes` which is supposed to change often, and the queries that only depend on the item signature. That firewall was inefficient, leaking the contents of the HIR body through `HirId`s.
`hir_owner` incurs a significant cost, as we need to hash HIR twice in multiple modes. This PR proposes to remove it, and simplify the hashing scheme.
For the future, `def_kind`, `def_span`... are much more efficient for incremental decoupling, and should be preferred.
Cache local DefId-keyed queries without hashing
This caches local DefId-keyed queries using just an IndexVec. This costs ~5% extra max-rss at most but brings significant runtime improvement, up to 13% cycle counts (mean: 4%) on primary benchmarks. It's possible that further tweaks could reduce the memory overhead, but this win seems worth landing despite the increased memory, particularly with regard to eliminating the present set in non-incr builds or storing it inline (skip list?) with the main data.
We tried applying this scheme to all keys in the [first perf run] but found that it carried a significant memory hit (50%). Instruction/cycle counts were also much more mixed, though that may have been due to the lack of the present-set optimization (needed for fast iter() calls in incremental scenarios).
Closes https://github.com/rust-lang/rust/issues/45275
[first perf run]: https://perf.rust-lang.org/compare.html?start=30dfb9e046aeb878db04332c74de76e52fb7db10&end=6235575300d8e6e2cc6f449cb9048722ef43f9c7&stat=instructions:u
Foreign maps are used to cache external DefIds, typically backed by
metadata decoding. In the future we might skip caching `V` there (since
loading from metadata is usually already cheap enough), but for now this
cuts down on the impact to memory usage and time to None-init a bunch of
memory. Foreign data is usually much sparser, since we're not usually
loading *all* entries from the foreign crate(s).
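A simplified sketch of the local-DefId caching idea described above (the `VecCache` name and shape are assumptions, not the actual query-system types): results are stored in a flat vector indexed by the DefId's integer index, so lookups involve no hashing.
```
struct VecCache<V> {
    // One slot per local DefId; `None` means "not computed yet".
    slots: Vec<Option<V>>,
}

impl<V: Clone> VecCache<V> {
    fn new(num_def_ids: usize) -> Self {
        VecCache { slots: vec![None; num_def_ids] }
    }

    // Lookup is a plain indexed load, no hashing involved.
    fn lookup(&self, def_index: usize) -> Option<V> {
        self.slots.get(def_index).and_then(|slot| slot.clone())
    }

    fn complete(&mut self, def_index: usize, value: V) {
        if def_index >= self.slots.len() {
            self.slots.resize(def_index + 1, None);
        }
        self.slots[def_index] = Some(value);
    }
}

fn main() {
    let mut cache = VecCache::new(8);
    cache.complete(3, "result");
    assert_eq!(cache.lookup(3), Some("result"));
    assert_eq!(cache.lookup(5), None);
}
```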
We have `span_delayed_bug` and often pass it a `DUMMY_SP`. This commit
adds `delayed_bug`, which matches pairs like `err`/`span_err` and
`warn`/`span_warn`.
Make `DiagnosticBuilder::emit` consuming. This works for most of its
call sites. This is nice, because `emit` very
much makes sense as a consuming operation -- indeed,
`DiagnosticBuilderState` exists to ensure no diagnostic is emitted
twice, but it uses runtime checks.
For the small number of call sites where a consuming emit doesn't work,
the commit adds `DiagnosticBuilder::emit_without_consuming`. (This will
be removed in subsequent commits.)
Likewise, `emit_unless` becomes consuming. And `delay_as_bug` becomes
consuming, while `delay_as_bug_without_consuming` is added (which will
also be removed in subsequent commits).
All this requires significant changes to `DiagnosticBuilder`'s chaining
methods. Currently `DiagnosticBuilder` method chaining uses a
non-consuming `&mut self -> &mut Self` style, which allows chaining to
be used when the chain ends in `emit()`, like so:
```
struct_err(msg).span(span).emit();
```
But it doesn't work when producing a `DiagnosticBuilder` value,
requiring this:
```
let mut err = self.struct_err(msg);
err.span(span);
err
```
This style of chaining won't work with consuming `emit` though. For
that, we need to use a `self -> Self` style. That would also allow
`DiagnosticBuilder` production to be chained, e.g.:
```
self.struct_err(msg).span(span)
```
However, removing the `&mut self -> &mut Self` style would require that
individual modifications of a `DiagnosticBuilder` go from this:
```
err.span(span);
```
to this:
```
err = err.span(span);
```
There are *many* such places. I have a high tolerance for tedious
refactorings, but even I gave up after a long time trying to convert
them all.
Instead, this commit has it both ways: the existing `&mut self -> &mut Self`
chaining methods are kept, and new `self -> Self` chaining methods are
added, all of which have a `_mv` suffix (short for "move"). Changes to
the existing `forward!` macro lets this happen with very little
additional boilerplate code. I chose to add the suffix to the new
chaining methods rather than the existing ones, because the number of
changes required is much smaller that way.
This doubled chaining is a bit clumsy, but I think it is worthwhile
because it allows a *lot* of good things to subsequently happen. In this
commit, there are many `mut` qualifiers removed in places where
diagnostics are emitted without being modified. In subsequent commits:
- chaining can be used more, making the code more concise;
- more use of chaining also permits the removal of redundant diagnostic
APIs like `struct_err_with_code`, which can be replaced easily with
`struct_err` + `code_mv`;
- `emit_without_diagnostic` can be removed, which simplifies a lot of
machinery, removing the need for `DiagnosticBuilderState`.
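A self-contained sketch of the two chaining styles (names and fields are illustrative, not rustc's real `DiagnosticBuilder`):
```
struct DiagnosticBuilder {
    message: String,
    spans: Vec<u32>,
}

impl DiagnosticBuilder {
    fn new(message: &str) -> Self {
        DiagnosticBuilder { message: message.to_string(), spans: Vec::new() }
    }

    // Existing style: modify in place and return `&mut Self`, so calls chain
    // when the chain ends in `emit()`.
    fn span(&mut self, sp: u32) -> &mut Self {
        self.spans.push(sp);
        self
    }

    // New style: the `_mv` ("move") twin consumes and returns `Self`, so a
    // builder value can itself be produced by a chain.
    fn span_mv(mut self, sp: u32) -> Self {
        self.spans.push(sp);
        self
    }

    // Consuming emit: once emitted, the builder can't be used again, enforced
    // by the type system rather than by runtime checks.
    fn emit(self) {
        println!("error: {} at {:?}", self.message, self.spans);
    }
}

fn main() {
    // Chained production, then consuming emit.
    DiagnosticBuilder::new("mismatched types").span_mv(42).emit();

    // In-place modification still works when the builder is held in a variable.
    let mut err = DiagnosticBuilder::new("unused variable");
    err.span(7);
    err.emit();
}
```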
Turn query `panic!()`s into useful diagnostics
Changes some more ICEs from bare `panic!()`s into useful diagnostics.
Adds an `expect_job()` helper method, as that is the moral equivalent of what was happening at the call sites.
re: #118955
Remove `DiagCtxt` API duplication
`DiagCtxt` defines the internal API for creating and emitting diagnostics: methods like `struct_err`, `struct_span_warn`, `note`, `create_fatal`, `emit_bug`. There are over 50 methods.
Some of these methods are then duplicated across several other types: `Session`, `ParseSess`, `Parser`, `ExtCtxt`, and `MirBorrowckCtxt`. `Session` duplicates the most, though half the ones it does are unused. Each duplicated method just calls forward to the corresponding method in `DiagCtxt`. So this duplication exists to (in the best case) shorten chains like `ecx.tcx.sess.parse_sess.dcx.emit_err()` to `ecx.emit_err()`.
This API duplication is ugly and has been bugging me for a while. And it's inconsistent: there's no real logic about which methods are duplicated, and the use of `#[rustc_lint_diagnostic]` and `#[track_caller]` attributes vary across the duplicates.
This PR removes the duplicated API methods and makes all diagnostic creation and emission go through `DiagCtxt`. It also adds `dcx` getter methods to several types to shorten chains. This approach scales *much* better than API duplication; indeed, the PR adds `dcx()` to numerous types that didn't have API duplication: `TyCtxt`, `LoweringCtxt`, `ConstCx`, `FnCtxt`, `TypeErrCtxt`, `InferCtxt`, `CrateLoader`, `CheckAttrVisitor`, and `Resolver`. These result in a lot of changes from `foo.tcx.sess.emit_err()` to `foo.dcx().emit_err()`. (You could do this with more types, but it gets into diminishing returns territory for types that don't emit many diagnostics.)
After all these changes, some call sites are more verbose, some are less verbose, and many are the same. The total number of lines is reduced, mostly because of the removed API duplication. And consistency is increased, because calls to `emit_err` and friends are always preceded with `.dcx()` or `.dcx`.
r? `@compiler-errors`
Unify SourceFile::name_hash and StableSourceFileId
This PR adapts the existing `StableSourceFileId` type so that it can be used instead of the `name_hash` field of `SourceFile`. This simplifies a few things that were kind of duplicated before.
The PR should also fix issues https://github.com/rust-lang/rust/issues/112700 and https://github.com/rust-lang/rust/issues/115835, but I was not able to reproduce these issues in a regression test. As far as I can tell, the root cause of these issues is that the id of the originating crate is not hashed in the `HashStable` impl of `Span` and thus cache entries that should have been considered invalidated were loaded. After this PR, the `stable_id` field of `SourceFile` includes information about the originating crate, so that ICE should not occur anymore.
`IntoDiagnostic` defaults to `ErrorGuaranteed`, because errors are the
most common diagnostic level. It makes sense to do likewise for the
closely-related (and much more widely used) `DiagnosticBuilder` type,
letting us write `DiagnosticBuilder<'a, ErrorGuaranteed>` as just
`DiagnosticBuilder<'a>`. This cuts over 200 lines of code due to many
multi-line things becoming single line things.
This commit replaces this pattern:
```
err.into_diagnostic(dcx)
```
with this pattern:
```
dcx.create_err(err)
```
in a lot of places.
It's a little shorter, makes the error level explicit, avoids some
`IntoDiagnostic` imports, and is a necessary prerequisite for the next
commit which will add a `level` arg to `into_diagnostic`.
This requires adding `track_caller` on `create_err` to avoid mucking up
the output of `tests/ui/track-diagnostics/track4.rs`. It probably should
have been there already.
Currently, `emit_diagnostic` takes `&mut self`.
This commit changes it so `emit_diagnostic` takes `self` and the new
`emit_diagnostic_without_consuming` function takes `&mut self`.
I find the distinction useful. The former case is much more common, and
avoids a bunch of `mut` and `&mut` occurrences. We can also restrict the
latter with `pub(crate)` which is nice.
Detect redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the
checking code and the code that removes redundant imports are split
into two PRs.
Add support for making lib features internal
We have the notion of an "internal" lang feature: a feature that is never intended to be stabilized, and whose use can cause ICEs and other issues without that being considered a bug.
This extends that idea to lib features as well. It is an alternative to https://github.com/rust-lang/rust/pull/115623: instead of using an attribute to declare lib features internal, we simply do this based on the name. Everything ending in `_internals` or `_internal` is considered internal.
Then we rename `core_intrinsics` to `core_intrinsics_internal`, which fixes https://github.com/rust-lang/rust/issues/115597.
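A hedged guess at the shape of the name-based rule (the real check lives in the compiler's feature-gate machinery, not in this exact form):
```
fn lib_feature_is_internal(name: &str) -> bool {
    name.ends_with("_internal") || name.ends_with("_internals")
}

fn main() {
    assert!(lib_feature_is_internal("core_intrinsics_internal"));
    assert!(!lib_feature_is_internal("portable_simd"));
}
```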
Cut code size for feature hashing
This locally cuts ~32 kB of .text instructions.
This isn't really a clear win in terms of readability. IMO the code size benefits are worth it (even if they're not necessarily present in the x86_64 hyperoptimized build, I expect them to translate similarly to other platforms). Ultimately there's lots of "small ish" low hanging fruit like this that I'm seeing that seems worth tackling to me, and could translate into larger wins in aggregate.
Call FileEncoder::finish in rmeta encoding
Fixes https://github.com/rust-lang/rust/issues/117254
The bug here was that rmeta encoding never called FileEncoder::finish. Now it does. Most of the changes here are needed to support that, since rmeta encoding wants to finish _then_ access the File in the encoder, so finish can't move out.
I tried adding a `cfg(debug_assertions)` exploding Drop impl to FileEncoder that checked for finish being called before dropping, but fatal errors cause unwinding so this isn't really possible. If we encounter a fatal error with a dirty FileEncoder, the Drop impl ICEs even though the implementation is correct. If we try to paper over that by wrapping FileEncoder in ManuallyDrop then that just erases the fact that Drop automatically checks that we call finish on all paths.
I also changed the name of DepGraph::encode to DepGraph::finish_encoding, because that's what it does and it makes the fact that it is the path to FileEncoder::finish less confusing.
r? `@WaffleLapkin`
Currently we always do this:
```
use rustc_fluent_macro::fluent_messages;
...
fluent_messages! { "./example.ftl" }
```
But there is no need; we can just do this everywhere:
```
rustc_fluent_macro::fluent_messages! { "./example.ftl" }
```
which is shorter.
The `fluent_messages!` macro produces uses of
`crate::{D,Subd}iagnosticMessage`, which means that every crate using
the macro must have this import:
```
use rustc_errors::{DiagnosticMessage, SubdiagnosticMessage};
```
This commit changes the macro to instead use
`rustc_errors::{D,Subd}iagnosticMessage`, which avoids the need for the
imports.
Print query map for deadlock when using parallel front end
Print the query map when a deadlock occurs while using the parallel front end, so that we can analyze where and why the deadlock occurs.
By default, `newtype_index!` types get a default `Encodable`/`Decodable`
impl. You can opt out of this with `custom_encodable`. Opting out is the
opposite of how Rust normally works with autogenerated (derived) impls.
This commit inverts the behaviour, replacing `custom_encodable` with
`encodable` which opts into the default `Encodable`/`Decodable` impl.
Only 23 of the 59 `newtype_index!` occurrences need `encodable`.
Even better, there were eight crates with a dependency on
`rustc_serialize` just from unused default `Encodable`/`Decodable`
impls. This commit removes that dependency from those eight crates.
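A toy version of the opt-in idea (not the real `newtype_index!` macro): the `encodable` flag selects a macro arm that additionally emits the encoding support, so plain invocations get nothing extra.
```
macro_rules! newtype_index {
    ($name:ident) => {
        #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
        pub struct $name(pub u32);
    };
    ($name:ident, encodable) => {
        newtype_index!($name);
        // Only generated when `encodable` is requested; stands in for the
        // derived Encodable/Decodable impls.
        impl $name {
            pub fn encode(self) -> u32 {
                self.0
            }
        }
    };
}

newtype_index!(LocalId);
newtype_index!(ItemId, encodable);

fn main() {
    println!("{:?} {}", LocalId(0), ItemId(3).encode());
}
```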
- Sort dependencies and features sections.
- Add `tidy` markers to the sorted sections so they stay sorted.
- Remove empty `[lib]` sections.
- Remove "See more keys..." comments.
Excluded files:
- rustc_codegen_{cranelift,gcc}, because they're external.
- rustc_lexer, because it has external use.
- stable_mir, because it has external use.
Don't store lazyness in `DefKind::TyAlias`
1. Don't store lazyness of a type alias in its `DefKind`, but instead via a query.
2. This allows us to treat type aliases as lazy if `#[feature(lazy_type_alias)]` *OR* if the alias contains a TAIT, rather than having checks for both in separate parts of the codebase.
r? `@oli-obk` cc `@fmease`
Simplify/Optimize FileEncoder
FileEncoder is basically a BufWriter except that it exposes access to the not-written-to-yet region of the buffer so that some users can write directly to the buffer. This strategy is awesome because it lets us avoid calling memcpy for small copies, but the previous strategy was based on the writer accessing a `&mut [MaybeUninit<u8>; N]` and returning a `&[u8]`, an API that currently mandates the use of unsafe code, making that interface in general not that appealing.
So this PR cleans up the FileEncoder implementation and builds on that general idea of direct buffer access in order to prevent `memcpy` calls in a few key places when encoding the dep graph and rmeta tables. The interface used here is now 100% safe, but with the caveat that internally we need to avoid trusting the number of bytes that the provided function claims to have written.
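A rough sketch of that safe direct-buffer-access idea (the `BufEncoder` type and `write_with` method are assumptions for illustration, not the actual `FileEncoder` API): the caller is handed a mutable slice of the unwritten region and reports how many bytes it wrote, and that claim is clamped rather than trusted.
```
struct BufEncoder {
    buf: Vec<u8>,
    buffered: usize,
}

impl BufEncoder {
    fn with_capacity(capacity: usize) -> Self {
        BufEncoder { buf: vec![0; capacity], buffered: 0 }
    }

    // Hand the caller the next `max_len` unwritten bytes; clamp (rather than
    // trust) the number of bytes it claims to have written.
    fn write_with(&mut self, max_len: usize, write: impl FnOnce(&mut [u8]) -> usize) {
        assert!(self.buffered + max_len <= self.buf.len(), "a real encoder would flush here");
        let dest = &mut self.buf[self.buffered..self.buffered + max_len];
        let written = write(dest).min(max_len);
        self.buffered += written;
    }
}

fn main() {
    let mut enc = BufEncoder::with_capacity(64);
    enc.write_with(4, |dest| {
        dest.copy_from_slice(&42u32.to_le_bytes());
        4
    });
    assert_eq!(enc.buffered, 4);
    assert_eq!(&enc.buf[..4], &42u32.to_le_bytes());
}
```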
The original primary objective of this PR was to clean up the FileEncoder implementation so that the fix for the following issues would be easy to implement. The fix for these issues is to correctly update `self.buffered` even when writes fail, which is now done and, I think, easy to verify manually, because all the FileEncoder methods are small.
Fixes https://github.com/rust-lang/rust/issues/115298
Fixes https://github.com/rust-lang/rust/issues/114671
Fixes https://github.com/rust-lang/rust/issues/114045
Fixes https://github.com/rust-lang/rust/issues/108100
Fixes https://github.com/rust-lang/rust/issues/106787
Add optimized lock methods for `Sharded` and refactor `Lock`
This adds methods to `Sharded` which pick a shard and also locks it. These branch on parallelism just once instead of twice, improving performance.
Benchmark for `cfg(parallel_compiler)` and 1 thread:
| Benchmark | Before | After | Change |
|---|---|---|---|
| clap:check | 1.6461s | 1.6345s | -0.70% |
| hyper:check | 0.2414s | 0.2394s | -0.83% |
| regex:check | 0.9205s | 0.9143s | -0.67% |
| syn:check | 1.4981s | 1.4869s | -0.75% |
| syntex_syntax:check | 5.7629s | 5.7256s | -0.65% |
| Total | 10.0690s | 10.0008s | -0.68% |
| Summary | 1.0000s | 0.9928s | -0.72% |
cc `@SparrowLii`
Use a specialized varint + bitpacking scheme for DepGraph encoding
The previous scheme here uses leb128 to encode the edge tables that represent the incr comp dependency graph. The problem with that scheme is that leb128 has overhead for larger values, and generally relies on the distribution of encoded values being heavily skewed towards smaller values. That is definitely not the case for a dep node index: since indices are handed out sequentially and the whole range is covered, the distribution is actually biased in the opposite direction, and most dep node indices are large.
This PR implements a different varint encoding scheme. Instead of applying varint encoding to individual dep node indices (which is extremely branchy) we now apply it per node.
While being built, each node now stores its edges in a `SmallVec` with a bit of extra logic to track the max value of each edge. Then we varint encode the whole batch. This is a gamble: we save space by claiming only 2 bits per node instead of ~3 bits per edge, which is a nice saving, but it has to balance out against the overhead whereby a single large index in a node with a lot of edges forces unnecessary bytes into each of that node's edge indices.
Then, to keep the runtime overhead of this encoding scheme down, we deserialize our indices by loading 4 bytes for each and masking off the bytes that aren't ours. This is much less code and fewer branches than leb128, but relies on having some readable bytes past the end of each edge list. We explicitly add such padding to the in-memory data during decoding. We also do this decoding lazily, turning a dense on-disk encoding into a peak memory reduction.
Then we apply a bit-packing scheme; since in https://github.com/rust-lang/rust/pull/115391 we now have unused bits on `DepKind`, we use those unused bits (currently there are 7!) to store the 2 bits that we need for the byte width of the edges in each node, then use the remaining bits to store the length of the edge list, if it fits.
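A toy sketch of the per-node scheme (not the real rustc encoder): pick one byte width for a node's whole edge list from its largest index, record that width in a small header, and decode each edge with a 4-byte load plus a mask, relying on padding past the end of the data. The header layout here is an assumption; as described above, rustc packs the width bits into unused `DepKind` bits instead.
```
fn encode_node(edges: &[u32], out: &mut Vec<u8>) {
    // One byte width for the whole edge list, chosen from the largest index.
    let max = edges.iter().copied().max().unwrap_or(0);
    let width: usize = if max < 1 << 8 { 1 } else if max < 1 << 16 { 2 } else { 4 };

    // Toy header: the width and the edge count.
    out.push(width as u8);
    out.extend_from_slice(&(edges.len() as u32).to_le_bytes());

    // Each edge takes exactly `width` bytes.
    for &edge in edges {
        out.extend_from_slice(&edge.to_le_bytes()[..width]);
    }
}

// Decode one edge by loading 4 bytes and masking off the bytes that aren't ours;
// this relies on readable padding past the end of the edge data.
fn decode_edge(bytes: &[u8], width: usize) -> u32 {
    let mut four = [0u8; 4];
    four.copy_from_slice(&bytes[..4]);
    u32::from_le_bytes(four) & (((1u64 << (8 * width)) - 1) as u32)
}

fn main() {
    let mut out = Vec::new();
    encode_node(&[3, 200, 77], &mut out);
    out.extend_from_slice(&[0; 4]); // padding so the masked 4-byte loads can over-read
    let width = out[0] as usize;
    let edges: Vec<u32> = (0..3).map(|i| decode_edge(&out[5 + i * width..], width)).collect();
    assert_eq!(edges, vec![3, 200, 77]);
}
```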
r? `@nnethercote`
Remove conditional use of `Sharded` from query state
`Sharded` is already a zero cost abstraction, so it shouldn't affect the performance of the single thread compiler if LLVM does its job.
r? `@cjgillot`
Make `Sharded` an enum and specialize it for the single thread case
This changes `Sharded` into an enum so that it can use a single shard in the non-parallel case, reducing the size of `Sharded` for greater cache efficiency.
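A simplified sketch of the enum idea (illustrative types, not rustc's actual `Sharded`): the single-thread case stores exactly one shard, and picking-and-locking a shard branches on that only once.
```
use std::collections::HashMap;
use std::hash::Hash;
use std::sync::{Mutex, MutexGuard};

const SHARDS: usize = 32;

enum Sharded<K, V> {
    // Single-thread case: exactly one shard, no shard selection needed.
    Single(Mutex<HashMap<K, V>>),
    // Parallel case: many shards, picked by hash.
    Shards(Vec<Mutex<HashMap<K, V>>>),
}

impl<K: Hash + Eq, V> Sharded<K, V> {
    fn new(parallel: bool) -> Self {
        if parallel {
            Sharded::Shards((0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect())
        } else {
            Sharded::Single(Mutex::new(HashMap::new()))
        }
    }

    // Pick the shard and lock it in one call, branching on parallelism once.
    fn lock_shard_by_hash(&self, hash: u64) -> MutexGuard<'_, HashMap<K, V>> {
        match self {
            Sharded::Single(m) => m.lock().unwrap(),
            Sharded::Shards(shards) => shards[hash as usize % SHARDS].lock().unwrap(),
        }
    }
}

fn main() {
    let cache: Sharded<u32, &str> = Sharded::new(false);
    cache.lock_shard_by_hash(17).insert(1, "one");
    assert_eq!(cache.lock_shard_by_hash(17).get(&1), Some(&"one"));
}
```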
Performance improvement with 1 thread and `cfg(parallel_compiler)`:
| Benchmark | Before | After | Change |
|---|---|---|---|
| clap:check | 1.7009s | 1.6748s | -1.53% |
| hyper:check | 0.2525s | 0.2451s | -2.90% |
| regex:check | 0.9519s | 0.9353s | -1.74% |
| syn:check | 1.5504s | 1.5280s | -1.45% |
| syntex_syntax:check | 5.9536s | 5.8873s | -1.11% |
| Total | 10.4092s | 10.2706s | -1.33% |
| Summary | 1.0000s | 0.9825s | -1.75% |
I did see an unexpected 0.23% change for the serial compiler, so this could use a perf run to see if that reproduces.
cc `@SparrowLii`