Avoid specialization in the metadata serialization code
The one exception is a perf-only specialization for byte slices and byte vectors.
This uses the same trick as `TyEncoder`/`TyDecoder`: introduce a new trait and have the `Encodable` and `Decodable` derives add a bound on it. The new code is clearer about which encoder/decoder uses which impl, and it reduces rustc's dependency on specialization, making it easier to remove support for specialization entirely, or to turn it into a construct that is only allowed for perf optimizations, if we decide to do that.
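A minimal, self-contained sketch of the trick follows; all names are illustrative stand-ins, not the actual rustc traits:

```rust
// Instead of a specialized impl for one concrete encoder, the `Encodable`
// impl for `Span` is bounded on a new `SpanEncoder` trait, so each encoder
// chooses its span encoding through ordinary trait resolution.
trait Encoder {
    fn emit_u32(&mut self, v: u32);
}

trait SpanEncoder: Encoder {
    fn encode_span(&mut self, span: Span);
}

#[derive(Clone, Copy)]
struct Span(u32);

trait Encodable<E: Encoder> {
    fn encode(&self, e: &mut E);
}

// This is the bound the derive would add: no specialization needed.
impl<E: SpanEncoder> Encodable<E> for Span {
    fn encode(&self, e: &mut E) {
        e.encode_span(*self);
    }
}

struct MetadataEncoder {
    out: Vec<u8>,
}

impl Encoder for MetadataEncoder {
    fn emit_u32(&mut self, v: u32) {
        self.out.extend_from_slice(&v.to_le_bytes());
    }
}

impl SpanEncoder for MetadataEncoder {
    // The metadata encoder picks its own span representation here.
    fn encode_span(&mut self, span: Span) {
        self.emit_u32(span.0);
    }
}

fn main() {
    let mut enc = MetadataEncoder { out: Vec::new() };
    Span(42).encode(&mut enc);
    assert_eq!(enc.out, 42u32.to_le_bytes());
}
```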
Rollup of 8 pull requests
Successful merges:
- #119151 (Hide foreign `#[doc(hidden)]` paths in import suggestions)
- #119350 (Imply outlives-bounds on lazy type aliases)
- #119354 (Make `negative_bounds` internal & fix some of its issues)
- #119506 (Use `resolutions(()).effective_visibilities` to avoid cycle errors in `report_object_error`)
- #119554 (Fix scoping for let chains in match guards)
- #119563 (Check yield terminator's resume type in borrowck)
- #119589 (cstore: Remove unnecessary locking from `CrateMetadata`)
- #119622 (never patterns: Document behavior of never patterns with macros-by-example)
r? `@ghost`
`@rustbot` modify labels: rollup
cstore: Remove unnecessary locking from `CrateMetadata`
Locks and atomics in `CrateMetadata` fields were necessary before https://github.com/rust-lang/rust/pull/107765 when `CStore` was cloneable, but now they are no longer necessary and can be removed after restructuring the code a bit to please the borrow checker.
All remaining locked fields in `CrateMetadata` are lazily populated caches.
Replace a number of FxHashMaps/Sets with stable-iteration-order alternatives
This PR replaces almost all of the remaining `FxHashMap`s in query results with either `FxIndexMap` or `UnordMap`. The only case that is missing is the `EffectiveVisibilities` struct, which turned out not to be straightforward to transform. Once that is done too, we can remove the `HashStable` implementation from `HashMap`.
The first commit adds the `StableCompare` trait, which is a companion trait to `StableOrd`. Some types like `Symbol` can be compared in a cross-session stable way, but their `Ord` implementation is not stable. In such cases, a `StableCompare` implementation can be provided to offer a lightweight way to sort stably. The more heavyweight option is to sort via `ToStableHashKey`, but then sorting needs access to a stable hashing context, and `ToStableHashKey` can also be expensive, as in the case of `Symbol`, where it has to allocate a `String`.
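Here is a rough, self-contained sketch of how such a companion trait might look; the trait signature and the toy `Symbol` are assumptions, not the compiler's actual definitions:

```rust
use std::cmp::Ordering;

// Assumed shape of the companion trait.
trait StableCompare {
    fn stable_cmp(&self, other: &Self) -> Ordering;
}

// Toy `Symbol`: ordering by `index` would not be stable across sessions,
// so the stable comparison goes through the interned string instead.
struct Symbol {
    index: u32,
    text: &'static str,
}

impl StableCompare for Symbol {
    fn stable_cmp(&self, other: &Self) -> Ordering {
        self.text.cmp(other.text)
    }
}

fn sort_stably(symbols: &mut [Symbol]) {
    symbols.sort_by(|a, b| a.stable_cmp(b));
}

fn main() {
    let mut syms = vec![
        Symbol { index: 7, text: "beta" },
        Symbol { index: 3, text: "alpha" },
    ];
    sort_stably(&mut syms);
    assert_eq!((syms[0].index, syms[0].text), (3, "alpha"));
}
```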
The rest of the commits are rather mechanical and don't overlap, so they are best reviewed individually.
Part of [MCP 533](https://github.com/rust-lang/compiler-team/issues/533).
Report I/O errors from rmeta encoding with emit_fatal
https://github.com/rust-lang/rust/issues/119456 reminded me that I never did systematic testing to provoke the out-of-disk ICEs, so I grepped through a recent crater run (https://github.com/rust-lang/rust/pull/119440#issuecomment-1873393963) for more out-of-disk ICEs on current master, and yep, there are 2 in there.
So I finally cooked up a way to provoke these crashes. I wrote a little `cdylib` crate that has a `#[no_mangle] pub extern "C" fn write` which occasionally reports `ENOSPC` and prints a backtrace when it does.
<details><summary><strong>code for the dylib</strong></summary>
<p>
```rust
// cargo add libc rand backtrace
use rand::Rng;

/// Interposes libc's `write` via LD_PRELOAD. For fds other than
/// stdin/stdout/stderr, roughly 1 in 256 calls fails with ENOSPC and prints
/// a backtrace; every other call is forwarded to the real write syscall.
#[no_mangle]
pub extern "C" fn write(
    fd: libc::c_int,
    buf: *const libc::c_void,
    count: libc::size_t,
) -> libc::ssize_t {
    if fd > 2 && rand::thread_rng().gen::<u8>() == 0 {
        // Print the caller's backtrace, skipping the innermost few frames.
        let mut count = 0;
        backtrace::trace(|frame| {
            backtrace::resolve_frame(frame, |symbol| {
                if let Some(name) = symbol.name() {
                    if count > 3 {
                        eprintln!("{}", name);
                    }
                }
                count += 1;
            });
            true
        });
        unsafe {
            *libc::__errno_location() = libc::ENOSPC;
        }
        return -1;
    } else {
        // Forward to the real write syscall, translating a negative return
        // value into the errno convention libc callers expect.
        unsafe {
            let res =
                libc::syscall(libc::SYS_write, fd as usize, buf as usize, count as usize) as isize;
            if res < 0 {
                *libc::__errno_location() = -res as i32;
                -1
            } else {
                res
            }
        }
    }
}
```
</p>
</details>
Then `LD_PRELOAD` that dylib and repeatedly build a big project until it ICEs, such as with this:
```bash
while true; do
  cargo clean
  LD_PRELOAD=/home/ben/evil/target/release/libevil.so cargo +stage1 check 2> errors
  if grep "thread 'rustc' panicked" errors; then
    break
  fi
done
```
My "big project" for testing was an otherwise-empty project with `cargo add axum`.
Before this PR, the above procedure finds a crash in between 1 and 15 minutes. With this PR, I have not found a crash in 30 minutes, and I'll be leaving this to run overnight (starting now). (A night has now passed, no crashes were found)
I believe the problem is that even though since https://github.com/rust-lang/rust/pull/117301 we correctly check `FileEncoder` for errors on all paths, we use `emit_err`, so there is a window of time between the call to `emit_err` and the full error reporting where rustc believes it has emitted a valid rmeta file and will permit Cargo to launch a build for a dependent crate. Changing these calls to `emit_fatal` closes that window.
I think there are a number of other cases where `emit_err` has been used instead of the more-correct `emit_fatal`, such as e51e98dde6/compiler/rustc_codegen_ssa/src/back/write.rs (L542), but unlike rmeta encoding I am not aware of those causing problems.
r? ``@WaffleLapkin``
`Diagnostic` has 40 methods that return `&mut Self` and could be
considered setters. Four of them have a `set_` prefix. This doesn't seem
necessary for a type that implements the builder pattern. This commit
removes the `set_` prefixes on those four methods.
This involves lots of breaking changes; two big ones force changes on
our side. The first is that the bitflag types no longer automatically
implement the usual derive traits, so we need to derive them manually.
Additionally, bitflags now have a hidden inner type by default, which
breaks our custom derives. The bitflags docs recommend using the impl
form in these cases, which I did.
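For illustration, roughly what the bitflags 2.x impl form looks like; the flag type and names here are made up, not ones from the compiler:

```rust
// cargo add bitflags
// The struct is declared by hand, so custom derives can see it; the macro
// generates only the flag constants and bit operations.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct ExampleFlags(u8);

bitflags::bitflags! {
    impl ExampleFlags: u8 {
        const A = 1 << 0;
        const B = 1 << 1;
    }
}

fn main() {
    let flags = ExampleFlags::A | ExampleFlags::B;
    assert!(flags.contains(ExampleFlags::A));
}
```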
Spans are now stored in a more compact form which cuts down on at least
1 byte per span (indirect/direct encoding) and at most 3 bytes per span
(indirect/direct encoding, context byte, length byte). As a result,
libcore metadata shrinks by 1.5MB.
Support encoding spans with relative offsets
The relative offset is often smaller than the absolute offset, and with
the LEB128 encoding, this ends up cutting the overall metadata size
considerably (~1.5 megabytes on libcore). We can support both relative
and absolute encodings essentially for free since we already take a full
byte to differentiate between direct and indirect encodings (so an extra
variant is quite cheap).
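As a toy illustration of why relative offsets pay off under LEB128 (this is not the compiler's actual span encoding; the tags and layout are made up):

```rust
// LEB128 uses fewer bytes for smaller numbers, so encoding a span's offset
// relative to the previously encoded span is usually cheaper than encoding
// the absolute offset. The tag byte mirrors the "extra variant is cheap"
// observation: a byte is already spent choosing the encoding.
fn write_leb128(out: &mut Vec<u8>, mut v: u64) {
    loop {
        let byte = (v & 0x7f) as u8;
        v >>= 7;
        if v == 0 {
            out.push(byte);
            break;
        }
        out.push(byte | 0x80);
    }
}

const TAG_ABSOLUTE: u8 = 0;
const TAG_RELATIVE: u8 = 1;

fn encode_span_start(out: &mut Vec<u8>, prev_start: u64, start: u64) {
    if start >= prev_start {
        out.push(TAG_RELATIVE);
        write_leb128(out, start - prev_start); // small delta => few bytes
    } else {
        out.push(TAG_ABSOLUTE);
        write_leb128(out, start); // going backwards: fall back to the absolute offset
    }
}

fn main() {
    let mut rel = Vec::new();
    encode_span_start(&mut rel, 1_000_000, 1_000_040); // delta 40: 1 byte + tag
    let mut abs = Vec::new();
    encode_span_start(&mut abs, 2_000_000, 1_000_040); // absolute: 3 bytes + tag
    assert!(rel.len() < abs.len());
}
```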
Make closures carry their own ClosureKind
Right now, we use the `movability` field of `hir::Closure` to distinguish a closure from a coroutine. This is paired with the `CoroutineKind`, which is located not in the `hir::Closure` but in the `hir::Body`. This is strange and redundant.
This PR introduces `ClosureKind` with two variants -- `Closure` and `Coroutine`, which is put into `hir::Closure`. The `CoroutineKind` is thus removed from `hir::Body`, and `Option<Movability>` no longer needs to be a stand-in for "is this a closure or a coroutine".
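A shape sketch of the change, with assumed field and variant names (the real HIR types carry much more information):

```rust
// Assumed shapes; illustrative only.
#[allow(dead_code)]
enum Movability {
    Static,
    Movable,
}

#[allow(dead_code)]
enum CoroutineKind {
    // async/gen desugarings, explicit coroutines, etc.
    Coroutine(Movability),
}

// The new enum: the closure node says what it is, instead of encoding
// "coroutine-ness" via `Option<Movability>` plus a kind on the body.
#[allow(dead_code)]
enum ClosureKind {
    Closure,
    Coroutine(CoroutineKind),
}

#[allow(dead_code)]
struct Closure {
    kind: ClosureKind,
    // ...fn decl, body id, capture clause, etc.
}
```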
r? eholk
Remove `DiagCtxt` API duplication
`DiagCtxt` defines the internal API for creating and emitting diagnostics: methods like `struct_err`, `struct_span_warn`, `note`, `create_fatal`, `emit_bug`. There are over 50 methods.
Some of these methods are then duplicated across several other types: `Session`, `ParseSess`, `Parser`, `ExtCtxt`, and `MirBorrowckCtxt`. `Session` duplicates the most, though half of the ones it duplicates are unused. Each duplicated method just forwards to the corresponding method in `DiagCtxt`. So this duplication exists to (in the best case) shorten chains like `ecx.tcx.sess.parse_sess.dcx.emit_err()` to `ecx.emit_err()`.
This API duplication is ugly and has been bugging me for a while. And it's inconsistent: there's no real logic about which methods are duplicated, and the use of `#[rustc_lint_diagnostics]` and `#[track_caller]` attributes varies across the duplicates.
This PR removes the duplicated API methods and makes all diagnostic creation and emission go through `DiagCtxt`. It also adds `dcx` getter methods to several types to shorten chains. This approach scales *much* better than API duplication; indeed, the PR adds `dcx()` to numerous types that didn't have API duplication: `TyCtxt`, `LoweringCtxt`, `ConstCx`, `FnCtxt`, `TypeErrCtxt`, `InferCtxt`, `CrateLoader`, `CheckAttrVisitor`, and `Resolver`. These result in a lot of changes from `foo.tcx.sess.emit_err()` to `foo.dcx().emit_err()`. (You could do this with more types, but it gets into diminishing returns territory for types that don't emit many diagnostics.)
After all these changes, some call sites are more verbose, some are less verbose, and many are the same. The total number of lines is reduced, mostly because of the removed API duplication. And consistency is increased, because calls to `emit_err` and friends are always preceded with `.dcx()` or `.dcx`.
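A self-contained toy sketch of the pattern; the types here are stand-ins, not the real compiler structs:

```rust
// The point is the single `dcx()` getter replacing a pile of forwarding methods.
struct DiagCtxt;

impl DiagCtxt {
    fn emit_err(&self, msg: &str) {
        eprintln!("error: {msg}");
    }
}

struct Session {
    dcx: DiagCtxt,
}

struct TyCtxt<'a> {
    sess: &'a Session,
}

impl<'a> TyCtxt<'a> {
    // One short getter instead of duplicating emit_err, emit_warn, ...
    fn dcx(&self) -> &DiagCtxt {
        &self.sess.dcx
    }
}

fn main() {
    let sess = Session { dcx: DiagCtxt };
    let tcx = TyCtxt { sess: &sess };
    // Call sites change from `foo.tcx.sess.emit_err(..)` to `foo.dcx().emit_err(..)`.
    tcx.dcx().emit_err("demo");
}
```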
r? `@compiler-errors`
Unify SourceFile::name_hash and StableSourceFileId
This PR adapts the existing `StableSourceFileId` type so that it can be used instead of the `name_hash` field of `SourceFile`. This simplifies a few things that were kind of duplicated before.
The PR should also fix issues https://github.com/rust-lang/rust/issues/112700 and https://github.com/rust-lang/rust/issues/115835, but I was not able to reproduce these issues in a regression test. As far as I can tell, the root cause of these issues is that the id of the originating crate is not hashed in the `HashStable` impl of `Span` and thus cache entries that should have been considered invalidated were loaded. After this PR, the `stable_id` field of `SourceFile` includes information about the originating crate, so that ICE should not occur anymore.
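A minimal sketch of the idea, with std's `DefaultHasher` standing in for the compiler's stable hasher and all names assumed:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct StableSourceFileId(u64);

// The file's stable id folds in the originating crate, so it changes when
// the originating crate does.
fn stable_source_file_id(file_name: &str, originating_stable_crate_id: u64) -> StableSourceFileId {
    let mut hasher = DefaultHasher::new();
    file_name.hash(&mut hasher);
    // The key change: the crate is part of the id, not just the file name.
    originating_stable_crate_id.hash(&mut hasher);
    StableSourceFileId(hasher.finish())
}

fn main() {
    let a = stable_source_file_id("lib.rs", 1);
    let b = stable_source_file_id("lib.rs", 2);
    assert_ne!(a, b); // same file name, different originating crate
}
```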
Remove metadata decoding DefPathHash cache
My expectation is that this cache is largely useless. Decoding a DefPathHash from metadata is essentially a pair of memory loads - there's no heavyweight processing involved. Caching it behind a HashMap just adds extra cost and incurs hashing overheads for the indices.
Based on https://github.com/rust-lang/rust/pull/119238.
Skip duplicate stable crate ID encoding into metadata
Instead, we store just the local crate hash as a bare u64. On decoding,
we recombine it with the crate's stable crate ID stored separately in
metadata. The end result is that we save ~8 bytes/DefIndex in metadata
size.
One key detail here is that we no longer distinguish in encoded metadata
between present and non-present DefPathHashes. It used to be highly
likely we could distinguish as we used DefPathHash::default(), an
all-zero representation. However in theory even that is fallible as
nothing strictly prevents the StableCrateId from being zero. In review it
was pointed out that we should never have a missing hash for a DefIndex anyway,
so this shouldn't matter.
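An illustrative sketch of the encode/decode split, with the field layout assumed rather than taken from rustc:

```rust
// A DefPathHash conceptually pairs the crate's StableCrateId with a local,
// per-definition hash. Metadata only stores the local u64; on decoding it is
// recombined with the crate's stable id, which is stored once per crate.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct StableCrateId(u64);

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct DefPathHash {
    stable_crate_id: StableCrateId,
    local_hash: u64,
}

fn encode(hash: DefPathHash) -> u64 {
    // Only the local half goes into the per-DefIndex table (~8 bytes saved
    // per entry compared to storing the full hash).
    hash.local_hash
}

fn decode(local_hash: u64, crate_id: StableCrateId) -> DefPathHash {
    DefPathHash { stable_crate_id: crate_id, local_hash }
}

fn main() {
    let crate_id = StableCrateId(0xDEAD_BEEF);
    let full = DefPathHash { stable_crate_id: crate_id, local_hash: 42 };
    let on_disk = encode(full);
    assert_eq!(decode(on_disk, crate_id), full);
}
```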
And make all hand-written `IntoDiagnostic` impls generic, by using
`DiagnosticBuilder::new(dcx, level, ...)` instead of e.g.
`dcx.struct_err(...)`.
This means the `create_*` functions are the source of the error level.
This change will let us remove `struct_diagnostic`.
Note: `#[rustc_lint_diagnostics]` is added to `DiagnosticBuilder::new`;
it's needed to pass the diagnostics tests now that `DiagnosticBuilder::new`
is used in `into_diagnostic` functions.
Renamings:
- find -> opt_hir_node
- get -> hir_node
- find_by_def_id -> opt_hir_node_by_def_id
- get_by_def_id -> hir_node_by_def_id
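A hypothetical before/after for call sites, assuming (as the commits below suggest) that the new methods live on `TyCtxt` rather than on the map returned by `tcx.hir()`:

```rust
// Hypothetical migration; receivers are assumed, only the method renames
// above come from the change itself.
// tcx.hir().find(hir_id)             ->  tcx.opt_hir_node(hir_id)
// tcx.hir().get(hir_id)              ->  tcx.hir_node(hir_id)
// tcx.hir().find_by_def_id(def_id)   ->  tcx.opt_hir_node_by_def_id(def_id)
// tcx.hir().get_by_def_id(def_id)    ->  tcx.hir_node_by_def_id(def_id)
```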
Fix rebase changes using removed methods
Use `tcx.hir_node_by_def_id()` whenever possible in compiler
Fix clippy errors
Fix compiler
Apply suggestions from code review
Co-authored-by: Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
Add FIXME on the type returned by `tcx.hir()` about its planned removal
Simplify with `tcx.hir_node_by_def_id`
Use a u64 for the rmeta root position
Waffle noticed this in https://github.com/rust-lang/rust/pull/117301#discussion_r1405410174
We've upgraded the other file offsets to u64, and this one only costs 4 bytes per file. Also, the way the truncation was being done before was extremely easy to miss; I sure missed it! It's not clear to me whether, without this change, the other u32-to-u64 upgrades were effectively undermined, but we can have it now.
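A toy sketch of the failure mode being fixed; the function and layout are made up, only the u32-vs-u64 point carries over:

```rust
// The root position is now written as a full u64, instead of being narrowed
// with an easy-to-miss `as u32` cast that would silently truncate offsets
// past 4 GiB.
fn write_root_position(out: &mut Vec<u8>, pos: u64) {
    // before (truncating): out.extend_from_slice(&(pos as u32).to_le_bytes());
    out.extend_from_slice(&pos.to_le_bytes());
}

fn main() {
    let mut out = Vec::new();
    write_root_position(&mut out, 5 * 1024 * 1024 * 1024); // > u32::MAX
    assert_eq!(out.len(), 8);
}
```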
r? `@WaffleLapkin`