Support reading uncompressed proc macro metadata
rust-lang/rust#113695 makes the dylib metadata uncompressed for perf reasons. This commit allows reading both the current compressed and future uncompressed dylib metadata.
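For illustration, here is a minimal sketch of the dual-format reading logic, assuming a hypothetical magic prefix and decompression helper; the real header constants and compression scheme used by rustc are not shown here.
```rust
/// Assumed placeholder for whatever magic bytes mark a compressed payload;
/// the real rustc metadata header differs.
const COMPRESSED_MAGIC: &[u8] = b"CMP1";

/// Stand-in for the real decompression routine.
fn decompress(bytes: &[u8]) -> Vec<u8> {
    bytes.to_vec()
}

/// Accept both layouts: decompress when the magic prefix is present,
/// otherwise treat the bytes as an already-uncompressed metadata blob.
fn read_dylib_metadata(raw: &[u8]) -> Vec<u8> {
    match raw.strip_prefix(COMPRESSED_MAGIC) {
        Some(rest) => decompress(rest), // current compressed layout
        None => raw.to_vec(),           // future uncompressed layout
    }
}

fn main() {
    assert_eq!(read_dylib_metadata(b"CMP1data"), b"data");
    assert_eq!(read_dylib_metadata(b"data"), b"data");
}
```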
Use u64 for incr comp allocation offsets
Fixes https://github.com/rust-lang/rust/issues/76037
Fixes https://github.com/rust-lang/rust/issues/95780
Fixes https://github.com/rust-lang/rust/issues/111613
These issues are all reporting ICEs caused by using `u32` to store offsets to allocations in the incremental compilation cache. This PR aims to lift that limitation by changing the offset type in question to `u64`.
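As a rough illustration of why `u32` is the limiting factor, here is a hedged sketch using a hypothetical `AllocIndex` type; the real incremental-cache data structures in rustc are more involved.
```rust
// Hypothetical sketch of the offset bookkeeping this PR widens; the real
// incremental-cache types differ, this just shows the 4 GiB ceiling.
struct AllocIndex {
    // Byte offsets into the serialized cache blob. Stored as `u32` these
    // cap the blob at 4 GiB and larger offsets trigger an ICE; as `u64`
    // they do not.
    offsets: Vec<u64>,
}

fn main() {
    let offset: u64 = 5 * 1024 * 1024 * 1024; // an offset past 4 GiB
    // With the old `u32` representation this offset is not representable.
    assert!(u32::try_from(offset).is_err());
    // With `u64` it is stored without issue.
    let idx = AllocIndex { offsets: vec![offset] };
    assert_eq!(idx.offsets[0], offset);
}
```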
There are two perf runs in this PR. The first reports a regression, and the second does not; the changes are the same in both. I rebased the PR and then did the second perf run because I noticed that the primary regression in the first was one very commonly seen in spurious regression reports.
I do not know what the perf run will report when this is merged. I would not be surprised to see a regression or a neutral result, but the cachegrind diffs for the regression point at `try_mark_previous_green`, which is a common source of inexplicable regressions and which I don't think should be perturbed by this PR.
I'm not opposed to adding a regression test such as
```rust
fn main() {
    println!("{}", [37; 1 << 30].len());
}
```
But that program takes 1 minute to compile, consumes 4.6 GB of memory, and then writes that much to disk. Is that a concerning amount of resource use for a test?
r? `@nnethercote`
Streamline size estimates (take 2)
This was merged in #113684 but then [something happened](https://github.com/rust-lang/rust/pull/113684#issuecomment-1636811985):
> There has been a bors issue that lead to the merge commit of this PR getting purged from master.
> You'll have to make a new PR to reapply it.
So this is exactly the same set of changes.
`@bors` r=wesleywiser
Add support for inherent projections in new solver
These are not hard to support, and doing so cuts out a really big chunk of failing UI tests with `--compare-mode=next-solver`.
r? `@lcnr` (feel free to reassign, anyone can review this)
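For context, an inherent projection is a path like `Foo::Assoc` that names an inherent associated type (one declared directly in an `impl`, not via a trait). A minimal nightly example, relying on the incomplete `inherent_associated_types` feature, looks like this:
```rust
// Requires nightly; `inherent_associated_types` is an incomplete feature.
#![allow(incomplete_features)]
#![feature(inherent_associated_types)]

struct Foo;

impl Foo {
    // An inherent associated type: not part of any trait.
    type Assoc = u32;
}

fn main() {
    // `Foo::Assoc` is an inherent projection that the trait solver must
    // normalize to `u32`.
    let x: Foo::Assoc = 5;
    println!("{x}");
}
```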
They're quite rare, and ignoring them simplifies things quite a bit; it also
further reduces the number of calls to `MonoItem::size_estimate` to the
number of placed items (one per root item, and one or more per reachable
inlined item).
This means we call `MonoItem::size_estimate` (which involves a query)
less often: just once per mono item, and then once more per inline item
placement. After that we can reuse the stored value as necessary. This
means `CodegenUnit::compute_size_estimate` is cheaper.
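A hedged sketch of the caching idea, using hypothetical stand-ins for `MonoItem` and `CodegenUnit` (the real rustc types and the query behind the estimate differ):
```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct MonoItem(u32);

impl MonoItem {
    // Imagine this is the query-backed estimate we want to call rarely.
    fn size_estimate(self) -> usize {
        (self.0 as usize % 7) + 1
    }
}

struct CodegenUnit {
    // The estimate is computed once when the item is placed and stored
    // alongside it, instead of being recomputed on every size query.
    items: HashMap<MonoItem, usize>,
}

impl CodegenUnit {
    fn place_item(&mut self, item: MonoItem) {
        self.items.entry(item).or_insert_with(|| item.size_estimate());
    }

    // Cheap: just sums the stored per-item estimates.
    fn compute_size_estimate(&self) -> usize {
        self.items.values().sum()
    }
}

fn main() {
    let mut cgu = CodegenUnit { items: HashMap::new() };
    for id in 0..4 {
        cgu.place_item(MonoItem(id));
    }
    println!("estimated size: {}", cgu.compute_size_estimate());
}
```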
Add Platform Support documentation for MIPS Release 6 targets
This is a follow-up to our MCP, rust-lang/compiler-team#638, in which we proposed assigning several maintainers for the MIPS R6 targets and were told to explain that this set of targets is experimental in nature.
This documentation describes Rust support for `mipsisa*r6*-unknown-linux-gnu*` targets (mainly `mipsisa64r6el-unknown-linux-gnuabi64`), including toolchain setup, building, and testing procedures.
Don't call `predicate_must_hold`-esque functions during fulfillment in intercrate
Fixes #113415
Given that this only happens in `translate_substs`, I don't actually think that this is something that you can weaponize, but it's still sketchy regardless.
r? `@lcnr`
Check entry type as part of item type checking.
This code is currently executed inside the root `analysis` query.
Instead, check it during `check_for_entry_fn(CRATE_DEF_ID)` to hopefully avoid some re-executions.
`CRATE_DEF_ID` is chosen because the entry fn is typically at the crate root, so the corresponding HIR should already be among the dependencies.
Add `#[rustc_confusables]` attribute to allow targeted "no method" error suggestions on standard library types
After this PR, the standard library developer can annotate methods such as `BTreeSet::insert` with `#[rustc_confusables("push")]`. When the user mistypes `btreeset.push()`, `BTreeSet::insert` will be suggested if there are no other candidates to suggest. This PR lays the foundations for contributors to add `rustc_confusables` annotations to standard library types for targeted suggestions, as specified in #59450, or to address cases such as #108437.
### Example
Assume `BTreeSet` is the standard library type:
```rust
// Standard library definition
#![feature(rustc_attrs)]
struct BTreeSet;
impl BTreeSet {
    #[rustc_confusables("push")]
    fn insert(&self) {}
}
// User code
fn main() {
    let x = BTreeSet {};
    x.push();
}
```
A new suggestion (which has lower precedence than misspelling suggestions and is only shown when there are none) will be added, hinting that the user may have intended to write `x.insert()` instead:
```
error[E0599]: no method named `push` found for struct `BTreeSet` in the current scope
--> test.rs:12:7
|
3 | struct BTreeSet;
| --------------- method `push` not found for this struct
...
12 | x.push();
| ^^^^ method not found in `BTreeSet`
|
help: you might have meant to use `insert`
|
12 | x.insert();
| ~~~~~~
error: aborting due to previous error
```
Hide `compiler_builtins` in the prelude
This crate is a private implementation detail. We only need to insert it into the crate graph for linking and should not expose any of its public API.
Fixes #113533
"no method" errors on standard library types
The standard library developer can annotate methods on e.g.
`BTreeSet::push` with `#[rustc_confusables("insert")]`. When the user
mistypes `btreeset.push()`, `BTreeSet::insert` will be suggested if
there are no other candidates to suggest.
De-duplicate consecutive libs when printing native-static-libs
This PR adds a de-duplicate step just before printing the `native-static-libs`.
This step de-duplicates all the consecutive libs based only on the relevant comparison elements (this excludes spans, AST elements, ...).
Fixes https://github.com/rust-lang/rust/issues/113209
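A hedged sketch of this kind of consecutive de-duplication, using a hypothetical `NativeLib` type with only a few fields (the real rustc type carries more data, such as spans, that the comparison deliberately ignores):
```rust
#[derive(Clone, Debug)]
struct NativeLib {
    name: String,
    kind: &'static str, // e.g. "static", "dylib"
    span_id: u32,       // irrelevant for printing, ignored below
}

fn dedup_consecutive(libs: &mut Vec<NativeLib>) {
    // Only consecutive entries are collapsed, and only the elements that
    // affect the printed `native-static-libs` line are compared.
    libs.dedup_by(|a, b| a.name == b.name && a.kind == b.kind);
}

fn main() {
    let mut libs = vec![
        NativeLib { name: "z".into(), kind: "static", span_id: 1 },
        NativeLib { name: "z".into(), kind: "static", span_id: 2 },
        NativeLib { name: "c".into(), kind: "dylib", span_id: 3 },
    ];
    dedup_consecutive(&mut libs);
    assert_eq!(libs.len(), 2); // the duplicate `z` entry is dropped
    println!("{libs:?}");
}
```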
Remove `LLVMRustCoverageHashCString`
Coverage has two FFI functions for computing the hash of a byte string. One takes a ptr/len pair (`LLVMRustCoverageHashByteArray`), and the other takes a NUL-terminated C string (`LLVMRustCoverageHashCString`).
But on closer inspection, the C string version is unnecessary. The calling-side code converts a Rust `&str` into a `CString`, and the C++ code then immediately turns it back into a ptr/len string before actually hashing it. So we can just call the ptr/len version directly instead.
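A hedged sketch of what the Rust-side call can look like when using the ptr/len entry point directly; the exact parameter types of the real declaration in `rustc_codegen_llvm` are an assumption here.
```rust
// Assumed Rust-side declaration; the real ffi module may use different
// pointer and integer types.
extern "C" {
    /// Length is `usize` on the Rust side, matching the C++ `size_t`
    /// parameter discussed below.
    fn LLVMRustCoverageHashByteArray(bytes: *const u8, len: usize) -> u64;
}

/// Hash a `&str` by passing its UTF-8 bytes directly: no `CString` round
/// trip and no NUL terminator required.
fn hash_str(s: &str) -> u64 {
    unsafe { LLVMRustCoverageHashByteArray(s.as_ptr(), s.len()) }
}
```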
---
This PR also fixes a bug in the C++ declaration of `LLVMRustCoverageHashByteArray`: its length parameter should be `size_t`, since that's what is declared and passed on the Rust side, and it's what `StringRef`'s constructor expects to receive on the callee side.