Use llvm::computeLTOCacheKey to determine post-ThinLTO CGU reuse
During incremental ThinLTO compilation, we attempt to re-use the
optimized (post-ThinLTO) bitcode file for a module if it is 'safe' to do
so.
Up until now, 'safe' has meant that the set of modules that our current
module imports from/exports to is unchanged from the previous
compilation session. See PR #67020 and PR #71131 for more details.
However, this turns out to be insufficient to guarantee that it's safe
to reuse the post-LTO module (i.e. that optimizing the pre-LTO module
would produce the same result). When LLVM optimizes a module during
ThinLTO, it may look at other information from the 'module index', such
as whether a (non-imported!) global variable is used. If this
information changes between compilation runs, we may end up re-using an
optimized module that (for example) had dead-code elimination run on a
function that is now used by another module.
Fortunately, LLVM implements its own ThinLTO module cache, which is used
when ThinLTO is performed by a linker plugin (e.g. when clang is used to
compile a C project). Using this cache directly would require extensive
refactoring of our code - but fortunately for us, LLVM provides a
function that does exactly what we need.
The function `llvm::computeLTOCacheKey` is used to compute a SHA-1 hash
from all data that might influence the result of ThinLTO on a module.
In addition to the module imports/exports that we manually track, it
also hashes information about global variables (e.g. their liveness)
which might be used during optimization. By using this function, we
shouldn't have to worry about new LLVM passes breaking our module re-use
behavior.
In LLVM, the output of this function forms part of the filename used to
store the post-ThinLTO module. To keep our current filename structure
intact, this PR just writes out the mapping 'CGU name -> Hash' to a
file. To determine whether a post-LTO module can be reused, we compare
its current hash against the hash recorded in the previous session.
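As a concrete sketch of that check (illustrative names only, not rustc's actual types or on-disk format), assuming the previous session's 'CGU name -> hash' map has already been read back in:
```rust
use std::collections::BTreeMap;

/// One SHA-1 digest (the output of `llvm::computeLTOCacheKey`) per CGU.
/// The alias name is made up for this sketch.
type LtoHashes = BTreeMap<String, [u8; 20]>;

fn should_reuse_post_lto(cgu_name: &str, current: &[u8; 20], previous: &LtoHashes) -> bool {
    // Reuse the post-ThinLTO bitcode only if nothing that could influence
    // ThinLTO on this module (imports, exports, global liveness, ...) changed,
    // i.e. the cache key is byte-for-byte identical to last session's.
    previous.get(cgu_name) == Some(current)
}

fn main() {
    let mut previous = LtoHashes::new();
    previous.insert("cgu.0".to_string(), [0u8; 20]);
    assert!(should_reuse_post_lto("cgu.0", &[0u8; 20], &previous));
    assert!(!should_reuse_post_lto("cgu.1", &[0u8; 20], &previous));
}
```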
This should unblock PR #75199 - by sheer chance, it seems to have hit
this issue due to the particular CGU partitioning and optimization
decisions that end up getting made.
Recognize discriminant reads as no-ops in RemoveNoopLandingPads
Cleanup blocks often contain reads of discriminants. Teach
RemoveNoopLandingPads to recognize them as no-ops so that additional
no-op landing pads can be removed.
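A minimal, self-contained model of the idea, using stand-in types rather than rustc's real MIR:
```rust
// A landing pad stays a "no-op" if every statement in it has no observable
// effect; reading an enum's discriminant is now counted as such a statement.
#[allow(dead_code)]
enum Stmt {
    Nop,
    StorageLive,
    StorageDead,
    ReadDiscriminant, // e.g. `_2 = discriminant(_1)` in MIR
    OtherAssign,      // anything with an observable effect
}

fn is_nop_landing_pad(block: &[Stmt]) -> bool {
    block.iter().all(|s| {
        matches!(
            s,
            Stmt::Nop | Stmt::StorageLive | Stmt::StorageDead | Stmt::ReadDiscriminant
        )
    })
}

fn main() {
    assert!(is_nop_landing_pad(&[Stmt::ReadDiscriminant, Stmt::StorageDead]));
    assert!(!is_nop_landing_pad(&[Stmt::OtherAssign]));
}
```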
Add -Z codegen-backend dylib to deps
When the codegen-backend dylib changes, the program should be rebuilt.
---
Unfortunately I was unable to test that this works locally due to running into a TLS issue when running the custom backend, `thread 'rustc' panicked at 'no ImplicitCtxt stored in tls', compiler/rustc_middle/src/ty/context.rs:1750:54`, which seems similar to https://github.com/rust-lang/rust/issues/62717 but has a completely different cause and backtrace.
`@eddyb` said to ping `@Mark-Simulacrum` about what they think about this, so, ping!
Provide structured suggestions when finding structs when expecting a trait
When finding an ADT in a trait object definition, provide structured suggestions. Fixes #45817.
Given `<Param as Trait>::Assoc: Ty`, suggest `Param: Trait<Assoc = Ty>` instead. Fixes #75829.
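An illustrative (hypothetical) example of the second rewrite; `Trait`, `Ty`, and `Impl` below are made up for the sketch:
```rust
trait Trait {
    type Assoc;
}
struct Ty;

// Before (rejected): the struct `Ty` is used as if it were a trait bound.
// fn f<Param>() where <Param as Trait>::Assoc: Ty {}

// After (the suggested form): bind the associated type to the concrete type `Ty`.
fn f<Param>()
where
    Param: Trait<Assoc = Ty>,
{
}

struct Impl;
impl Trait for Impl {
    type Assoc = Ty;
}

fn main() {
    f::<Impl>();
}
```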
Allow generic parameters in intra-doc links
Fixes #62834.
---
The contents of the generics will be mostly ignored (except for warning
if fully-qualified syntax is used, which is currently unsupported in
intra-doc links - see issue #74563).
* Allow links like `Vec<T>`, `Result<T, E>`, and `Option<Box<T>>`
* Allow links like `Vec::<T>::new()`
* Warn on
* Unbalanced angle brackets (e.g. `Vec<T` or `Vec<T>>`)
* Missing type to apply generics to (`<T>` or `<Box<T>>`)
* Use of fully-qualified syntax (`<Vec as IntoIterator>::into_iter`)
* Invalid path separator (`Vec:<T>:new`)
* Too many angle brackets (`Vec<<T>>`)
* Empty angle brackets (`Vec<>`)
Note that this implementation *does* allow some constructs that aren't
valid in the actual Rust syntax, for example `Box::<T>new()`. That may
not be supported in rustdoc in the future; it is an implementation
detail.
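For example, doc comments like the following (hypothetical item) now resolve; the generic arguments themselves are ignored during resolution:
```rust
/// Collects into a [`Vec<T>`], may return a [`Result<T, E>`] or an
/// [`Option<Box<T>>`], and calls [`Vec::<T>::new()`] internally.
pub fn example() {}
```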
Some query docs were not formatted correctly, so rustdoc was interpreting
parts of them as code. Also cleaned up some other query docs that weren't
causing issues but were formatted incorrectly.
Add TraitDef::find_map_relevant_impl
This PR adds a method to `TraitDef`. While `for_each_relevant_impl` covers the general use case, sometimes it's not necessary to scan through all the relevant implementations, so this PR introduces a new method, `find_map_relevant_impl`. I've also replaced the `for_each_relevant_impl` calls where possible.
I'm hoping for a tiny bit of efficiency gain here and there.
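A plain-iterator sketch of the behavioral difference (not the actual `TraitDef` API): a `find_map`-style search can stop at the first hit instead of visiting every relevant impl:
```rust
// Stand-in for "find the first relevant impl that satisfies some predicate".
fn find_first_even(impl_ids: &[u32]) -> Option<u32> {
    impl_ids
        .iter()
        .copied()
        .find_map(|id| (id % 2 == 0).then(|| id)) // stops as soon as it finds one
}

fn main() {
    assert_eq!(find_first_even(&[1, 3, 4, 5, 6]), Some(4));
}
```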
Cleanup of `eat_while()` in lexer
The size of a lexer `Token` was inflated by the largest `TokenKind` variants, `LiteralKind::RawStr` and `RawByteStr`, because
* they used `usize` although `u32` is sufficient in rustc, since crates must be smaller than 4GB,
* and they stored the 20-byte `RawStrError` enum for error reporting.
If a raw string is invalid, it now needs to be reparsed to get the `RawStrError` data, but that is a very cold code path.
Technically this breaks other tools that depend on rustc_lexer because they are now also restricted to a max file size of 4GB. But this shouldn't matter in practice, and rustc_lexer isn't stable anyway.
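A self-contained illustration of the effect, using stand-in types rather than `rustc_lexer`'s real ones: an enum is as wide as its widest variant, so every token paid for the raw-string error payload.
```rust
use std::mem::size_of;

#[allow(dead_code)]
enum BigKind {
    RawStr { n_hashes: usize, err: Option<[u8; 20]> }, // stand-in for the 20-byte RawStrError
    Ident,
}

#[allow(dead_code)]
enum SmallKind {
    RawStr { n_hashes: u32 }, // error details recomputed by reparsing on the cold path
    Ident,
}

fn main() {
    println!(
        "before: {} bytes, after: {} bytes",
        size_of::<BigKind>(),
        size_of::<SmallKind>()
    );
}
```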
Can I also get a perf run?
Edit: This makes no difference in performance. The PR now only contains a small cleanup.
Add asm! support for mips64
- [x] Updated `src/doc/unstable-book/src/library-features/asm.md`.
- [ ] No vector type support. I don't know much about those types.
cc #76839
rustc_target: Refactor away `TargetResult`
Follow-up to https://github.com/rust-lang/rust/pull/77202.
Construction of a built-in target is always infallible now, so `TargetResult` is no longer necessary.
The second commit contains some further cleanup based on built-in target construction being infallible.
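A simplified before/after sketch of the shape of the change (assumed signatures, not the exact upstream code):
```rust
struct Target {
    llvm_target: String,
}

// Before: every built-in target constructor was fallible, returning something
// like `Result<Target, String>` (aliased as `TargetResult`).
// fn x86_64_unknown_linux_gnu() -> TargetResult { ... }

// After: construction cannot fail, so constructors return the target directly.
fn x86_64_unknown_linux_gnu() -> Target {
    Target { llvm_target: "x86_64-unknown-linux-gnu".into() }
}

fn main() {
    let t = x86_64_unknown_linux_gnu();
    assert_eq!(t.llvm_target, "x86_64-unknown-linux-gnu");
}
```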
Implementation of RFC2867
https://github.com/rust-lang/rust/issues/74727
So I've started work on this. I think my next step is to make use of the `instruction_set` value in the LLVM codegen, but this is the point where I begin to get a bit lost. I'm looking at the code, but it would be nice to have some guidance on what I've done so far and what I'm doing next 😄
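For reference, an illustration of the attribute that RFC 2867 adds (this is not code from the PR; it assumes a nightly compiler, the `isa_attribute` feature gate, and an ARM target that supports both instruction sets):
```rust
#![feature(isa_attribute)]

// Codegen this function using the full A32 (ARM) instruction set...
#[instruction_set(arm::a32)]
pub fn in_arm_mode() {}

// ...and this one using the T32 (Thumb) instruction set.
#[instruction_set(arm::t32)]
pub fn in_thumb_mode() {}
```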
`RunCompiler::new` takes the non-optional params, and optional
params can be set using the set_*field_name* methods.
Finally, `run` forwards all fields to `run_compiler`.
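A generic, self-contained sketch of that pattern with hypothetical fields (`at_args`, `emitter`), not the real `rustc_driver` types:
```rust
struct RunCompiler<'a> {
    at_args: &'a [String],
    emitter: Option<Box<dyn std::io::Write>>,
}

impl<'a> RunCompiler<'a> {
    // Required arguments go through the constructor.
    fn new(at_args: &'a [String]) -> Self {
        Self { at_args, emitter: None }
    }

    // Optional arguments get a set_* method each.
    fn set_emitter(&mut self, emitter: Box<dyn std::io::Write>) -> &mut Self {
        self.emitter = Some(emitter);
        self
    }

    // `run` forwards every field to the worker function.
    fn run(self) -> Result<(), ()> {
        run_compiler(self.at_args, self.emitter)
    }
}

fn run_compiler(_at_args: &[String], _emitter: Option<Box<dyn std::io::Write>>) -> Result<(), ()> {
    Ok(()) // stand-in for the real worker
}

fn main() {
    let args = vec!["rustc".to_string(), "--version".to_string()];
    let mut compiler = RunCompiler::new(&args);
    compiler.set_emitter(Box::new(std::io::sink()));
    compiler.run().unwrap();
}
```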
Upgrade to tracing-subscriber 0.2.13
The primary motivation is to get the changes from
https://github.com/tokio-rs/tracing/pull/990. Example output:
```
$ RUSTDOC_LOG=debug rustdoc +rustc2
warning: some trace filter directives would enable traces that are disabled statically
| `debug` would enable the DEBUG level for all targets
= note: the static max level is `info`
= help: to enable DEBUG logging, remove the `max_level_info` feature
```
r? `@Mark-Simulacrum`
cc `@hawkw` ❤️
Use `pretty::create_dump_file` for dumping dataflow results
The old code wasn't incorporating promoteds into the path, meaning other `dot` files could get clobbered. Use the MIR dump infrastructure to generate paths so that this doesn't occur in the future.
Detect blocks that could be struct expr bodies
This approach lives exclusively in the parser, so struct expr bodies
that are syntactically correct on their own but are otherwise incorrect
will still emit confusing errors, like in the following case:
```rust
fn foo() -> Foo {
bar: Vec::new()
}
```
```
error[E0425]: cannot find value `bar` in this scope
--> src/file.rs:5:5
|
5 | bar: Vec::new()
| ^^^ expecting a type here because of type ascription
error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
--> src/file.rs:5:15
|
5 | bar: Vec::new()
| ^^^^^ only `Fn` traits may use parentheses
error[E0107]: wrong number of type arguments: expected 1, found 0
--> src/file.rs:5:10
|
5 | bar: Vec::new()
| ^^^^^^^^^^ expected 1 type argument
```
If that field had a trailing comma, that would be a parse error and it
would trigger the new, more targeted error:
```
error: struct literal body without path
--> file.rs:4:17
|
4 | fn foo() -> Foo {
| _________________^
5 | | bar: Vec::new(),
6 | | }
| |_^
|
help: you might have forgotten to add the struct literal inside the block
|
4 | fn foo() -> Foo { Path {
5 | bar: Vec::new(),
6 | } }
|
```
Partially address the last remaining part of #34255.
perf: UninhabitedEnumBranching avoid n^2
Avoid n² complexity. This showed up in a profile of `match-stress-enum`, which has 8192 variants.
I have only profiled locally against `match-stress-enum`, so we should have it perf tested to make sure it does not regress other crates.
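As a generic illustration of the quadratic pattern being avoided (not the pass's actual code), compare a repeated linear scan with a set lookup:
```rust
use std::collections::HashSet;

// Checking each of n variants against a slice of n allowed values is O(n^2).
fn reachable_quadratic(variants: &[u32], allowed: &[u32]) -> Vec<u32> {
    variants.iter().copied().filter(|v| allowed.contains(v)).collect()
}

// Building a HashSet once makes each membership check O(1) on average.
fn reachable_linear(variants: &[u32], allowed: &[u32]) -> Vec<u32> {
    let allowed: HashSet<u32> = allowed.iter().copied().collect();
    variants.iter().copied().filter(|v| allowed.contains(v)).collect()
}

fn main() {
    let variants: Vec<u32> = (0..8192).collect();
    assert_eq!(
        reachable_quadratic(&variants, &variants),
        reachable_linear(&variants, &variants)
    );
}
```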
- Remove useless test
This was testing for an ICE when passing `RUST_LOG=rustc_middle`. I
noticed it because it started emitting the tracing warning shown above
(tests are not run with debug-logging enabled). Since this bug seems
unlikely to recur, I just removed the test altogether.