Ignore unwinding edges when checking for unconditional recursion
The unconditional recursion lint determines if all execution paths
eventually lead to a self-recursive call.
The implementation always follows unwinding edges which limits its
practical utility. For example, it would not lint function `f` because a
call to `g` might unwind. It also wouldn't lint function `h` because an
overflow check preceding the self-recursive call might unwind:
```rust
pub fn f() {
    g();
    f();
}
pub fn g() { /* ... */ }
pub fn h(a: usize) {
    h(a + 1);
}
```
To avoid the issue, assume that terminators that might continue
execution along non-unwinding edges do so.
Fixes #78474.
Fix the unsoundness in the `early_otherwise_branch` mir opt pass
Closes #78496.
This change is a significant rewrite of much of the pass. Exactly what it does is documented in the source file (with ascii art!), and all the changes that are made to the MIR that are not trivially sound are carefully documented. That being said, this is my first time touching MIR, so there are probably some invariants I did not know about that I broke.
This version of the optimization is also somewhat more flexible than the original; for example, we do not care how or where the value on which the parent is switching is computed. There is no requirement that any types be the same. This could be made even more flexible in the future by allowing a wider range of statements in the bodies of `BBC, BBD` (as long as they are all the same of course). This should be a good first step though.
Probably needs a perf run.
r? `@oli-obk` who reviewed things the last time this was touched
Incorporate distance limit from `find_best_match_for_name` directly into
Levenshtein distance computation.
Use the string size difference as a lower bound on the distance and exit
early when it exceeds the specified limit.
After finding a candidate within a limit, lower the limit further to
restrict the search space.
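A rough sketch of the bounded computation (illustrative only; names and details are made up, not the actual `find_best_match_for_name` code):
```rust
/// Levenshtein distance that gives up as soon as the result is
/// guaranteed to exceed `limit`. Returns `None` when over the limit.
fn lev_distance_with_limit(a: &str, b: &str, limit: usize) -> Option<usize> {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    // Lower bound: the distance is at least the length difference.
    if a.len().abs_diff(b.len()) > limit {
        return None;
    }
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = usize::from(ca != cb);
            cur.push((prev[j] + cost).min(prev[j + 1] + 1).min(cur[j] + 1));
        }
        // Early exit: once every entry of a row exceeds the limit,
        // the final distance must exceed it as well.
        if cur.iter().all(|&d| d > limit) {
            return None;
        }
        prev = cur;
    }
    (prev[b.len()] <= limit).then(|| prev[b.len()])
}
```
A caller can then shrink `limit` to the best distance seen so far after each candidate, which is the second optimization described above.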
rustdoc: Pre-calculate traits that are in scope for doc links
This eliminates one more late use of resolver (part of #83761).
At early doc link resolution time we go through parent modules of items from the current crate, reexports of items from other crates, trait items, and impl items collected by the `collect-intra-doc-links` pass, determine traits that are in scope in each such module, and put those traits into a map used by later rustdoc passes.
r? `@jyn514`
Fix ICE when parsing bad turbofish with lifetime argument
Generalize conditions where we suggest adding the turbofish operator, so we don't ICE during code like
```rust
fn foo() {
    A<'a,>
}
```
but instead suggest adding a turbofish.
Fixes #93282
Remove deduplication of early lints
We already have a general mechanism for deduplicating reported
lints, so there's no need to have an additional one for early lints
specifically. This allows us to remove some `PartialEq` impls.
Store a `Symbol` instead of an `Ident` in `AssocItem`
This is the same idea as #92533, but for `AssocItem` instead
of `VariantDef`/`FieldDef`.
With this change, we no longer have any uses of
`#[stable_hasher(project(...))]`
Remove ordering traits from `OutlivesConstraint`
In two cases where this ordering was used, I've replaced the sorting to use a key that does not rely on `DefId` being `Ord`. This is part of #90317. If I understand correctly, whether this is correct depends on whether the `RegionVid`s are tracked during incremental compilation. But I might be mistaken in this approach. cc `@cjgillot`
Use error-on-mismatch policy for PAuth module flags.
This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.
This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.
----
I believe that this fixes #92885, but have only reproduced it locally on Linux hosts so cannot confirm that it fixes the issue as reported.
I have not included a test for this because it is covered by an existing test (`src/test/run-make-fulldeps/cross-lang-lto-clang`). It is not without its problems, though:
* The test requires Clang and `--run-clang-based-tests-with=...` to run, and this is not the case on the CI.
* Any test I add would have a similar requirement.
* With this patch applied, the test gets further, but it still fails (for other reasons). I don't think that affects #92885.
Work around missing code coverage data causing llvm-cov failures
If we do not add code coverage instrumentation to the `Body` of a
function, then when we go to generate the function record for it, we
won't write any data and this later causes llvm-cov to fail when
processing data for the entire coverage report.
I've identified two main cases where we do not currently add code
coverage instrumentation to the `Body` of a function:
1. If the function has a single `BasicBlock` and it ends with a
`TerminatorKind::Unreachable`.
2. If the function is created using a proc macro of some kind.
For case 1, this is typically not important as this most often occurs as
a result of function definitions that take or return uninhabited
types. These kinds of functions, by definition, cannot even be called so
they logically should not be counted in code coverage statistics.
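A hypothetical example of case 1 (not taken from the affected code):
```rust
enum Never {} // an uninhabited type

// This function can never actually be called, since no `Never` value can
// be constructed; its MIR is a single basic block ending in
// `TerminatorKind::Unreachable`, so it receives no coverage counters.
fn consume(x: Never) -> Never {
    match x {}
}
```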
For case 2, I haven't looked into this very much but I've noticed while
testing this patch that (other than functions which are covered by case
1) the skipped function coverage debug message is occasionally triggered
in large crate graphs by functions generated from a proc macro. This may
have something to do with weird spans being generated by the proc macro
but this is just a guess.
I think it's reasonable to land this change since currently, we fail to
generate *any* results from llvm-cov when a function has no coverage
instrumentation applied to it. With this change, we get coverage data
for all functions other than the two cases discussed above.
Fixes #93054, which occurs because of uncallable functions which shouldn't
have code coverage anyway.
I will open an issue for missing code coverage of proc macro generated
functions and leave a link here once I have a more minimal repro.
r? ``@tmandry``
cc ``@richkadel``
Move param count error emission to end of `check_argument_types`
The error emission here isn't exactly what is done in #92364, but replicating that is hard. The general move should make for a smaller diff.
Also included the `(usize, Ty, Ty)` -> `Option<(Ty, Ty)>` commit.
r? ``@estebank``
Properly track `DepNode`s in trait evaluation provisional cache
Fixes #92987
During evaluation of an auto trait predicate, we may encounter a cycle.
This causes us to store the evaluation result in a special 'provisional
cache'. If we later end up determining that the type can legitimately
implement the auto trait despite the cycle, we remove the entry from
the provisional cache, and insert it into the evaluation cache.
Additionally, trait evaluation creates a special anonymous `DepNode`.
All queries invoked during the predicate evaluation are added as
outgoing dependency edges from the `DepNode`. This `DepNode` is then
stored in the evaluation cache - if a different query ends up reading
from the cache entry, it will also perform a read of the stored
`DepNode`. As a result, the cached evaluation will still end up
(transitively) incurring all of the same dependencies that it would
if it actually performed the uncached evaluation (e.g. a call to
`type_of` to determine constituent types).
Previously, we did not correctly handle the interaction between the
provisional cache and the created `DepNode`. Storing an evaluation
result in the provisional cache would cause us to lose the `DepNode`
created during the evaluation. If we later moved the entry from the
provisional cache to the evaluation cache, we would use the `DepNode`
associated with the evaluation that caused us to 'complete' the cycle,
not the evaluation where we first discovered the cycle. As a result,
future reads from the evaluation cache would miss some incremental
compilation dependencies that would have otherwise been added if the
evaluation was *not* cached.
Under the right circumstances, this could lead to us trying to force
a query with a no-longer-existing `DefPathHash`, since we were missing
the (red) dependency edge that would have caused us to bail out before
attempting forcing.
This commit makes the provisional cache store the `DepNode` created
during the provisional evaluation. When we move an entry from the
provisional cache to the evaluation cache, we create a *new* `DepNode`
that has dependencies going to *both* of the evaluation `DepNodes` we
have available. This ensures that cached reads will incur all of
the necessary dependency edges.
The previous PR, #93165, still performed the drop range analysis
despite ignoring the results. Unfortunately, there were ICEs in
the analysis as well, so some packages failed to build (see the
issue #93197 for an example). This change further disables the
analysis and just provides dummy results in that case.
This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.
This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.
Revert "Do not hash leading zero bytes of i64 numbers in Sip128 hasher"
Reverts rust-lang/rust#92103. It had a (in retrospect, obvious) correctness problem where changing the order of two adjacent values would produce identical hashes, which is problematic in stable hashing (see [this comment](https://github.com/rust-lang/rust/pull/92103#issuecomment-1014625442)).
I'll try to send the PR again with a fix for this issue.
r? `@the8472`
Check `const Drop` impls considering `~const` Bounds
This PR adds logic to trait selection to account for `~const` bounds in custom `impl const Drop` for types, elaborates the `const Drop` check in `rustc_const_eval` to check those bounds, and steals some drop linting fixes from #92922, thanks `@DrMeepster.`
r? `@fee1-dead` `@oli-obk` <sup>(edit: guess I can't request review from two people, lol)</sup>
since each of you wrote and reviewed #88558, respectively.
Since the logic here is more complicated than what existed, it's possible that this is a perf regression. But it works correctly with tests, and that makes me happy.
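A rough illustration of the kind of code this affects, using the unstable syntax from around the time of this change (the feature has evolved since, so treat this as a sketch rather than current syntax):
```rust
#![feature(const_trait_impl)]

// A value of type `T` can only be dropped in const context if its drop
// glue is itself const; the `~const Drop` bound expresses exactly that.
const fn const_drop<T: ~const Drop>(_: T) {}

// A custom `const Drop` impl whose own `~const` bound trait selection
// now takes into account when checking calls like `const_drop`.
struct Wrapper<T: ~const Drop>(T);

impl<T: ~const Drop> const Drop for Wrapper<T> {
    fn drop(&mut self) {}
}
```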
Fixes #92881
We already have a general mechanism for deduplicating reported
lints, so there's no need to have an additional one for early lints
specifically. This allows us to remove some `PartialEq` impls.
rustc_mir_itertools: Avoid needless `collect` with itertools
I don't think this should have measurable perf impact (at least not on perf.rlo benchmarks), it's mostly for readability.
Liberate late bound regions when collecting GAT substs in wfcheck
The issue here is that the [`GATSubstCollector`](https://github.com/rust-lang/rust/blob/master/compiler/rustc_typeck/src/check/wfcheck.rs#L604) does not currently do anything wrt `Binder`s, so the GAT substs it copies out have escaping late bound regions when it walks through types like `for<'x> fn() -> Self::Gat<'x>`.
I made that visitor call `liberate_late_bound_regions`, not sure if that's the right thing here or we need to do something else to replace these bound vars with placeholders. I'm not familiar with other code doing anything similar. But the issue is indeed no longer ICEing.
Fixes #92954
r? `@jackh726`
since you last touched this code, feel free to reassign
Normalize field access types during borrowck
I think a normalize was just left out here, since we normalize analogously throughout this file.
Fixes #93141
Add preliminary support for inline assembly for msp430.
The `llvm_asm` macro was removed recently, and the MSP430 backend relies on inline assembly to build useful embedded apps. I conveniently "found" time to implement basic support for the new inline `asm` macro syntax with the help of `@Amanieu` :D.
In addition to tests in the compiler, I have tested this locally against deployed MSP430 code and have not found any noticeable differences in firmware operation or `objdump` disassemblies between the old `llvm_asm` and the new `asm` syntax.
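A minimal sketch of what this enables (illustrative; inline assembly for MSP430 sits behind the `asm_experimental_arch` gate, and the exact feature gates at the time of this change may differ):
```rust
#![feature(asm_experimental_arch)]

use core::arch::asm;

/// Emit a single `nop` instruction on MSP430.
fn nop() {
    // `nomem`/`nostack`: the instruction touches neither memory nor the
    // stack, which lets the compiler optimize around it.
    unsafe { asm!("nop", options(nomem, nostack, preserves_flags)) };
}
```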
rustc_lint: Some early linting refactorings
The first one removes and renames some fields and methods from `EarlyContext`.
The second one uses the set of registered tools (for tool attributes and tool lints) in a more centralized way.
The third one removes creation of a fake `ast::Crate` from `fn pre_expansion_lint`.
Pre-expansion linting is done with per-module granularity on freshly loaded modules; it previously synthesized an `ast::Crate` to visit non-root modules, but now they are visited as modules.
The node ID used for pre-expansion linting is also made more precise (the loaded module ID is used).
Make `Decodable` and `Decoder` infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
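The shape of the change, roughly (simplified, made-up trait names rather than the real `Decoder` definition):
```rust
// Before: every read is fallible and callers must thread errors through.
trait FallibleDecoder {
    type Error;
    fn read_u32(&mut self) -> Result<u32, Self::Error>;
}

// After: reads are infallible; malformed input panics at the point of
// failure instead of bubbling an error up to a caller that would abort anyway.
trait InfallibleDecoder {
    fn read_u32(&mut self) -> u32;
}
```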
Much of this PR is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
r? `@bjorn3`
Use consistent function parameter order for early context construction and early linting
Rename some functions to make it clear that they do not necessarily work on the whole crate
Disable drop range tracking in generators
Generator drop tracking caused an ICE for generators involving the Never type (Issue #93161). Since this breaks a test case with miri, we temporarily disable drop tracking so miri is unblocked while we properly fix the issue.
Previously, when rustc was provided an async function that tried to
mutate through a shared reference to an implicit self (as shown in the
ui test), rustc would suggest modifying the parameter signature
to `&mut` + the fully qualified name of the ty (in the case of the repro
`S`). If a user modified their code to match the suggestion, the
compiler would not accept it.
This commit modifies the suggestion so that when rustc is provided the
ui test that is also attached in this commit, it suggests (correctly)
`&mut self`. We try to be careful about distinguishing between implicit
and explicit self annotations, since the latter seem to be handled
correctly already.
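Roughly the shape of code involved (a simplified stand-in for the ui test):
```rust
struct S {
    val: u32,
}

impl S {
    async fn bump(&self) {
        // ERROR: cannot assign to `self.val`, which is behind a `&` reference.
        // The suggestion is now the (correct) `&mut self`, rather than
        // `&mut` plus the fully qualified type as before.
        self.val += 1;
    }
}
```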
Fixes rust-lang/rust#93093
Reject unsupported naked functions
Transition unsupported naked functions future incompatibility lint into an error:
* Naked functions must contain a single inline assembly block (see the example below). Introduced as a future incompatibility lint in 1.50 (#79653). Turning this into an error fixes a soundness issue described in #32489.
* Naked functions must not use any form of the inline attribute. Introduced as a future incompatibility lint in 1.56 (#87652).
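A sketch of a conforming naked function under the new rules (x86_64 shown; written with the feature gates and `asm!` options in use around the time of this change):
```rust
#![feature(naked_functions)]

use std::arch::asm;

// The body is exactly one inline assembly block and nothing else, and
// no `#[inline]` attribute is applied.
#[naked]
pub extern "C" fn return_zero() -> u32 {
    unsafe { asm!("mov eax, 0", "ret", options(noreturn)) }
}
```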
Closes #32490.
Closes #32489.
r? ```@Amanieu``` ```@npmccallum``` ```@joshtriplett```
Print a helpful message if unwinding aborts when it reaches a nounwind function
This is implemented by routing `TerminatorKind::Abort` back through the panic handler, but with a special flag in the `PanicInfo` which indicates that the panic handler should *not* attempt to unwind the stack and should instead abort immediately.
This is useful for the planned change in https://github.com/rust-lang/lang-team/issues/97 which would make `Drop` impls `nounwind` by default.
### Code
```rust
#![feature(c_unwind)]
fn panic() {
    panic!()
}
extern "C" fn nounwind() {
    panic();
}
fn main() {
    nounwind();
}
```
### Before
```
$ ./test
thread 'main' panicked at 'explicit panic', test.rs:4:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Illegal instruction (core dumped)
```
### After
```
$ ./test
thread 'main' panicked at 'explicit panic', test.rs:4:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
thread 'main' panicked at 'panic in a function that cannot unwind', test.rs:7:1
stack backtrace:
0: 0x556f8f86ec9b - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hdccefe11a6ac4396
1: 0x556f8f88ac6c - core::fmt::write::he152b28c41466ebb
2: 0x556f8f85d6e2 - std::io::Write::write_fmt::h0c261480ab86f3d3
3: 0x556f8f8654fa - std::panicking::default_hook::{{closure}}::h5d7346f3ff7f6c1b
4: 0x556f8f86512b - std::panicking::default_hook::hd85803a1376cac7f
5: 0x556f8f865a91 - std::panicking::rust_panic_with_hook::h4dc1c5a3036257ac
6: 0x556f8f86f079 - std::panicking::begin_panic_handler::{{closure}}::hdda1d83c7a9d34d2
7: 0x556f8f86edc4 - std::sys_common::backtrace::__rust_end_short_backtrace::h5b70ed0cce71e95f
8: 0x556f8f865592 - rust_begin_unwind
9: 0x556f8f85a764 - core::panicking::panic_no_unwind::h2606ab3d78c87899
10: 0x556f8f85b910 - test::nounwind::hade6c7ee65050347
11: 0x556f8f85b936 - test::main::hdc6e02cb36343525
12: 0x556f8f85b7e3 - core::ops::function::FnOnce::call_once::h4d02663acfc7597f
13: 0x556f8f85b739 - std::sys_common::backtrace::__rust_begin_short_backtrace::h071d40135adb0101
14: 0x556f8f85c149 - std::rt::lang_start::{{closure}}::h70dbfbf38b685e93
15: 0x556f8f85c791 - std::rt::lang_start_internal::h798f1c0268d525aa
16: 0x556f8f85c131 - std::rt::lang_start::h476a7ee0a0bb663f
17: 0x556f8f85b963 - main
18: 0x7f64c0822b25 - __libc_start_main
19: 0x556f8f85ae8e - _start
20: 0x0 - <unknown>
thread panicked while panicking. aborting.
Aborted (core dumped)
```
add support for the l4-bender linker on the x86_64-unknown-l4re-uclibc tier 3 target
This PR contains the work by ```@humenda``` to update support for the `x86_64-unknown-l4re-uclibc` tier 3 target (published at [humenda/rust](https://github.com/humenda/rust)), rebased and adapted to current rust in follow up commits by myself. The publishing of the rebased changes is authorized and preferred by the original author. As the goal was to distort the original work as little as possible, individual commits introduce changes that are incompatible with the newer code base that the changes were rebased on. These incompatibilities have been remedied in follow up commits, so that the PR as a whole should result in a clean update of the target.
If you prefer another strategy to mainline these changes while preserving attribution, please let me know.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
currently panics on failure (e.g. if the input is too short, or on a
bad `Result` discriminant), and in some places it returns an error
(e.g. on a bad `Option` discriminant). The number of places where
either happens is surprisingly small, just because the binary
representation has very little redundancy and a lot of input reading
can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
`.rlink` file production, and there's a `FIXME` comment suggesting it
should change to a binary format, and (b) in a few tests in
non-fundamental ways. Indeed #85993 is open to remove it entirely.
And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.
Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
optimization for small counts that the impl for `Result<T, E>` has,
because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
`collect`, which is nice; the one for `Vec` uses unsafe code, because
that gave better perf on some benchmarks.
Improve string concatenation suggestion
Before:
```
error[E0369]: cannot add `&str` to `&str`
--> file.rs:2:22
|
2 | let _x = "hello" + " world";
| ------- ^ -------- &str
| | |
| | `+` cannot be used to concatenate two `&str` strings
| &str
|
help: `to_owned()` can be used to create an owned `String` from a string reference. String concatenation appends the string on the right to the string on the left and may require reallocation. This requires ownership of the string on the left
|
2 | let _x = "hello".to_owned() + " world";
| ~~~~~~~~~~~~~~~~~~
```
After:
```
error[E0369]: cannot add `&str` to `&str`
--> file.rs:2:22
|
2 | let _x = "hello" + " world";
| ------- ^ -------- &str
| | |
| | `+` cannot be used to concatenate two `&str` strings
| &str
|
= note: string concatenation requires an owned `String` on the left
help: create an owned `String` from a string reference
|
2 | let _x = "hello".to_owned() + " world";
| +++++++++++
```
Improve error message for key="value" cfg arguments.
Hi, I ran into difficulties using the `--cfg` flag syntax, first hit when googling for the error was issue https://github.com/rust-lang/rust/issues/66450. Reading that issue, it sounded like the best way to improve the experience was to improve the error message, this is low risk and doesn't introduce any additional argument parsing.
The issue mentions that it is entirely dependent on the shell; while this may be true, I think guiding the user toward the realization that the quotes may need to be escaped is helpful. The two suggested escapings both work in Bash and in the Windows command prompt.
fyi `@ehuss`
Ensure that early-bound function lifetimes are always 'local'
During borrowchecking, we treat any free (early-bound) regions on
the 'defining type' as `RegionClassification::External`. According
to the doc comments, we should only have 'external' regions when
checking a closure/generator.
However, a plain function can also have some of its regions
be considered 'early bound' - this occurs when the region is
constrained by an argument, appears in a `where` clause, or
in an opaque type. This was causing us to incorrectly mark these
regions as 'external', which caused some diagnostic code
to act as if we were referring to a 'parent' region from inside
a closure.
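For example (an illustrative function, not taken from the compiler):
```rust
// `'a` is early bound here: it appears in a `where` clause, so it is
// instantiated when the function is named rather than when it is called.
fn first<'a, T>(slice: &'a [T]) -> &'a T
where
    T: 'a,
{
    &slice[0]
}
```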
This PR marks all instantiated region variables as 'local'
when we're borrow-checking something other than a
closure/generator/inline-const.
If we do not add code coverage instrumentation to the `Body` of a
function, then when we go to generate the function record for it, we
won't write any data and this later causes llvm-cov to fail when
processing data for the entire coverage report.
I've identified two main cases where we do not currently add code
coverage instrumentation to the `Body` of a function:
1. If the function has a single `BasicBlock` and it ends with a
`TerminatorKind::Unreachable`.
2. If the function is created using a proc macro of some kind.
For case 1, this is typically not important as this most often occurs as
the result of function definitions that take or return uninhabited
types. These kinds of functions, by definition, cannot even be called so
they logically should not be counted in code coverage statistics.
For case 2, I haven't looked into this very much but I've noticed while
testing this patch that (other than functions which are covered by case
1) the skipped function coverage debug message is occasionally triggered
in large crate graphs by functions generated from a proc macro. This may
have something to do with weird spans being generated by the proc macro
but this is just a guess.
I think it's reasonable to land this change since currently, we fail to
generate *any* results from llvm-cov when a function has no coverage
instrumentation applied to it. With this change, we get coverage data
for all functions other than the two cases discussed above.
Generator drop tracking caused an ICE for generators involving the Never
type (Issue #93161). Since this breaks miri, we temporarily disable drop
tracking so miri is unblocked while we properly fix the issue.
- Fix style errors.
- L4-bender does not yet support dynamic linking.
- Stack unwinding is not yet supported for x86_64-unknown-l4re-uclibc.
For now, just abort on panics.
- Use GNU-style linker options where possible. As suggested by review:
  - Use standard GNU-style ld syntax for relro flags.
  - Use standard GNU-style optimization flags and logic.
  - Use standard GNU-style ld syntax for --subsystem.
- Don't read environment variables in L4Bender linker. Thanks to
CARGO_ENCODED_RUSTFLAGS introduced in #9601, l4-bender's arguments can
now be passed from the L4Re build system without resorting to custom
parsing of environment variables.
Update some rustc dependencies to deduplicate them
This PR updates `rand` and `itertools` in rustc (not the whole workspace) in order to deduplicate them (and hopefully slightly improve compile times).
~~Currently, `object` is still duplicated, but https://github.com/rust-lang/thorin/pull/15 and updating `thorin` in the future will remove the use of version 0.27.~~ Update: Thorin 0.2 has now been released, and this PR updates `rustc_codegen_ssa` to use it and deduplicate the `object` crate.
There's a final tiny rustc dependency, `cfg-if`, which will be left: as both versions 0.1.x and 1.0 looked to be heavily depended on, they will require a few cascading updates to be removed.
I have found this code very confusing at times. This commit clarifies
things.
In particular, the commit explains the requirements that the `Borrow`
impls put on the `Eq` and `Hash` impls, which are non-obvious. And it
puts the `Borrow` impls first, since they force `Eq` and `Hash` to have
particular forms.
The commit also notes `TyS`'s uniqueness requirements.
Rollup of 17 pull requests
Successful merges:
- #91032 (Introduce drop range tracking to generator interior analysis)
- #92856 (Exclude "test" from doc_auto_cfg)
- #92860 (Fix errors on blanket impls by ignoring the children of generated impls)
- #93038 (Fix star handling in block doc comments)
- #93061 (Only suggest adding `!` to expressions that can be macro invocation)
- #93067 (rustdoc mobile: fix scroll offset when jumping to internal id)
- #93086 (Add tests to ensure that `let_chains` works with `if_let_guard`)
- #93087 (Fix src/test/run-make/raw-dylib-alt-calling-convention)
- #93091 (⬆ chalk to 0.76.0)
- #93094 (src/test/rustdoc-json: Check for `struct_field`s in `variant_tuple_struct.rs`)
- #93098 (Show a more informative panic message when `DefPathHash` does not exist)
- #93099 (rustdoc: auto create output directory when "--output-format json")
- #93102 (Pretty printer algorithm revamp step 3)
- #93104 (Support --bless for pp-exact pretty printer tests)
- #93114 (update comment for `ensure_monomorphic_enough`)
- #93128 (Add script to prevent point releases with same number as existing ones)
- #93136 (Backport the 1.58.1 release notes to master)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Transition unsupported naked functions future incompatibility lint into
an error:
* Naked functions must contain a single inline assembly block.
Introduced as future incompatibility lint in 1.50 #79653.
Changing this into an error fixes a soundness issue described in #32489.
* Naked functions must not use any forms of inline attribute.
Introduced as future incompatibility lint in 1.56 #87652.
Pretty printer algorithm revamp step 3
This PR follows #93065 as a third chunk of minor modernizations backported from https://github.com/dtolnay/prettyplease into rustc_ast_pretty.
I've broken this up into atomic commits that hopefully are sensible in isolation. At every commit, the pretty printer is compilable and has runtime behavior that is identical to before and after the PR. None of the refactoring so far changes behavior.
This PR is the last chunk of non-behavior-changing cleanup. After this the **next PR** will begin backporting behavior changes from `prettyplease`, starting with block indentation:
```rust
macro_rules! print_expr {
    ($expr:expr) => {
        println!("{}", stringify!($expr));
    };
}
fn main() {
    print_expr!(Struct { x: 0, y: 0 });
    print_expr!(Structtttttttttttttttttttttttttttttttttttttttttttttttttt { xxxxxxxxx: 0, yyyyyyyyy: 0 });
}
```
Output currently on master (nowhere near modern Rust style):
```console
Struct{x: 0, y: 0,}
Structtttttttttttttttttttttttttttttttttttttttttttttttttt{xxxxxxxxx: 0,
yyyyyyyyy: 0,}
```
After the upcoming PR for block indentation (based on 401d60c042):
```console
Struct { x: 0, y: 0, }
Structtttttttttttttttttttttttttttttttttttttttttttttttttt {
xxxxxxxxx: 0,
yyyyyyyyy: 0,
}
```
And the PR after that, for intelligent trailing commas (based on e2a0297f17):
```console
Struct { x: 0, y: 0 }
Structtttttttttttttttttttttttttttttttttttttttttttttttttt {
xxxxxxxxx: 0,
yyyyyyyyy: 0,
}
```
Show a more informative panic message when `DefPathHash` does not exist
This should hopefully make it easier to debug incremental compilation
bugs like #93096 without affecting performance.
Fix star handling in block doc comments
Fixes #92872.
Some extra explanation about this PR and why https://github.com/rust-lang/rust/pull/92357 created this regression: when we merge doc comment kinds for example in:
```rust
/// he
/**
* hello
*/
#[doc = "boom"]
```
We don't want to remove the empty lines between them. However, to correctly compute the "horizontal trim", we still need it, so instead, I put back a part of the "vertical trim" directly in the "horizontal trim" computation so it doesn't impact the output buffer but allows us to correctly handle the stars.
r? ``@camelid``
Introduce drop range tracking to generator interior analysis
This PR addresses cases such as this one from #57478:
```rust
struct Foo;
impl !Send for Foo {}
let _: impl Send = || {
    let guard = Foo;
    drop(guard);
    yield;
};
```
Previously, the `generator_interior` pass would unnecessarily include the type `Foo` in the generator because it was not aware of the behavior of `drop`. We fix this issue by introducing a drop range analysis that finds portions of the code where a value is guaranteed to be dropped. If a value is dropped at all suspend points, then it is no longer included in the generator type. Note that we are using "dropped" in a generic sense to include any case in which a value has been moved. That is, we do not only look at calls to the `drop` function.
There are several phases to the drop tracking algorithm, and we'll go into more detail below.
1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
2. `DropRangeVisitor` uses consume and borrow information to gather drop and reinitialization events, as well as build a control flow graph.
3. We then propagate drop and reinitialization information through the CFG until we reach a fix point (see `DropRanges::propagate_to_fixpoint`).
4. When recording a type (see `InteriorVisitor::record`), we check the computed drop ranges to see if that value is definitely dropped at the suspend point. If so, we skip including it in the type.
## 1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
We use `ExprUseVisitor` to identify the places where values are consumed. We track both the `hir_id` of the value, and the `hir_id` of the expression that consumes it. For example, in the expression `[Foo]`, the `Foo` is consumed by the array expression, so after the array expression we can consider the `Foo` temporary to be dropped.
In this process, we also collect values that are borrowed. The reason is that the MIR transform for generators conservatively assumes anything borrowed is live across a suspend point (see `rustc_mir_transform::generator::locals_live_across_suspend_points`). We match this behavior here as well.
## 2. Gather drop events, reinitialization events, and control flow graph
After finding the values of interest, we perform a post-order traversal over the HIR tree to find the points where these values are dropped or reinitialized. We use the post-order index of each event because this is how the existing generator interior analysis refers to the position of suspend points and the scopes of variables.
During this traversal, we also record branching and merging information to handle control flow constructs such as `if`, `match`, and `loop`. This is necessary because values may be dropped along some control flow paths but not others.
## 3. Iterate to fixed point
The previous pass found the interesting events and locations, but now we need to find the actual ranges where things are dropped. Upon entry, we have a list of nodes ordered by their position in the post-order traversal. Each node has a set of successors. For each node we additionally keep a bitfield with one bit per potentially consumed value. The bit is set if the value is dropped along all paths entering this node.
To compute the drop information, we first reverse the successor edges to find each node's predecessors. Then we iterate through each node, and for each node we set its dropped value bitfield to the intersection of all incoming dropped value bitfields.
If any bitfield for any node changes, we re-run the propagation loop again.
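In pseudo-Rust, the propagation loop looks roughly like this (a sketch, not the actual `DropRanges::propagate_to_fixpoint` code; plain `Vec<bool>` stands in for the bitfields, and local drop/reinit events are omitted):
```rust
/// `drop_bits[n][v]` == "value `v` is dropped on every path reaching node `n`".
fn propagate_to_fixpoint(drop_bits: &mut Vec<Vec<bool>>, preds: &[Vec<usize>]) {
    let mut changed = true;
    while changed {
        changed = false;
        for node in 0..drop_bits.len() {
            if preds[node].is_empty() {
                continue;
            }
            // Intersect the "dropped" bitfields of all predecessors.
            let mut incoming = vec![true; drop_bits[node].len()];
            for &p in &preds[node] {
                for (bit, dropped) in incoming.iter_mut().zip(&drop_bits[p]) {
                    *bit &= *dropped;
                }
            }
            // Re-run the outer loop if anything changed.
            if incoming != drop_bits[node] {
                drop_bits[node] = incoming;
                changed = true;
            }
        }
    }
}
```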
## 4. Ignore dropped values across suspend points
At this point we have a data structure where we can ask whether a value is guaranteed to be dropped at any post order index for the HIR tree. We use this information in `InteriorVisitor` to check whether a value in question is dropped at a particular suspend point. If it is, we do not include that value's type in the generator type.
Note that we had to augment the region scope tree to include all yields in scope, rather than just the last one as we did before.
r? `@nikomatsakis`
Fix star handling in block doc comments
Fixes #92872.
Some extra explanation about this PR and why https://github.com/rust-lang/rust/pull/92357 created this regression: when we merge doc comment kinds for example in:
```rust
/// he
/**
* hello
*/
#[doc = "boom"]
```
We don't want to remove the empty lines between them. However, to correctly compute the "horizontal trim", we still need it, so instead, I put back a part of the "vertical trim" directly in the "horizontal trim" computation so it doesn't impact the output buffer but allows us to correctly handle the stars.
r? `@camelid`
Change lint message to be stronger for &T -> &mut T transmute
The old message implied that it's only UB if you use the reference to mutate, which (as far as I know) is not true. As in, the following program has UB, and a &T -> &mut T transmute is effectively an `unreachable_unchecked`.
```rust
fn main() {
    #[allow(mutable_transmutes)]
    unsafe {
        let _ = std::mem::transmute::<&i32, &mut i32>(&0);
    }
}
```
In the future, it might be a good idea to use the edition system to make this a hard error, since I don't think it is *ever* defined behaviour? Unless we rule that `&UnsafeCell<i32> -> &mut i32` is fine. (That, and you always could just use `.get()`, so you're not losing anything)
improve `_` constants in item signature handling
removing the "type" from the error messages does slightly worsen the error messages for types, but figuring out whether the placeholder is for a type or a constant and correctly dealing with that seemed fairly difficult to me so I took the easy way out ✨ Imo the error message is still clear enough.
r? `@BoxyUwU` cc `@estebank`
Point at correct argument when async fn output type lifetime disagrees with signature
Fixes most of #74256.
## Problems fixed
This PR fixes a couple of related problems in the error reporting code.
### Highlighting the wrong argument
First, the error reporting code was looking at the desugared return type of an `async fn` to decide which parameter to highlight. For example, a function like
```rust
async fn async_fn(self: &Struct, f: &u32) -> &u32
{ f }
```
desugars to
```rust
async fn async_fn<'a, 'b>(self: &'a Struct, f: &'b u32)
-> impl Future<Output = &'a u32> + 'a + 'b
{ f }
```
Since `f: &'b u32` is returned but the output type is `&'a u32`, the error would occur when checking that `'a: 'b`.
The reporting code would look to see if the "offending" lifetime `'b` was included in the return type, and because the code was looking at the desugared future type, it was included. So it defaulted to reporting that the source of the other lifetime `'a` (the `self` type) was the problem, when it was really the type of `f`. (Note that if it had chosen instead to look at `'a` first, it too would have been included in the output type, and it would have arbitrarily reported the error (correctly this time) on the type of `f`.)
Looking at the actual future type isn't useful for this reason; it captures all input lifetimes. Using the written return type for `async fn` solves this problem and results in less confusing error messages for the user.
This isn't a perfect fix, unfortunately; writing the "manually desugared" form of the above function still results in the wrong parameter being highlighted. Looking at the output type of every `impl Future` return type doesn't feel like a very principled approach, though it might work. The problem would remain for function signatures that look like the desugared one above but use different traits. There may be deeper changes required to pinpoint which part of each type is conflicting.
### Lying about await point capture causing lifetime conflicts
The second issue fixed by this PR is the unnecessary complexity in `try_report_anon_anon_conflict`. It turns out that the root cause I suggested in https://github.com/rust-lang/rust/issues/76547#issuecomment-692863608 wasn't really the root cause. Adding special handling to report that a variable was captured over an await point only made the error messages less correct and pointed to a problem other than the one that actually occurred.
Given the above discussion, it's easy to see why: `async fn`s capture all input lifetimes in their return type, so holding an argument across an await point should never cause a lifetime conflict! Removing the special handling simplified the code and improved the error messages (though they still aren't very good!)
## Future work
* Fix error reporting on the "desugared" form of this code
* Get the `suggest_adding_lifetime_params` suggestion firing on these examples
* cc #42703, I think
r? `@estebank`
Stabilize `-Z print-link-args` as `--print link-args`
We have stable options for adding linker arguments; we should have a
stable option to help debug linker arguments.
Add documentation for the new option. In the documentation, make it clear that
the *exact* format of the output is not a stable guarantee.
Fix variant index / discriminant confusion in uninhabited enum branching
Fix confusion between variant index and variant discriminant. The pass
incorrectly assumed that for `Variants::Single` variant index is the same as
variant discriminant.
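For illustration, the two notions already differ for a single-variant enum with an explicit discriminant:
```rust
// The only variant has variant *index* 0 but *discriminant* 5;
// conflating the two reads the wrong value.
enum Single {
    Only = 5,
}
```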
r? `@wesleywiser`
- Also rename `trivial_const_drop` to match the style of other functions in
the util module.
- Also add a test for `const Drop` that doesn't depend on a `~const`
bound.
- Also comment a bit why we remove the const bound during dropck impl
check.
`PrintStackElem`s with `pbreak = PrintStackBreak::Fits` always carried a
meaningless `offset = 0` value. We can combine the two types `PrintStackElem`
and `PrintStackBreak` into one `PrintFrame` enum that stores the offset only
for `Broken` frames.
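The combined type looks roughly like this (a sketch; `Breaks` is a stand-in for the printer's break-style enum):
```rust
#[derive(Copy, Clone)]
enum Breaks {
    Consistent,
    Inconsistent,
}

// One frame per open block: only `Broken` frames need an offset, so the
// meaningless `offset = 0` previously carried by `Fits` frames disappears.
#[derive(Copy, Clone)]
enum PrintFrame {
    Fits,
    Broken { offset: isize, breaks: Breaks },
}
```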
The pretty printer algorithm involves 2 VecDeques: a ring-buffer of
tokens and a deque of ring-buffer indices. Confusingly, those two deques
were being grown in opposite directions for no good reason. Ring-buffer
pushes would go on the "back" of the ring-buffer (i.e. higher indices)
while scan_stack pushes would go on the "front" (i.e. lower indices).
This commit flips the scan_stack accesses to grow the scan_stack and
ring-buffer in the same direction, where push does the same
operation as a `Vec` push, i.e. inserting on the high-index end.
This is the same idea as #92533, but for `AssocItem` instead
of `VariantDef`/`FieldDef`.
With this change, we no longer have any uses of
`#[stable_hasher(project(...))]`
In two cases where this ordering was used, I've replaced the sorting
to use a key that does not include DefId. I'm not sure this is correct
in terms of our goals from #90317, or otherwise.
Pretty printer algorithm revamp step 2
This PR follows #92923 as a second chunk of modernizations backported from https://github.com/dtolnay/prettyplease into rustc_ast_pretty.
I've broken this up into atomic commits that hopefully are sensible in isolation. At every commit, the pretty printer is compilable and has runtime behavior that is identical to before and after the PR. None of the refactoring so far changes behavior.
The general theme of this chunk of commits is: the logic in the old pretty printer is doing some very basic things (pushing and popping tokens on a ring buffer) but expressed in a too-low-level way that I found makes it quite complicated/subtle to reason about. There are a number of obvious invariants that are "almost true" -- things like `self.left == self.buf.offset` and `self.right == self.buf.offset + self.buf.data.len()` and `self.right_total == self.left_total + self.buf.data.sum()`. The reason these things are "almost true" is the implementation tends to put updating one side of the invariant unreasonably far apart from updating the other side, leaving the invariant broken while unrelated stuff happens in between. The following code from master is an example of this:
e5e2b0be26/compiler/rustc_ast_pretty/src/pp.rs (L314-L317)
In this code the `advance_right` is reserving an entry into which to write a next token on the right side of the ring buffer, the `check_stack` is doing something totally unrelated to the right boundary of the ring buffer, and the `scan_push` is actually writing the token we previously reserved space for. Much of what this PR is doing is rearranging code to shrink the amount of stuff in between when an invariant is broken to when it is restored, until the whole thing can be factored out into one indivisible method call on the RingBuffer type.
The end state of the PR is that we can entirely eliminate `self.left` (because it's now just equal to `self.buf.offset` always) and `self.right` (because it's equal to `self.buf.offset + self.buf.data.len()` always) and the whole `Token::Eof` state which used to be the value of tokens that have been reserved space for but not yet written.
I found without these changes the pretty printer implementation to be hard to reason about and I wasn't able to confidently introduce improvements like trailing commas in `prettyplease` until after this refactor. The logic here is 43 years old at this point (Graydon translated it as directly as possible from the 1979 pretty printing paper) and while there are advantages to following the paper as closely as possible, in `prettyplease` I decided if we're going to adapt the algorithm to work better for Rust syntax, it was worthwhile making it easier to follow than the original.
mangling_v0: Skip extern blocks during mangling
There's no need to include the dummy `Nt` in the symbol name; items in extern blocks belong to their parent modules for all purposes except for inheriting the ABI and attributes.
Follow up to https://github.com/rust-lang/rust/pull/92032
(There's also a drive-by fix to the `rust-demangler` tool's tests, which don't run on CI; I initially attempted using them for testing this PR.)
Remove some unused ordering derivations based on `DefId`
Like #93018, this removes some unused/unneeded ordering derivations as part of ongoing work on #90317. Here, these changes are aimed at making https://github.com/rust-lang/rust/pull/90749 easier to review, test, and merge.
r? `@cjgillot`
Move expr- and item-related pretty printing functions to modules
Currently *compiler/rustc_ast_pretty/src/pprust/state.rs* is 2976 lines on master. The `tidy` limit is 3000, which is blocking #92243.
This PR adds a `mod expr;` and `mod item;` to move logic related to those AST nodes out of the single huge file.
Use iterator instead of recursion in `codegen_place`
This PR fixes the FIXME in `codegen_place` about using an iterator instead of recursion when processing the `projection` field in `mir::PlaceRef`. At the same time, it also reduces rightward drift.
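Schematically, the recursion over the projection list becomes a left-to-right fold (a sketch, not the actual `codegen_place` code):
```rust
// Start from the base place and apply each projection element in order,
// instead of recursing on the shorter prefix and applying the last element.
fn apply_projections<B, E>(
    mut base: B,
    projection: &[E],
    mut apply: impl FnMut(B, &E) -> B,
) -> B {
    for elem in projection {
        base = apply(base, elem);
    }
    base
}
```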
Formally implement let chains
## Let chains
My longest and hardest contribution since #64010.
Thanks to `@Centril` for creating the RFC and special thanks to `@matthewjasper` for helping me since the beginning of this journey. In fact, `@matthewjasper` did much of the complicated MIR stuff so it's true to say that this feature wouldn't be possible without him. Thanks again `@matthewjasper!`
With the changes proposed in this PR, it will be possible to chain let expressions alongside local variable declarations or ordinary conditional expressions. In other words, do much of what the `if_chain` crate already does.
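For example, the following compiles with this feature (nightly only at the time, under `#![feature(let_chains)]`):
```rust
#![feature(let_chains)]

fn sum_if_both(a: Option<i32>, b: Option<i32>) -> Option<i32> {
    // Two `let` bindings and an ordinary boolean condition chained in
    // one `if`, without nesting or the `if_chain` macro.
    if let Some(x) = a
        && let Some(y) = b
        && x + y > 0
    {
        Some(x + y)
    } else {
        None
    }
}
```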
## Other considerations
* `if let guard` and `let ... else` features need special care and should be handled in a following PR.
* Irrefutable patterns are allowed within a let chain context
* ~~Three Clippy lints were already converted to start dogfooding and help detect possible corner cases~~
cc #53667
Fixes #92987
During evaluation of an auto trait predicate, we may encounter a cycle.
This causes us to store the evaluation result in a special 'provisional
cache'. If we later end up determining that the type can legitimately
implement the auto trait despite the cycle, we remove the entry from
the provisional cache, and insert it into the evaluation cache.
Additionally, trait evaluation creates a special anonymous `DepNode`.
All queries invoked during the predicate evaluation are added as
outgoing dependency edges from the `DepNode`. This `DepNode` is then
stored in the evaluation cache - if a different query ends up reading
from the cache entry, it will also perform a read of the stored
`DepNode`. As a result, the cached evaluation will still end up
(transitively) incurring all of the same dependencies that it would
if it actually performed the uncached evaluation (e.g. a call to
`type_of` to determine constituent types).
Previously, we did not correctly handle the interaction between the
provisional cache and the created `DepNode`. Storing an evaluation
result in the provisional cache would cause us to lose the `DepNode`
created during the evaluation. If we later moved the entry from the
provisional cache to the evaluation cache, we would use the `DepNode`
associated with the evaluation that caused us to 'complete' the cycle,
not the evaluation where we first discovered the cycle. As a result,
future reads from the evaluation cache would miss some incremental
compilation dependencies that would have otherwise been added if the
evaluation was *not* cached.
Under the right circumstances, this could lead to us trying to force
a query with a no-longer-existing `DefPathHash`, since we were missing
the (red) dependency edge that would have caused us to bail out before
attempting forcing.
This commit makes the provisional cache store the `DepNode` created
during the provisional evaluation. When we move an entry from the
provisional cache to the evaluation cache, we create a *new* `DepNode`
that has dependencies going to *both* of the evaluation `DepNodes` we
have available. This ensures that cached reads will incur all of
the necessary dependency edges.