Invert infer `error_reporting` mod structure
Parallel change to #127493, which moves `rustc_infer::infer::error_reporting` to `rustc_infer::error_reporting::infer`. After this, we should just be able to merge this into `rustc_trait_selection::error_reporting::infer`, and pull down `TypeErrCtxt` into that crate. 👍
r? lcnr
Deny keyword lifetimes pre-expansion
https://github.com/rust-lang/rust/pull/126452#issuecomment-2179464266
> Secondly, we confirmed that we're OK with moving the validation of keywords in lifetimes to pre-expansion from post-expansion. We similarly consider this a bug fix. While the breakage of the convenience feature of the with_locals crate that relies on this is unfortunate, and we wish we had not overlooked this earlier for that reason, we're fortunate that the breakage is contained to only one crate, and we're going to accept this breakage as the extra complexity we'd need to carry in the compiler to work around this isn't deemed worth it.
T-lang considers it to be a bugfix to deny `'keyword` lifetimes in the parser, rather than during AST validation that only happens post-expansion. This has one breakage: https://github.com/rust-lang/rust/pull/126452#issuecomment-2171654756
This probably should get lang FCP'd just for consistency.
Delegation: support coercion for target expression
(solves https://github.com/rust-lang/rust/issues/118212#issuecomment-2160723092)
The implementation consists of two parts. Firstly, a method call is generated instead of a fully qualified call in AST->HIR lowering if no generic arguments or `QPath` were provided. These restrictions are imposed due to the loss of information after desugaring. For example, in
```rust
trait Trait {
    fn foo(&self) {}
}
reuse <u8 as Trait>::foo;
```
We would like to generate code like this:
```rust
fn foo<u8: Trait>(x: &u8) {
    x.foo(x)
}
```
However, the signature is inherited during HIR analysis, where `u8` was discarded.
Secondly, we probe for the single pre-resolved method.
P.S. In the future, we would like to avoid the restrictions on the callee path via `Self` autoref/autoderef in fully qualified calls, but at the moment it hasn't worked out.
r? `@petrochenkov`
Make sure trait def ids match before zipping args in `note_function_argument_obligation`
Fixes #126416
Fixes #127745
I didn't add both tests because I felt it was unnecessary.
Make parse error suggestions verbose and fix spans
Go over all structured parser suggestions and make them verbose style.
When suggesting to add or remove delimiters, turn them into multiple suggestion parts.
offset_from: always allow pointers to point to the same address
This PR implements the last remaining part of the t-opsem consensus in https://github.com/rust-lang/unsafe-code-guidelines/issues/472: always permit `offset_from` when both pointers have the same address, no matter how they are computed. This is required to achieve *provenance monotonicity*.
Tracking issue: https://github.com/rust-lang/rust/issues/117945
### What is provenance monotonicity and why does it matter?
Provenance monotonicity is the property that adding arbitrary provenance to any no-provenance pointer must never make the program UB. More specifically, in the program state, data in memory is stored as a sequence of [abstract bytes](https://rust-lang.github.io/unsafe-code-guidelines/glossary.html#abstract-byte), where each byte can optionally carry provenance. When a pointer is stored in memory, all of the bytes it is stored in carry that provenance. Provenance monotonicity means: if we take some byte that does not have provenance, and give it some arbitrary provenance, then that cannot change program behavior or introduce UB into a UB-free program.
We care about provenance monotonicity because we want to allow the optimizer to remove provenance-stripping operations. Removing a provenance-stripping operation effectively means the program after the optimization has provenance where the program before the optimization did not -- since the provenance removal does not happen in the optimized program. IOW, the compiler transformation added provenance to previously provenance-free bytes. This is exactly what provenance monotonicity lets us do.
We care about removing provenance-stripping operations because `*ptr = *ptr` is, in general, (likely) a provenance-stripping operation. Specifically, consider `ptr: *mut usize` (or any integer type), and imagine the data at `*ptr` is actually a pointer (i.e., we are type-punning between pointers and integers). Then `*ptr` on the right-hand side evaluates to the data in memory *without* any provenance (because [integers do not have provenance](https://rust-lang.github.io/rfcs/3559-rust-has-provenance.html#integers-do-not-have-provenance)). Storing that back to `*ptr` means that the abstract bytes `ptr` points to are the same as before, except their provenance is now gone. This makes `*ptr = *ptr` a provenance-stripping operation. (Here we assume `*ptr` is fully initialized; if it is not, evaluating `*ptr` to a value is UB, so removing `*ptr = *ptr` is trivially correct.)
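To make the type-punning scenario concrete, here is a minimal sketch (the function name is mine, not from the PR):
```rust
// A minimal sketch of the scenario above: `p` points to storage that currently
// holds a pointer, but we access it as `usize` (type punning). Reading `*p`
// yields an integer value without provenance, and writing it back leaves the
// same abstract bytes in memory minus their provenance.
unsafe fn copy_in_place(p: *mut usize) {
    // SAFETY: the caller must ensure `p` is valid for reads and writes and
    // that `*p` is initialized.
    unsafe { *p = *p }; // a provenance-stripping operation, as described above
}
```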
### What does `offset_from` have to do with provenance monotonicity?
With `ptr = without_provenance(N)`, `ptr.offset_from(ptr)` is always well-defined and returns 0. By provenance monotonicity, I can now add provenance to the two arguments of `offset_from` and it must still be well-defined. Crucially, I can add *different* provenance to the two arguments, and it must still be well-defined. In other words, this must always be allowed: `ptr1.with_addr(N).offset_from(ptr2.with_addr(N))` (and it returns 0). But the current spec for `offset_from` says that the two pointers must either both be derived from an integer or both be derived from the same allocation, which is not in general true for arbitrary `ptr1`, `ptr2`.
To obtain provenance monotonicity, this PR hence changes the spec for `offset_from` to say that if both pointers have the same address, the function is always well-defined.
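A small sketch of the behavior that is now guaranteed, using the strict-provenance `addr`/`with_addr` APIs (the variable names are mine):
```rust
fn main() {
    let a = 1u8;
    let b = 2u8;
    let p1: *const u8 = &a;
    let p2: *const u8 = &b;
    // Give both pointers the same address while keeping their (different)
    // provenance. Under the new spec this is well-defined and returns 0, even
    // though p1 and p2 are derived from different allocations.
    let same_addr = p1.addr();
    let d = unsafe { p1.with_addr(same_addr).offset_from(p2.with_addr(same_addr)) };
    assert_eq!(d, 0);
}
```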
### What further consequences does this have?
It means the compiler can no longer transform `end2 = begin.offset(end.offset_from(begin))` into `end2 = end`. However, it can still be transformed into `end2 = begin.with_addr(end.addr())`, which later parts of the backend (when provenance has been erased) can trivially turn into `end2 = end`.
The only alternative I am aware of is a fundamentally different handling of zero-sized accesses, where a "no provenance" pointer is not allowed to do zero-sized accesses and instead we have a special provenance that indicates "may be used for zero-sized accesses (and nothing else)". `offset` and `offset_from` would then always be UB on a "no provenance" pointer, and permit zero-sized offsets on a "zero-sized provenance" pointer. This achieves provenance monotonicity. That is, however, a breaking change as it contradicts what we landed in https://github.com/rust-lang/rust/pull/117329. It's also a whole bunch of extra UB, which doesn't seem worth it just to achieve that transformation.
### What about the backend?
LLVM currently doesn't have an intrinsic for pointer difference, so we cast to integer and subtract there anyway. That's never UB, so it is compatible with any relaxation we may want to apply.
If LLVM gets a `ptrsub` in the future, then plausibly it will be consistent with `ptradd` and [consider two equal pointers to be inbounds](https://github.com/rust-lang/rust/pull/124921#issuecomment-2205795829).
Add regression test for a gce + effects ICE
Fixes#125770
I'm not *exactly* sure why this stopped ICEing. I assume it's something to do with the fact that there used to be a generic parameter on `Add` for the host generic and we have mismatched args here, which #125608 made no longer cause issues later. But now the desugaring is also different, so? 🤷♀️
r? `@fee1-dead`
Gate the type length limit check behind a nightly flag
Effectively disables the type length limit by introducing a `-Zenforce-type-length-limit` flag which defaults to **`false`**, since actually enforcing the length limit ended up having worse fallout than expected. We still keep the code around, but the type length limit attribute is now a no-op (except for its usage in some diagnostics code?).
r? `@lcnr` -- up to you to decide what team consensus we need here since this reverses an FCP decision.
Reopens #125460 (if we decide to reopen it or keep it closed)
Effectively reverses the decision FCP'd in #125507.
Closes #127346
Only track mentioned places for jump threading
This PR aims to reduce the state space size in jump threading and dataflow const-prop opts.
The current implementation walks the types of all locals, and creates a place for each possible projection. This can easily lead to a large number of places and tracked values, most being useless to the actual pass.
With this PR, we instead collect places that appear syntactically in the MIR (first commit). However, this is not sufficient (second commit), and we miss places that we could track in aggregate assignments. The third commit tracks such assignments to mirror place projections, see the inline comment.
This is complementary to https://github.com/rust-lang/rust/pull/127036
r? `@oli-obk`
Add FileCheck annotations to mir-opt/dest-prop tests
Part of https://github.com/rust-lang/rust/issues/116971, adds FileCheck annotations to MIR-opt tests in tests/mir-opt/dest-prop.
I would like some feedback. Also, I don't know how to approach `union.rs`. I couldn't figure out what it is testing.
r? cjgillot
fix interleaved output in the default panic hook when multiple threads panic simultaneously
previously, we only held a lock for printing the backtrace itself. since all threads were printing to the same file descriptor, that meant random output in the default panic hook from one thread would be interleaved with the backtrace from another. now, we hold the lock for the full duration of the hook, and the output is ordered.
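as a minimal sketch of the problem being fixed (thread count arbitrary), a program that makes the interleaving easy to reproduce:
```rust
use std::thread;

fn main() {
    // several threads panic at roughly the same time; before this change their
    // panic messages and backtraces could interleave on stderr, now each
    // thread's output is printed as one contiguous block.
    let handles: Vec<_> = (0..4)
        .map(|i| thread::spawn(move || panic!("panic from thread {i}")))
        .collect();
    for h in handles {
        let _ = h.join();
    }
}
```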
---
i noticed some odd things while working on this you may or may not already be aware of.
- libbacktrace is included as a submodule instead of a normal rustc crate, and as a result uses `cfg(backtrace_in_std)` instead of a more normal `cfg(feature = "rustc-dep-of-std")`. probably this is left over from before rust used a cargo-based build system?
- the default panic handler uses `trace_unsynchronized`, etc, in `sys::backtrace::print`. as a result, the lock only applies to concurrent *panic handlers*, not concurrent *threads*. in other words, if another, non-panicking, thread tried to print a backtrace at the same time as the panic handler, we may have UB, especially on windows.
- we have the option of changing backtrace to enable locking when `backtrace_in_std` is set so we can reuse their lock instead of trying to add our own.
Rollup of 11 pull requests
Successful merges:
- #126502 (Ignore allocation bytes in some mir-opt tests)
- #126922 (add lint for inline asm labels that look like binary)
- #127209 (Added the `xop` target-feature and the `xop_target_feature` feature gate)
- #127310 (Fix import suggestion ice)
- #127338 (Migrate `extra-filename-with-temp-outputs` and `issue-85019-moved-src-dir` `run-make` tests to rmake)
- #127381 (Migrate `issue-83045`, `rustc-macro-dep-files` and `env-dep-info` `run-make` tests to rmake)
- #127535 (Fire unsafe_code lint on unsafe extern blocks)
- #127619 (Suggest using precise capturing for hidden type that captures region)
- #127631 (Remove `fully_normalize`)
- #127632 (Implement `precise_capturing` support for rustdoc)
- #127660 (Rename the internal `const_strlen` to just `strlen`)
r? `@ghost`
`@rustbot` modify labels: rollup
Implement `precise_capturing` support for rustdoc
Implements rustdoc (+json) support for local (i.e. non-cross-crate-inlined) RPITs with `use<...>` precise capturing syntax.
Tests kinda suck. They're really hard to write 😰
r? `@fmease` or re-roll if you're too busy!
also cc `@aDotInTheVoid` for the json side
Tracking:
* https://github.com/rust-lang/rust/issues/127228#issuecomment-2201443216 (not fully fixed for cross-crate-inlined opaques)
* https://github.com/rust-lang/rust/issues/123432
Suggest using precise capturing for hidden type that captures region
Adjusts the "add `+ '_`" suggestion for opaques to instead suggest adding or reusing the `+ use<>` in the opaque.
r? oli-obk or please re-roll if you're busy!
Migrate `extra-filename-with-temp-outputs` and `issue-85019-moved-src-dir` `run-make` tests to rmake
Part of #121876 and the associated [Google Summer of Code project](https://blog.rust-lang.org/2024/05/01/gsoc-2024-selected-projects.html).
Please try:
try-job: armhf-gnu
// try-job: test-various // already tried
try-job: x86_64-msvc
try-job: aarch64-apple
Fix import suggestion ice
Fixes #127302
#127302 only crashes in edition 2015, and #120074 can only be reproduced in edition 2021, so I added revisions to the test file.
Added the `xop` target-feature and the `xop_target_feature` feature gate
This is an effort towards #127208. This adds the `xop` target feature gated by `xop_target_feature`.
add lint for inline asm labels that look like binary
fixes #94426
Due to a bug/feature in LLVM, labels composed of only the digits `0` and `1` can sometimes be confused with binary literals, even if a binary literal would not be valid in that position.
This PR adds detection for such labels and also, as a drive-by change, adds a note to cases such as `asm!(include_str!("file"))` explaining that the label it found came from a macro expansion rather than directly from the source code.
I expect this PR to upset some people that were using labels `0:` or `1:` without issue because they never hit the case where LLVM got it wrong, but adding a heuristic to the lint to prevent this is not feasible - it would involve writing a whole assembly parser for every target that we have assembly support for.
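For illustration, a minimal sketch (x86_64 assumed; my own example) of the kind of label the lint flags:
```rust
#[cfg(target_arch = "x86_64")]
fn jump_forward() {
    use std::arch::asm;
    unsafe {
        asm!(
            // `1:` consists only of the digits 0 and 1, so a reference like
            // `1f` can be misread by LLVM as a binary literal in some
            // positions; this is the pattern the new lint warns about.
            "jmp 1f",
            "1:",
        );
    }
}
```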
[zulip discussion](https://rust-lang.zulipchat.com/#narrow/stream/238009-t-compiler.2Fmeetings/topic/.5Bweekly.5D.202024-06-20/near/445870628)
r? `@estebank`
Ignore allocation bytes in some mir-opt tests
This adds `rustc -Zdump-mir-exclude-alloc-bytes` to skip writing allocation bytes in MIR dumps, and applies it to tests that were failing on s390x due to its big-endian byte order.
Fixes#126261
Ensure floats are returned losslessly by the Rust ABI on 32-bit x86
Solves #115567 for the (default) `"Rust"` ABI. When compiling for 32-bit x86, this PR changes the `"Rust"` ABI to return floats indirectly instead of in x87 registers (with the exception of single `f32`s, which this PR returns in general purpose registers as they are small enough to fit in one). No change is made to the `"C"` ABI as that ABI requires x87 register usage and therefore will need a different solution.