Fix error span if arg to `asm!()` is a macro call
Fixes #129503
When the argument to `asm!()` is a macro call, e.g. `asm!(concat!("abc", "{} pqr"))`, and there's an error in the resulting template string, we do not take the macro call into account when computing the error span. This PR fixes that: we now use everything between the parentheses of `asm!()` as the error span in this situation, e.g. for `asm!(concat!("abc", "{} pqr"))` the error span is `concat!("abc", "{} pqr")`.
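A minimal sketch of the triggering scenario (the expanded template contains a `{}` placeholder with no operands supplied, so rustc rejects it; the exact diagnostic wording is assumed, not quoted from the test suite):
```rust
use std::arch::asm;

fn main() {
    unsafe {
        // Expected to fail to compile: the expanded template "abc{} pqr"
        // references an argument that was never passed. With this fix, the
        // error span covers the whole `concat!("abc", "{} pqr")` argument
        // rather than a misplaced sub-span.
        asm!(concat!("abc", "{} pqr"));
    }
}
```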
Use `&raw` in the compiler
Like #130865 did for the standard library, we can use `&raw` in the
compiler now that stage0 supports it. As in that PR, I did not make any
doc or test changes at this time.
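For illustration, a minimal sketch of the change pattern, using the classic packed-field case where going through a reference would be UB:
```rust
use std::ptr;

#[repr(packed)]
struct Packed {
    field: u32,
}

fn main() {
    let p = Packed { field: 7 };
    // Before: the macro-based spelling.
    let old: *const u32 = ptr::addr_of!(p.field);
    // After: the dedicated syntax, now usable in-tree since stage0 supports it.
    let new: *const u32 = &raw const p.field;
    assert_eq!(old, new);
}
```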
Move Apple linker args from `rustc_target` to `rustc_codegen_ssa`
They depend on the deployment target and SDK version, but having them in `rustc_target` makes it hard to introduce that dependency. This is part of the work needed for https://github.com/rust-lang/rust/issues/118204; see https://github.com/rust-lang/rust/pull/129342 for some discussion.
Tested using:
```console
./x test tests/run-make/apple-deployment-target --target="aarch64-apple-darwin,aarch64-apple-ios,aarch64-apple-ios-macabi,aarch64-apple-ios-sim,aarch64-apple-tvos,aarch64-apple-tvos-sim,aarch64-apple-visionos,aarch64-apple-visionos-sim,aarch64-apple-watchos,aarch64-apple-watchos-sim,arm64_32-apple-watchos,armv7k-apple-watchos,armv7s-apple-ios,x86_64-apple-darwin,x86_64-apple-ios,x86_64-apple-ios-macabi,x86_64-apple-tvos,x86_64-apple-watchos-sim,x86_64h-apple-darwin"
IPHONEOS_DEPLOYMENT_TARGET=10.0 ./x test tests/run-make/apple-deployment-target --target=i386-apple-ios
```
`arm64e-apple-darwin` and `arm64e-apple-ios` have not been tested (see https://github.com/rust-lang/rust/issues/130085), and neither has `i686-apple-darwin`, since that requires an x86_64 MacBook and I currently can't get mine to work (see https://github.com/rust-lang/rust/issues/130434).
CC `@petrochenkov`
On implicit `Sized` bound on fn argument, point at type instead of pattern
Instead of
```
error[E0277]: the size for values of type `(dyn ThriftService<(), AssocType = _> + 'static)` cannot be known at compilation time
--> $DIR/issue-59324.rs:23:20
|
LL | fn with_factory<H>(factory: dyn ThriftService<()>) {}
| ^^^^^^^ doesn't have a size known at compile-time
```
output
```
error[E0277]: the size for values of type `(dyn ThriftService<(), AssocType = _> + 'static)` cannot be known at compilation time
--> $DIR/issue-59324.rs:23:29
|
LL | fn with_factory<H>(factory: dyn ThriftService<()>) {}
| ^^^^^^^^^^^^^^^^^^^^^ doesn't have a size known at compile-time
```
Pass Module Analysis Manager to Standard Instrumentations
This PR introduces changes related to `llvm::PassInstrumentationCallbacks`. We now pass the Module Analysis Manager to `StandardInstrumentations::registerCallbacks`, so it can take advantage of instrumentations such as the IR verifier and the preserved-CFG checker. This is essentially an NFC PR.
Fix diagnostics for coroutines with () as input.
This may be a more real-life example to trigger the diagnostic:
```rust
#![feature(try_blocks, coroutine_trait, coroutines)]

use std::ops::Coroutine;

struct Request;
struct Response;

fn get_args() -> Result<String, String> { todo!() }
fn build_request(_arg: String) -> Request { todo!() }

fn work() -> impl Coroutine<Option<Response>, Yield = Request> {
    #[coroutine]
    |_| {
        let r: Result<(), String> = try {
            let req = get_args()?;
            yield build_request(req)
        };
        if let Err(msg) = r {
            eprintln!("Error: {msg}");
        }
    }
}
```
Reorder stack spills so that constants come later.
Currently constants are "pulled forward" and have their stack spills emitted first. This confuses LLVM as to where to place breakpoints at function entry, and results in argument values being wrong in the debugger. It's straightforward to avoid emitting the stack spills for constants until arguments etc. have been introduced in `debug_introduce_locals`, so do that.
Example LLVM IR (irrelevant IR elided):
Before:
```
define internal void @_ZN11rust_1289457binding17h2c78f956ba4bd2c3E(i64 %a, i64 %b, double %c) unnamed_addr #0 !dbg !178 {
start:
%c.dbg.spill = alloca [8 x i8], align 8
%b.dbg.spill = alloca [8 x i8], align 8
%a.dbg.spill = alloca [8 x i8], align 8
%x.dbg.spill = alloca [4 x i8], align 4
store i32 0, ptr %x.dbg.spill, align 4, !dbg !192 ; LLVM places breakpoint here.
#dbg_declare(ptr %x.dbg.spill, !190, !DIExpression(), !192)
store i64 %a, ptr %a.dbg.spill, align 8
#dbg_declare(ptr %a.dbg.spill, !187, !DIExpression(), !193)
store i64 %b, ptr %b.dbg.spill, align 8
#dbg_declare(ptr %b.dbg.spill, !188, !DIExpression(), !194)
store double %c, ptr %c.dbg.spill, align 8
#dbg_declare(ptr %c.dbg.spill, !189, !DIExpression(), !195)
ret void, !dbg !196
}
```
After:
```
define internal void @_ZN11rust_1289457binding17h2c78f956ba4bd2c3E(i64 %a, i64 %b, double %c) unnamed_addr #0 !dbg !178 {
start:
%x.dbg.spill = alloca [4 x i8], align 4
%c.dbg.spill = alloca [8 x i8], align 8
%b.dbg.spill = alloca [8 x i8], align 8
%a.dbg.spill = alloca [8 x i8], align 8
store i64 %a, ptr %a.dbg.spill, align 8
#dbg_declare(ptr %a.dbg.spill, !187, !DIExpression(), !192)
store i64 %b, ptr %b.dbg.spill, align 8
#dbg_declare(ptr %b.dbg.spill, !188, !DIExpression(), !193)
store double %c, ptr %c.dbg.spill, align 8
#dbg_declare(ptr %c.dbg.spill, !189, !DIExpression(), !194)
store i32 0, ptr %x.dbg.spill, align 4, !dbg !195 ; LLVM places breakpoint here.
#dbg_declare(ptr %x.dbg.spill, !190, !DIExpression(), !195)
ret void, !dbg !196
}
```
Note in particular the position of the "LLVM places breakpoint here" comment relative to the stack spills for the function arguments. LLVM assumes that the first instruction with a debug location is the end of the prologue. As LLVM does not currently offer front ends any direct control over the placement of the prologue end, reordering the IR is the only mechanism available to fix argument values at function entry in the presence of MIR optimizations like SingleUseConsts. Fixes #128945
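For reference, a hedged sketch of the kind of source that produces IR like the above (names and types inferred from the spills; not the actual test case):
```rust
fn binding(a: i64, b: i64, c: f64) {
    // SingleUseConsts turns this into a constant stack spill, which was
    // previously emitted before the argument spills and so confused
    // LLVM's prologue-end detection.
    let x = 0i32;
    // ... body that uses `a`, `b`, `c`, and `x` ...
    let _ = (a, b, c, x);
}
```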
r? `@michaelwoerister`
Collect relevant item bounds from trait clauses for nested rigid projections
Rust currently considers trait where-clauses that bound the trait's *own* associated types to act like an item bound:
```rust
trait Foo where Self::Assoc: Bar { type Assoc; }
// acts as if:
trait Foo { type Assoc: Bar; }
```
### Background
This behavior has existed since essentially forever (i.e. before Rust 1.0), since we originally started out by literally looking at the where clauses written on the trait when assembling `SelectionCandidate::ProjectionCandidate` for projections. However, looking at the predicates of the associated type themselves was not sound, since it was unclear which predicates were *assumed* and which predicates were *implied*, and therefore this was reworked in #72788 (which added a query for the predicates we consider for `ProjectionCandidate`s), and then finally item bounds and predicates were split in #73905.
### Problem 1: GATs don't uplift bounds correctly
All the while, we've still had logic to uplift associated type bounds from a trait's where clauses. However, with the introduction of GATs, this logic was never really generalized correctly for them, since we were using simple equality to test if the self type of a trait where clause is a projection. This leads to shortcomings, such as:
```rust
use std::fmt::Debug;

trait Foo
where
    for<'a> Self::Gat<'a>: Debug,
{
    type Gat<'a>;
}

fn test<T: Foo>(x: T::Gat<'static>) {
    //~^ ERROR `<T as Foo>::Gat<'a>` doesn't implement `Debug`
    println!("{:?}", x);
}
```
### Problem 2: Nested associated type bounds are not uplifted
We also don't attempt to uplift bounds on nested associated types, something that we couldn't really support until #120584. This can be demonstrated best with an example:
```rust
trait A
where
    Self::Assoc: B,
    <Self::Assoc as B>::Assoc2: C,
{
    type Assoc; // <~ The compiler *should* treat this like it has an item bound `B<Assoc2: C>`.
}

trait B { type Assoc2; }
trait C {}

fn is_c<T: C>() {}

fn test<T: A>() {
    is_c::<<<T as A>::Assoc as B>::Assoc2>();
    //~^ ERROR the trait bound `<<T as A>::Assoc as B>::Assoc2: C` is not satisfied
}
```
### Why does this matter?
Well, generalizing this behavior bridges a gap between the associated type bounds (ATB) feature and trait where clauses. Currently, all bounds that can be stably written on associated types can also be expressed as where clauses on traits; however, with the stabilization of ATB, there are now bounds that can't be desugared in the same way. This fixes that.
## How does this PR fix things?
First, when scraping item bounds from the trait's where clauses, given a trait predicate, we walk the self type of the predicate as long as it's a projection. If we find a projection whose trait ref matches, we uplift the bound. This allows us to uplift, for example, `<Self as Trait>::Assoc: Bound` (pre-existing), but also `<<Self as Trait>::Assoc as Iterator>::Item: Bound` (new).
If that projection is a GAT, we check whether the GAT's *own* args are all unique late-bound vars. We then map the late-bound vars to early-bound vars from the GAT -- this allows us to uplift `for<'a, 'b> Self::Assoc<'a, 'b>: Trait` into an item bound, but we will leave `for<'a> Self::Assoc<'a, 'a>: Trait` and `Self::Assoc<'static, 'static>: Trait` alone.
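A small sketch of that rule (the bounds here are illustrative, not taken from the PR's tests):
```rust
use std::fmt::Debug;

trait Foo
where
    // Uplifted to an item bound: the GAT's own args are unique late-bound vars.
    for<'a, 'b> Self::Gat<'a, 'b>: Debug,
    // Left alone: 'a is repeated.
    for<'a> Self::Gat<'a, 'a>: Debug,
    // Left alone: concrete regions rather than bound vars.
    Self::Gat<'static, 'static>: Debug,
{
    type Gat<'a, 'b>;
}
```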
### Okay, but does this *really* matter?
I consider this to be an improvement of the status quo because it makes GATs a bit less magical, and makes rigid projections a bit more expressive.
Fix up setting strip = true in Cargo.toml makes build scripts fail in…
Fix issue: https://github.com/rust-lang/rust/issues/110536
The `strip` binary is PATH-dependent, which breaks builds on macOS.
For example, on my Mac the output of `which strip` is `/opt/homebrew/opt/binutils/bin/strip`, which leads to incorrect `strip` results. Therefore, just like on other systems, it is necessary to specify `stripcmd` on macOS as well. However, there seems to be a bug in binutils ([bugzilla Bug 31571](https://sourceware.org/bugzilla/show_bug.cgi?id=31571)) that leads to the problem mentioned above.
Rollup of 6 pull requests
Successful merges:
- #130549 (Add RISC-V vxworks targets)
- #130595 (Initial std library support for NuttX)
- #130734 (Fix: ices on virtual-function-elimination about principal trait)
- #130787 (Ban combination of GCE and new solver)
- #130809 (Update llvm triple for OpenHarmony targets)
- #130810 (Don't trap into the debugger on panics under Linux)
r? `@ghost`
`@rustbot` modify labels: rollup
Ban combination of GCE and new solver
These do not work together. I don't want anyone to have the impression that they do.
I reused the conflicting-features diagnostic, but I guess I could make it more tailored to the new solver? OTOH I don't really care about the presentation of diagnostics here; these are nightly features after all.
r? `@BoxyUwU` thoughts on this?
Fix: ICEs on virtual-function-elimination about principal trait
Extract `load_vtable` function to ensure the `virtual_function_elimination` option is always checked.
It's okay not to use `llvm.type.checked.load` to load the vtable if there is no principal trait.
Fixes #123955
Fixes #124092
Add `File` constructors that return files wrapped with a buffer
In addition to the light convenience, these are intended to raise the visibility of buffering as something to consider when opening a file, since unbuffered I/O is a common performance footgun for Rust newcomers.
ACP: https://github.com/rust-lang/libs-team/issues/446
Tracking Issue: #130804
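A sketch of the intended usage, assuming the constructor name proposed in the ACP (`File::open_buffered`, returning a `BufReader<File>`); the manual wrapping shown first is today's stable pattern:
```rust
use std::fs::File;
use std::io::{self, BufRead, BufReader};

fn first_line(path: &str) -> io::Result<String> {
    // Today's pattern: wrap the file manually, which is easy to forget.
    let mut reader = BufReader::new(File::open(path)?);
    // With this PR (constructor name assumed from the ACP):
    // let mut reader = File::open_buffered(path)?;
    let mut line = String::new();
    reader.read_line(&mut line)?;
    Ok(line)
}
```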
rustdoc: inherit parent's stability where applicable
It is currently not possible for a re-export to have a different stability (https://github.com/rust-lang/rust/issues/30827). Therefore, the standard library uses a hack when moving items like `std::error::Error` or `std::net::IpAddr` into `core`: it marks the containing module (`core::error` / `core::net`) as unstable, or as stable in a later version than the items the module contains.
Previously, rustdoc would always show the *stability as declared* for an item rather than the *stability as publicly reachable* (i.e. the features required to actually access the item), which could be confusing when viewing the docs. This PR changes it so that we show the stability of the first unstable parent or the most recently stabilized parent instead, to hopefully make things less confusing.
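A sketch of the std pattern in question (nightly-only `staged_api` attributes, with illustrative feature names and versions):
```rust
#![feature(staged_api)]
#![stable(feature = "rust1", since = "1.0.0")]

// The module was added to `core` later, so it carries its own stability...
#[stable(feature = "core_error", since = "1.81.0")]
pub mod error {
    // ...while the item inside keeps the stability it always had in `std`.
    #[stable(feature = "rust1", since = "1.0.0")]
    pub trait Error { /* ... */ }
}
// Rustdoc previously showed `core::error::Error` as stable since 1.0.0, even
// though reaching it through `core` requires the parent module's stability.
```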
fixes https://github.com/rust-lang/rust/issues/130765
screenshots:
![error in std](https://github.com/user-attachments/assets/2ab9bdb9-ed81-4e45-a832-ac7d3ba1be3f) ![error in core](https://github.com/user-attachments/assets/46f46182-5642-4ac5-b92e-0b99a8e2496d)
Pin memchr to 2.5.0 in the library rather than rustc_ast
The latest versions of `memchr` experience LTO-related issues when compiling for windows-gnu [1], so the crate needs to be pinned. The issue is present in the standard library.
`memchr` has been pinned in `rustc_ast`, but since the workspace was recently split, this pin no longer has any effect on library crates.
Resolve this by adding `memchr` as an _unused_ dependency in `std`, pinned to 2.5. Additionally, remove the pin in `rustc_ast` to allow non-library crates to upgrade to the latest version.
Link: https://github.com/rust-lang/rust/issues/127890 [1]
try-job: x86_64-mingw
try-job: x86_64-msvc
Separate collection of crate-local inherent impls from error tracking
#119895 changed the return type of the `crate_inherent_impls` query from `CrateInherentImpls` to `Result<CrateInherentImpls, ErrorGuaranteed>` to avoid needing to use the non-parallel-friendly `track_errors()` to track whether an error was reported from within the query. This was mostly fine until #121113, which stopped halting compilation when we hit an `Err(ErrorGuaranteed)` in the `crate_inherent_impls` query.
Thus we proceed onwards to typeck, and since a return type of `Result<CrateInherentImpls, ErrorGuaranteed>` means the query can return *either* "the list of inherent impls" *or* "an error has been reported", later on, when we want to assemble method or associated-item candidates for inherent impls, we were treating any `Err(ErrorGuaranteed)` return value as if Rust had no inherent impls defined anywhere at all! This leads to basically every inherent method call failing with an error, lol, which was reported in #127798.
This PR changes the `crate_inherent_impls` query to return `(CrateInherentImpls, Result<(), ErrorGuaranteed>)`, i.e. returning the inherent impls collected *and* whether an error was reported in the query itself. It firewalls the error-tracking part of that query into a new `crate_inherent_impls_validity_check` query just for the `ensure()` call.
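In rough terms, the caller-side difference looks like this (a simplified sketch with hypothetical stub types, not the exact rustc code):
```rust
struct Impl;
struct ErrorGuaranteed;

// Before: an earlier, unrelated error made *every* inherent lookup come up empty.
fn lookup_before(res: Result<Vec<Impl>, ErrorGuaranteed>) -> Vec<Impl> {
    res.unwrap_or_default()
}

// After: the collected impls are always available; the error state is checked
// separately (via `ensure()` on the new validity-check query).
fn lookup_after((impls, _errored): (Vec<Impl>, Result<(), ErrorGuaranteed>)) -> Vec<Impl> {
    impls
}
```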
This fixes #127798.