Remove the `Arc` rt::init allocation for thread info
Removes an allocation pre-main by just not storing anything in `std::thread::Thread` for the main thread.
- The thread name can just be a hard coded literal, as was done in #123433.
- The ThreadId and Parker are stored in a static that is initialized once at startup. This uses `SyncUnsafeCell` and `MaybeUninit` since this path is quite performance-critical and we don't need synchronization, nor do we want to pay for a tag value or a possible panic path (see the sketch after this list).
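A minimal sketch of that storage pattern, assuming a nightly compiler with `sync_unsafe_cell`; the type and function names here are hypothetical stand-ins (the real code stores the `ThreadId` and `Parker`):

```rust
#![feature(sync_unsafe_cell)] // nightly-only

use std::cell::SyncUnsafeCell;
use std::mem::MaybeUninit;

// Simplified stand-in for the ThreadId + Parker pair stored by the real code.
struct MainThreadInfo {
    id: u64,
}

// Written exactly once during runtime startup, before any other thread can
// observe it, so later unsynchronized reads are sound.
static MAIN_THREAD_INFO: SyncUnsafeCell<MaybeUninit<MainThreadInfo>> =
    SyncUnsafeCell::new(MaybeUninit::uninit());

/// SAFETY: must be called exactly once, before `main_thread_info`.
unsafe fn init_main_thread_info(id: u64) {
    unsafe {
        (*MAIN_THREAD_INFO.get()).write(MainThreadInfo { id });
    }
}

/// SAFETY: `init_main_thread_info` must have run first.
unsafe fn main_thread_info() -> &'static MainThreadInfo {
    unsafe { (*MAIN_THREAD_INFO.get()).assume_init_ref() }
}

fn main() {
    // Usage sketch: initialize once at startup, then read freely afterwards.
    unsafe {
        init_main_thread_info(0);
        assert_eq!(main_thread_info().id, 0);
    }
}
```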
(ci) Update macOS Xcode to 15
This updates the macOS builders to Xcode 15. The aarch64 images will be removing Xcode 14 and 16 very soon (https://github.com/actions/runner-images/issues/10703), so we will need to make the switch to continue operating. The linked issue also documents GitHub's new policy for how they will be updating Xcode in the future. Also worth being aware of is the future plans for x86 runners documented in https://github.com/actions/runner-images/issues/9255 and https://github.com/actions/runner-images/issues/10686, which will impact our future upgrade behaviors.
I decided to also update the Xcode in the x86_64 runners, even though they are not being removed. It felt better to me to have all macOS runners on the same (major) version of Xcode. However, note that the x86_64 runners do not have the latest version of 15 (15.4), so I left them at 15.2 (which is currently the default Xcode of the runner).
Xcode 15 was previously causing problems (see #121058) which seem to be resolved now. `@bjorn3` fixed the `invalid r_symbolnum` issue with cranelift. The issue with clang failing to link seems to be fixed, possibly by the update of the pre-built LLVM from 14 to llvm 15 in https://github.com/rust-lang/rust/pull/124850, or an update in our source version of LLVM. I have run some try builds and at least LLVM seems to build (I did not run any tests).
Closes #121058
Check ABI target compatibility for function pointers
Tracking issue: https://github.com/rust-lang/rust/issues/130260
Related tracking issue: #87678
The check that an ABI is compatible with the target was previously only performed on function definitions and `extern` blocks. This PR also applies it to function pointers, for consistency.
This might have broken some of the tests in `tests/ui/` depending on the platform, so a try run seems like a good idea.
Also this might break existing code, because we now emit extra errors. Does this require a crater run?
# Example
```rust
// build with: --target=x86_64-unknown-linux-gnu
// These raise E0570
extern "thiscall" fn foo() {}
extern "thiscall" { fn bar() }
// This did not raise any error
fn baz(f: extern "thiscall" fn()) { f() }
```
# Open Questions
* [x] Should this report a future incompatibility warning like #87678?
* [ ] Is this the best place to perform the check?
Currently constants are "pulled forward" and have their stack spills emitted
first. This confuses LLVM as to where to place breakpoints at function
entry, and results in argument values being wrong in the debugger. It's
straightforward to avoid emitting the stack spills for constants until
arguments/etc have been introduced in debug_introduce_locals, so do that.
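For reference, a plausible Rust source shape that triggers this (an illustrative reconstruction based on the IR below, not the exact test case from the issue):

```rust
// Reconstruction for illustration only: three arguments plus a single-use
// constant binding. `x` is the constant whose stack spill used to be
// emitted before the argument spills, moving the prologue end too early.
fn binding(a: i64, b: i64, c: f64) {
    let x = 0i32;
    let _ = (a, b, c, x);
}

fn main() {
    binding(1, 2, 3.0);
}
```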
Example LLVM IR (irrelevant IR elided):
Before:
```
define internal void @_ZN11rust_1289457binding17h2c78f956ba4bd2c3E(i64 %a, i64 %b, double %c) unnamed_addr #0 !dbg !178 {
start:
%c.dbg.spill = alloca [8 x i8], align 8
%b.dbg.spill = alloca [8 x i8], align 8
%a.dbg.spill = alloca [8 x i8], align 8
%x.dbg.spill = alloca [4 x i8], align 4
store i32 0, ptr %x.dbg.spill, align 4, !dbg !192 ; LLVM places breakpoint here.
#dbg_declare(ptr %x.dbg.spill, !190, !DIExpression(), !192)
store i64 %a, ptr %a.dbg.spill, align 8
#dbg_declare(ptr %a.dbg.spill, !187, !DIExpression(), !193)
store i64 %b, ptr %b.dbg.spill, align 8
#dbg_declare(ptr %b.dbg.spill, !188, !DIExpression(), !194)
store double %c, ptr %c.dbg.spill, align 8
#dbg_declare(ptr %c.dbg.spill, !189, !DIExpression(), !195)
ret void, !dbg !196
}
```
After:
```
define internal void @_ZN11rust_1289457binding17h2c78f956ba4bd2c3E(i64 %a, i64 %b, double %c) unnamed_addr #0 !dbg !178 {
start:
%x.dbg.spill = alloca [4 x i8], align 4
%c.dbg.spill = alloca [8 x i8], align 8
%b.dbg.spill = alloca [8 x i8], align 8
%a.dbg.spill = alloca [8 x i8], align 8
store i64 %a, ptr %a.dbg.spill, align 8
#dbg_declare(ptr %a.dbg.spill, !187, !DIExpression(), !192)
store i64 %b, ptr %b.dbg.spill, align 8
#dbg_declare(ptr %b.dbg.spill, !188, !DIExpression(), !193)
store double %c, ptr %c.dbg.spill, align 8
#dbg_declare(ptr %c.dbg.spill, !189, !DIExpression(), !194)
store i32 0, ptr %x.dbg.spill, align 4, !dbg !195 ; LLVM places breakpoint here.
#dbg_declare(ptr %x.dbg.spill, !190, !DIExpression(), !195)
ret void, !dbg !196
}
```
Note in particular the position of the "LLVM places breakpoint here" comment
relative to the stack spills for the function arguments. LLVM assumes that
the first instruction with a debug location is the end of the prologue.
As LLVM does not currently offer front ends any direct control over the
placement of the prologue end, reordering the IR is the only mechanism
available to fix argument values at function entry in the presence of
MIR optimizations like SingleUseConsts. Fixes #128945
Don't leave debug locations for constants sitting on the builder indefinitely
Because constants are currently emitted *before* the prologue, a debug location left on the IRBuilder spills onto other instructions in the prologue and messes up both the line numbers and the point LLVM chooses as the prologue end.
Example LLVM IR (irrelevant IR elided):
Before:
```
define internal { i64, i64 } @_ZN3tmp3Foo18var_return_opt_try17he02116165b0fc08cE(ptr align 8 %self) !dbg !347 {
start:
%self.dbg.spill = alloca [8 x i8], align 8
%_0 = alloca [16 x i8], align 8
%residual.dbg.spill = alloca [0 x i8], align 1
#dbg_declare(ptr %residual.dbg.spill, !353, !DIExpression(), !357)
store ptr %self, ptr %self.dbg.spill, align 8, !dbg !357
#dbg_declare(ptr %self.dbg.spill, !350, !DIExpression(), !358)
```
After:
```
define internal { i64, i64 } @_ZN3tmp3Foo18var_return_opt_try17h00b17d08874ddd90E(ptr align 8 %self) !dbg !347 {
start:
%self.dbg.spill = alloca [8 x i8], align 8
%_0 = alloca [16 x i8], align 8
%residual.dbg.spill = alloca [0 x i8], align 1
#dbg_declare(ptr %residual.dbg.spill, !353, !DIExpression(), !357)
store ptr %self, ptr %self.dbg.spill, align 8
#dbg_declare(ptr %self.dbg.spill, !350, !DIExpression(), !358)
```
Note in particular how !357 from %residual.dbg.spill's dbg_declare no longer falls through onto the store to %self.dbg.spill. This fixes argument values at entry when the constant is a ZST (e.g. `<Option as Try>::Residual`). This fixes #130003 (but note that it does *not* fix issues with argument values and non-ZST constants, which emit their own stores that have debug info on them, like #128945).
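A plausible source shape for the method in the IR above (an illustrative reconstruction, not the actual `tmp::Foo` test case): the `?` operator introduces the zero-sized residual local whose `dbg_declare` previously leaked its debug location onto the following argument spill.

```rust
// Illustrative reconstruction only; the mangled name suggests a method like
// `Foo::var_return_opt_try(&self) -> Option<i64>`, which lowers to the
// `{ i64, i64 }` return seen in the IR.
struct Foo {
    var: Option<i64>,
}

impl Foo {
    fn var_return_opt_try(&self) -> Option<i64> {
        // `?` creates the zero-sized residual local (`residual.dbg.spill`).
        let v = self.var?;
        Some(v + 1)
    }
}

fn main() {
    let foo = Foo { var: Some(41) };
    println!("{:?}", foo.var_return_opt_try());
}
```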
r? `@michaelwoerister`
Special case DUMMY_SP to emit line 0/column 0 locations on DWARF platforms.
Line 0 has a special meaning in DWARF. From the version 5 spec:
> The compiler may emit the value 0 in cases where an instruction cannot be attributed to any source line.
DUMMY_SP spans cannot be attributed to any line. However, because rustc internally stores line numbers starting at zero, lookup_debug_loc() adjusts every line number by one. Special-casing DUMMY_SP to actually emit line 0 ensures rustc communicates to the debugger that there is no meaningful source code for this instruction, rather than telling the debugger to jump to line 1 arbitrarily.
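A minimal sketch of the off-by-one handling, using a hypothetical helper rather than the actual `lookup_debug_loc()` code:

```rust
/// rustc stores line numbers zero-based internally and normally adds 1 when
/// emitting debug info; for a dummy span we skip the adjustment and emit a
/// literal 0, which DWARF defines as "not attributable to any source line".
fn dwarf_line(is_dummy_span: bool, zero_based_line: u32) -> u32 {
    if is_dummy_span {
        0
    } else {
        zero_based_line + 1
    }
}

fn main() {
    assert_eq!(dwarf_line(false, 0), 1); // first real source line
    assert_eq!(dwarf_line(true, 41), 0); // DUMMY_SP: no source line
}
```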
Enable debuginfo tests that have been "temporarily disabled" for the past 6 years
The PR history is a bit of a mess because I had to test this a lot with try-jobs, so I'll try to summarize the non-obvious changes here.
A number of tests now have `min-lldb-version: 1800`. Those tests should have gotten an lldb version bump either in https://github.com/rust-lang/rust/pull/124781 or long ago. Note that none of the tests with that lldb version requirement run in Apple CI.
`tests/debuginfo/drop-locations.rs` is staying disabled for now because gdb doesn't know to stop on the drop calls produced by a `}`: https://github.com/rust-lang/rust/issues/128971
`tests/debuginfo/function-arg-initialization.rs` now has `-Zmir-enable-passes=-SingleUseConsts`; without that we initialize the const before the function prologue: https://github.com/rust-lang/rust/issues/128945 (an illustrative header sketch follows below).
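For concreteness, a hypothetical compiletest header combining the directives mentioned in this section (not a verbatim copy of any test file) might look like:

```rust
//@ min-lldb-version: 1800
//@ compile-flags: -Zmir-enable-passes=-SingleUseConsts

// Hypothetical body; the real tests set a breakpoint at function entry and
// check that argument values are already correct there.
fn main() {
    let x = 0;
    let _ = x;
}
```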
`tests/debuginfo/by-value-non-immediate-argument.rs` fails because we don't generate a function prologue for unused non-immediate arguments, even with all optimizations disabled, and this seems to confuse debuggers on aarch64: https://github.com/rust-lang/rust/issues/128973
`tests/debuginfo/pretty-std.rs` is staying disabled on windows-gnu because our test harness doesn't know how to load our pretty-printers on that target: https://github.com/rust-lang/rust/issues/128981
`tests/debuginfo/method-on-enum.rs` and `tests/debuginfo/option-like-enum.rs` encounter some kind of gdb bug on i686-pc-windows-gnu. I don't know enough about that situation to write a good issue.
I plan on doing more work on this test suite. There's clearly a lot more basic cleanup work to do here.
Summary:
I landed a fix last year to enable `DW_TAG_variant_part` encoding in LLDB (https://reviews.llvm.org/D149213). This PR is the corresponding fix in the synthetic formatters to decode that information.
This is by no means a perfect implementation, but it at least improves the status quo: most kinds of enums will now be visible and debuggable in some way.
I've also updated most of the existing tests that touch enums and re-enabled the LLDB-based test cases for enums.
Test Plan:
Ran `./x test tests/debuginfo/`. Also tested manually in the LLDB CLI and LLDB VSCode.
Other Thoughts:
A better approach would probably be adopting [formatters from codelldb](https://github.com/vadimcn/codelldb/blob/master/formatters/rust.py). There is a neat hack there that hooks up a summary provider via the synthetic provider, which could ultimately fix more display issues for Rust types and enums too. But getting it to work well might take more time than I have right now.
`-Z debug-macros` is "stabilized" by enabling its behavior by default and removing the flag.
`-Z collapse-macro-debuginfo` is stabilized as `-C collapse-macro-debuginfo`.
It now supports all typical boolean values (`parse_opt_bool`) in addition to just yes/no.
The default value of `collapse_debuginfo` was changed from `false` to `external` (i.e. collapsed if the macro comes from an external crate, not collapsed if it is local).
The `#[collapse_debuginfo]` attribute without a value is no longer supported, to avoid guessing the default.
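A usage sketch of the surface described above; the attribute values are the documented `yes`/`no`/`external`, and the program and command line are illustrative:

```rust
// Illustrative only. Build with, e.g.:
//   rustc -g -C collapse-macro-debuginfo=external main.rs
// The per-macro attribute overrides the command-line default.

#[collapse_debuginfo(yes)]
macro_rules! collapsed {
    () => {
        // Debug locations inside this expansion are collapsed to the call site.
        println!("collapsed");
    };
}

#[collapse_debuginfo(no)]
macro_rules! not_collapsed {
    () => {
        // Debug locations keep pointing into the macro definition.
        println!("not collapsed");
    };
}

fn main() {
    collapsed!();
    not_collapsed!();
}
```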