Set "symbol name" in raw-dylib import libraries to the decorated name
`windows-rs` received a bug report that mixing import libraries generated via raw-dylib with the Windows SDK import libraries was causing linker failures: <https://github.com/microsoft/windows-rs/issues/3285>
The root cause turned out to be #124958: we were not including the decorated name in the import library, so the import name type was also not being set correctly.
This change modifies the generation of import libraries to set the "symbol name" to the fully decorated name and correctly marks the import as being data vs function.
Note that this also required some changes to how the symbol is named within Rust: for MSVC we now need to use the decorated name, but for MinGW we still need to use the partially decorated (or undecorated) name.
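For illustration, a hedged sketch of the scenario (the DLL and function names are mine, not from the PR): on i686, the import library rustc synthesizes for a block like this must record the stdcall-decorated name `_some_stdcall_fn@4`, matching what the Windows SDK import libraries contain.

```rust
// Illustrative declaration only. With `kind = "raw-dylib"` rustc generates the
// import library itself, so it must record the decorated symbol name
// (`_some_stdcall_fn@4` for this i686 stdcall function) and the matching
// import name type, just like the import libraries shipped in the Windows SDK.
#[link(name = "example", kind = "raw-dylib")]
extern "stdcall" {
    fn some_stdcall_fn(arg: u32) -> u32;
}

fn main() {
    println!("{}", unsafe { some_stdcall_fn(1) });
}
```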
Fixes #124958
Passing i686 MSVC and MinGW build: <https://github.com/rust-lang/rust/actions/runs/11000433888?pr=130586>
r? `@ChrisDenton`
mark some target features as 'forbidden' so they cannot be (un)set with -Ctarget-feature
The context for this is https://github.com/rust-lang/rust/issues/116344: some target features change the way floats are passed between functions. Changing those target features is unsound as code compiled for the same target may now use different ABIs.
So this introduces a new concept of "forbidden" target features (on top of the existing "stable" and "unstable" categories), and makes it a hard error to (un)set such a target feature. For now, the x86 and ARM feature `soft-float` is on that list. We'll have to make some effort to collect more relevant features, and similar features from other targets, but that can happen after the basic infrastructure has landed. (These features are being collected in https://github.com/rust-lang/rust/issues/131799.)
I've made this a warning for now to give people some time to speak up if this would break something.
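As a hedged illustration of the hazard (my example, not from the PR): on x86-64, whether this function returns its `f64` in an SSE register or in a general-purpose register depends on `soft-float`, so compiling it with `-Ctarget-feature=+soft-float` while the caller keeps the default hard-float ABI silently miscommunicates the value.

```rust
// Built with `-Ctarget-feature=+soft-float`, the f64 return value leaves this
// function in a general-purpose register, which a hard-float caller will not
// look at; hence the feature is now flagged on the command line.
#[no_mangle]
pub extern "C" fn halve(x: f64) -> f64 {
    x * 0.5
}
```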
MCP: https://github.com/rust-lang/compiler-team/issues/780
Support clobber_abi and vector registers (clobber-only) in PowerPC inline assembly
This supports `clobber_abi` which is one of the requirements of stabilization mentioned in #93335.
This does essentially the same thing as https://github.com/rust-lang/rust/pull/130630, which implemented `clobber_abi` for s390x, but for powerpc/powerpc64/powerpc64le.
- This also adds support for vector registers (as `vreg`) as clobber-only, since clobbering them is needed to implement `clobber_abi`.
- `vreg` should eventually be able to accept `#[repr(simd)]` types as input/output when the unstable `altivec` target feature is enabled, but `core::arch::{powerpc,powerpc64}` vector types, `#[repr(simd)]`, and `core::simd` are all unstable, so the fact that `vreg` is currently clobber-only should not be considered a blocker for implementing or stabilizing `clobber_abi`. Therefore, I have not implemented it in this PR.
- See https://github.com/rust-lang/rust/pull/131551 (which is based on this PR) for a PR to implement this.
- (I have no strong opinion on whether that should remain a separate PR or be part of this one, so I can merge it into this PR if needed.)
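For illustration, a minimal sketch of the syntax this enables (powerpc64, nightly; it assumes the `altivec` target feature is enabled so the assembler accepts the vector mnemonic, and the function shown is mine, not from this PR):

```rust
#![feature(asm_experimental_arch)]
use std::arch::asm;

// Clobber the full C ABI plus an explicitly named vector register.
unsafe fn clobber_example() {
    asm!(
        "vxor 0, 0, 0",   // scribble over v0 (requires the altivec target feature)
        out("v0") _,      // `vreg` is clobber-only: no Rust value flows in or out
        clobber_abi("C"), // additionally clobber every register the C ABI treats as volatile
        options(nostack, nomem),
    );
}
```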
Refs:
- PPC32 SysV: Section "Function Calling Sequence" in [System V Application Binary Interface PowerPC Processor Supplement](https://refspecs.linuxfoundation.org/elf/elfspec_ppc.pdf)
- PPC64 ELFv1: Section 3.2 "Function Calling Sequence" in [64-bit PowerPC ELF Application Binary Interface Supplement](https://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi.html#FUNC-CALL)
- PPC64 ELFv2: Section 2.2 "Function Calling Sequence" in [64-Bit ELF V2 ABI Specification](https://openpowerfoundation.org/specifications/64bitelfabi/)
- AIX: [Register usage and conventions](https://www.ibm.com/docs/en/aix/7.3?topic=overview-register-usage-conventions), [Special registers in the PowerPC®](https://www.ibm.com/docs/en/aix/7.3?topic=overview-special-registers-in-powerpc), [AIX vector programming](https://www.ibm.com/docs/en/aix/7.3?topic=concepts-aix-vector-programming)
- Register definition in LLVM: https://github.com/llvm/llvm-project/blob/llvmorg-19.1.0/llvm/lib/Target/PowerPC/PPCRegisterInfo.td#L189
If I understand the above four ABI documents correctly, then except for PPC32 SysV's VR (Vector Registers) and r13 on 32-bit AIX (which rustc does not currently support), there do not appear to be any important differences for the purposes of implementing `clobber_abi`:
- The above four ABIs are consistent about FPR (0-13: volatile, 14-31: nonvolatile), CR (0-1,5-7: volatile, 2-4: nonvolatile), XER (volatile), and CTR (volatile).
- As for GPRs, only the registers we are treating as reserved differ slightly:
- r0, r3-r12 are volatile
- r1(sp, reserved), r14-31 are nonvolatile
- r2 (reserved) is the TOC pointer in PPC64 ELF/AIX and a system-reserved register in PPC32 SysV (AFAIK used as the thread pointer in Linux/BSDs)
- r13 (reserved except on 32-bit AIX) is the thread pointer in PPC64 ELF, the small data area pointer register in PPC32 SysV, and "reserved under 64-bit environment; not restored across system calls[^r13]" in AIX
- As for the FPSCR, it is volatile in PPC64 ELFv1/AIX; in PPC32 SysV/PPC64 ELFv2, some fields are volatile only in certain situations (the rest are volatile).
- As for VRs (Vector Registers), they are not mentioned in PPC32 SysV; v0-v19 are volatile in both PPC64 ELF and AIX; v20-v31 are nonvolatile in PPC64 ELF, and in AIX are either reserved or nonvolatile depending on the ABI ([vec-extabi vs vec-default in LLVM](https://reviews.llvm.org/D89684); we are [using vec-extabi](https://github.com/rust-lang/rust/pull/131341#discussion_r1797693299)):
> When the default Vector enabled mode is used, these registers are reserved and must not be used.
> In the extended ABI vector enabled mode, these registers are nonvolatile and their values are preserved across function calls
I left a [FIXME comment about PPC32 SysV](https://github.com/rust-lang/rust/pull/131341#discussion_r1790496095) and added an ABI check for AIX.
- As for VRSAVE, it is not mentioned in PPC32 SysV, nonvolatile in PPC64 ELFv1, reserved in PPC64 ELFv2/AIX
- As for VSCR, it is not mentioned in PPC32 SysV/PPC64 ELFv1, some fields are volatile only in certain situations (rest are volatile) in PPC64 ELFv2, volatile in AIX
We are currently treating r1-r2, r13 (non-32-bit-AIX), r29-r31, LR, CTR, and VRSAVE as reserved.
We are currently not processing anything about FPSCR and VSCR, but I feel that, if we need to do something about them, they should be handled by `preserves_flags` rather than `clobber_abi`. (However, PPCRegisterInfo.td in LLVM does not seem to define anything about them.)
Replaces #111335 and #124279
cc `@ecnelises` `@bzEq` `@lu-zero`
r? `@Amanieu`
`@rustbot` label +O-PowerPC +A-inline-assembly
[^r13]: callee-saved, according to [LLVM](6a6af0246b/llvm/lib/Target/PowerPC/PPCCallingConv.td (L322)) and [GCC](a9173a50e7/gcc/config/rs6000/rs6000.h (L859)).
The target name can be anything with custom target specs. Matching on
fields inside the target spec is much more robust than matching on the
target name.
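For illustration, a hedged sketch of the pattern inside a compiler crate (the helper function is made up; `sess.target.os`/`sess.target.env` are real spec fields):

```rust
// Sketch of compiler-internal code: consult the parsed spec's fields, which
// describe the target's actual properties, instead of the triple string,
// which a custom `--target my-target.json` spec can set to anything.
fn targets_linux_gnu(sess: &rustc_session::Session) -> bool {
    sess.target.os == "linux" && sess.target.env == "gnu"
}
```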
- removed extra bits from predicates queries that are no longer needed in the new system
- removed the need for `non_erasable_generics` to take in tcx and DefId, removed unused arguments in callers
rust_for_linux: -Zregparm=<N> command-line flag for X86 (#116972)
Command line flag `-Zregparm=<N>` for X86 (32-bit) for rust-for-linux: https://github.com/rust-lang/rust/issues/116972
Implemented in a similar way to the fastcall/vectorcall support (args are marked InReg if they fit).
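For illustration (my example, not from the PR): the source does not change, only how the i686 C ABI passes the arguments.

```rust
// Built for an i686 target with `-Zregparm=3`, the three integer arguments are
// expected in registers (EAX, EDX, ECX, following GCC's regparm convention)
// rather than on the stack.
#[no_mangle]
pub extern "C" fn mix(a: u32, b: u32, c: u32) -> u32 {
    a ^ b.rotate_left(c)
}
```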
Add intrinsics `fmuladd{f16,f32,f64,f128}`. This computes `(a * b) +
c`, to be fused if the code generator determines that (i) the target
instruction set has support for a fused operation, and (ii) that the
fused operation is more efficient than the equivalent, separate pair
of `mul` and `add` instructions.
https://llvm.org/docs/LangRef.html#llvm-fmuladd-intrinsic
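For illustration, a hedged usage sketch on nightly (assuming the intrinsics are exposed as `core::intrinsics::fmuladdf32` etc. and that calling them still requires an `unsafe` block):

```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

// Unlike `f32::mul_add`, which always guarantees a single rounding (and may
// therefore fall back to a slow libm call), `fmuladdf32` lets the backend fuse
// only when a fused instruction exists and is faster.
fn dot_step(acc: f32, x: f32, y: f32) -> f32 {
    unsafe { core::intrinsics::fmuladdf32(x, y, acc) }
}

fn main() {
    println!("{}", dot_step(1.0, 2.0, 3.0));
}
```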
Miri support is included for f32 and f64.
codegen_cranelift uses the `fma` function from libc, which is a correct
implementation, but without the desired performance characteristics. I think
this requires an update to Cranelift to expose a suitable instruction in its
IR.
I have not tested with codegen_gcc, but it should behave the same
way (using `fma` from libc).
codegen_ssa: consolidate tied target checks
Fixes #105110.
Fixes #105111.
`rustc_codegen_llvm` and `rustc_codegen_gcc` duplicated logic for checking if tied target features were partially enabled. This PR consolidates these checks into `rustc_codegen_ssa` in the `codegen_fn_attrs` query, which also is run pre-monomorphisation for each function, which ensures that this check is run for unused functions, as would be expected.
Also adds a test confirming that enabling one tied feature doesn't imply another; the appropriate error for this was already being emitted. I did a bisect and narrowed it down to something in #128796, probably #128221 or #128679.
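For illustration, a minimal sketch of the kind of code this check rejects (aarch64, where `paca` and `pacg` are tied); because the check now runs in `codegen_fn_attrs`, it fires even if the function is never called:

```rust
// Rejected: on aarch64, `paca` and `pacg` must be enabled or disabled
// together, and with the check in `codegen_fn_attrs` this errors even though
// the function is never used.
#[target_feature(enable = "paca")]
unsafe fn uses_pointer_auth() {}

fn main() {}
```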
llvm: replace some deprecated functions
`LLVMMDStringInContext` and `LLVMMDNodeInContext` are deprecated, replace them with `LLVMMDStringInContext2` and `LLVMMDNodeInContext2`.
Also replace `Value` with `Metadata` in some function signatures for better consistency.
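For illustration, a hedged sketch of the shape of the new bindings (the opaque Rust types are stand-ins; the C signatures follow `llvm-c/Core.h`):

```rust
use std::ffi::c_char;

// Opaque stand-ins for the real binding types.
#[repr(C)]
pub struct LLVMContext {
    _opaque: [u8; 0],
}
#[repr(C)]
pub struct LLVMMetadata {
    _opaque: [u8; 0],
}

extern "C" {
    // The non-deprecated `*2` variants take and return Metadata, not Value.
    pub fn LLVMMDStringInContext2(
        ctx: *mut LLVMContext,
        s: *const c_char,
        len: usize,
    ) -> *mut LLVMMetadata;
    pub fn LLVMMDNodeInContext2(
        ctx: *mut LLVMContext,
        mds: *const *mut LLVMMetadata,
        count: usize,
    ) -> *mut LLVMMetadata;
}
```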
It's crazy to have the integer methods in something close to random
order.
The reordering makes the gaps clear: `const_i64`, `const_i128`,
`const_isize`, and `const_u16`. I guess they just aren't needed.
Supertraits of `BuilderMethods` are all called `XyzBuilderMethods`.
Supertraits of `CodegenMethods` are all called `XyzMethods`. This commit
changes the latter to `XyzCodegenMethods`, for consistency.
It has `Backend` and `Deref` bounds, plus an associated type
`CodegenCx`, and it has a single use. This commit "inlines" it into
`BuilderMethods`, which makes the complicated backend trait situation a
little simpler.
Don't leave debug locations for constants sitting on the builder indefinitely
Because constants are currently emitted *before* the prologue, leaving the debug location on the IRBuilder spills onto other instructions in the prologue and messes up both line numbers as well as the point LLVM chooses to be the prologue end.
Example LLVM IR (irrelevant IR elided):
Before:
```
define internal { i64, i64 } @_ZN3tmp3Foo18var_return_opt_try17he02116165b0fc08cE(ptr align 8 %self) !dbg !347 {
start:
%self.dbg.spill = alloca [8 x i8], align 8
%_0 = alloca [16 x i8], align 8
%residual.dbg.spill = alloca [0 x i8], align 1
#dbg_declare(ptr %residual.dbg.spill, !353, !DIExpression(), !357)
store ptr %self, ptr %self.dbg.spill, align 8, !dbg !357
#dbg_declare(ptr %self.dbg.spill, !350, !DIExpression(), !358)
```
After:
```
define internal { i64, i64 } @_ZN3tmp3Foo18var_return_opt_try17h00b17d08874ddd90E(ptr align 8 %self) !dbg !347 {
start:
%self.dbg.spill = alloca [8 x i8], align 8
%_0 = alloca [16 x i8], align 8
%residual.dbg.spill = alloca [0 x i8], align 1
#dbg_declare(ptr %residual.dbg.spill, !353, !DIExpression(), !357)
store ptr %self, ptr %self.dbg.spill, align 8
#dbg_declare(ptr %self.dbg.spill, !350, !DIExpression(), !358)
```
Note in particular how !357 from %residual.dbg.spill's dbg_declare no longer falls through onto the store to %self.dbg.spill. This fixes argument values at entry when the constant is a ZST (e.g. `<Option as Try>::Residual`). This fixes #130003 (but note that it does *not* fix issues with argument values and non-ZST constants, which emit their own stores that have debug info on them, like #128945).
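For reference, a hypothetical reconstruction of the kind of source that exhibits this (the names are chosen to match the mangled symbol above; this is not the actual test case):

```rust
struct Foo {
    var: Option<i64>,
}

impl Foo {
    fn var_return_opt_try(&self) -> Option<i64> {
        // The `?` desugaring materializes a zero-sized residual constant; its
        // pre-prologue emission used to leave a debug location on the builder.
        let v = self.var?;
        Some(v + 1)
    }
}

fn main() {
    let _ = Foo { var: Some(1) }.var_return_opt_try();
}
```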
r? `@michaelwoerister`
This shrinks `compiler/rustc_codegen_gcc/Cargo.lock` quite a bit. The
only remaining dependencies in `compiler/rustc_codegen_gcc/Cargo.lock`
are `gccjit`, `lang_tester`, and `boml`, none of which are used in any
other compiler crates.
The commit also reorders and adds comments to the `extern crate` items
so they match those in miri.
simd_shuffle intrinsic: allow argument to be passed as vector
See https://github.com/rust-lang/rust/issues/128738 for context.
I'd like to get rid of [this hack](6c0b89dfac/compiler/rustc_codegen_ssa/src/mir/block.rs (L922-L935)). https://github.com/rust-lang/rust/pull/128537 almost lets us do that since constant SIMD vectors will then be passed as immediate arguments. However, simd_shuffle for some reason actually takes an *array* as argument, not a vector, so the hack is still required to ensure that the array becomes an immediate (which then later stages of codegen convert into a vector, as that's what LLVM needs).
This PR prepares simd_shuffle to also support a vector as the `idx` argument. Once this lands, stdarch can hopefully be updated to pass `idx` as a vector, and then support for arrays can be removed, which finally lets us get rid of that hack.
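For illustration, a hedged sketch of the intrinsic's current array-based form on nightly (the vector type is mine); after this PR, the `IDX` argument could equally be a `#[repr(simd)]` vector of indices:

```rust
#![feature(core_intrinsics, repr_simd)]
#![allow(internal_features)]
use core::intrinsics::simd::simd_shuffle;

#[repr(simd)]
#[derive(Copy, Clone)]
struct U32x4([u32; 4]);

// Indices 0..4 pick lanes from `a`, 4..8 pick lanes from `b`.
fn shuffle_example(a: U32x4, b: U32x4) -> U32x4 {
    const IDX: [u32; 4] = [2, 3, 4, 5];
    unsafe { simd_shuffle(a, b, IDX) }
}
```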
Shrink `TyKind::FnPtr`.
By splitting the `FnSig` within `TyKind::FnPtr` into `FnSigTys` and `FnHeader`, which can be packed more efficiently. This reduces the size of the hot `TyKind` type from 32 bytes to 24 bytes on 64-bit platforms. This reduces peak memory usage by a few percent on some benchmarks. It also reduces cache misses and page faults similarly, though this doesn't translate to clear cycles or wall-time improvements on CI.
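As a schematic illustration of the packing idea (toy types, not the compiler's real definitions): keeping the bulky part of the payload behind an interned slice and packing the rest into a small header saves a word per value.

```rust
use std::mem::size_of;

#[allow(dead_code)]
#[derive(Copy, Clone)]
struct Header {
    c_variadic: bool,
    is_unsafe: bool,
    abi: u8,
}

// Toy stand-ins: `Wide` carries an extra word in its payload, while `Narrow`
// packs the same information as (interned slice, small header).
#[allow(dead_code)]
enum Wide<'a> {
    FnPtr { sig_tys: &'a [u32], header: Header, binder: usize },
    Other,
}
#[allow(dead_code)]
enum Narrow<'a> {
    FnPtr(&'a [u32], Header),
    Other,
}

fn main() {
    // On 64-bit targets, `Narrow` comes out one word (8 bytes) smaller.
    println!("{} -> {}", size_of::<Wide<'static>>(), size_of::<Narrow<'static>>());
}
```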
r? `@compiler-errors`
const vector passed through to codegen
This allows constant vectors using a repr(simd) type to be propagated
through to the backend by reusing the functionality used to do a similar
thing for the simd_shuffle intrinsic.
#118209
r? RalfJung
nontemporal_store: make sure that the intrinsic is truly just a hint
The `!nontemporal` flag for stores in LLVM *sounds* like it is just a hint, but actually, it is not -- at least on x86, non-temporal stores need very special treatment by the programmer or else the Rust memory model breaks down. LLVM still treats these stores as-if they were normal stores for optimizations, which is [highly dubious](https://github.com/llvm/llvm-project/issues/64521). Let's avoid all that dubiousness by making our own non-temporal stores be truly just a hint, which is possible on some targets (e.g. ARM). On all other targets, non-temporal stores become regular stores.
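For illustration, a minimal usage sketch on nightly (the wrapper function is mine, not from this PR): after this change the caller needs no extra fences, because the store is either a plain store or a genuine hint, depending on the target.

```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

/// Writes `val` through `dst`, hinting (where the target allows it soundly)
/// that the cache line need not be kept; on other targets this is just a
/// regular store.
unsafe fn write_streaming(dst: *mut u64, val: u64) {
    core::intrinsics::nontemporal_store(dst, val);
}

fn main() {
    let mut x = 0u64;
    unsafe { write_streaming(&mut x, 42) };
    assert_eq!(x, 42);
}
```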
~~Blocked on https://github.com/rust-lang/stdarch/pull/1541 propagating to the rustc repo, to make sure the `_mm_stream` intrinsics are unaffected by this change.~~
Fixes https://github.com/rust-lang/rust/issues/114582
Cc `@Amanieu` `@workingjubilee`