Commit Graph

2530 Commits

Author SHA1 Message Date
León Orell Valerian Liehr
a579a23a73
Rollup merge of #137603 - davidtwco:extern-types-no-deref, r=lcnr
codegen_llvm: avoid `Deref` impls w/ extern type

`rustc_codegen_llvm` relied on `Deref` impls where `Deref::Target` was or contained an extern type - in my experimental implementation of rust-lang/rfcs#3729, this isn't possible as the `Target` associated type's `?Sized` bound cannot be relaxed backwards compatibly (unless we come up with some way of doing this).

In later pull requests with the rust-lang/rfcs#3729 implementation, breakage like this could only occur for nightly users relying on the `extern_types` feature.

Upstreaming this to avoid needing to keep carrying this patch locally, and I think it'll necessarily need to change eventually.
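
Concretely, the sort of impl being dropped looks roughly like the sketch below (hypothetical names, nightly-only `extern_types`; not the actual `rustc_codegen_llvm` types):

```rust
// Sketch only: a `Deref` impl whose `Target` is an extern type, the pattern this
// commit removes from cg_llvm. Names here are illustrative.
#![feature(extern_types)]
use std::ops::Deref;

unsafe extern "C" {
    type OpaqueValue; // extern type: opaque, unsized, no known alignment
}

struct Value(*mut OpaqueValue);

impl Deref for Value {
    // Allowed today because `Deref::Target: ?Sized`; the RFC 3729 experiments make
    // keeping/relaxing this bound for extern types a compatibility hazard.
    type Target = OpaqueValue;

    fn deref(&self) -> &OpaqueValue {
        unsafe { &*self.0 }
    }
}
```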
2025-02-26 04:15:06 +01:00
León Orell Valerian Liehr
1511ccd6f8
Rollup merge of #137595 - folkertdev:remove-simd-pow-powi, r=RalfJung
remove `simd_fpow` and `simd_fpowi`

Discussed in https://github.com/rust-lang/rust/issues/137555

These functions are not exposed from `std::intrinsics::simd`, and not used anywhere outside of the compiler. They also don't lower to particularly good code at least on the major ISAs (I checked x86_64, aarch64, s390x, powerpc), where the vector is just spilled to the stack and scalar functions are used for the actual logic.

r? `@RalfJung`
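
For context, a rough sketch (nightly, unstable `portable_simd`; not code from this PR) of the lane-by-lane scalar fallback, which is roughly what the backends were emitting for these intrinsics anyway:

```rust
// Sketch: elementwise scalar powf stands in for the removed simd_fpow, since the
// vector was being spilled and handled one lane at a time regardless.
#![feature(portable_simd)]
use std::simd::f32x4;

fn pow_lanewise(base: f32x4, exp: f32x4) -> f32x4 {
    f32x4::from_array(std::array::from_fn(|i| base[i].powf(exp[i])))
}
```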
2025-02-25 13:07:40 +01:00
Folkert de Vries
60a268998c
remove simd_fpow and simd_fpowi 2025-02-25 09:20:10 +01:00
Michael Goulet
6c1f959288
Rollup merge of #137556 - RalfJung:simd_shuffle_const_generic, r=oli-obk
rename simd_shuffle_generic → simd_shuffle_const_generic

I've been confused by this name one time too often. ;)

r? `@oli-obk`
2025-02-24 19:21:51 -05:00
Michael Goulet
828a3a41b3
Rollup merge of #137417 - taiki-e:riscv-atomic, r=Amanieu
rustc_target: Add more RISC-V atomic-related features

This is a continuation of https://github.com/rust-lang/rust/pull/130877 and adds a few target features, including `zacas`, which was experimental in LLVM 19 and marked non-experimental in LLVM 20.

This adds the following target features to unstable riscv_target_feature:

- `za64rs` (Za64rs Extension 1.0): Reservation Set Size of at Most 64 Bytes
  ([definition in LLVM](https://github.com/llvm/llvm-project/blob/llvmorg-20.1.0-rc2/llvm/lib/Target/RISCV/RISCVFeatures.td#L227-L228), [available since LLVM 18](8649328060))
- `za128rs` (Za128rs Extension 1.0): Reservation Set Size of at Most 128 Bytes
  ([definition in LLVM](https://github.com/llvm/llvm-project/blob/llvmorg-20.1.0-rc2/llvm/lib/Target/RISCV/RISCVFeatures.td#L230-L231), [available since LLVM 18](8649328060))
  - IIUC, `za*rs` can be referenced when implementing helpers to reduce contention in synchronization primitives, like [`crossbeam_utils::CachePadded`](https://docs.rs/crossbeam-utils/latest/crossbeam_utils/struct.CachePadded.html). (relevant discussion: https://github.com/riscv/riscv-profiles/issues/79)
- `zacas` (Zacas Extension 1.0): Atomic Compare-And-Swap Instructions (`amocas.{w,d,q}{,.aq,.rl,.aqrl}` and `amocas.{b,h}{,.aq,.rl,.aqrl}` when `zabha` is also enabled)
  ([definition in LLVM](https://github.com/llvm/llvm-project/blob/llvmorg-20.1.0-rc2/llvm/lib/Target/RISCV/RISCVFeatures.td#L240-L243), [available as non-experimental since LLVM 20](614aeda93b))
  - This implies `zaamo`.
  - This is used to optimize CAS in existing atomics and/or to implement 64-bit/128-bit atomics on riscv32/riscv64 (e.g., https://github.com/taiki-e/portable-atomic/pull/173; see the usage sketch below).
  - Note that [LLVM does not automatically use this instruction for 64-bit/128-bit atomics on riscv32/riscv64 even if this feature is enabled, because doing it changes the ABI](876174ffd7/llvm/docs/RISCVUsage.rst (riscv-zacas-note)). (If the ability to do that is provided by LLVM in the future, it should probably be controlled by another ABI feature similar to `forced-atomics`.)
- `zama16b` (Zama16b Extension 1.0): Atomic 16-byte misaligned loads, stores and AMOs
  ([definition in LLVM](https://github.com/llvm/llvm-project/blob/llvmorg-20.1.0-rc2/llvm/lib/Target/RISCV/RISCVFeatures.td#L255-L256), [available since LLVM 19](b090569685))
  - IIUC, unlike AArch64 FEAT_LSE2 which also makes 16-byte aligned ldp ({i,u}128 load) atomic, this extension only affects instructions that are already considered atomic when naturally aligned; i.e., fld (f64 load) on riscv32 would not be atomic with or without this extension ([relevant QEMU code](b69801dd6b/target/riscv/insn_trans/trans_rvd.c.inc (L50-L62))).
- `zawrs` (Zawrs Extension 1.0): Wait on Reservation Set (`wrs.nto` and `wrs.sto`)
  ([definition in LLVM](https://github.com/llvm/llvm-project/blob/llvmorg-20.1.0-rc2/llvm/lib/Target/RISCV/RISCVFeatures.td#L258), [available as non-experimental since LLVM 17](d41a73aa94))
  - This is used to optimize synchronization primitives (e.g., Linux uses this for spinlocks (b8ddb0df30)).

Btw, the question of whether `zaamo` is implied by `zabha` or not, which was discussed in https://github.com/rust-lang/rust/pull/130877, has been resolved in LLVM 20, since LLVM now treats `zaamo` as implied by `zabha`/`zacas` (https://github.com/llvm/llvm-project/pull/115694), just like GCC and rustc.
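
As a rough usage sketch (not from this PR): once these are known target features, nightly code can gate on them, e.g. after building with `-C target-feature=+zacas`. The gate and function below are illustrative assumptions, not part of the change itself.

```rust
// Sketch only: selecting a CAS strategy based on `zacas`. Unstable RISC-V target
// features currently sit behind the nightly `riscv_target_feature` gate.
#![feature(riscv_target_feature)]

#[cfg(all(target_arch = "riscv64", target_feature = "zacas"))]
pub fn cas_strategy() -> &'static str {
    "amocas" // hardware compare-and-swap instructions are available
}

#[cfg(not(all(target_arch = "riscv64", target_feature = "zacas")))]
pub fn cas_strategy() -> &'static str {
    "lr/sc" // fall back to a load-reserved / store-conditional loop
}
```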

r? `@Amanieu`

`@rustbot` label +O-riscv +A-target-feature
2025-02-24 19:21:47 -05:00
Ralf Jung
0362775fb5 rename simd_shuffle_generic → simd_shuffle_const_generic 2025-02-24 19:13:23 +01:00
David Wood
a5615d3c62
codegen_llvm: avoid Deref impls w/ extern type
`rustc_codegen_llvm` relied on `Deref` impls where `Deref::Target` was
or contained an extern type - in my experimental implementation of
rust-lang/rfcs#3729, this isn't possible as the `Target` associated
type's `?Sized` bound cannot be relaxed backwards compatibly (unless we
come up with some way of doing this).

In later pull requests with the rust-lang/rfcs#3729 implementation,
breakage like this could only occur for nightly users relying on the
`extern_types` feature.

Upstreaming this to avoid needing to keep carrying this patch locally,
and I think it'll necessarily need to change eventually.
2025-02-24 08:08:55 +00:00
bors
e0be1a0262 Auto merge of #137271 - nikic:gep-nuw-2, r=scottmcm
Emit getelementptr inbounds nuw for pointer::add()

Lower pointer::add (via intrinsic::offset with unsigned offset) to getelementptr inbounds nuw on LLVM versions that support it. This lets LLVM make use of the pre-condition that the offset addition does not wrap in an unsigned sense. Together with inbounds, this also implies that the offset is non-negative.

Fixes https://github.com/rust-lang/rust/issues/137217.
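
A small user-level sketch of the precondition being encoded: `pointer::add` already requires the offset to stay within one allocation, so the address computation cannot wrap unsigned, which is exactly what `inbounds nuw` communicates to LLVM on versions that support the flag.

```rust
// Sketch: the documented safety contract of `pointer::add` is what licenses the
// `getelementptr inbounds nuw` lowering described above.
unsafe fn read_at(base: *const u32, idx: usize) -> u32 {
    // Safety contract (from `pointer::add`): `idx * size_of::<u32>()` must stay within
    // the same allocated object, so the unsigned offset addition cannot wrap.
    unsafe { *base.add(idx) }
}
```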
2025-02-24 03:06:16 +00:00
Trevor Gross
a2bb4d748d
Rollup merge of #136543 - RalfJung:round-ties-even, r=tgross35
intrinsics: unify rint, roundeven, nearbyint in a single round_ties_even intrinsic

LLVM has three intrinsics here that all do the same thing (when used in the default FP environment). There's no reason Rust needs to copy that historically-grown mess -- let's just have one intrinsic and leave it up to the LLVM backend to decide how to lower that.
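
For reference, a quick sketch of the user-visible semantics the single intrinsic backs (ties round to the nearest even value), using the stable `f64::round_ties_even`:

```rust
// Sketch: "round half to even"; in the default FP environment all three LLVM
// intrinsics agree on this, which is why one Rust intrinsic suffices.
fn main() {
    assert_eq!(2.5_f64.round_ties_even(), 2.0);
    assert_eq!(3.5_f64.round_ties_even(), 4.0);
    assert_eq!(2.4_f64.round_ties_even(), 2.0);
}
```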

Suggested by `@hanna-kruppe` in https://github.com/rust-lang/rust/issues/136459; Cc `@tgross35`

try-job: test-various
2025-02-23 14:30:25 -05:00
bors
15469f8f8a Auto merge of #137420 - matthiaskrgr:rollup-rr0q37f, r=matthiaskrgr
Rollup of 9 pull requests

Successful merges:

 - #136910 (Implement feature `isolate_most_least_significant_one` for integer types)
 - #137183 (Prune dead regionck code)
 - #137333 (Use `edition = "2024"` in the compiler (redux))
 - #137356 (Ferris 🦀 Identifier naming conventions)
 - #137362 (Add build step log for `run-make-support`)
 - #137377 (Always allow reusing cratenum in CrateLoader::load)
 - #137388 (Fix(lib/fs/tests): Disable rename POSIX semantics FS tests under Windows 7)
 - #137410 (Use StableHasher + Hash64 for dep_tracking_hash)
 - #137413 (jubilee cleared out the review queue)

r? `@ghost`
`@rustbot` modify labels: rollup
2025-02-22 13:32:44 +00:00
Taiki Endo
a343dcb97f rustc_target: Add more RISC-V atomic-related features 2025-02-22 16:15:14 +09:00
Manuel Drehwald
e2d250c3f6 update autodiff flags 2025-02-21 21:51:20 -05:00
Manuel Drehwald
f4e2218b13 clean up autodiff code/comments 2025-02-21 21:47:48 -05:00
Michael Goulet
e1819a889a Fix overcapturing, unsafe extern blocks, and new unsafe ops 2025-02-22 00:01:48 +00:00
Michael Goulet
76d341fa09 Upgrade the compiler to edition 2024 2025-02-22 00:01:48 +00:00
Matthias Krüger
636f4f19d8
Rollup merge of #137313 - oli-obk:push-ywvuqkxuqyom, r=petrochenkov
Some codegen_llvm cleanups

Use some more safe wrappers, which makes it possible to remove a large unsafe block.

As a next step we should probably look into safe extern fns.
2025-02-21 12:45:26 +01:00
Zachary S
7ba3d7b54e Remove BackendRepr::Uninhabited, replaced with an uninhabited: bool field in LayoutData.
Also update comments that referred to BackendRepr::Uninhabited.
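
A rough sketch of the shape of the change (simplified, assumed definitions; the real `LayoutData`/`BackendRepr` in rustc have more fields and data-carrying variants): inhabitedness becomes a flag queried independently of how the value is represented.

```rust
// Simplified sketch only, not the actual rustc types.
struct LayoutData {
    backend_repr: BackendRepr,
    uninhabited: bool, // previously a dedicated BackendRepr::Uninhabited variant
    // ...size, align, fields, etc.
}

enum BackendRepr {
    Scalar,
    ScalarPair,
    Vector,
    Memory,
    // no Uninhabited variant anymore
}
```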
2025-02-20 13:27:32 -06:00
Oli Scherer
ce7f58bd91 Merge two operations that were always performed together 2025-02-20 11:24:00 +00:00
Oli Scherer
ea7180813b Create safe helper for LLVMSetDLLStorageClass 2025-02-20 11:15:00 +00:00
Scott McMurray
6f9cfd694d Rework OperandRef::extract_field to stop calling to_immediate_scalar on things which are already immediates
That means it stops trying to truncate things that are already `i1`s.
2025-02-19 12:03:40 -08:00
Scott McMurray
642a705f71 PR feedback 2025-02-19 11:36:52 -08:00
Scott McMurray
511bf307f0 Emit trunc nuw for unchecked shifts and to_immediate_scalar
- For shifts this shrinks the IR by no longer needing an `assume` while still providing the UB information
- Having this on the `i8`→`i1` truncations will hopefully help with some places that have to load `i8`s or pass those in LLVM structs without range information
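
A brief sketch of the shift-side invariant (user-facing view, assuming the unstable `unchecked_shifts` feature): the caller promises the shift amount is below the bit width, so when codegen narrows the 32-bit amount to the operand's width no set bits are lost, which is the property `trunc nuw` expresses instead of a separate `assume`.

```rust
// Sketch (nightly, `unchecked_shifts`): `amt < 16` is guaranteed before the call.
#![feature(unchecked_shifts)]

fn shl16(x: u16, amt: u32) -> u16 {
    assert!(amt < u16::BITS);
    // SAFETY: checked above that `amt` is in range for a u16 shift.
    unsafe { x.unchecked_shl(amt) }
}
```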
2025-02-19 11:36:52 -08:00
Nikita Popov
31cc4c074d Emit getelementptr inbounds nuw for pointer::add() 2025-02-19 11:32:32 +01:00
Nikita Popov
5e9d8a7d55 Switch to the LLVMBuildGEPWithNoWrapFlags API
This API allows us to set the nuw flag as well.
2025-02-19 11:32:32 +01:00
Matthias Krüger
2bd65ebede
Rollup merge of #137210 - workingjubilee:fixup-passmode-import, r=RalfJung
compiler: Stop reexporting stuff in cg_llvm::abi

The reexports confuse tooling like rustdoc into thinking cg_llvm is the source of key types that originate in rustc_target.
2025-02-19 01:30:12 +01:00
Jubilee Young
2d2de18166 compiler: Stop reexporting stuff in cg_llvm::abi
The reexports confuse tooling like rustdoc into thinking cg_llvm is
the source of key types that originate in rustc_target.
2025-02-18 00:31:29 -08:00
bors
3b022d8cee Auto merge of #133852 - x17jiri:cold_path, r=saethlin
improve cold_path()

#120370 added a new instrinsic `cold_path()` and used it to fix `likely` and `unlikely`

However, in order to limit scope, the information about cold code paths is only used in 2-target switch instructions. This is sufficient for `likely` and `unlikely`, but limits the usefulness of `cold_path` for idiomatic Rust. For example, code like this:

```
if let Some(x) = y { ... }
```

may generate a 3-target switch:

```
switch y.discriminator:
0 => true branch
1 => false branch
_ => unreachable
```

and therefore marking a branch as cold will have no effect.

This PR improves `cold_path()` to work with arbitrary switch instructions.

Note that for 2-target switches, we can use `llvm.expect`, but for multiple targets we need to manually emit branch weights. I checked Clang and it also emits weights in this situation. Clang's weight calculation is more complex than this PR's, which I believe is mainly because `switch` in C/C++ can have multiple cases going to the same target.
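
A short usage sketch (nightly, unstable `core_intrinsics`; not code from this PR) of the pattern that now benefits, i.e. a cold arm behind a multi-target switch:

```rust
// Sketch: hinting that the `None` arm is rarely taken. With this PR the hint survives
// even when the match lowers to a switch with more than two targets.
#![feature(core_intrinsics)]
use core::intrinsics::cold_path;

fn unwrap_or_zero(y: Option<u32>) -> u32 {
    if let Some(x) = y {
        x
    } else {
        cold_path(); // rarely-taken branch
        0
    }
}
```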
2025-02-18 07:49:09 +00:00
Nicholas Nethercote
fd7b4bf4e1 Move methods from Map to TyCtxt, part 2.
Continuing the work started in #136466.

Every method gains a `hir_` prefix, though for the ones that already
have a `par_` or `try_par_` prefix I added the `hir_` after that.
2025-02-18 10:17:44 +11:00
Jiri Bobek
7bb5f4dd78 improve cold_path() 2025-02-17 06:39:58 +01:00
Matthias Krüger
fab38375bc
Rollup merge of #137095 - saethlin:use-hash64-for-hashes, r=workingjubilee
Replace some u64 hashes with Hash64

I introduced the Hash64 and Hash128 types in https://github.com/rust-lang/rust/pull/110083, essentially as a mechanism to prevent hashes from landing in our leb128 encoding paths. If you just have a u64 or u128 field in a struct and then derive Encodable/Decodable, that number gets leb128 encoding. So if you need to store a hash or some other value which behaves very close to a hash, don't store it as a u64.

This reverts part of https://github.com/rust-lang/rust/pull/117603, which turned an encoded Hash64 into a u64.

Based on https://github.com/rust-lang/rust/pull/110083, I don't expect this to be perf-sensitive on its own, though I expect that it may help stabilize some of the small rmeta size fluctuations we currently see in perf reports.
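
A small worked sketch of why this matters: uniformly distributed 64-bit hash values sit at the top of the LEB128 range and take the maximum number of bytes, unlike the small integers the variable-length encoding is meant for.

```rust
// Sketch: byte length of an unsigned LEB128 encoding (7 payload bits per byte).
fn uleb128_len(mut v: u64) -> usize {
    let mut n = 1;
    while v >= 0x80 {
        v >>= 7;
        n += 1;
    }
    n
}

fn main() {
    assert_eq!(uleb128_len(5), 1); // small counters stay small
    assert_eq!(uleb128_len(0xDEAD_BEEF_DEAD_BEEF), 10); // a typical hash needs 10 bytes
}
```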
2025-02-17 06:38:14 +01:00
Ben Kimock
4cf21866e8 Move hashes from rustc_data_structure to rustc_hashes so they can be shared with rust-analyzer 2025-02-16 16:18:30 -05:00
Jacob Pratt
d3556c6644
Rollup merge of #136545 - durin42:nvptx64-align, r=nikic
nvptx64: update default alignment to match LLVM 21

This changed in llvm/llvm-project@91cb8f5d32. The commit itself is mostly about some intrinsic instructions, but as an aside it also mentions something about addrspace for tensor memory, which I believe is what this string is telling us.

`@rustbot` label: +llvm-main
2025-02-16 00:51:24 -05:00
bors
bdc97d1046 Auto merge of #136575 - scottmcm:nsuw-math, r=nikic
Set both `nuw` and `nsw` in slice size calculation

There's an old note in the code to do this, and now that [LLVM-C has an API for it](f0b8ff1251/llvm/include/llvm-c/Core.h (L4403-L4408)), we might as well. And it's been there since what looks like LLVM 17 de9b6aa341, so it doesn't even need to be conditional.

(There's other places, like `RawVecInner` or `Layout`, that might want to do things like this too, but I'll leave those for a future PR.)
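
A sketch of the invariant that justifies both flags (user-level view, not compiler code): a slice occupies at most `isize::MAX` bytes, so the element-size-times-length multiply can neither wrap as unsigned nor overflow as signed.

```rust
// Sketch: this multiply is the "slice size calculation"; Rust's allocation-size limit
// (no object larger than isize::MAX bytes) is what makes both `nuw` and `nsw` sound.
fn byte_size<T>(s: &[T]) -> usize {
    std::mem::size_of::<T>() * s.len()
}
```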
2025-02-14 14:21:29 +00:00
bors
905b1bf1cc Auto merge of #137010 - workingjubilee:rollup-g00c07v, r=workingjubilee
Rollup of 9 pull requests

Successful merges:

 - #135439 (Make `-O` mean `OptLevel::Aggressive`)
 - #136460 (Simplify `rustc_span` `analyze_source_file`)
 - #136904 (add `IntoBounds` trait)
 - #136908 ([AIX] expect `EINVAL` for `pthread_mutex_destroy`)
 - #136924 (Add profiling of bootstrap commands using Chrome events)
 - #136951 (Use the right binder for rebinding `PolyTraitRef`)
 - #136981 (ci: switch loongarch jobs to free runners)
 - #136992 (Update backtrace)
 - #136993 ([cg_llvm] Remove dead error message)

r? `@ghost`
`@rustbot` modify labels: rollup
2025-02-14 06:13:42 +00:00
Jubilee
e8d0d00798
Rollup merge of #136993 - dpaoliello:cleanllvm4, r=workingjubilee
[cg_llvm] Remove dead error message

Part of #135502

Discovered a dead error message in rustc_codegen_llvm, so removing it.

r? ``@Zalathar``
2025-02-13 21:37:54 -08:00
Scott McMurray
9ad6839f7a Set both nuw and nsw in slice size calculation
There's an old note in the code to do this, and now that LLVM-C has an API for it, we might as well.
2025-02-13 21:26:48 -08:00
Jubilee
864eba9fb1
Rollup merge of #136895 - maurer:fix-enum-discr, r=nikic
debuginfo: Set bitwidth appropriately in enum variant tags

Previously, we unconditionally set the bitwidth to 128 bits, the largest an enum discriminant could possibly be. Then, LLVM would cut down the constant by chopping off leading zeroes before emitting the DWARF. LLVM only supported 64-bit enumerators, so this would also have occasionally resulted in truncated data.

LLVM added support for 128-bit enumerators in llvm/llvm-project#125578

That patchset trusts the constant to describe how wide the variant tag is, so the high 64 bits of zeros are considered potentially load-bearing.

As a result, we went from emitting tags that looked like:
DW_AT_discr_value     (0xfe)

(because `dwarf::BestForm` selected `data1`)

to emitting tags that looked like:
DW_AT_discr_value	(<0x10> fe ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 )

This makes the `DW_AT_discr_value` encode at the bitwidth of the tag, which:
1. Is probably closer to our intentions in terms of describing the data.
2. Doesn't invoke the 128-bit support which may not be supported by all debuggers / downstream tools.
3. Will result in smaller debug information.
2025-02-13 17:46:08 -08:00
Daniel Paoliello
bfdc96114c [cg_llvm] Remove dead error message 2025-02-13 15:04:39 -08:00
clubby789
2966256133 Make -O mean -C opt-level=3 2025-02-13 19:47:55 +00:00
Jacob Pratt
f7d5285062
Rollup merge of #136881 - dpaoliello:cleanllvm3, r=Zalathar
cg_llvm: Reduce visibility of all functions in the llvm module

Next part of #135502

This reduces the visibility of all functions in the `llvm` module to `pub(crate)` and marks the `enzyme_ffi` modules with `#![expect(dead_code)]` (as previously discussed: <https://github.com/rust-lang/rust/pull/135502#discussion_r1915608085>).

r? ``@Zalathar``
2025-02-13 03:53:31 -05:00
Jacob Pratt
1f669fdc7d
Rollup merge of #136858 - safinaskar:parallel-cleanup-2025-02-11-07-54, r=SparrowLii
Parallel-compiler-related cleanup

Parallel-compiler-related cleanup

I carefully split changes into commits. Commit messages are self-explanatory. Squashing is not recommended.

cc "Parallel Rustc Front-end" https://github.com/rust-lang/rust/issues/113349

r? SparrowLii

``@rustbot`` label: +WG-compiler-parallel
2025-02-13 03:53:31 -05:00
Daniel Paoliello
e7cef26a3d cg_llvm: Reduce visibility of all functions in the llvm module 2025-02-13 12:36:25 +11:00
Zalathar
659e20fa75 Remove LLVMGetModuleContext
This was unused after the removal of `-Zprofile` in #131829.
2025-02-13 12:36:09 +11:00
Jacob Pratt
33c186baf7
Rollup merge of #136807 - workingjubilee:merge-gpus-to-get-the-arcradeongeforce, r=bjorn3
compiler: internally merge `PtxKernel` into `GpuKernel`

r? ``@bjorn3`` for review
2025-02-12 20:10:00 -05:00
Jacob Pratt
0de2341fef
Rollup merge of #136217 - taiki-e:csky-asm-flags, r=Amanieu
Mark condition/carry bit as clobbered in C-SKY inline assembly

C-SKY's compare instructions and some arithmetic/logical instructions modify the condition/carry bit (C) in the PSR, but there is currently no way to mark it as clobbered in `asm!`.

This PR marks it as clobbered except when [`options(preserves_flags)`](https://doc.rust-lang.org/reference/inline-assembly.html#r-asm.options.supported-options.preserves_flags) is used.

Refs:
- Section 1.3 "Programming model" and Section 1.3.5 "Condition/carry bit" in CSKY Architecture user_guide:
  9f7121f7d4/CSKY%20Architecture%20user_guide.pdf

  > Under user mode, condition/carry bit (C) is located in the lowest bit of PSR, and it can be
accessed and changed by common user instructions. It is the only data bit that can be visited
under user mode in PSR.

  > Condition or carry bit represents the result after one operation. Condition/carry bit can be
clearly set according to the results of compare instructions or unclearly set as some
high-precision arithmetic or logical instructions. In addition, special instructions such as
DEC[GT,LT,NE] and XTRB[0-3] will influence the value of condition/carry bit.

- Register definition in LLVM:
  https://github.com/llvm/llvm-project/blob/llvmorg-19.1.0/llvm/lib/Target/CSKY/CSKYRegisterInfo.td#L88

cc ```@Dirreke``` ([target maintainer](aa6f5ab18e/src/doc/rustc/src/platform-support/csky-unknown-linux-gnuabiv2.md (target-maintainers)))

r? ```@Amanieu```

```@rustbot``` label +O-csky +A-inline-assembly
2025-02-12 20:09:58 -05:00
Jacob Pratt
a53cd3c979
Rollup merge of #135025 - Flakebi:alloca-addrspace, r=nikic
Cast allocas to default address space

Pointers for variables all need to be in the same address space for correct compilation. Therefore, ensure that even if an `alloca` is created in a different address space, it is cast to the default address space before its value is used.

This is necessary for the amdgpu target and others where the default address space for `alloca`s is not 0.

For example the following code compiles incorrectly when not casting the address space to the default one:

```rust
fn f(p: *const i8 /* addrspace(0) */, cond: bool) -> *const i8 /* addrspace(0) */ {
    let local = 0i8; /* addrspace(5) */
    let res = if cond { p } else { &raw const local };
    res
}
```

results in

```llvm
    %local = alloca i8, addrspace(5)
    %res = alloca ptr, addrspace(5)

if:
    ; Store 64-bit flat pointer
    store ptr %p, ptr addrspace(5) %res

else:
    ; Store 32-bit scratch pointer
    store ptr addrspace(5) %local, ptr addrspace(5) %res

ret:
    ; Load and return 64-bit flat pointer
    %res.load = load ptr, ptr addrspace(5) %res
    ret ptr %res.load
```

For amdgpu, `addrspace(0)` are 64-bit pointers, `addrspace(5)` are 32-bit pointers.
The above code may store a 32-bit pointer and read it back as a 64-bit pointer, which is obviously wrong and cannot work. Instead, we need to `addrspacecast %local to ptr addrspace(0)`, then we store and load the correct type.

Tracking issue: #135024
2025-02-12 20:09:56 -05:00
Matthew Maurer
d82219a4fa debuginfo: Set bitwidth appropriately in enum variant tags
Previously, we unconditionally set the bitwidth to 128 bits, the largest
a discriminator could possibly be. Then, LLVM would cut down the constant by
chopping off leading zeroes before emitting the DWARF. LLVM only
supported 64-bit discriminators, so this would also have occasionally
resulted in truncated data (or an assert) if more than 64 bits were
used.

LLVM added support for 128-bit enumerators in llvm/llvm-project#125578

That patchset also trusts the constant to describe how wide the variant tag is.
As a result, we went from emitting tags that looked like:
DW_AT_discr_value     (0xfe)

(`form1`)

to emitting tags that looked like:
DW_AT_discr_value	(<0x10> fe ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00 )

This makes the `DW_AT_discr_value` encode at the bitwidth of the tag,
which:
1. Is probably closer to our intentions in terms of describing the data.
2. Doesn't invoke the 128-bit support which may not be supported by all
   debuggers / downstream tools.
3. Will result in smaller debug information.
2025-02-12 18:01:42 +00:00
Matthias Krüger
9e89feefb9
Rollup merge of #135549 - oli-obk:push-tmxtpnrloyqu, r=compiler-errors
Document some safety constraints and use more safe wrappers

Lots of unsafe codegen_llvm code has safe wrappers already, so I used some of them and added some where applicable. I stopped here because this diff is large enough and should probably be reviewed independently of other changes.
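
A generic sketch of the pattern being applied (the FFI symbol below is made up, standing in for an LLVM-C binding): the unsafe call and its safety argument live in one wrapper, and call sites stay safe.

```rust
// Sketch only: `LLVMExampleSetFlag` is a hypothetical symbol, not a real LLVM-C API.
use std::ffi::c_void;
use std::ptr::NonNull;

unsafe extern "C" {
    fn LLVMExampleSetFlag(module: *mut c_void, flag: i32);
}

/// Safe wrapper: the `NonNull` argument documents and enforces the non-null requirement,
/// so callers no longer need their own `unsafe` block.
fn set_flag(module: NonNull<c_void>, flag: i32) {
    unsafe { LLVMExampleSetFlag(module.as_ptr(), flag) }
}
```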
2025-02-12 06:07:35 +01:00
Oli Scherer
dcf1e4d72b Document some safety constraints and use more safe wrappers 2025-02-11 09:47:13 +00:00
Oli Scherer
4b83038d63 Add a safe wrapper for WriteBitcodeToFile 2025-02-11 09:41:22 +00:00