This reflects the fact that we can't compute meaningful info for a function
that wasn't instrumented and therefore doesn't have `function_coverage_info`.
Make our `DIFlags` match `LLVMDIFlags` in the LLVM-C API
In order to be able to use a mixture of LLVM-C and C++ bindings for debuginfo, our Rust-side `DIFlags` needs to have the same layout as LLVM-C's `LLVMDIFlags`, and we also need to be able to convert it to the `DIFlags` accepted by LLVM's C++ API.
Internally, LLVM converts between the two types with a simple cast. We can't necessarily rely on that always being true, and LLVM doesn't expose a conversion function, so we have two potential options:
- Convert each bit/subvalue individually
- Statically assert that doing a cast is actually fine
As long as both types do remain the same under the hood (which seems likely), the static-assert-and-cast approach is easier and faster. If the static assertions ever start failing against some future version of LLVM, we'll have to switch over to the convert-each-subvalue approach, which is a bit more error-prone.
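As a rough illustration of the static-assert-and-cast idea (types simplified and hypothetical, not the actual rustc definitions):

```rust
use std::mem::size_of;

// Simplified stand-ins for the Rust-side and LLVM-C flag types.
#[repr(transparent)]
#[derive(Clone, Copy)]
pub struct DIFlags(pub u32);

#[repr(transparent)]
#[derive(Clone, Copy)]
pub struct LLVMDIFlags(pub u32);

// If the layouts ever diverge, this fails at compile time, and we'd
// switch to converting each bit/subvalue individually instead.
const _: () = assert!(size_of::<DIFlags>() == size_of::<LLVMDIFlags>());

pub fn to_llvm(flags: DIFlags) -> LLVMDIFlags {
    // A plain cast, justified by the assertion above.
    LLVMDIFlags(flags.0)
}
```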
---
Extracted from #134009, though this PR ended up choosing the static-assert-and-cast approach over the convert-each-subvalue approach.
remove support for the (unstable) #[start] attribute
As explained by `@Noratrieb`:
`#[start]` should be deleted. It's nothing but an accidentally leaked implementation detail that's a not very useful mix between "portable" entrypoint logic and bad abstraction.
I think the way the stable user-facing entrypoint should work (and works today on stable) is pretty simple:
- `std`-using cross-platform programs should use `fn main()`. The compiler, together with `std`, will then ensure that code ends up at `main` (by having a platform-specific entrypoint that gets directed through `lang_start` in `std` to `main` - but that's just an implementation detail)
- `no_std` platform-specific programs should use `#![no_main]` and define their own platform-specific entrypoint symbol with `#[no_mangle]`, like `main`, `_start`, `WinMain` or `my_embedded_platform_wants_to_start_here`. Most of them only support a single platform anyways, and need cfg for the different platforms' ways of passing arguments or other things *anyways*
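For concreteness, here is a minimal sketch of that second pattern, assuming a hypothetical platform whose entrypoint is a C-style `main`:

```rust
#![no_std]
#![no_main]

use core::ffi::{c_char, c_int};

// The platform-specific entrypoint symbol; the name and signature are
// whatever this particular platform expects.
#[no_mangle]
pub extern "C" fn main(_argc: c_int, _argv: *const *const c_char) -> c_int {
    0
}

#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
```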
`#[start]` is in a super weird position of being neither of those two. It tries to pretend that it's cross-platform, but its signature is a total lie. Those arguments are just stubbed out to zero on ~~Windows~~ wasm, for example. It also only handles the platform-specific entrypoints for a few platforms that are supported by `std`, like Windows or Unix-likes. `my_embedded_platform_wants_to_start_here` can't use it, and neither could a libc-less Linux program.
So we have an attribute that only works in some cases anyways, that has a signature that's a total lie (and a signature that, as I might want to add, has changed recently, and that I definitely would not be comfortable giving *any* stability guarantees on), and where there's a pretty easy way to get things working without it in the first place.
Note that this feature has **not** been RFCed in the first place.
*This comment was posted [in May](https://github.com/rust-lang/rust/issues/29633#issuecomment-2088596042) and so far nobody spoke up in that issue with a usecase that would require keeping the attribute.*
Closes https://github.com/rust-lang/rust/issues/29633
try-job: x86_64-gnu-nopt
try-job: x86_64-msvc-1
try-job: x86_64-msvc-2
try-job: test-various
When LLVM's location discriminator value limit is exceeded, emit locations with dummy spans instead of dropping them entirely
Dropping them fails `-Zverify-llvm-ir`.
Fixes #135332.
r? `@jieyouxu`
Revert most of #133194 (except the test and the comment fixes), then re-fix
not emitting locations at all when the correct location discriminator value
exceeds LLVM's capacity.
Add gpu-kernel calling convention
The amdgpu-kernel calling convention was reverted in commit f6b21e90d1 (#120495 and https://github.com/rust-lang/rust-analyzer/pull/16463) due to inactivity in the amdgpu target.
Introduce a `gpu-kernel` calling convention that translates to `ptx_kernel` or `amdgpu_kernel`, depending on the target that Rust compiles for.
Tracking issue: #135467
amdgpu target tracking issue: #135024
Rename `BitSet` to `DenseBitSet`
r? `@Mark-Simulacrum` as you requested this in https://github.com/rust-lang/rust/pull/134438#discussion_r1890659739 after such a confusion.
This PR renames `BitSet` to `DenseBitSet` to make it less tempting as the go-to solution for bitmap needs, as well as to make its representation (and its pros/cons) clearer. It also expands the comments there to hopefully make it clearer when it's not a good fit, pointing to some alternative bitset types.
(This migrates the subtrees cg_gcc and clippy to use the new name in separate commits, for easier review by their respective owners, but they can obvs be squashed)
add `-Zmin-function-alignment`
tracking issue: https://github.com/rust-lang/rust/issues/82232
This PR adds the `-Zmin-function-alignment=<align>` flag, that specifies a minimum alignment for all* functions.
### Motivation
This feature is requested by RfL [here](https://github.com/rust-lang/rust/issues/128830):
> i.e. the equivalents of `-fmin-function-alignment` ([GCC](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-fmin-function-alignment_003dn), Clang does not support it) / `-falign-functions` ([GCC](https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-falign-functions), [Clang](https://clang.llvm.org/docs/ClangCommandLineReference.html#cmdoption-clang1-falign-functions)).
>
> For the Linux kernel, the behavior wanted is that of GCC's `-fmin-function-alignment` and Clang's `-falign-functions`, i.e. align all functions, including cold functions.
>
> There is [`feature(fn_align)`](https://github.com/rust-lang/rust/issues/82232), but we need to do it globally.
### Behavior
The `fn_align` feature does not have an RFC. It was decided at the time that it would not be necessary, but maybe we feel differently about that now? In any case, here are the semantics of this flag:
- `-Zmin-function-alignment=<align>` specifies the minimum alignment of all* functions
- the `#[repr(align(<align>))]` attribute can be used to override the function alignment on a per-function basis: when `-Zmin-function-alignment` is specified, the attribute's value is only used when it is higher than the value passed to `-Zmin-function-alignment`.
- the target may decide to use a higher value (e.g. on x86_64 the minimum that LLVM generates is 16)
- The highest supported alignment in Rust is `2^29`: I checked a bunch of targets, and those that align functions at all emit the `.p2align 29` directive (some GPU stuff does not have function alignment).
*: Only with `build-std` would the minimum alignment also be applied to `std` functions.
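For illustration, a sketch of how the flag and the attribute interact under these rules, assuming `-Zmin-function-alignment=64` is passed:

```rust
#![feature(fn_align)]

#[repr(align(16))] // lower than the flag's value: 64 wins
fn small_align() {}

#[repr(align(128))] // higher than the flag's value: 128 wins
fn big_align() {}

fn plain() {} // no attribute: aligned to at least 64
```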
---
cc `@ojeda`
r? `@workingjubilee` you were active on the tracking issue
Adds `#[rustc_force_inline]` which is similar to always inlining but
reports an error if the inlining was not possible, and which always
attempts to inline annotated items, regardless of optimisation levels.
It can only be applied to free functions to guarantee that the MIR
inliner will be able to resolve calls.
llvm: Ignore error value that is always false
See llvm/llvm-project#121851
For LLVM 20+, this function (`renameModuleForThinLTO`) has no return value. For prior versions of LLVM, this never failed, but had a signature which allowed an error value people were handling.
`@rustbot` label: +llvm-main
r? `@nikic`
Wait a moment before approving while the llvm-main infrastructure picks it up.
Add support for wasm exception handling to Emscripten target
This is a draft because we need some additional setting for the Emscripten target to select between the old exception handling and the new exception handling. I don't know how to add a setting like that, and would appreciate advice from Rust folks. We could maybe choose to use the new exception handling if `-Ctarget-feature=+exception-handling` is passed? I tried this, but I get errors from LLVM, so I'm not doing it right.
Add a notion of "some ABIs require certain target features"
I think I finally found the right shape for the data and checks that I recently added in https://github.com/rust-lang/rust/pull/133099, https://github.com/rust-lang/rust/pull/133417, https://github.com/rust-lang/rust/pull/134337: we have a notion of "this ABI requires the following list of target features, and it is incompatible with the following list of target features". Both `-Ctarget-feature` and `#[target_feature]` are updated to ensure we follow the rules of the ABI. This removes all the "toggleability" stuff introduced before, though we do keep the notion of a fully "forbidden" target feature -- this is needed to deal with target features that are actual ABI switches, and hence are needed to even compute the list of required target features.
We always explicitly (un)set all required and in-conflict features, just to avoid potential trouble caused by the default features of whatever the base CPU is. We do this *before* applying `-Ctarget-feature` to maintain backward compatibility; this poses a slight risk of missing some implicit feature dependencies in LLVM but has the advantage of not breaking users that deliberately toggle ABI-relevant target features. They get a warning but the feature does get toggled the way they requested.
For now, our logic supports x86, ARM, and RISC-V (just like the previous logic did). Unsurprisingly, RISC-V is the nicest. ;)
As a side-effect this also (unstably) allows *enabling* `x87` when that is harmless. I used the opportunity to mark SSE2 as required on x86-64, to better match the actual logic in LLVM and because all x86-64 chips do have SSE2. This infrastructure also prepares us for requiring SSE on x86-32 when we want to use that for our ABI (and for float semantics sanity), see https://github.com/rust-lang/rust/issues/133611, but no such change is happening in this PR.
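A sketch of the shape of the data described above (names hypothetical, not the actual rustc definitions):

```rust
/// Per-ABI constraints on target features.
pub struct AbiFeatureConstraints {
    /// Features this ABI requires to be enabled.
    pub required: &'static [&'static str],
    /// Features that are incompatible with this ABI.
    pub incompatible: &'static [&'static str],
}

// For example, for hardfloat x86-64, per the description above:
pub const X86_64_HARDFLOAT: AbiFeatureConstraints = AbiFeatureConstraints {
    required: &["x87", "sse2"],
    incompatible: &["soft-float"],
};
```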
r? `@workingjubilee`
[Debuginfo] Force enum `DISCR_*` to `static const u64` to allow for inspection via LLDB
see [here](https://rust-lang.zulipchat.com/#narrow/channel/317568-t-compiler.2Fwg-debugging/topic/Revamping.20Debuginfo/near/486614878) for more info.
This change mainly helps `*-msvc` targets debugged with LLDB. Currently, LLDB cannot inspect `static` struct fields, so the intended visualization for enums is only borderline functional, and niche enums with ranges of discriminants cannot be determined at all.
LLDB *can* inspect `static const` values (though for whatever reason, non-enum/non-u64 consts don't work).
This change adds `LLVMRustDIBuilderCreateQualifiedType` to the Rust FFI layer to wrap the discr type with a `const` modifier, as well as forcing all generated integer enum `DISCR_*` values to be u64's. Those values will only ever be used by debugger visualizers anyway, so it shouldn't be a huge deal, but I left a fixme comment for it just in case. The `tag` also still properly reflects the discriminant type, so no information is lost.
Range metadata was disabled for amdgpu due to a backend bug. I did not
encounter any problems when removing the workaround to enable range
metadata (tried compiling `core` and `alloc`), so I assume this has
been fixed in LLVM in the intervening years.
Remove the workaround to re-enable range metadata.
Pointers for variables all need to be in the same address space for
correct compilation. Therefore ensure that even if a global variable is
created in a different address space, it is cast to the default
address space before its value is used.
This is necessary for the amdgpu target and others where the default
address space for global variables is not 0.
For example `core` does not compile in debug mode when not casting the
address space to the default one because it tries to emit the following
(simplified) LLVM IR, containing a type mismatch:
```llvm
@alloc_0 = addrspace(1) constant <{ [6 x i8] }> <{ [6 x i8] c"bit.rs" }>, align 1
@alloc_1 = addrspace(1) constant <{ ptr }> <{ ptr addrspace(1) @alloc_0 }>, align 8
; ^ here a struct containing a `ptr` is needed, but it is created using a `ptr addrspace(1)`
```
For this to compile, we need to insert a constant `addrspacecast` before
we use a global variable:
```llvm
@alloc_0 = addrspace(1) constant <{ [6 x i8] }> <{ [6 x i8] c"bit.rs" }>, align 1
@alloc_1 = addrspace(1) constant <{ ptr }> <{ ptr addrspacecast (ptr addrspace(1) @alloc_0 to ptr) }>, align 8
```
As vtables are global variables as well, they are also created with an
`addrspacecast`. In the SSA backend, after a vtable global is created,
metadata is added to it. To add metadata, we need the non-casted global
variable. Therefore we strip away an addrspacecast if there is one, to
get the underlying global.
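A sketch of that stripping step in terms of the LLVM-C API (bindings assumed, e.g. via the `llvm-sys` crate; the actual rustc code may differ):

```rust
use llvm_sys::core::{LLVMGetConstOpcode, LLVMGetOperand, LLVMIsAConstantExpr};
use llvm_sys::prelude::LLVMValueRef;
use llvm_sys::LLVMOpcode;

// If the value is a constant `addrspacecast` expression, return the
// underlying global it wraps; otherwise return the value unchanged.
unsafe fn strip_addrspacecast(v: LLVMValueRef) -> LLVMValueRef {
    if !LLVMIsAConstantExpr(v).is_null()
        && LLVMGetConstOpcode(v) == LLVMOpcode::LLVMAddrSpaceCast
    {
        LLVMGetOperand(v, 0)
    } else {
        v
    }
}
```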
coverage: Store coverage source regions as `Span` until codegen (take 2)
This is an attempt to re-land #133418:
> Historically, coverage spans were converted into line/column coordinates during the MIR instrumentation pass.
> This PR moves that conversion step into codegen, so that coverage spans spend most of their time stored as Span instead.
> In addition to being conceptually nicer, this also reduces the size of coverage mappings in MIR, because Span is smaller than 4x u32.
That PR was reverted by #133608, because in some circumstances not covered by our test suite we were emitting coverage metadata that was causing `llvm-cov` to exit with an error (#133606).
---
The implementation here is *mostly* the same, but adapted for subsequent changes in the relevant code (e.g. #134163).
I believe that the changes in #134163 should be sufficient to prevent the problem that required the original PR to be reverted. But I haven't been able to reproduce the original breakage in a regression test, and the `llvm-cov` error message is extremely unhelpful, so I can't completely rule out the possibility of this breaking again.
r? jieyouxu (reviewer of the original PR)
In codegen, a used function with `FunctionCoverageInfo` but no mappings has
historically indicated a bug. However, that will no longer be the case after
moving some fallible span-processing steps into codegen.
Re-export more `rustc_span::symbol` things from `rustc_span`.
`rustc_span::symbol` defines some things that are re-exported from `rustc_span`, such as `Symbol` and `sym`. But it doesn't re-export some closely related things such as `Ident` and `kw`. So you can do `use rustc_span::{Symbol, sym}` but you have to do `use rustc_span::symbol::{Ident, kw}`, which is inconsistent for no good reason.
This commit re-exports `Ident`, `kw`, and `MacroRulesNormalizedIdent`, and changes many `rustc_span::symbol::` qualifiers to `rustc_span::`. This is a 300+ net line of code reduction, mostly because many files with two `use rustc_span` items can be reduced to one.
r? `@jieyouxu`
`rustc_span::symbol` defines some things that are re-exported from
`rustc_span`, such as `Symbol` and `sym`. But it doesn't re-export some
closely related things such as `Ident` and `kw`. So you can do `use
rustc_span::{Symbol, sym}` but you have to do `use
rustc_span::symbol::{Ident, kw}`, which is inconsistent for no good
reason.
This commit re-exports `Ident`, `kw`, and `MacroRulesNormalizedIdent`,
and changes many `rustc_span::symbol::` qualifiers in `compiler/` to
`rustc_span::`. This is a 200+ net line of code reduction, mostly
because many files with two `use rustc_span` items can be reduced to
one.
coverage: Dismantle `map_data.rs` by moving its responsibilities elsewhere
This is a series of incremental changes that combine to let us get rid of `coverageinfo/map_data.rs`, by moving all of its responsibilities into more appropriate places.
Some of the notable consequences are:
- We once again build the per-CGU file table on the fly while preparing individual covfun records, instead of building the whole table up-front. The up-front approach was introduced by #117042 to work around various other problems in generating the covmap/covfun records, but subsequent cleanups have made that approach no longer necessary.
- Expression conversion and mapping-region conversion are now performed directly in `mapgen::covfun`, which should make future changes easier.
- We no longer insert unused function instances into the same map that is also used to track used function instances. This helps to decouple the handling of used vs unused functions.
---
There should be no meaningful change to compiler output. The file table is no longer sorted, because reordering it would invalidate the file indices stored in individual covfun records, but the table order should still be deterministic (albeit arbitrary).
There are some subsequent cleanups that I intend to investigate, but this is enough change for one PR.
This patch dismantles what was left of `FunctionCoverage` in `map_data.rs`,
replaces `function_coverage_map` with a set, and overhauls how we prepare
covfun records for unused functions.
The checks in `is_eligible_for_coverage` include `is_fn_like`, but will also
exclude various function-like things that cannot possibly have coverage
instrumentation.
reject aarch64 target feature toggling that would change the float ABI
~~Stacked on top of https://github.com/rust-lang/rust/pull/133099. Only the last two commits are new.~~
The first new commit lays the groundwork for separately controlling whether a feature may be enabled or disabled. The second commit uses that to make it illegal to *disable* the `neon` feature (which is only possible via `-Ctarget-feature`, and so the new check just adds a warning). Enabling the `neon` feature remains allowed on targets that don't disable `neon` or `fp-armv8`, which is all our built-in targets. This way, the entire PR is not a breaking change.
Fixes https://github.com/rust-lang/rust/issues/131058 for hardfloat targets (together with https://github.com/rust-lang/rust/pull/133102 which fixed it for softfloat targets).
Part of https://github.com/rust-lang/rust/issues/116344.
coverage: Tidy up creation of covmap and covfun records
This is a small follow-up to #134163 that mostly just inlines and renames some variables, and adds a few comments.
It also slightly defers the creation of the LLVM value that holds the filename table, to just before the value is needed.
---
try-job: x86_64-mingw-2
try-job: dist-x86_64-linux
forbid toggling x87 and fpregs on hard-float targets
Part of https://github.com/rust-lang/rust/issues/116344, follow-up to https://github.com/rust-lang/rust/pull/129884:
The `x87` target feature on x86 and the `fpregs` target feature on ARM must not be disabled on a hardfloat target, as that would change the float ABI. However, *enabling* `fpregs` on ARM is [explicitly requested](https://github.com/rust-lang/rust/issues/130988) as it seems to be useful. Therefore, we need to refine the distinction of "forbidden" target features and "allowed" target features: all (un)stable target features can determine on a per-target basis whether they should be allowed to be toggled or not. `fpregs` then checks whether the current target has the `soft-float` feature, and if yes, `fpregs` is permitted -- otherwise, it is not. (Same for `x87` on x86).
Also fixes https://github.com/rust-lang/rust/issues/132351. Since `fpregs` and `x87` can be enabled on some builds and disabled on others, it would make sense that one can query it via `cfg`. Therefore, I made them behave in `cfg` like any other unstable target feature.
The first commit prepares the infrastructure, but does not change behavior. The second commit then wires up `fpregs` and `x87` with that new infrastructure.
r? `@workingjubilee`
In the LLVM-C API, boolean values are passed as `typedef int LLVMBool`, but our
Rust-side typedef was using `c_uint` instead.
Signed and unsigned integers have the same ABI on most platforms, but that
isn't universally true, so we should prefer to be consistent with LLVM.
Pass end position of span through inline ASM cookie
Before this PR, only the start position of the span was passed though the inline ASM cookie to diagnostics. LLVM 19 has full support for 64-bit inline ASM cookies; this PR uses that to pass the end position of the span in the upper 32 bits, meaning inline ASM diagnostics now point at the entire line the error occurred on, not just the first character of it.
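A sketch of that packing, with the start position in the low 32 bits and the end position in the upper 32 bits:

```rust
// Pack a span's start/end byte positions into a 64-bit inline ASM cookie.
fn pack_cookie(start: u32, end: u32) -> u64 {
    ((end as u64) << 32) | (start as u64)
}

// Recover both positions when a diagnostic comes back from LLVM.
fn unpack_cookie(cookie: u64) -> (u32, u32) {
    (cookie as u32, (cookie >> 32) as u32)
}
```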
codegen `#[naked]` functions using global asm
tracking issue: https://github.com/rust-lang/rust/issues/90957
Fixes #124375
This implements the approach suggested in the tracking issue: use the existing global assembly infrastructure to emit the body of `#[naked]` functions. The main advantage is that we now have full control over what gets generated, and are no longer dependent on LLVM not sneakily messing with our output (inlining, adding extra instructions, etc).
I discussed this approach with `@Amanieu` and while I think the general direction is correct, there is probably a bunch of stuff that needs to change or move around here. I'll leave some inline comments on things that I'm not sure about.
Combined with https://github.com/rust-lang/rust/pull/127853, if both accepted, I think that resolves all steps from the tracking issue.
r? `@Amanieu`
coverage: Rearrange the code for embedding per-function coverage metadata
This is a series of refactorings to the code that prepares and embeds per-function coverage metadata records (“covfun records”) in the `__llvm_covfun` linker section of the final binary. The `llvm-cov` tool reads this metadata from the binary when preparing a coverage report.
Beyond general cleanup, a big motivation behind these changes is to pave the way for re-landing an updated version of #133418.
---
There should be no change in compiler output, as demonstrated by the absence of (meaningful) changes to coverage tests.
The first patch is just moving code around, so I suggest looking at the other patches to see the actual changes.
---
try-job: x86_64-gnu
try-job: x86_64-msvc
try-job: aarch64-apple
rustc_target: ppc64 target string fixes for LLVM 20
LLVM continues to clean these up, and we continue to make this consistent. This is similar to 9caced7bad, e985396145, and a10e744faf.
`@rustbot` label: +llvm-main
Add the `power8-crypto` target feature
Add the `power8-crypto` target feature. This will enable adding some new PPC intrinsics in stdarch (specifically AES, SHA and CLMUL intrinsics). The implied target feature is from [here](https://github.com/llvm/llvm-project/blob/main/llvm/lib/Target/PowerPC/PPC.td)
`@rustbot` label A-target-feature O-PowerPC
coverage: Use a query to find counters/expressions that must be zero
As of #133446, this query (`coverage_ids_info`) determines which counter/expression IDs are unused. So with only a little extra work, we can take the code that was using that information to determine which coverage counters/expressions must be zero, and move that inside the query as well.
There should be no change in compiler output.
A bunch of cleanups
These are all extracted from a branch I have to get rid of driver queries. Most of the commits are not directly necessary for this, but were found in the process of implementing the removal of driver queries.
Previous PR: https://github.com/rust-lang/rust/pull/132410
This query (`coverage_ids_info`) already determines which counter/expression
IDs are unused, so it only takes a little extra effort to also determine which
counters/expressions must have a value of zero.
It was inconsistently done (sometimes even within a single function) and
most of the rest of the compiler uses fatal errors instead, which need
to be caught using catch_with_exit_code anyway. Using fatal errors
instead of ErrorGuaranteed everywhere in the driver simplifies things a
bit.
LLVM does not include an implementation of the va_arg instruction for
Xtensa. From what I understand, this is a conscious decision and
instead language frontends are encouraged to implement it themselves.
The rationale seems to be that loading values correctly requires
language and ABI-specific knowledge that LLVM lacks.
This is true of most architectures, and rustc already provides
implementations for a number of them. This commit extends the support
to include Xtensa.
See https://lists.llvm.org/pipermail/llvm-dev/2017-August/116337.html
for some discussion on the topic.
Unfortunately there does not seem to be a reference document for the
semantics of the va_list and va_arg on Xtensa. The most reliable source
is the GCC implementation, which this commit tries to follow. Clang also
provides its own compatible implementation.
This was tested for all the types that rustc allows in variadics.
Co-authored-by: Brian Tarricone <brian@tarricone.org>
Co-authored-by: Jonathan Bastien-Filiatrault <joe@x2a.org>
Co-authored-by: Paul Lietar <paul@lietar.net>
coverage: Use a query to identify which counter/expression IDs are used
Given that we already have a query to identify the highest-numbered counter ID in a MIR body, we can extend that query to also build bitsets of used counter/expression IDs. That lets us avoid some messy coverage bookkeeping during the main MIR traversal for codegen.
This does mean that we fail to treat some IDs as used in certain MIR-inlining scenarios, but I think that's fine, because it means that the results will be consistent across all instantiations of a function.
---
There's some more cleanup I want to do in the function coverage collector, since it isn't really collecting anything any more, but I'll leave that for future work.
Respect verify-llvm-ir option in the backend
We are currently unconditionally verifying the LLVM IR in the backend (twice), ignoring the value of the verify-llvm-ir option. This has substantial compile-time impact for debug builds.
Support input/output in vector registers of PowerPC inline assembly
This extends currently clobber-only vector registers (`vreg`) support to allow passing `#[repr(simd)]` types as input/output.
| Architecture | Register class | Target feature | Allowed types |
| ------------ | -------------- | -------------- | -------------- |
| PowerPC | `vreg` | `altivec` | `i8x16`, `i16x8`, `i32x4`, `f32x4` |
| PowerPC | `vreg` | `vsx` | `f32`, `f64`, `i64x2`, `f64x2` |
In addition to floats and `core::simd` types listed above, `core::arch` types and custom `#[repr(simd)]` types of the same size and type are also allowed. All allowed types and relevant target features are currently unstable.
r? `@Amanieu`
`@rustbot` label +O-PowerPC +A-inline-assembly
This is currently handled automatically by the fact that codegen doesn't visit
coverage statements in unused functions, but that will no longer be the case
when unused IDs are identified by a separate query instead.
Enable -Zshare-generics for inline(never) functions
This avoids inlining cross-crate generic items when possible that are
already marked inline(never), implying that the author is not intending
for the function to be inlined by callers. As such, having a local copy
may make it easier for LLVM to optimize but mostly just adds to binary
bloat and codegen time. In practice our benchmarks indicate this is
indeed a win for larger compilations, where the extra cost in dynamic
linking to these symbols is diminished compared to the advantages in
fewer copies that need optimizing in each binary.
It might also make sense to expand this with other heuristics (e.g.,
`#[cold]`) in the future, but this seems like a good starting point.
FWIW, I expect that cleaning up where we make the decision about what
should/shouldn't be shared is also a good idea. Way too much code needed
to be tweaked to check this. But I'm hoping to leave that for a
follow-up PR rather than blocking this on it.
This reduces code sizes and better respects programmer intent when
marking inline(never). Previously such a marking was essentially ignored
for generic functions, as we'd still inline them in remote crates.
coverage: Store coverage source regions as `Span` until codegen
Historically, coverage spans were converted into line/column coordinates during the MIR instrumentation pass.
This PR moves that conversion step into codegen, so that coverage spans spend most of their time stored as `Span` instead.
In addition to being conceptually nicer, this also reduces the size of coverage mappings in MIR, because `Span` is smaller than 4x u32.
---
There should be no changes to coverage output.
Support input/output in vector registers of s390x inline assembly (under asm_experimental_reg feature)
This extends currently clobber-only vector registers (`vreg`) support to allow passing `#[repr(simd)]` types, floats (f32/f64/f128), and integers (i32/i64/i128) as input/output.
This is unstable and gated under the new `#![feature(asm_experimental_reg)]` (tracking issue: https://github.com/rust-lang/rust/issues/133416). If the feature is not enabled, only clobbers are supported, as before.
| Architecture | Register class | Target feature | Allowed types |
| ------------ | -------------- | -------------- | -------------- |
| s390x | `vreg` | `vector` | `i32`, `f32`, `i64`, `f64`, `i128`, `f128`, `i8x16`, `i16x8`, `i32x4`, `i64x2`, `f32x4`, `f64x2` |
This matches the list of types that are supported by the vector registers in LLVM:
https://github.com/llvm/llvm-project/blob/llvmorg-19.1.0/llvm/lib/Target/SystemZ/SystemZRegisterInfo.td#L301-L313
In addition to `core::simd` types and floats listed above, custom `#[repr(simd)]` types of the same size and type are also allowed. All allowed types other than i32/f32/i64/f64/i128, and relevant target features are currently unstable.
Currently there is no SIMD type for s390x in `core::arch`, but this is tracked in https://github.com/rust-lang/rust/issues/130869.
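A minimal sketch of what the feature gate enables (requires the `vector` target feature; instruction choice purely illustrative):

```rust
#![feature(asm_experimental_reg)]
use core::arch::asm;

// Move an f64 through a vector register using `vlr` (vector load register).
fn copy_through_vreg(x: f64) -> f64 {
    let y: f64;
    unsafe {
        asm!("vlr {0}, {1}", out(vreg) y, in(vreg) x);
    }
    y
}
```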
cc https://github.com/rust-lang/rust/issues/130869 about vector facility support in s390x
cc https://github.com/rust-lang/rust/issues/125398 & https://github.com/rust-lang/rust/issues/116909 about f128 support in asm
`@rustbot` label +O-SystemZ +A-inline-assembly
Fix asm goto with outputs and move it to a separate feature gate
Tracking issue: #119364
This PR addresses 3 aspects of asm goto with outputs:
* Codegen is fixed. My initial implementation had an oversight which caused the output to be stored only in the fallthrough path, but not in the label blocks.
* Outputs can now be used with `options(noreturn)` if a label block is given.
* All of this is moved to a new feature gate, because we likely want to stabilise `asm_goto` before asm goto with outputs.
`@rustbot` labels: +A-inline-assembly +F-asm
When labels are present, the `noreturn` option really means that the asm block
won't fall through -- outputs can still be meaningfully used in the label
blocks.
A used function with no mappings has historically indicated a bug, but that
will no longer be the case after moving some fallible span-processing steps
into codegen.
Allow disabling ASan instrumentation for globals
AddressSanitizer adds instrumentation to global variables unless the [`no_sanitize_address`](https://llvm.org/docs/LangRef.html#global-attributes) attribute is set on them.
This commit extends the existing `#[no_sanitize(address)]` attribute to set this; previously it only had the desired effect on functions.
(cc https://github.com/rust-lang/rust/issues/39699)
The maximum discriminator value LLVM can currently encode is 2^12. If macro use
results in more than 2^12 calls to the same function attributed to the same
callsite, and those calls are MIR-inlined, we will require more than the maximum
discriminator value to completely represent the debug information. Once we reach
that point, drop the debug info instead.
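A sketch of the resulting policy (constant per the LLVM limit described above; names hypothetical):

```rust
/// The maximum discriminator value LLVM can currently encode.
const MAX_DISCRIMINATOR: u32 = 1 << 12;

/// Returns a usable discriminator, or `None` once the limit is exceeded,
/// at which point the location's debug info is dropped (and, after the
/// following commit, a warning is emitted).
fn usable_discriminator(candidate: u32) -> Option<u32> {
    (candidate <= MAX_DISCRIMINATOR).then_some(candidate)
}
```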
The test relies on the fact that inlining more than 2^12 calls at the same
callsite will trigger a panic (and, after the following commit, a warning) due
to LLVM limitations, but with `collapse_debuginfo` the callsites should not be
the same.
The behavior of the type system not only depends on the current
assumptions, but also on the current phase of the compiler. This is
mostly necessary as we need to decide whether and how to reveal
opaque types. We track this via the `TypingMode`.
CFI: Append debug location to CFI blocks
Currently we're not appending debug locations to the inserted CFI blocks. This shows up in #132615 and #100783. This change fixes that by passing down the debug location to the CFI type-test generation and appending it to the blocks.
Credits also belong to `@jakos-sec` who worked with me on this.
Add a default implementation for CodegenBackend::link
As a side effect this should add raw-dylib support to cg_gcc, as the default `ArchiveBuilderBuilder` that is used implements `create_dll_import_lib`. I haven't tested whether the raw-dylib support actually works, however.
coverage: Restrict empty-span expansion to only cover `{` and `}`
Coverage instrumentation has some tricky code for converting a coverage-relevant `Span` into a set of start/end line/byte-column coordinates that will be embedded in the CGU's coverage metadata.
A big part of this complexity is special code for handling empty spans, which are expanded into non-empty spans (if possible) because LLVM's coverage reporter does not handle empty spans well.
This PR simplifies that code by restricting it to only apply in two specific situations: when the character after the empty span is `{`, or the character before the empty span is `}`.
(As an added benefit, this means that the expanded spans no longer extend awkwardly beyond the end of a physical line, which was common under the previous implementation.)
Along the way, this PR also removes some unhelpful code for dealing with function source code spread across multiple files. Functions currently can't have coverage spans in multiple files, and if that ever changes (e.g. to properly support expansion regions) then this code will need to be completely overhauled anyway.
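A sketch of the restricted rule (source-text access simplified; not the actual implementation):

```rust
use std::ops::Range;

// Expand an empty span at byte `pos` only when it abuts a brace;
// otherwise leave it alone.
fn expand_empty_span(src: &str, pos: usize) -> Option<Range<usize>> {
    if src[pos..].starts_with('{') {
        Some(pos..pos + 1) // cover the `{` that follows the empty span
    } else if src[..pos].ends_with('}') {
        Some(pos - 1..pos) // cover the `}` that precedes the empty span
    } else {
        None
    }
}
```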
After link_binary the temporary files referenced by CodegenResults are
deleted, so calling link_binary again with the same CodegenResults
should not be allowed.
LLVM does not expect to ever see multiple dbg_declares for the same variable at the same
location with different values. proc-macros make it possible for arbitrary code,
including multiple calls that get inlined, to happen at any given location in the source
code. Add discriminators when that happens so these locations are different to LLVM.
This may interfere with the AddDiscriminators pass in LLVM, which is added by the
unstable flag -Zdebug-info-for-profiling.
Fixes #131944
Rollup of 5 pull requests
Successful merges:
- #132552 (Add v9, v8plus, and leoncasa target feature to sparc and use v8plus in create_object_file)
- #132745 (pointee_info_at: fix logic for recursing into enums)
- #132777 (try_question_mark_nop: update test for LLVM 20)
- #132785 (rustc_target: more target string fixes for LLVM 20)
- #132794 (Use a separate dir for r-a builds consistently in helix config)
r? `@ghost`
`@rustbot` modify labels: rollup
Trim and tidy includes in `rustc_llvm`
These includes tend to accumulate over time, and are usually only removed when something breaks in a new LLVM version, so it's nice to clean them up manually once in a while.
General strategy used for this PR:
- Remove all includes from `LLVMWrapper.h` that aren't needed by the header itself, transplanting them to individual source files as necessary.
- For each source file, temporarily remove each include if doing so doesn't cause a compile error.
- If a “required” include looks like it shouldn't be needed, try replacing it with its sub-includes, then trim that list.
- After doing all of the above, go back and re-add any removed include if the file does actually use things defined in that header, even if the header happens to also be included by something else.
Functions currently can't have mappings in multiple files, and if that ever
changes (e.g. to properly support expansion regions), this code will need to be
completely overhauled anyway.
We already had a dedicated `LocalFileId` index type, but previously we used a
raw `u32` for global file IDs, because index types were harder to pass through
FFI.
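A sketch of the change (macro as in `rustc_index`; the exact definition may differ):

```rust
rustc_index::newtype_index! {
    /// An index into the per-CGU (global) file table, replacing the raw
    /// `u32` that was previously used for global file IDs.
    pub struct GlobalFileId {}
}
```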
Simplify FFI calls for `-Ztime-llvm-passes` and `-Zprint-codegen-stats`
The existing code for these unstable LLVM-infodump flags was jumping through hoops to pass an allocated C string across the FFI boundary, when it's much simpler to just write to a `&RustString` instead.
coverage: Extract safe FFI wrapper functions to `llvm_cov`
This PR takes all of the inline `unsafe` calls in coverage codegen, and all the safe wrapper functions in `coverageinfo/mod.rs`, and moves them to a new `llvm_cov` submodule that is dedicated to safe FFI wrapper functions. This reduces the mixing of abstraction levels in the rest of coverage codegen.
As a follow-up, this PR also tidies up the names and signatures of several of the coverage FFI functions.
Set "symbol name" in raw-dylib import libraries to the decorated name
`windows-rs` received a bug report that mixing raw-dylib generated and the Windows SDK import libraries was causing linker failures: <https://github.com/microsoft/windows-rs/issues/3285>
The root cause turned out to be #124958, that is we are not including the decorated name in the import library and so the import name type is also not being correctly set.
This change modifies the generation of import libraries to set the "symbol name" to the fully decorated name and correctly marks the import as being data vs function.
Note that this also required some changes to how the symbol is named within Rust: for MSVC we now need to use the decorated name but for MinGW we still need to use partially decorated (or undecorated) name.
Fixes #124958
Passing i686 MSVC and MinGW build: <https://github.com/rust-lang/rust/actions/runs/11000433888?pr=130586>
r? `@ChrisDenton`
Add a new `wide-arithmetic` feature for WebAssembly
This commit adds a new rustc target feature named `wide-arithmetic` for WebAssembly targets. This corresponds to the [wide-arithmetic] proposal for WebAssembly which adds new instructions catered towards accelerating integer arithmetic larger than 64-bits. This proposal to WebAssembly is not standard yet so this new feature is flagged as an unstable target feature. Additionally Rust's LLVM version doesn't support this new feature yet since support will first be added in LLVM 20, so the feature filtering logic for LLVM is updated to handle this.
I'll also note that I'm not currently planning to add wasm-specific intrinsics to `std::arch::wasm32` at this time. The currently proposed instructions are all accessible through `i128` or `u128`-based operations which Rust already supports, so intrinsics shouldn't be necessary to get access to these new instructions.
[wide-arithmetic]: https://github.com/WebAssembly/wide-arithmetic
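For context, this is the kind of existing `u128`-based Rust code those instructions are meant to accelerate, with no new intrinsics needed:

```rust
// A full 64x64->128 multiply: exactly the pattern the wide-arithmetic
// proposal's new instructions target.
fn widening_mul(a: u64, b: u64) -> u128 {
    (a as u128) * (b as u128)
}

// 128-bit addition, which requires carry propagation across 64-bit halves.
fn add128(a: u128, b: u128) -> u128 {
    a.wrapping_add(b)
}
```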
rustc_codegen_llvm: Add a new 'pc' option to branch-protection
Add a new 'pc' option to -Z branch-protection for aarch64 that enables the use of PC as a diversifier in PAC branch protection code.
When the pauth-lr target feature is enabled in combination with -Z branch-protection=pac-ret,pc, the new 9.5-a instructions (pacibsppc, retaasppc, etc) will be generated.
mark some target features as 'forbidden' so they cannot be (un)set with -Ctarget-feature
The context for this is https://github.com/rust-lang/rust/issues/116344: some target features change the way floats are passed between functions. Changing those target features is unsound as code compiled for the same target may now use different ABIs.
So this introduces a new concept of "forbidden" target features (on top of the existing "stable" and "unstable" categories), and makes it a hard error to (un)set such a target feature. For now, the x86 and ARM feature `soft-float` is on that list. We'll have to make some effort to collect more relevant features, and similar features from other targets, but that can happen after the basic infrastructure for this has landed. (These features are being collected in https://github.com/rust-lang/rust/issues/131799.)
I've made this a warning for now to give people some time to speak up if this would break something.
MCP: https://github.com/rust-lang/compiler-team/issues/780
Support clobber_abi and vector registers (clobber-only) in PowerPC inline assembly
This supports `clobber_abi` which is one of the requirements of stabilization mentioned in #93335.
This basically does a similar thing I did in https://github.com/rust-lang/rust/pull/130630 to implement `clobber_abi` for s390x, but for powerpc/powerpc64/powerpc64le.
- This also supports vector registers (as `vreg`) as clobber-only; clobbering them is needed to implement `clobber_abi`.
- `vreg` should be able to accept `#[repr(simd)]` types as input/output if the unstable `altivec` target feature is enabled, but `core::arch::{powerpc,powerpc64}` vector types, `#[repr(simd)]`, and `core::simd` are all unstable, so the fact that this is currently clobber-only should not be considered a blocker for the `clobber_abi` implementation or stabilization. So I have not implemented it in this PR.
- See https://github.com/rust-lang/rust/pull/131551 (which is based on this PR) for a PR to implement this.
- (I'm not particular about whether that should be a separate PR or part of this PR, so I can merge that PR into this one if needed.)
Refs:
- PPC32 SysV: Section "Function Calling Sequence" in [System V Application Binary Interface PowerPC Processor Supplement](https://refspecs.linuxfoundation.org/elf/elfspec_ppc.pdf)
- PPC64 ELFv1: Section 3.2 "Function Calling Sequence" in [64-bit PowerPC ELF Application Binary Interface Supplement](https://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi.html#FUNC-CALL)
- PPC64 ELFv2: Section 2.2 "Function Calling Sequence" in [64-Bit ELF V2 ABI Specification](https://openpowerfoundation.org/specifications/64bitelfabi/)
- AIX: [Register usage and conventions](https://www.ibm.com/docs/en/aix/7.3?topic=overview-register-usage-conventions), [Special registers in the PowerPC®](https://www.ibm.com/docs/en/aix/7.3?topic=overview-special-registers-in-powerpc), [AIX vector programming](https://www.ibm.com/docs/en/aix/7.3?topic=concepts-aix-vector-programming)
- Register definition in LLVM: https://github.com/llvm/llvm-project/blob/llvmorg-19.1.0/llvm/lib/Target/PowerPC/PPCRegisterInfo.td#L189
If I understand the above four ABI documents correctly, then except for PPC32 SysV's VR (Vector Registers) and 32-bit AIX's r13 (32-bit AIX is currently not supported by rustc), there do not appear to be important differences in terms of implementing `clobber_abi`:
- The above four ABIs are consistent about FPR (0-13: volatile, 14-31: nonvolatile), CR (0-1,5-7: volatile, 2-4: nonvolatile), XER (volatile), and CTR (volatile).
- As for GPR, only the registers we are treating as reserved are slightly different
- r0, r3-r12 are volatile
- r1(sp, reserved), r14-31 are nonvolatile
- r2(reserved) is TOC pointer in PPC64 ELF/AIX, system-reserved register in PPC32 SysV (AFAIK used as thread pointer in Linux/BSDs)
- r13 (reserved for non-32-bit-AIX) is the thread pointer in PPC64 ELF, the small data area pointer register in PPC32 SysV, and "reserved under 64-bit environment; not restored across system calls[^r13]" in AIX
- As for FPSCR, volatile in PPC64 ELFv1/AIX, some fields are volatile only in certain situations (rest are volatile) in PPC32 SysV/PPC64 ELFv2.
- As for VR (Vector Registers), it is not mentioned in PPC32 SysV, v0-v19 are volatile in both in PPC64 ELF/AIX, v20-v31 are nonvolatile in PPC64 ELF, reserved or nonvolatile depending on the ABI ([vec-extabi vs vec-default in LLVM](https://reviews.llvm.org/D89684), we are [using vec-extabi](https://github.com/rust-lang/rust/pull/131341#discussion_r1797693299)) in AIX:
> When the default Vector enabled mode is used, these registers are reserved and must not be used.
> In the extended ABI vector enabled mode, these registers are nonvolatile and their values are preserved across function calls
I left a [FIXME comment about PPC32 SysV](https://github.com/rust-lang/rust/pull/131341#discussion_r1790496095) and added an ABI check for AIX.
- As for VRSAVE, it is not mentioned in PPC32 SysV, nonvolatile in PPC64 ELFv1, reserved in PPC64 ELFv2/AIX
- As for VSCR, it is not mentioned in PPC32 SysV/PPC64 ELFv1, some fields are volatile only in certain situations (rest are volatile) in PPC64 ELFv2, volatile in AIX
We are currently treating r1-r2, r13 (non-32-bit-AIX), r29-r31, LR, CTR, and VRSAVE as reserved.
We are currently not processing anything about FPSCR and VSCR, but I feel those are things that should be processed by `preserves_flags` rather than `clobber_abi` if we need to do something about them. (However, PPCRegisterInfo.td in LLVM does not seem to define anything about them.)
Replaces #111335 and #124279
cc `@ecnelises` `@bzEq` `@lu-zero`
r? `@Amanieu`
`@rustbot` label +O-PowerPC +A-inline-assembly
[^r13]: callee-saved, according to [LLVM](6a6af0246b/llvm/lib/Target/PowerPC/PPCCallingConv.td (L322)) and [GCC](a9173a50e7/gcc/config/rs6000/rs6000.h (L859)).
Reduce dependence on the target name
The target name can be anything with custom target specs. Matching on fields inside the target spec is much more robust than matching on the target name.
Also remove the unused is_builtin target spec field.
Port most of `--print=target-cpus` to Rust
The logic and formatting needed by `--print=target-cpus` has historically been carried out in C++ code. Originally it used `printf` to write directly to the console, but later it switched over to writing to a `std::ostringstream` and then passing its buffer to a callback function pointer.
This PR replaces that C++ code with a very simple function that writes a list of CPU names to a `&RustString`, with the rest of the logic and formatting being handled by ordinary safe Rust code.
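A sketch of the new division of labor (names hypothetical): the FFI side only fills a buffer with raw CPU names, and ordinary safe Rust does the formatting:

```rust
// Suppose the FFI call has left one CPU name per line in `names`.
fn print_target_cpus(names: &str) {
    println!("Available CPUs for this target:");
    for cpu in names.lines() {
        println!("    {cpu}");
    }
}
```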
Move versioned Apple LLVM targets from `rustc_target` to `rustc_codegen_ssa`
Fully specified LLVM targets contain the OS version on macOS/iOS/tvOS/watchOS/visionOS, and this version depends on the deployment target environment variables like `MACOSX_DEPLOYMENT_TARGET`, `IPHONEOS_DEPLOYMENT_TARGET` etc.
We would like to move this to later in the compilation pipeline, both because it feels impure to access environment variables when fetching target information, but mostly because we need access to more information from https://github.com/rust-lang/rust/pull/130883 to do https://github.com/rust-lang/rust/issues/118204. See also https://github.com/rust-lang/rust/pull/129342#issuecomment-2335156119 for some discussion.
The first and second commits do the actual refactor; they should be a non-functional change. The third commit adds diagnostics for invalid deployment targets, which are now possible to emit because we have access to the session.
Tested with the same commands as in https://github.com/rust-lang/rust/pull/130435.
r? `@petrochenkov`
Remove support for `-Zprofile` (gcov-style coverage instrumentation)
Tracking issue: #42524
MCP: https://github.com/rust-lang/compiler-team/issues/798
---
This PR removes the unstable `-Zprofile` flag, which enables “gcov-style” coverage instrumentation, along with its associated `-Zprofile-emit` configuration flag.
(The profile flag predates and is almost entirely separate from the stable `-Cinstrument-coverage` flag.)
Notably, the `-Zprofile` flag:
- Is largely untested in-tree, having only one run-make test that does not check whether its output is correct or useful.
- Has no known maintainer.
- Has seen no push towards stabilization.
- Has at least one severe regression reported in 2022 that apparently remains unaddressed.
- #100125
- Is confusingly named, since it appears to be more about coverage than performance profiling, and has nothing to do with PGO.
- Is fundamentally limited by relying on counters auto-inserted by LLVM, with no knowledge of Rust beyond debuginfo.
The OS version depends on the deployment target environment variables,
the reading of which we want to move to later in the compilation
pipeline, where more information is available (for example `env_depinfo`).
LLVM continues to align more 128-bit integers to 128-bits in the data
layout rather than relying on the high level language to do it. Update
SPARC target files to match and add a backcompat replacement for current
LLVMs.
See llvm/llvm-project#106951 for details
Rename `rustc_abi::Abi` to `BackendRepr`
Remove the confabulation of `rustc_abi::Abi` with what "ABI" actually means by renaming it to `BackendRepr`, and rename `Abi::Aggregate` to `BackendRepr::Memory`. The type never actually represented how things are passed, as that has to have `PassMode` considered, at minimum, but rather it just is how we represented some things to the backend. This conflation arose because LLVM, the primary backend at the time, would lower certain IR forms using certain ABIs. Even that only somewhat was true, as it broke down when one ventured significantly afield of what is described by the System V AMD64 ABI either by using different architectures, ABI-modifying IR annotations, the same architecture **with different ISA extensions enabled**, or other... unexpected delights.
Unfortunately both names are still somewhat of a misnomer right now, as people have written code for years based on this misunderstanding. Still, their original names are even moreso, and for better or worse, this backend code hasn't received as much maintenance as the rest of the compiler, lately. Actually arriving at a correct end-state will simply require us to disentangle a lot of code in order to fix, much of it pointlessly repeated in several places. Thus this is not an "actual fix", just a way to deflect further misunderstandings.
cg_llvm: Clean up FFI calls for operand bundles
All of these FFI functions have equivalents in the stable LLVM-C API, though `LLVMBuildCallBr` requires a temporary polyfill on LLVM 18.
This PR also creates a clear split between `OperandBundleOwned` and `OperandBundle`, and updates the internals of the owner to be a little less terrifying.
The initial naming of "Abi" was an awful mistake, conveying wrong ideas
about how psABIs worked and even more about what the enum meant.
It was only meant to represent the way the value would be described to
a codegen backend as it was lowered to that intermediate representation.
It was never meant to mean anything about the actual psABI handling!
The conflation is because LLVM typically will associate a certain form
with a certain ABI, but even that does not hold when the special cases
that actually exist arise, plus the IR annotations that modify the ABI.
Reframe `rustc_abi::Abi` as the `BackendRepr` of the type, and rename
`BackendRepr::Aggregate` as `BackendRepr::Memory`. Unfortunately, due to
the persistent misunderstandings, this too is now incorrect:
- Scattered ABI-relevant code is entangled with BackendRepr
- We do not always pre-compute a correct BackendRepr that reflects how
we "actually" want this value to be handled, so we leave the backend
interface to also inject various special-cases here
- In some cases `BackendRepr::Memory` is a "real" aggregate, but in
others it is in fact using memory, and in some cases it is a scalar!
Our rustc-to-backend lowering code handles this sort of thing right now.
That will eventually be addressed by lifting duplicated lowering code
to either rustc_codegen_ssa or rustc_target as appropriate.
cg_llvm: Clean up FFI calls for setting module flags
This is a combination of several inter-related changes to how module flags are set:
- Remove some unnecessary code for setting an `"LTOPostLink"` flag, which has been obsolete since LLVM 17.
- Define our own enum instead of relying on enum values defined by LLVM's unstable C++ API.
- Use safe wrapper functions to set module flags, instead of direct `unsafe` calls.
- Consistently pass pointer/length strings instead of C strings.
- Remove or shrink some `unsafe` blocks.
correct LLVMRustCreateThinLTOData arg types
`LLVMRustCreateThinLTOData` defined in rust as
```rust
pub fn LLVMRustCreateThinLTOData(
Modules: *const ThinLTOModule,
NumModules: c_uint,
PreservedSymbols: *const *const c_char,
PreservedSymbolsLen: c_uint,
) -> Option<&'static mut ThinLTOData>;
```
but in cpp as
```cpp
extern "C" LLVMRustThinLTOData *
LLVMRustCreateThinLTOData(LLVMRustThinLTOModule *modules, int num_modules,
const char **preserved_symbols, int num_symbols) {
```
(note `c_uint` vs `int` types). Make it actually `size_t`.
Also fixes the return type of `LLVMRustDIBuilderCreateOpLLVMFragment` to `uint64_t`, matching other similar functions nearby, which I assume should be correct.
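A sketch of the corrected Rust-side declaration, with both sides agreeing on `size_t`:

```rust
use libc::{c_char, size_t};

extern "C" {
    // `ThinLTOModule`/`ThinLTOData` as in the snippet above.
    pub fn LLVMRustCreateThinLTOData(
        Modules: *const ThinLTOModule,
        NumModules: size_t,
        PreservedSymbols: *const *const c_char,
        PreservedSymbolsLen: size_t,
    ) -> Option<&'static mut ThinLTOData>;
}
```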
cg_llvm: Use a type-safe helper to cast `&str` and `&[u8]` to `*const c_char`
In `rustc_codegen_llvm` there are many uses of `.as_ptr().cast()` to convert a string or byte-slice to `*const c_char`, which then gets passed through FFI.
This works, but is fragile, because there's nothing constraining the pointer cast to actually be from `u8` to `c_char`. If the original value changes to something else that has an `as_ptr` method, or the context changes to expect something other than `c_char`, the cast will silently do the wrong thing.
By making the cast more explicit via a helper method, we can be sure that it will either perform the intended cast, or fail at compile time.
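A minimal sketch of such a helper (trait and method names hypothetical):

```rust
use std::ffi::c_char;

// The cast is pinned to `u8 -> c_char`; if the source type or the expected
// pointer type ever changes, this stops compiling instead of silently
// doing the wrong cast.
pub trait AsCCharPtr {
    fn as_c_char_ptr(&self) -> *const c_char;
}

impl AsCCharPtr for str {
    fn as_c_char_ptr(&self) -> *const c_char {
        self.as_bytes().as_ptr().cast::<c_char>()
    }
}

impl AsCCharPtr for [u8] {
    fn as_c_char_ptr(&self) -> *const c_char {
        self.as_ptr().cast::<c_char>()
    }
}
```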
Replace some LLVMRust wrappers with calls to the LLVM C API
This PR removes the LLVMRust wrapper functions for getting/setting linkage and visibility, and replaces them with direct calls to the corresponding functions in LLVM's C API.
To make this convenient and sound, two pieces of supporting code have also been added:
- A simple proc-macro that derives `TryFrom<u32>` for fieldless enums
- A wrapper type for C enum values returned by LLVM functions, to ensure soundness if LLVM returns an enum value we don't know about
In a few places, the use of safe wrapper functions means that an `unsafe` block is no longer needed, so the affected code has changed its indentation level.
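A sketch of the two supporting pieces (names hypothetical, with a hand-written `TryFrom` standing in for the derive):

```rust
// A fieldless enum mirroring an LLVM C-API enum (illustrative subset).
#[derive(Debug, Clone, Copy, PartialEq)]
#[repr(u32)]
pub enum Visibility {
    Default = 0,
    Hidden = 1,
    Protected = 2,
}

impl TryFrom<u32> for Visibility {
    type Error = u32;
    fn try_from(raw: u32) -> Result<Self, u32> {
        Ok(match raw {
            0 => Self::Default,
            1 => Self::Hidden,
            2 => Self::Protected,
            _ => return Err(raw), // a value this binding doesn't know about
        })
    }
}

// What an FFI call returns: always a valid `u32`, decoded on demand, so an
// unknown future LLVM value can't become an invalid Rust enum.
#[repr(transparent)]
pub struct RawEnum<T>(u32, std::marker::PhantomData<T>);

impl<T: TryFrom<u32>> RawEnum<T> {
    pub fn to_rust(&self) -> Result<T, T::Error> {
        T::try_from(self.0)
    }
}
```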
rustc_target: Add pauth-lr aarch64 target feature
Add the pauth-lr target feature, corresponding to aarch64 FEAT_PAuth_LR. This feature has been added in LLVM 19.
It is currently not supported by the Linux hwcap and so we cannot add runtime feature detection for it at this time.
r? `@Amanieu`
coverage: Consolidate creation of covmap/covfun records
This code for creating covmap/covfun records during codegen was split across multiple functions and files for dubious historical reasons. Having it all in one place makes it easier to follow.
This PR also includes two semi-related cleanups:
- Getting the codegen context's `coverage_cx` state is made infallible, since it should always exist when running the code paths that need it.
- The value of `covfun_section_name` is saved in the codegen context, since it never changes at runtime, and the code that needs it has access to the context anyway.
---
Background: Coverage instrumentation generates two kinds of metadata that are embedded in the final binary. There is per-CGU information that goes in the `__llvm_covmap` linker section, and per-function information that goes in the `__llvm_covfun` section (except on Windows, where slightly different section names are used).
- removed extra bits from predicates queries that are no longer needed in the new system
- removed the need for `non_erasable_generics` to take in tcx and DefId, removed unused arguments in callers
Adding an extra `OnceCell` to `CrateCoverageContext` is much nicer than trying
to thread this string through multiple layers of function calls that already
have access to the context.
coverage: Pass coverage mappings to LLVM as separate structs
Instead of trying to cram *N* different kinds of coverage mapping data into a single list for FFI, pass *N* different lists of simpler structs.
This avoids the need to fill unused fields with dummy values, and avoids the need to tag structs with their underlying kind. It also lets us call the dedicated LLVM constructors for each different mapping type, instead of having to go through the complex general-purpose constructor.
Even though this adds multiple new structs to the FFI surface area, the resulting C++ code is simpler and shorter.
---
I've structured this mostly as a single atomic patch, rather than a series of incremental changes, because that avoids the need to make fiddly fixes to code that is about to be deleted anyway.
Continue to get rid of `ty::Const::{try_}eval*`
This PR mostly does:
* Removes all of the `try_eval_*` and `eval_*` helpers from `ty::Const`, and replace their usages with `try_to_*`.
* Remove `ty::Const::eval`.
* Rename `ty::Const::normalize` to `ty::Const::normalize_internal`. This function is still used in the normalization code itself.
* Fix some weirdness around the `TransmuteFrom` goal.
I'm happy to split it out further; for example, I could probably land the first part which removes the helpers, or the changes to codegen which are more obvious than the changes to tools.
r? BoxyUwU
Part of https://github.com/rust-lang/rust/issues/130704
Migrate `llvm::set_comdat` and `llvm::SetUniqueComdat` to LLVM-C FFI.
Note that we can now call `llvm::set_comdat` only when the target actually
supports adding comdat. As this has no convenient LLVM-C API, we
implement this as `TargetOptions::supports_comdat`.
Co-authored-by: Stuart Cook <Zalathar@users.noreply.github.com>
LLVM has added 3 new address spaces to support special Windows use
cases. These shouldn't trouble us for now, but LLVM requires matching
data layouts.
See llvm/llvm-project#111879 for details