Set branch protection function attributes
Since LLVM 19, it is necessary to set not only module flags, but also function attributes for branch protection on aarch64. See e15d67cfc2 for the relevant LLVM change.
Fixes https://github.com/rust-lang/rust/issues/127829.
Update compiler_builtins to 0.1.114
The `weak-intrinsics` feature was removed from compiler_builtins in https://github.com/rust-lang/compiler-builtins/pull/598, so the `compiler-builtins-weak-intrinsics` feature has been dropped from alloc/std/sysroot.
In https://github.com/rust-lang/compiler-builtins/pull/593, some builtins for f16/f128 were added. These don't work for all compiler backends, so add a `compiler-builtins-no-f16-f128` feature and use it to disable them for the cranelift and gcc backends.
deps: dedup object, wasmparser, wasm-encoder
* dedups one `object` copy; the remaining duplicate will be removed with the next `thorin-dwp` update
* `wasmparser` is pinned to minor versions, so a full merge isn't possible
* same with `wasm-encoder`
Turned off some features for `wasmparser` in `run-make-support` (see the feature list at https://github.com/bytecodealliance/wasm-tools/blob/v1.208.1/crates/wasmparser/Cargo.toml); it appears to work.
Since https://reviews.llvm.org/D118118, LLVM will no longer turn CMOVs
into branches if it comes from a `select` marked with an `unpredictable`
metadata attribute.
This PR introduces `core::intrinsics::select_unpredictable` which emits
such a `select` and uses it in the implementation of `binary_search_by`.
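For illustration, a rough sketch of how the intrinsic might be called (assuming the `select_unpredictable(cond, true_val, false_val) -> T` signature behind the unstable `core_intrinsics` gate; both value arguments are evaluated eagerly, like the underlying `select`):
```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

// Chooses between two already-computed values via an LLVM `select` tagged
// `unpredictable`, discouraging the backend from lowering it to a branch.
fn pick_probe(cond: bool, lo: usize, hi: usize) -> usize {
    core::intrinsics::select_unpredictable(cond, lo + (hi - lo) / 2, hi)
}

fn main() {
    assert_eq!(pick_probe(true, 0, 10), 5);
    assert_eq!(pick_probe(false, 0, 10), 10);
}
```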
compiler: Never debug_assert in codegen
In the name of Turing and his Hoarey heralds, assert our truths before creating a monster!
The `rustc_codegen_llvm` and `rustc_codegen_ssa` crates are fairly critical for rustc's correctness. Small mistakes here can easily result in undefined behavior, since a "small mistake" can mean something like "link and execute the wrong code". We should probably run any and all asserts in these modules unconditionally on whether this is a "debug build", and damn the costs in performance.
...Especially because the costs in performance seem to be *nothing*. It is not clear how much correctness we gain here, but I'll take free correctness improvements.
rustc_target: add known safe s390x target features
This pull request adds known safe target features for s390x (aka IBM Z systems).
Currently, these features are unstable since stabilizing the target features requires submitting proposals.
The `vector` feature was added in IBM Z13 (`arch11`), and this is a SIMD feature for the newer IBM Z systems.
The `backchain` attribute is the IBM Z way of adding frame-pointer-like unwinding capabilities (the "frame-pointer" switch on IBM Z and IBM POWER platforms adds _emulated_ frame pointers to the binary, which profilers can't use for unwinding the stack).
Both attributes can be applied at the LLVM module or function levels. However, the `backchain` attribute has to be enabled for all the functions in the call stack to get a successful unwind process.
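As a hedged sketch of what opting into the unstable feature could look like from Rust code (the `s390x_target_feature` gate name here is an assumption):
```rust
// Only meaningful when compiling for s390x; the gate name is assumed.
#![cfg_attr(target_arch = "s390x", feature(s390x_target_feature))]

#[cfg(target_arch = "s390x")]
#[allow(dead_code)]
#[target_feature(enable = "vector")]
unsafe fn sum_with_vector_facility(xs: &[i32]) -> i32 {
    // A real implementation would use the z13 (arch11) vector facility;
    // plain code keeps this sketch self-contained.
    xs.iter().sum()
}

fn main() {}
```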
Handle .init_array link_section specially on wasm
Given that wasm-ld now has support for [.init_array](8f2bd8ae68/llvm/lib/MC/WasmObjectWriter.cpp (L1852)), it appears we can easily implement that section by falling through to the normal path rather than taking the typical custom_section path for wasm.
wasm-ld appears to have a number of limitations: only one static with this `link_section` is allowed per crate, or else you hit the fatal error in the link above ("only one .init_array section fragment supported"); the sections do not get merged.
You can still call multiple constructors by setting it to an array.
```rust
unsafe extern "C" fn ctor() {
    println!("foo");
}

#[used]
#[link_section = ".init_array"]
static FOO: [unsafe extern "C" fn(); 2] = [ctor, ctor];
```
Another issue appears to be that if crate *A* depends on crate *B*, but *A* doesn't call any symbols from *B* and *B* doesn't `#[export_name = ...]` any symbols, then crate *B*'s constructor will not be called. The workaround to this is to provide an exported symbol in crate *B*.
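A hedged sketch of that workaround (the symbol name is made up): crate *B* exports one symbol so the linker keeps its object file, and with it the `.init_array` entry.
```rust
// In crate B, next to its `.init_array` static:
#[export_name = "crate_b_keepalive"]
pub extern "C" fn crate_b_keepalive() {}
```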
... this is a special attribute that was made to be a target-feature in
LLVM 18+, but in all previous versions, this "feature" is a naked
attribute. We will have to handle this situation differently than all
other target-features.
Sync ar_archive_writer to LLVM 18.1.3
This updates from LLVM 15.0.0-rc3 to 18.1.3. It adds support for COFF archives containing Arm64EC object files and has various fixes for AIX big archive files.
Currently the output on failure is as follows:
```
Compiling block-buffer v0.10.4
Compiling crypto-common v0.1.6
Compiling digest v0.10.7
Compiling sha2 v0.10.8
Compiling xz2 v0.1.7
error: failed to build archive: No such file or directory
error: could not compile `bootstrap` (lib) due to 1 previous error
```
Print which file is being constructed to give some hint about what is
going on.
In the future, branch and MC/DC mappings might have expressions that don't
correspond to any single point in the control-flow graph. That makes it
trickier to keep track of which expressions should expect an `ExpressionUsed`
node.
We therefore sidestep that complexity by only performing `ExpressionUsed`
simplification for expressions associated directly with ordinary `Code`
mappings.
Fix incorrect NDEBUG handling in LLVM bindings
We currently compile our LLVM bindings with `-DNDEBUG` if debuginfo for LLVM is disabled. However, `NDEBUG` has no relation to debuginfo; it controls whether assertions are enabled.
Split the LLVM_NDEBUG environment variable into two, so that assertions and debuginfo are controlled independently.
After this change, `LLVMRustDIBuilderInsertDeclareAtEnd` triggers an assertion failure on LLVM 19 due to an incorrect cast. Fix it by removing the unused return value entirely.
r? `@cuviper`
The return value changed from an Instruction to a DbgRecord in
LLVM 19. As we don't actually use the result, drop the return
value entirely to support both.
Rollup of 8 pull requests
Successful merges:
- #124599 (Suggest borrowing on fn argument that is `impl AsRef`)
- #127572 (Don't mark `DEBUG_EVENT` struct as `repr(packed)`)
- #127588 (core: Limit remaining f16 doctests to x86_64 linux)
- #127591 (Make sure that labels are defined after the primary span in diagnostics)
- #127598 (Allows `#[diagnostic::do_not_recommend]` to suppress trait impls in suggestions as well)
- #127599 (Rename `lazy_cell_consume` to `lazy_cell_into_inner`)
- #127601 (check is_ident before parse_ident)
- #127605 (Remove extern "wasm" ABI)
r? `@ghost`
`@rustbot` modify labels: rollup
Add `f16` and `f128` as simd types in LLVM
`@sayantn` is working on adding SIMD for `f16` and hitting the `FloatingPointVector` error. This should fix it and unblock adding support for `simd_fma` and `simd_fabs` in stdarch.
Remove the unstable `extern "wasm"` ABI (`wasm_abi` feature tracked
in #83788).
As discussed in https://github.com/rust-lang/rust/pull/127513#issuecomment-2220410679
and following, this ABI is a failed experiment that did not end
up being used for anything. Keeping support for this ABI in LLVM 19
would require us to switch wasm targets to the `experimental-mv`
ABI, which we do not want to do.
It should be noted that `Abi::Wasm` was internally used for two
things: The `-Z wasm-c-abi=legacy` ABI that is still used by
default on some wasm targets, and the `extern "wasm"` ABI. Despite
both being `Abi::Wasm` internally, they were not the same. An
explicit `extern "wasm"` additionally enabled the `+multivalue`
feature.
I've opted to remove `Abi::Wasm` in this patch entirely, instead
of keeping it as an ABI with only internal usage. Both
`-Z wasm-c-abi` variants are now treated as part of the normal
C ABI, just with different treatment in `adjust_for_foreign_abi`.
Add Natvis visualiser and debuginfo tests for `f16`
To render `f16`s in debuggers on MSVC targets, this PR changes the compiler to output `f16`s as `struct f16 { bits: u16 }`, and includes a Natvis visualiser that manually converts the `f16`'s bits to a `float`, which can then be displayed by debuggers. `gdb`, `lldb` and `cdb` tests are also included for `f16`.
`f16`/`f128` MSVC debug info issue: #121837
Tracking issue: #116909
Miri function identity hack: account for possible inlining
Having a non-lifetime generic is not the only reason a function can be duplicated. Another possibility is that the function may be eligible for cross-crate inlining. So also take into account the inlining attribute in this Miri hack for function pointer identity.
That said, `cross_crate_inlinable` will still sometimes return true even for `inline(never)` functions:
- when they are `DefKind::Ctor(..) | DefKind::Closure` -- I assume those cannot be `InlineAttr::Never` anyway?
- when `cross_crate_inline_threshold == InliningThreshold::Always`
so maybe this is still not quite the right criterion to use for function pointer identity.
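For context, a small standalone illustration (not from this PR) of why function pointer identity is shaky to begin with: a function instantiated or inlined into more than one crate can end up with distinct addresses, which is exactly the non-determinism Miri models.
```rust
#[inline]
fn callback() {}

fn main() {
    let a: fn() = callback;
    let b: fn() = callback;
    // The language does not guarantee this prints `true`: the same function
    // can be duplicated across crates/codegen units, yielding different addresses.
    println!("same address: {}", a == b);
}
```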
Re-implement a type-size based limit
r? lcnr
This PR reintroduces the type length limit added in #37789, which was accidentally made practically useless by the caching changes to `Ty::walk` in #72412, which caused the `walk` function to no longer walk over identical elements.
Hitting this length limit is not fatal unless we are in codegen -- so it shouldn't affect passes like the mir inliner which creates potentially very large types (which we observed, for example, when the new trait solver compiles `itertools` in `--release` mode).
This also increases the type length limit from `1048576 == 2 ** 20` to `2 ** 24`, which covers all of the code that can be reached with craterbot-check. Individual crates can increase the length limit further if desired.
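For example, a crate that legitimately needs deeper types could raise the limit with the existing crate attribute (the value below is arbitrary):
```rust
// Raise this crate's type-length limit above the new 2^24 default.
#![type_length_limit = "33554432"]

fn main() {}
```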
Perf regression is mild and I think we should accept it -- reinstating this limit is important for the new trait solver and to make sure we don't accidentally hit more type-size related regressions in the future.
Fixes #125460
Since this codegen flag now only controls LLVM-generated comments rather than
all assembly comments, make the name more accurate (and also match Clang).
`-Z patchable-function-entry` works like `-fpatchable-function-entry`
on clang/gcc. The arguments are total nop count and function offset.
See MCP rust-lang/compiler-team#704
Deprecate no-op codegen option `-Cinline-threshold=...`
This deprecates `-Cinline-threshold` since using it has no effect. This has been the case since the new LLVM pass manager started being used, more than 2 years ago.
Recommend using `-Cllvm-args=--inline-threshold=...` instead.
Closes #89742, which is E-help-wanted.
Add `f16` inline ASM support for 32-bit ARM
Adds `f16` inline ASM support for 32-bit ARM. SIMD vector types are taken from [here](https://developer.arm.com/architectures/instruction-sets/intrinsics/#f:@navigationhierarchiesreturnbasetype=[float]&f:@navigationhierarchieselementbitsize=[16]&f:@navigationhierarchiesarchitectures=[A32]).
Relevant issue: #125398
Tracking issue: #116909
`@rustbot` label +F-f16_and_f128
Add `f16` inline ASM support for RISC-V
This PR adds `f16` inline ASM support for RISC-V. A `FIXME` is left for `f128` support as LLVM does not support the required `Q` (Quad-Precision Floating-Point) extension yet.
Relevant issue: #125398
Tracking issue: #116909
`@rustbot` label +F-f16_and_f128
Honor collapse_debuginfo for statics.
fixes #126363
Also sort `crt-static` in `--print target-features` output
I didn't find `crt-static` at first (for `x86_64-unknown-linux-gnu`), because it was put at the bottom of the large and otherwise sorted list.
Fully sort the list before we print it.
Note that `llvm_target_features` starts out and remains sorted and does not need to be sorted an extra time.
On my machine the diff is just:
```diff
$ diff -u /tmp/before2.txt /tmp/after2.txt
--- /tmp/before2.txt 2024-06-13 20:40:27.091636592 +0200
+++ /tmp/after2.txt 2024-06-13 20:39:54.584894891 +0200
@@ -20,6 +20,7 @@
bmi1 - Support BMI instructions.
bmi2 - Support BMI2 instructions.
cmpxchg16b - 64-bit with cmpxchg16b (this is true for most x86-64 chips, but not the first AMD chips).
+ crt-static - Enables C Run-time Libraries to be statically linked.
ermsb - REP MOVS/STOS are fast.
f16c - Support 16-bit floating point conversion instructions.
fma - Enable three-operand fused multiple-add.
@@ -49,7 +50,6 @@
xsavec - Support xsavec instructions.
xsaveopt - Support xsaveopt instructions.
xsaves - Support xsaves instructions.
- crt-static - Enables C Run-time Libraries to be statically linked.
Code-generation features supported by LLVM for this target:
16bit-mode - 16-bit mode (i8086).
```
I couldn't find a ui test that tested this output. Let's see if CI finds a regression test.
We already do this for a number of crates, e.g. `rustc_middle`, `rustc_span`, `rustc_metadata`, `rustc_errors`.
For the ones we don't, in many cases the attributes are a mess.
- There is no consistency about order of attribute kinds (e.g.
`allow`/`deny`/`feature`).
- Within attribute kind groups (e.g. the `feature` attributes),
sometimes the order is alphabetical, and sometimes there is no
particular order.
- Sometimes the attributes of a particular kind aren't even grouped
all together, e.g. there might be a `feature`, then an `allow`, then
another `feature`.
This commit extends the existing sorting to all compiler crates,
increasing consistency. If any new attribute line is added there is now
only one place it can go -- no need for arbitrary decisions.
Exceptions:
- `rustc_log`, `rustc_next_trait_solver` and `rustc_type_ir_macros`,
because they have no crate attributes.
- `rustc_codegen_gcc`, because it's quasi-external to rustc (e.g. it's
ignored in `rustfmt.toml`).
Directly add extension instead of using `Path::with_extension`
`Path::with_extension` has a nice footgun when the original path doesn't contain an extension: Anything after the last dot gets removed.
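A quick sketch of the footgun versus appending the extension directly (the paths are made up):
```rust
use std::path::{Path, PathBuf};

fn main() {
    let p = Path::new("libfoo.so.1");

    // `with_extension` treats everything after the last dot as the extension,
    // so the trailing ".1" is silently replaced.
    assert_eq!(p.with_extension("tmp"), PathBuf::from("libfoo.so.tmp"));

    // Appending to the underlying OsString keeps the full original name.
    let mut s = p.as_os_str().to_os_string();
    s.push(".tmp");
    assert_eq!(PathBuf::from(s), PathBuf::from("libfoo.so.1.tmp"));
}
```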
Make repr(packed) vectors work with SIMD intrinsics
In #117116 I fixed `#[repr(packed, simd)]` by doing the expected thing and removing padding from the layout. This should be the last step in providing a solution to rust-lang/portable-simd#319
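A hedged, nightly-only sketch (the exact feature gates and intrinsic path are assumptions) of the kind of type this makes usable with the SIMD intrinsics: a 3-lane vector with no tail padding.
```rust
#![feature(repr_simd, core_intrinsics)]
#![allow(internal_features, non_camel_case_types)]

use core::intrinsics::simd::simd_add;

// A packed 3-lane vector: 12 bytes, not padded up to the next power of two.
#[repr(simd, packed)]
#[derive(Clone, Copy)]
struct f32x3([f32; 3]);

fn main() {
    assert_eq!(std::mem::size_of::<f32x3>(), 12);

    let v = f32x3([1.0, 2.0, 3.0]);
    // The intrinsics are now expected to handle the padding-free layout.
    let doubled = unsafe { simd_add(v, v) };
    let lanes: [f32; 3] = unsafe { std::mem::transmute(doubled) };
    assert_eq!(lanes, [2.0, 4.0, 6.0]);
}
```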
Rollup of 6 pull requests
Successful merges:
- #125263 (rust-lld: fallback to rustc's sysroot if there's no path to the linker in the target sysroot)
- #125345 (rustc_codegen_llvm: add support for writing summary bitcode)
- #125362 (Actually use TAIT instead of emulating it)
- #125412 (Don't suggest adding the unexpected cfgs to the build-script it-self)
- #125445 (Migrate `run-make/rustdoc-with-short-out-dir-option` to `rmake.rs`)
- #125452 (Cleanup check-cfg handling in core and std)
r? `@ghost`
`@rustbot` modify labels: rollup
rustc_codegen_llvm: add support for writing summary bitcode
Typical uses of ThinLTO don't have any use for this as a standalone file, but distributed ThinLTO uses this to make the linker phase more efficient. With clang you'd do something like `clang -flto=thin -fthin-link-bitcode=foo.indexing.o -c foo.c` and then get both foo.o (full of bitcode) and foo.indexing.o (just the summary or index part of the bitcode). That's then usable by a two-stage linking process that's more friendly to distributed build systems like bazel, which is why I'm working on this area.
I talked some to `@teresajohnson` about naming in this area, as things seem to be a little confused between various blog posts and build systems. "bitcode index" and "bitcode summary" tend to be a little too ambiguous, and she tends to use "thin link bitcode" and "minimized bitcode" (which matches the descriptions in LLVM). Since the clang option is thin-link-bitcode, I went with that to try and not add a new spelling in the world.
Per `@dtolnay`, you can work around the lack of this by using `lld --thinlto-index-only` to do the indexing on regular .o files of bitcode, but that is a bit wasteful on actions when we already have all the information in rustc and could just write out the matching minimized bitcode. I didn't test that at all in our infrastructure, because by the time I learned that I already had this patch largely written.
If we don't do this, some versions of LLVM (at least 17, experimentally)
will double-emit some error messages, which is how I noticed this. Given
that it seems to be costing some extra work, let's only request the
summary bitcode production if we'll actually bother writing it down,
otherwise skip it.
This code for recalculating `mcdc_bitmap_bytes` doesn't provide any benefit,
because its result won't have changed from the value in `FunctionCoverageInfo`
that was computed during the MIR instrumentation pass.
Rollup of 5 pull requests
Successful merges:
- #124615 (coverage: Further simplify extraction of mapping info from MIR)
- #124778 (Fix parse error message for meta items)
- #124797 (Refactor float `Primitive`s to a separate `Float` type)
- #124888 (Migrate `run-make/rustdoc-output-path` to rmake)
- #124957 (Make `Ty::builtin_deref` just return a `Ty`)
r? `@ghost`
`@rustbot` modify labels: rollup
Refactor float `Primitive`s to a separate `Float` type
Now there are 4 of them, it makes sense to refactor `F16`, `F32`, `F64` and `F128` out of `Primitive` and into a separate `Float` type (like integers already are). This allows patterns like `F16 | F32 | F64 | F128` to be simplified into `Float(_)`, and is consistent with `ty::FloatTy`.
As a side effect, this PR also makes the `Ty::primitive_size` method work with `f16` and `f128`.
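A self-contained sketch of the shape of the change (simplified stand-ins, not the real `rustc_abi` definitions):
```rust
#[derive(Clone, Copy)]
enum Float { F16, F32, F64, F128 }

#[derive(Clone, Copy)]
enum Primitive {
    Int { bits: u16, signed: bool },
    Float(Float),
    Pointer,
}

impl Primitive {
    // What previously needed `F16 | F32 | F64 | F128` arms is now one pattern.
    fn is_float(self) -> bool {
        matches!(self, Primitive::Float(_))
    }
}

fn main() {
    assert!(Primitive::Float(Float::F128).is_float());
    assert!(!Primitive::Int { bits: 32, signed: true }.is_float());
}
```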
Tracking issue: #116909
`@rustbot` label +F-f16_and_f128
codegen: memmove/memset cannot be non-temporal
non-temporal memset is not a thing.
And for memmove, since the LLVM backend doesn't support this, surely we don't need it in the GCC backend.
LLVM has updated data layouts to specify `Fn32` on 64-bit ARM to avoid
C++ accidentally underaligning functions when trying to comply with
member function ABIs.
This should only affect Rust in cases where we had a similar bug (I
don't believe we have one), but our data layout must match to generate
code.
As a compatibility adaptation, if LLVM is not version 19 yet, `Fn32`
gets voided from the data layout.
See llvm/llvm-project#90415
coverage: Clean up creation of MC/DC condition bitmaps
This PR improves the code for creating and initializing [MC/DC](https://en.wikipedia.org/wiki/Modified_condition/decision_coverage) condition bitmap variables, as introduced by #123409 and modified by #124255.
- The condition bitmap variables are now created eagerly at the start of per-function codegen, via a new `init_coverage` method in `CoverageInfoBuilderMethods`. This avoids having to retroactively create the bitmaps while doing codegen for an individual coverage statement.
- As a result, we can now create and initialize those bitmaps using existing safe APIs, instead of having to perform our own unsafe call to `llvm::LLVMBuildAlloca`.
- This PR also tweaks the way we count the number of condition bitmaps needed, by tracking the total number of bitmaps needed (max depth + 1), instead of only tracking the maximum depth. This reduces the potential for subtle off-by-one confusion.
Stabilize the size of incr comp object file names
The current implementation does not produce stable-length paths, and we create the paths in a way that makes our allocation behavior nondeterministic. I think `@eddyb` fixed a number of other cases like this in the past, and this PR fixes another one. Whether that actually matters I have no idea, but we still have bimodal behavior in rustc-perf and the non-uniformity in `find` and `ls` was bothering me.
I've also removed the truncation of the mangled CGU names. Before this PR incr comp paths look like this:
```
target/debug/incremental/scratch-38izrrq90cex7/s-gux6gz0ow8-1ph76gg-ewe1xj434l26w9up5bedsojpd/261xgo1oqnd90ry5.o
```
And after, they look like this:
```
target/debug/incremental/scratch-035omutqbfkbw/s-gux6borni0-16r3v1j-6n64tmwqzchtgqzwwim5amuga/55v2re42sztc8je9bva6g8ft3.o
```
On the one hand, I'm sure this will break some people's builds because they're on Windows and only a few bytes from the path length limit. But if we're that seriously worried about the length of our file names, I have some other ideas on how to make them smaller. And last time I deleted some hash truncations from the compiler, there was a huge drop in the number of incremental compilation ICEs that were reported: https://github.com/rust-lang/rust/pull/110367
---
Upon further reading, this PR actually fixes a bug. This comment says the CGU names are supposed to be a fixed-length hash, and before this PR they aren't: ca7d34efa9/compiler/rustc_monomorphize/src/partitioning.rs (L445-L448)
Because this now always takes place at the start of the function, we can just
use the normal `alloca` method and then initialize each bitmap immediately.
This patch also moves bitmap setup out of the `mcdc_parameters` method, because
there is no longer any particular reason for it to be there.
Remove many `#[macro_use] extern crate foo` items
This requires the addition of more `use` items, which often make the code more verbose. But they also make the code easier to read, because `#[macro_use]` obscures where macros are defined.
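A small self-contained sketch of the replacement pattern (module and macro names are placeholders; for an external crate it is simply `use that_crate::the_macro;`):
```rust
mod macros {
    macro_rules! shout {
        ($msg:expr) => {
            println!("{}!", $msg)
        };
    }
    // Re-export the macro so it can be imported like any other item.
    pub(crate) use shout;
}

// Instead of a crate-wide `#[macro_use]`, the `use` shows where `shout!` lives.
use macros::shout;

fn main() {
    shout!("macros can be imported explicitly");
}
```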
r? `@fee1-dead`
MCDC coverage: support nested decision coverage
#123409 provided the initial MCDC coverage implementation.
As referenced in #124144, it does not currently support "nested" decisions, like in the following example:
```rust
fn nested_if_in_condition(a: bool, b: bool, c: bool) {
    if a && if b || c { true } else { false } {
        say("yes");
    } else {
        say("no");
    }
}
```
Note that there is an if-expression (`if b || c ...`) embedded inside a boolean expression in the decision of an outer if-expression.
This PR proposes a workaround for these cases by introducing a decision context stack, and by handling several `temporary condition bitmaps` instead of just one.
When instrumenting boolean expressions, if the current node is a leaf condition (i.e. not a `||`/`&&` logical operator nor a `!` not operator), we insert a new decision context, such that if there are more boolean expressions inside the condition, they are handled as separate expressions.
On the codegen LLVM side, we allocate as many `temp_cond_bitmap`s as necessary to handle the maximum encountered decision depth.
Add decision_depth field to TVBitmapUpdate/CondBitmapUpdate statements
Add decision_depth field to BcbMappingKinds MCDCBranch and MCDCDecision
Add decision_depth field to MCDCBranchSpan and MCDCDecisionSpan
Set writable and dead_on_unwind attributes for sret arguments
Set the `writable` and `dead_on_unwind` attributes for `sret` arguments. This allows call slot optimization to remove more memcpy's.
See https://llvm.org/docs/LangRef.html#parameter-attributes for the specification of these attributes. In short, the statement we're making here is that:
* The return slot is writable.
* The return slot will not be read if the function unwinds.
Fixes https://github.com/rust-lang/rust/issues/90595.
Stop using LLVM struct types for alloca
The alloca type has no semantic meaning, only the size (and alignment, but we specify it explicitly) matter. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout. It is likely that a future LLVM version will change to an untyped alloca representation.
Split out from #121577.
r? `@ghost`
Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics)
This implements https://github.com/rust-lang/compiler-team/issues/693 minus what was implemented in #123226.
Note: I decided to _not_ change `shl`/... builder methods, as it just doesn't seem worth it.
r? `@scottmcm`
[cleanup] [llvm backend] Prevent creating the same `Instance::mono` multiple times
Just a little thing I came across while going through the code.
r? `@oli-obk`
Add support for Arm64EC to the Standard Library
Adds the final pieces so that the standard library can be built for arm64ec-pc-windows-msvc (initially added in #119199)
* Bumps `windows-sys` to 0.56.0, which adds support for Arm64EC.
* Correctly set the `isEC` parameter for LLVM's `writeArchive` function.
* Add `#![feature(asm_experimental_arch)]` to library crates where Arm64EC inline assembly is used, as it is currently unstable.
This makes sure that &[] is just as efficient as indirecting through
unsafe code (from_raw_parts). No new stable guarantee is intended about
whether or not we do this; this is just an optimization.
Co-authored-by: Ralf Jung <post@ralfj.de>
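A rough illustration of the two spellings being compared (assuming, as the change intends, that both lower to a dangling-but-aligned pointer plus a zero length):
```rust
use std::ptr::NonNull;

// The straightforward spelling.
fn empty_literal<'a>() -> &'a [u64] {
    &[]
}

// The unsafe spelling the literal is now expected to match in codegen.
fn empty_from_raw_parts<'a>() -> &'a [u64] {
    // A zero-length slice only needs a non-null, well-aligned pointer.
    unsafe { std::slice::from_raw_parts(NonNull::dangling().as_ptr(), 0) }
}

fn main() {
    assert!(empty_literal().is_empty());
    assert!(empty_from_raw_parts().is_empty());
}
```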
Add the missing inttoptr when we ptrtoint in ptr atomics
Ralf noticed this here: https://github.com/rust-lang/rust/pull/122220#discussion_r1535172094
Our previous codegen forgot to add the cast back to integer type. The code compiles anyway, because of course all locals are in-memory to start with, so previous codegen would do the integer atomic, store the integer to a local, then load a pointer from that local. Which is definitely _not_ what we wanted: That's an integer-to-pointer transmute, so all pointers returned by these `AtomicPtr` methods didn't have provenance. Yikes.
Here's the IR for `AtomicPtr::fetch_byte_add` on 1.76: https://godbolt.org/z/8qTEjeraY
```llvm
define noundef ptr @atomicptr_fetch_byte_add(ptr noundef nonnull align 8 %a, i64 noundef %v) unnamed_addr #0 !dbg !7 {
start:
%0 = alloca ptr, align 8, !dbg !12
%val = inttoptr i64 %v to ptr, !dbg !12
call void @llvm.lifetime.start.p0(i64 8, ptr %0), !dbg !28
%1 = ptrtoint ptr %val to i64, !dbg !28
%2 = atomicrmw add ptr %a, i64 %1 monotonic, align 8, !dbg !28
store i64 %2, ptr %0, align 8, !dbg !28
%self = load ptr, ptr %0, align 8, !dbg !28
call void @llvm.lifetime.end.p0(i64 8, ptr %0), !dbg !28
ret ptr %self, !dbg !33
}
```
r? `@RalfJung`
cc `@nikic`
Make `PlaceRef` and `OperandValue::Ref` share a common `PlaceValue` type
Both `PlaceRef` and `OperandValue::Ref` need the triple of the backend pointer immediate, the optional backend metadata for DSTs, and the actual alignment of the place (since it can differ from the ABI alignment).
This PR introduces a new `PlaceValue` type for those three values, leaving [`PlaceRef`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/place/struct.PlaceRef.html) with the `TyAndLayout` and a `PlaceValue`, just like how [`OperandRef`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/operand/struct.OperandRef.html) is a `TyAndLayout` and an `OperandValue`.
This means that various places that use `Ref`s as places can just pass the `PlaceValue` along, like in the below excerpt from the diff:
```diff
match operand.val {
- OperandValue::Ref(ptr, meta, align) => {
- debug_assert_eq!(meta, None);
+ OperandValue::Ref(source_place_val) => {
+ debug_assert_eq!(source_place_val.llextra, None);
debug_assert!(matches!(operand_kind, OperandValueKind::Ref));
- let fake_place = PlaceRef::new_sized_aligned(ptr, cast, align);
+ let fake_place = PlaceRef { val: source_place_val, layout: cast };
Some(bx.load_operand(fake_place).val)
}
```
There's more refactoring that I'd like to do after this, but I wanted to stop the PR here where it's hopefully easy (albeit probably not quick) to review since I tried to keep every change line-by-line clear. (Most are just adding `.val` to get to a field.)
You can also go commit-at-a-time if you'd like. Each passed tidy and the codegen tests on my machine (though I didn't run the cg_gcc ones).
I added this back in 111999, but I no longer think it's a good idea
- It had to get scaled back to only power-of-two things to not break a bunch of targets
- LLVM seems to be getting better at memcpy removal anyway
- Introducing vector instructions has seemed to sometimes (115515) make autovectorization worse
So this removes it from the codegen crates entirely, and instead just tries to use <https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/builder/trait.BuilderMethods.html#method.typed_place_copy> instead of direct `memcpy` so things will still use load/store for immediates.
sanitizers: Create the rustc_sanitizers crate
Create the `rustc_sanitizers` crate and move the source code for the CFI and KCFI sanitizers to it. The tracking issue for reviewing and moving sanitizers into a compiler crate is #123619. This is part of our work to organize and stabilize support for the sanitizers. (See our roadmap at https://hackmd.io/@rcvalle/S1Ou9K6H6.)
Create the rustc_sanitizers crate and move the source code for the CFI
and KCFI sanitizers to it.
Co-authored-by: David Wood <agile.lion3441@fuligin.ink>
The actual ABI implication here is that in some cases the values
are required to be "consecutive", i.e. must either all be passed
in registers or all on stack (without padding).
Adjust the code to either use `Uniform::new()` or `Uniform::consecutive()`
depending on which behavior is needed.
Then, when lowering this in LLVM, skip the `[1 x i128]` to `i128`
simplification if `is_consecutive` is set. `i128` is the only case
I'm aware of where this is problematic right now. If we find
other cases, we can extend this (either based on target information
or possibly just by not simplifying for `is_consecutive` entirely).
When passing a 16 (or higher) aligned struct by value on ppc64le,
it needs to be passed as an array of `i128` rather than an array
of `i64`. This will force the use of an even starting register.
For the case of a 16 byte struct with alignment 16 it is important
that `[1 x i128]` is used instead of `i128` -- apparently, the
latter will get treated similarly to `[2 x i64]`, not exhibiting
the correct ABI. Add a `force_array` flag to `Uniform` to support
this.
The relevant clang code can be found here:
fe2119a7b0/clang/lib/CodeGen/Targets/PPC.cpp (L878-L884) and fe2119a7b0/clang/lib/CodeGen/Targets/PPC.cpp (L780-L784)
I think the corresponding psABI wording is this:
> Fixed size aggregates and unions passed by value are mapped to as
> many doublewords of the parameter save area as the value uses in
> memory. Aggregates and unions are aligned according to their
> alignment requirements. This may result in doublewords being
> skipped for alignment.
In particular the last sentence.
Fixes https://github.com/rust-lang/rust/issues/122767.
CFI: Restore typeid_for_instance default behavior
Restore the `typeid_for_instance` default behavior of performing self type erasure, since that is the most common case and what it does most of the time. Using the concrete self (i.e. not performing self type erasure) is for assigning a secondary type id; secondary type ids are only assigned to methods when they're unique, and are only tested when methods are used as function pointers.