https://reviews.llvm.org/D120026 changed atomics on thumbv6m to
use libatomic, to ensure that atomic load/store are compatible with
atomic RMW/CAS. However, Rust wants to expose only load/store
without libcalls.
https://reviews.llvm.org/D130480 added support for this behind
the +atomics-32 target feature, so enable that feature.
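As a rough illustration (not taken from the PR), the split is visible in ordinary Rust: on thumbv6m only the load/store subset below is available natively, while RMW/CAS operations are what would otherwise pull in libatomic-style libcalls.

```rust
// Illustration only: atomic load/store, the subset thumbv6m can lower
// natively with +atomics-32; RMW operations such as fetch_add are the
// ones that would require libcalls on that target.
use std::sync::atomic::{AtomicU32, Ordering};

static FLAG: AtomicU32 = AtomicU32::new(0);

fn main() {
    FLAG.store(1, Ordering::SeqCst);     // plain store plus barriers
    let v = FLAG.load(Ordering::SeqCst); // plain load plus barriers
    println!("flag = {v}");
}
```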
Introduce an ArchiveBuilderBuilder
This avoids monomorphizing all linker code for each codegen backend and will allow passing in extra information to the archive builder from the codegen backend. I'm going to use this in https://github.com/rust-lang/rust/pull/97485 to allow passing in the right function to extract symbols from object files to a generic archive builder to be used by cg_llvm, cg_clif and cg_gcc.
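A minimal sketch of the shape of this pattern (simplified, not the actual rustc definitions): the linker code only sees trait objects, so it is compiled once rather than once per codegen backend.

```rust
// Simplified sketch: the linker code takes &dyn ArchiveBuilderBuilder,
// so it is not monomorphized per backend; each codegen backend supplies
// its own archive builder behind the trait objects.
use std::path::Path;

trait ArchiveBuilder {
    fn add_file(&mut self, file: &Path);
    fn build(self: Box<Self>, output: &Path) -> bool;
}

trait ArchiveBuilderBuilder {
    fn new_archive_builder(&self) -> Box<dyn ArchiveBuilder>;
}

// Generic linker-side code, written once against the trait objects.
fn create_archive(abb: &dyn ArchiveBuilderBuilder, inputs: &[&Path], output: &Path) -> bool {
    let mut builder = abb.new_archive_builder();
    for input in inputs {
        builder.add_file(input);
    }
    builder.build(output)
}

fn main() {}
```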
codegen: use new {re,de,}allocator annotations in llvm
This obviates the patch that teaches LLVM internals about the
`__rust_{re,de}alloc` functions by putting annotations directly in the IR
for the optimizer.
The sole test change is required to anchor FileCheck to the body of the
`box_uninitialized` method, so it doesn't see the `allocalign` on
`__rust_alloc` and get mad about the string `alloca` showing up. Since I
was there anyway, I added some checks on the attributes to prove the
right attributes got set.
r? `@nikic`
While we're here, we also emit allocator attributes on
__rust_alloc_zeroed. This should allow LLVM to perform more
optimizations for zeroed blocks, and probably fixes #90032. [This
comment](https://github.com/rust-lang/rust/issues/24194#issuecomment-308791157)
mentions "weird UB-like behaviour with bitvec iterators in
rustc_data_structures" so we may need to back this change out if things
go wrong.
The new test cases require LLVM 15, so we copy them into LLVM
14-supporting versions, which we can delete when we drop LLVM 14.
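For context, the anchored test has roughly this shape (a sketch in the style of rustc's codegen tests, not the exact file from this PR); the `CHECK-LABEL` keeps FileCheck inside the function body so the `allocalign` attribute on the `__rust_alloc` declaration isn't matched by the `CHECK-NOT: alloca` line.

```rust
// Codegen-test-style sketch (illustrative only).

// CHECK-LABEL: @box_uninitialized
#[no_mangle]
pub fn box_uninitialized() -> Box<std::mem::MaybeUninit<usize>> {
    // CHECK-NOT: store
    // CHECK-NOT: alloca
    // CHECK-NOT: memcpy
    // CHECK-NOT: memset
    Box::new(std::mem::MaybeUninit::uninit())
}
```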
Enable raw-dylib for bin crates
Fixes #93842
When `raw-dylib` is used in a `bin` crate, we need to collect all of the `raw-dylib` functions, generate the import library and add that to the linker command line.
I also changed the tests so that 1) the C++ DLLs are created after the Rust DLLs, so there is no chance of accidentally using them in the Rust linking process, and 2) import library generation is disabled when building with MSVC.
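For illustration (not taken from this PR), using `raw-dylib` from a `bin` crate looks roughly like this; with this change the compiler collects such declarations, generates the import library for `kernel32.dll` itself, and adds it to the linker command line. (On compilers where the feature is still unstable this additionally needs the `raw_dylib` feature gate.)

```rust
// Illustrative raw-dylib use in a bin crate (only meaningful on Windows).
#[cfg(windows)]
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    #[cfg(windows)]
    {
        let pid = unsafe { GetCurrentProcessId() };
        println!("pid = {pid}");
    }
}
```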
Add fine-grained LLVM CFI support to the Rust compiler
This PR improves the LLVM Control Flow Integrity (CFI) support in the Rust compiler by providing forward-edge control flow protection for Rust-compiled code only, by aggregating function pointers into groups identified by their return and parameter types.
Forward-edge control flow protection for "mixed binaries" (i.e., when C/C++-compiled and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project, by identifying uses of C char and integer types at the time types are encoded (see Type metadata in the design document in the tracking issue https://github.com/rust-lang/rust/issues/89653).
LLVM CFI can be enabled with `-Zsanitizer=cfi` and requires LTO (i.e., `-Clto`).
Thank you again, `@eddyb`, `@nagisa`, `@pcc`, and `@tmiasko` for all the help!
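As a hedged illustration of what the fine-grained checks operate on (example not from the PR): an indirect call is only allowed to reach functions whose return and parameter types match the function pointer's type.

```rust
// Illustration: built with -Zsanitizer=cfi -Clto, the indirect call
// through `f` is checked against the group of functions with the
// fn(i32) -> i32 signature; a mismatched target would abort.
fn add_one(x: i32) -> i32 {
    x + 1
}

fn main() {
    let f: fn(i32) -> i32 = add_one;
    println!("{}", f(41));
}
```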
Add support for LLVM ShadowCallStack.
LLVM's ShadowCallStack provides backward-edge control flow integrity protection by using a separate shadow stack to store and retrieve a function's return address.
LLVM currently only supports this for AArch64 targets. The x18 register is used to hold the pointer to the shadow stack, and therefore this only works on ABIs which reserve x18. Further details are available in the [LLVM ShadowCallStack](https://clang.llvm.org/docs/ShadowCallStack.html) docs.
# Usage
`-Zsanitizer=shadow-call-stack`
# Comments/Caveats
* Currently only enabled for the aarch64-linux-android target
* Requires the platform to define a runtime to initialize the shadow stack, see the [LLVM docs](https://clang.llvm.org/docs/ShadowCallStack.html) for more detail.
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and alignment from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now.
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
r? `@oli-obk`
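A rough illustration (not from this PR) of the `ptr::metadata` side: the size and alignment of a trait object are obtained through the new vtable intrinsics rather than by reading the vtable's memory, which Rust code is no longer allowed to do.

```rust
// Illustration (nightly-only; requires the unstable ptr_metadata
// feature): DynMetadata's size_of/align_of are backed by the new
// vtable intrinsics rather than direct loads from the vtable.
#![feature(ptr_metadata)]
use std::ptr;

fn main() {
    let value: &dyn std::fmt::Debug = &42u32;
    let meta = ptr::metadata(value as *const dyn std::fmt::Debug);
    println!("size = {}, align = {}", meta.size_of(), meta.align_of());
}
```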
Allow disabling the ThinLTO buffer to support the lto-embed-bitcode lld feature
Hello
This change fixes issue https://github.com/rust-lang/rust/issues/84395, in which passing "-lto-embed-bitcode=optimized" to lld when linking Rust code via linker-plugin-lto doesn't produce the expected result.
Instead of emitting a single unified module into the .llvmbc section of the linked ELF, it emits multiple submodules.
This happens because rustc emits the BC modules after running LLVM's `createWriteThinLTOBitcodePass` pass,
which in turn triggers ThinLTO linkage and causes the said issue.
This patch adds a compiler flag (`-Cemit-thin-lto=<bool>`) to select between running `createWriteThinLTOBitcodePass` and `createBitcodeWriterPass`.
Note that this pattern of selecting between these two passes is common inside LLVM code.
The default matches the old behavior.
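A minimal sketch of the selection described above (hypothetical and heavily simplified; the real switch lives in rustc's LLVM wrapper):

```rust
// Hypothetical sketch of the flag's effect: pick which LLVM bitcode
// writer runs when emitting the .o bitcode modules.
fn bitcode_writer_pass(emit_thin_lto: bool) -> &'static str {
    if emit_thin_lto {
        // Default, matching the old behavior: ThinLTO bitcode + summary.
        "createWriteThinLTOBitcodePass"
    } else {
        // Plain bitcode module, what -lto-embed-bitcode=optimized expects.
        "createBitcodeWriterPass"
    }
}

fn main() {
    println!("{}", bitcode_writer_pass(false));
}
```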
Only compile #[used] as llvm.compiler.used for ELF targets
This returns `#[used]` to how it worked prior to the LLVM 13 update. The intention is not that this is a stable promise.
I'll add tests later today. The tests will test things that we don't actually promise, though.
It's a deliberately small patch, mostly comments. And assuming it's reviewed and lands in time, IMO it should at least be considered for uplifting to beta (so that it can be in 1.59), as the change broke many crates in the ecosystem, even if they are relying on behavior that is not guaranteed.
# Background
LLVM has two ways of preventing removal of an unused variable: `llvm.compiler.used`, which must be present in object files, but allows the linker to remove the value, and `llvm.used` which is supposed to apply to the linker as well, if possible.
Prior to LLVM 13, `llvm.used` and `llvm.compiler.used` were the same on ELF targets, although they were different elsewhere. Prior to our update to LLVM 13, we compiled `#[used]` using `llvm.used` unconditionally, even though we only ever promised behavior like `llvm.compiler.used`.
In LLVM 13, ELF targets gained some support for preventing linker removal of `llvm.used` via the SHF_GNU_RETAIN section flag. This has some compatibility issues, though. Concretely, some older versions of `ld.gold` (specifically ones prior to v2.36, released in January 2021) had a bug where they would fail to place a `#[used] #[link_section = ".init_array"]` static between `__init_array_start`/`__init_array_end`, leading to code that does this failing to run a static constructor. This is technically not a thing we guarantee will work, but it is a common use case, and is needed in `libstd` (for example, to get access to `std::env::args()` even if Rust does not control `main`, such as when in a `cdylib` crate).
As a result, when updating to LLVM 13, we unconditionally switched to using `llvm.compiler.used`, which mirrors the guarantees we make for `#[used]` and doesn't require the latest ld.gold. Unfortunately, this happened to break quite a few things in the ecosystem, as non-ELF targets had come to rely on `#[used]` being slightly stronger. In particular, there are cases where it will even break static constructors on these targets[^initinit] (and in fact, it breaks far more use cases, as Mach-O uses special sections as an interface to the OS/linker/loader in many places).
As a result, we only switch to `llvm.compiler.used` on ELF[^elfish] targets. The rationale here is:
1. It is (hopefully) identical to the semantics we used prior to the LLVM 13 update: before that update we unconditionally used `llvm.used`, but on ELF `llvm.used` was the same as `llvm.compiler.used`.
2. It seems to be how Clang compiles this, and given that they have similar (but stronger) compatibility promises, that makes sense.
[^initinit]: For Mach-O targets: It is not always guaranteed that `__DATA,__mod_init_func` is a GC root if it does not have the `S_MOD_INIT_FUNC_POINTERS` flag, which we cannot add. In most cases, when ld64 transforms this section into `__DATA_CONST,__mod_init_func`, the flag does get applied, but it's not clear that that is intentional (let alone guaranteed), and the logic is complex enough that it likely goes wrong sometimes; people in the wild report it occurring.
[^elfish]: Actually, there's not a great way to tell if it's ELF, so I've approximated it.
This is pretty ad-hoc and hacky! We probably should have a firmer set of guarantees here, but this change should relax the pressure on coming up with that considerably, returning it to previous levels.
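For reference, the `.init_array` use case discussed above looks roughly like this (illustrative and ELF-specific; not a pattern we guarantee):

```rust
// Illustrative ELF-only pattern: a #[used] static placed in .init_array
// so `init` runs during program startup. With this change it is emitted
// via llvm.compiler.used: the compiler keeps it, but the linker is
// still allowed to drop it.
#[cfg(target_os = "linux")]
#[used]
#[link_section = ".init_array"]
static INIT: extern "C" fn() = init;

#[cfg(target_os = "linux")]
extern "C" fn init() {
    // Runs before main via the .init_array mechanism.
}

fn main() {}
```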
---
Unsure who should review so leaving it open, but for sure CC `@nikic`
Remove branch target prologues from `#[naked] fn`
This patch hacks around rust-lang/rust#98768 for now by injecting appropriate attributes into the LLVM IR we emit for naked functions. I intend to pursue this upstream so that these attributes can be removed in general, but it's slow going wading through C++ for me.
Revert "Work around invalid DWARF bugs for fat LTO"
Since September, the toolchain has not been generating reliable DWARF
information for static variables when LTO is on. This has affected
projects in the embedded space where the use of LTO is typical. In our
case, it has kept us from bumping past the 2021-09-22 nightly toolchain
lest our debugger break. This has been a pretty dramatic regression for
people using debuggers and static variables. See #90357 for more info
and a repro case.
This commit is a mechanical revert of
d5de680e20 from PR #89041, which caused
the issue. (Note on that PR that the commit's author has requested it be
reverted.)
I have locally verified that this fixes #90357 by restoring the
functionality of both the repro case I posted on that bug, and debugger
behavior on real programs. There do not appear to be test cases for this
in the toolchain; if I've missed them, point me at 'em and I'll update
them.
Add an option to control from the rustc CLI
whether the resulting ".o" bitcode module files carry
ThinLTO info or regular LTO info.
This allows using "-lto-embed-bitcode=optimized" during linkage
correctly.
Signed-off-by: Ziv Dunkelman <ziv.dunkelman@nextsilicon.com>
Keep unstable target features for asm feature checking
Inline assembly uses the target features to determine which registers
are available on the current target. However it needs to be able to
access unstable target features for this.
Fixes #99071
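As a rough illustration (not from the PR): whether a register class can be used in `asm!` depends on target features, e.g. the x86 `ymm` registers are only usable with `avx`, and some targets gate register classes behind unstable target features that this check now also sees.

```rust
// Illustration: the ymm register class is only available when the avx
// target feature is enabled, so the asm! register check has to consult
// the target-feature list.
#[cfg(all(target_arch = "x86_64", target_feature = "avx"))]
fn zero_ymm0() {
    unsafe {
        std::arch::asm!("vxorps ymm0, ymm0, ymm0", out("ymm0") _);
    }
}

fn main() {}
```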
There are several indications that we should not represent ZSTs as a `ScalarInt`:
- We had two ways to have ZST valtrees, either an empty `Branch` or a `Leaf` with a ZST in it.
`ValTree::zst()` used the former, but the latter could possibly arise as well.
- Likewise, the interpreter had `Immediate::Uninit` and `Immediate::Scalar(Scalar::ZST)`.
- LLVM codegen already had to special-case ZST ScalarInt.
So instead add new ZST variants to those types that did not have other variants
which could be used for this purpose.
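A hypothetical, heavily simplified sketch of the direction (not rustc's actual types): values that carry no data get their own variant instead of being wedged into a zero-sized `ScalarInt`.

```rust
// Hypothetical sketch only: an explicit "no data" variant avoids having
// to answer questions like "how many bytes does this ScalarInt hold?"
// for a value that has no bytes at all.
enum Value {
    Int { bits: u128, size: u8 }, // stand-in for a real ScalarInt
    ZeroSized,                    // explicit ZST variant
}

fn size_in_bytes(v: &Value) -> u8 {
    match v {
        Value::Int { size, .. } => *size,
        Value::ZeroSized => 0,
    }
}

fn main() {
    assert_eq!(size_in_bytes(&Value::ZeroSized), 0);
}
```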
DWARF version 5 brings a number of improvements over version 4. Quoting from
the announcement [1]:
> Version 5 incorporates improvements in many areas: better data compression,
> separation of debugging data from executable files, improved description of
> macros and source files, faster searching for symbols, improved debugging
> optimized code, as well as numerous improvements in functionality and
> performance.
On platforms where DWARF version 5 is supported (Linux, primarily), this commit
adds support for it behind a new `-Z dwarf-version=5` flag.
[1]: https://dwarfstd.org/Public_Review.php
Use less string interning
This removes string interning in a couple of places where doing so won't result in perf improvements. I also switched one place to use pre-interned symbols.
Change enum->int casts to not go through MIR casts.
follow-up to https://github.com/rust-lang/rust/pull/96814
This simplifies all backends and even gives LLVM more information about the return value of `Rvalue::Discriminant`, enabling optimizations in more cases.
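For illustration (not from the PR), this is the kind of cast affected; with the discriminant handled directly, LLVM can see the exact set of possible result values.

```rust
// Illustration: an enum-to-int cast. The discriminant read reaches the
// backend with enough information for LLVM to know the result is one of
// the listed values (1, 3, or 5).
#[derive(Clone, Copy)]
enum Color {
    Red = 1,
    Green = 3,
    Blue = 5,
}

fn as_int(c: Color) -> u8 {
    c as u8
}

fn main() {
    assert_eq!(as_int(Color::Green), 3);
}
```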
Work around llvm 12's memory ordering restrictions.
Older LLVM has the pre-C++17 restriction on success and failure memory ordering, requiring the former to be at least as strong as the latter. So, for LLVM 12, this upgrades the success ordering to a stronger one if necessary.
See https://github.com/rust-lang/rust/issues/68464
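As a hedged illustration of the case being worked around (example not from this change): a success ordering weaker than the failure ordering violates the pre-C++17 rules that LLVM 12 still enforces, so the success ordering is upgraded before the cmpxchg is emitted.

```rust
// Illustration: success = Relaxed is weaker than failure = Acquire,
// which older LLVM rejects; on LLVM 12 rustc emits this cmpxchg with
// the success ordering upgraded to Acquire instead.
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let a = AtomicU32::new(0);
    let _ = a.compare_exchange(0, 1, Ordering::Relaxed, Ordering::Acquire);
    assert_eq!(a.load(Ordering::Relaxed), 1);
}
```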