This involves lots of breaking changes. Two changes in particular force
updates on our side. The first is that the bitflag types no longer
automatically implement the usual derived traits, so we need to derive them
manually.
Additionally, bitflags now wrap a hidden inner type by default, which
breaks our custom derives. The bitflags docs recommend using the `impl`
form in these cases, which I did.
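A minimal sketch of what this looks like with bitflags 2.x; the flag names here are purely illustrative, the real definitions live in the affected crates:

```rust
use bitflags::bitflags;

bitflags! {
    // Common traits are no longer derived automatically, so list them explicitly.
    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
    pub struct Flags: u32 {
        const A = 0b0001;
        const B = 0b0010;
    }
}

// The generated struct now wraps a hidden inner type, which breaks derives
// that need to see the field. The docs' `impl` form keeps the struct
// definition in our hands and only generates the flag impls:
#[derive(Clone, Copy)]
pub struct ManualFlags(u32);

bitflags! {
    impl ManualFlags: u32 {
        const X = 0b0001;
        const Y = 0b0010;
    }
}
```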
`.debug_pubnames` and `.debug_pubtypes` are poorly designed and seldom
used, yet they account for a considerable share of the final binary size.
This tells LLVM to stop emitting those sections for DWARFv4 and lower.
DWARFv5 uses `.debug_names` instead, which is both smaller and faster for
name lookup.
Currently LLVM uses emutls by default
for some targets (such as Android and OpenBSD),
but Rust does not use it, because `has_thread_local` is false.
This commit makes some changes to allow users to enable emutls:
1. Add a `-Zhas-thread-local` flag to specify
that std uses `#[thread_local]` instead of a pthread key
(see the sketch after this list).
2. When using emutls, decorate symbol names
so that thread-local symbols are found correctly.
3. Change `-Zforce-emulated-tls` to `-Ztls-model=emulated`
to explicitly specify whether to generate emutls.
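For reference, a minimal sketch of the `#[thread_local]` attribute that `-Zhas-thread-local` declares std to be using (nightly-only feature; the names below are illustrative):

```rust
#![feature(thread_local)]

use std::cell::Cell;

// With emutls, accesses to this static go through the emulated-TLS runtime
// instead of a native TLS register or a pthread key.
#[thread_local]
static COUNTER: Cell<u32> = Cell::new(0);

fn bump() -> u32 {
    COUNTER.set(COUNTER.get() + 1);
    COUNTER.get()
}

fn main() {
    assert_eq!(bump(), 1);
}
```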
Restore `#![no_builtins]` crates' participation in LTO.
After #113716, we can make `#![no_builtins]` crates participate in LTO again.
`#![no_builtins]` with LTO no longer results in undefined-reference errors. I believe this type of issue won't happen again.
Fixes #72140. Fixes #112245. Fixes #110606. Fixes #105734. Fixes #96486. Fixes #108853. Fixes #108893. Fixes #78744. Fixes #91158. Fixes https://github.com/rust-lang/cargo/issues/10118. Fixes https://github.com/rust-lang/compiler-builtins/issues/347.
The `nightly-2023-07-20` toolchain does not always reproduce these problems, because they depend on changes in compiler-builtins, core, and user code; that is why the issue keeps recurring and disappearing.
Some issues were not tested due to the difficulty of reproducing them.
r? pnkfelix
cc `@bjorn3` `@japaric` `@alexcrichton` `@Amanieu`
This is intended to be used for Linux kernel RETHUNK builds.
With this commit (optionally backported to Rust 1.73.0), plus a
patched Linux kernel to pass the flag, I get a RETHUNK build with
Rust enabled that is `objtool`-warning-free and is able to boot in
QEMU and load a sample Rust kernel module.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
`LLVMConstZext` recently got deleted, and it turns out (thanks to @nikic
for knowing!) that this is dead code. Tests all pass for me without this
logic, and per nikic:
> We always generate constants in "relocatable bag of bytes"
> representation, so you're never going to get a plain bool.
So this should be a safe thing to do.
r? @nikic
@rustbot label: +llvm-main
LLVM already supports emitting compressed debuginfo. In debuginfo=full
builds, the debug sections often account for a large amount of data, and they
typically compress very well (3x is not unreasonable). We add a new
knob to allow debuginfo to be compressed when the matching LLVM
functionality is present. Like clang, if a known-but-disabled
compression mechanism is requested, we disable compression and emit
uncompressed debuginfo sections.
The API is different enough on older LLVMs that we just pretend the support
is missing on LLVM older than 16.
Upstream change
llvm/llvm-project@6b539f5eb8 changed how
`isSectionBitcode` works: it now only respects `.llvm.lto` sections
rather than also `.llvmbc`, which upstream says was never intended to be used
for LTO. We instead load sections by name, and sniff for raw bitcode by
hand.
r? @nikic
@rustbot label: +llvm-main
As experimentation in #115242 has shown, this looks better than `coldcc`.
And *don't* use a different convention for cold on Windows, because that actually ends up making things worse.
cc tracking issue #97544
coverage: Don't convert filename/symbol strings to `CString` for FFI
LLVM APIs are usually perfectly happy to accept pointer/length strings, as long as we supply a suitable length value when creating a `StringRef` or `std::string`.
This lets us avoid quite a few intermediate `CString` copies during coverage codegen. It also lets us use an `IndexSet<Symbol>` (instead of an `IndexSet<CString>`) when building the deduplicated filename table.
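A hypothetical sketch of the pattern; the function name and signature below are invented for illustration, the real FFI declarations live in `rustc_codegen_llvm`:

```rust
use std::os::raw::c_char;

extern "C" {
    // Invented for illustration: the C++ side can build a StringRef
    // directly from a (pointer, length) pair.
    fn LLVMRustCoverageAddFilename(ptr: *const c_char, len: usize);
}

fn add_filename(name: &str) {
    // No NUL terminator and no intermediate CString copy needed;
    // the length travels alongside the pointer.
    unsafe { LLVMRustCoverageAddFilename(name.as_ptr() as *const c_char, name.len()) }
}
```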
CFI: Fix error compiling core with LLVM CFI enabled
Fix #90546 by filtering out global value function pointer types from the type tests, and adding the LowerTypeTests pass to the rustc LTO optimization pipelines.
Coverage FFI types were historically split across two modules, because some of
them were needed by code in `rustc_codegen_ssa`.
Now that all of the coverage codegen code has been moved into
`rustc_codegen_llvm` (#113355), it's possible to move all of the FFI types into
a single module, making it easier to see all of them at once.
Filter out short-lived LLVM diagnostics before they reach the rustc handler
During profiling I saw remark passes being unconditionally enabled: for example `Machine Optimization Remark Emitter`.
The diagnostic remarks enabled by default are [from missed optimizations and opt analyses](https://github.com/rust-lang/rust/pull/113339#discussion_r1259480303). They are created by LLVM, passed to the diagnostic handler on the C++ side, emitted to Rust, where they are unpacked, C++ strings are converted to Rust strings, etc.
Then they are discarded the vast majority of the time (i.e. unless some kind of `-Cremark` has enabled some of these passes' output to be printed).
These unneeded allocations are very short-lived, basically only lasting between the LLVM pass emitting them and the Rust handler where they are discarded. So it doesn't hugely impact max-rss, and is only a slight reduction in instruction count (cachegrind reports a reduction between 0.3% and 0.5%) _on Linux_. It's possible that targets without `jemalloc`, or with a worse allocator, may handle these allocations less efficiently.
It is however significant in the aggregate, looking at the total number of allocated bytes:
- it's the biggest source of allocations according to dhat, on the benchmarks I've tried e.g. `syn` or `cargo`
- allocations on `syn` are reduced by about 410MB, 17% (from 2440722647 bytes total, to 2030461328 bytes)
- allocations on `cargo` are reduced by 6.6GB, 19% (from 35371886402 bytes total, to 28723987743 bytes)
Some of these diagnostics objects [are allocated in LLVM](https://github.com/rust-lang/rust/pull/113339#discussion_r1252387484) *before* they're emitted to our diagnostic handler, where they'll be filtered out. So we could remove those in the future, but that will require changing a few LLVM call-sites upstream, so I left a FIXME.
Support `--print KIND=PATH` command line syntax
As is already done for `--emit KIND=PATH` and `-L KIND=PATH`.
In the discussion of #110785, it was pointed out that `--print KIND=PATH` is nicer than trying to apply the single global `-o` path to `--print`'s output, because in general there can be multiple print requests within a single rustc invocation, and anyway `-o` would already be used for a different meaning in the case of `link-args` and `native-static-libs`.
I am interested in using `--print cfg=PATH` in Buck2. Currently Buck2 works around the lack of support for `--print KIND=PATH` by [indirecting through a Python wrapper script](d43cf3a51a/prelude/rust/tools/get_rustc_cfg.py) to redirect rustc's stdout into the location dictated by the build system.
From skimming Cargo's usages of `--print`, it definitely seems like it would benefit from `--print KIND=PATH` too. Currently it is working around the lack of this by inserting `--crate-name=___ --print=crate-name` so that it can look for a line containing `___` as a delimiter between the two other pieces of `--print` output it actually cares about. This is commented as a "HACK" and "abuse". 31eda6f7c3/src/cargo/core/compiler/build_context/target_info.rs (L242) (FYI `@weihanglo` as you dealt with this recently in https://github.com/rust-lang/cargo/pull/11633.)
Mentioning reviewers active in #110785: `@fee1-dead` `@jyn514` `@bjorn3`
LLVM has a neat [statistics] feature that tracks how often optimizations kick
in. It's very handy for optimization work. Since we expose the LLVM pass
timings, I thought it made sense to expose the LLVM statistics too.
[statistics]: https://llvm.org/docs/ProgrammersManual.html#the-statistic-class-stats-option
Coverage has two FFI functions for computing the hash of a byte string. One
takes a ptr/len pair, and the other takes a NUL-terminated C string.
But on closer inspection, the C string version is unnecessary. The calling-side
code converts a Rust `&str` into a C string, and the C++ code then immediately
turns it back into a ptr/len string before actually hashing it.
Add `-Zremark-dir` unstable flag to write LLVM optimization remarks to YAML
This PR adds an option for `rustc` to emit LLVM optimization remarks to a set of YAML files, which can then be digested by existing tools, like https://github.com/OfekShilon/optview2. When `-Zremark-dir` is passed and remarks are enabled (`-Cremark=all`), the remarks are now written to the specified directory, **instead** of being printed to standard error output. The files are named based on the CGU from which they are generated.
Currently, the remarks are written using the LLVM streaming machinery, directly in the diagnostics handler. It seemed easier than going back to Rust and then from there back to C++ to use the streamer from the diagnostics handler. But there are many ways to implement this, of course, so I'm open to suggestions :)
I included some comments with questions into the code. Also, I'm not sure how to test this.
r? `@tmiasko`
After the last commit, they contain `Option<&OperandBundleDef<'a>>` but
the values are always `Some(_)`. This commit removes the needless
`Option` wrapper. This also simplifies the type signatures of
`LLVMRustBuild{Invoke,Call}`, which were relying on the fact that the
representation of `Option<&T>` is the same as `&T` for non-`None` values.
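A small illustration (not rustc code) of the layout guarantee those signatures were leaning on:

```rust
fn main() {
    // `Option<&T>` uses the null pointer as its `None` value, so it has the
    // same size (and ABI) as a plain `&T`; `Some(r)` is represented as `r`.
    assert_eq!(
        std::mem::size_of::<Option<&u32>>(),
        std::mem::size_of::<&u32>(),
    );
}
```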
Adds support for LLVM [SafeStack] which provides backward edge control
flow protection by separating the stack into two parts: data which is
only accessed in provably safe ways is allocated on the normal stack
(the "safe stack") and all other data is placed in a separate allocation
(the "unsafe stack").
SafeStack support is enabled by passing `-Zsanitizer=safestack`.
[SafeStack]: https://clang.llvm.org/docs/SafeStack.html
Remove the ThinLTO CU hack
This reverts #46722, commit e0ab5d5feb.
Since #111167, commit 10b69dde3f, we are
generating DWARF subprograms in a way that is meant to be more compatible
with LLVM's expectations, so hopefully we don't need this workaround
rewriting CUs anymore.
debuginfo: split method declaration and definition
When we're adding a method to a type DIE, we only want a DW_AT_declaration
there, because LLVM LTO can't unify type definitions when a child DIE is a
full subprogram definition. Now the subprogram definition gets added at the
CU level with a specification link back to the abstract declaration.
Both GCC and Clang write debuginfo this way for C++ class methods.
Fixes #109730.
Fixes #109934.
`-Cdebuginfo=1` was never line tables only, and
can't be, due to backwards-compatibility concerns.
This was clarified, and an option for line tables only
was added. Additionally, an option for line info
directives only was added, which is much needed for
some targets. The debug info options should now
behave the same as Clang's debug info options.
In cases where it is legal, we should prefer poison values over
undef values.
This replaces undef with poison for aggregate construction and
for uninhabited types. There are more places where we can likely
use poison, but I wanted to stay conservative to start with.
In particular the aggregate case is important for newer LLVM
versions, which are not able to handle an undef base value during
early optimization due to poison-propagation concerns.
Remove legacy PM leftovers
This drops two leftovers of legacy PM usage:
* We don't need to initialize passes anymore.
* The pass listing was still using legacy PM passes. Replace it with the corresponding new PM listing.
When building with Fat LTO and BTI enabled on aarch64, BTI is set to
`Module::Min` for the alloc shim but to `Module::Error` for the
crate. This was fine when we were using LLVM 14, but LLVM 15 changed its
behaviour to support compiling with different `mbranch-protection`
flags.
Refer:
b0343a38a5
Add LLVM KCFI support to the Rust compiler
This PR adds LLVM Kernel Control Flow Integrity (KCFI) support to the Rust compiler. It initially provides forward-edge control flow protection for operating systems kernels for Rust-compiled code only by aggregating function pointers in groups identified by their return and parameter types. (See llvm/llvm-project@cff5bef.)
Forward-edge control flow protection for "mixed binaries" of C/C++- and Rust-compiled code (i.e., for when C/C++- and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project, by identifying C char and integer type uses at the time types are encoded (see Type metadata in the design document in the tracking issue #89653).
LLVM KCFI can be enabled with `-Zsanitizer=kcfi`.
Thank you again, `@bjorn3`, `@eddyb`, `@nagisa`, and `@ojeda`, for all the help!
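As a purely illustrative sketch of the grouping described above (not rustc or kernel code): functions and indirect call sites are grouped by signature, so a call through a `fn(i32) -> i32` pointer may only land on functions of that same type.

```rust
fn double(x: i32) -> i32 {
    x * 2
}

// Under KCFI, this indirect call site and `double` fall into the same
// type-based group, so the check at the call site passes; a pointer forged
// to a function with a different signature would be rejected.
fn call_indirect(f: fn(i32) -> i32, x: i32) -> i32 {
    f(x)
}

fn main() {
    assert_eq!(call_indirect(double, 21), 42);
}
```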
Co-authored-by: bjorn3 <17426603+bjorn3@users.noreply.github.com>
Pass 128-bit C-style enum enumerator values to LLVM
Pass the full 128 bits of C-style enum enumerators through to LLVM. This means that debuginfo for C-style repr128 enums is now emitted correctly for DWARF platforms (as compared to not being correctly emitted on any platform).
Tracking issue: #56071
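A minimal sketch of the kind of enum affected (requires the nightly `repr128` feature; the names are illustrative):

```rust
#![feature(repr128)]
#![allow(incomplete_features, dead_code)]

// The enumerator values below need all 128 bits; they are now forwarded to
// LLVM in full so DWARF debuginfo records them correctly.
#[repr(u128)]
enum Big {
    Zero = 0,
    Max = u128::MAX,
}

fn main() {}
```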
Remove support for legacy PM
This removes support for optimizing with LLVM's legacy pass manager, as well as the unstable `-Znew-llvm-pass-manager` option. We have been defaulting to the new PM since LLVM 13 (except for s390x that waited for 14), and LLVM 15 removed support altogether. The only place we still use the legacy PM is for writing the output file, just like `llc` does.
cc #74705
r? `@nikic`
Use object instead of LLVM for reading bitcode from rlibs
Together with changes I plan to make as part of https://github.com/rust-lang/rust/pull/97485 this will allow entirely removing usage of LLVM's archive reader and thus allow removing `archive_ro.rs` and `ArchiveWrapper.cpp`.
Update the minimum external LLVM to 13
With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.
r? `@nagisa`
Add support for generating unique profraw files by default when using `-C instrument-coverage`
Currently, enabling the rustc flag `-C instrument-coverage` instruments the given crate and by default uses the naming scheme `default.profraw` for any instrumented profile files generated during the execution of a binary linked against this crate. As a result, multiple executed binaries overwrite one another's output, and only the last executable run actually contributes coverage results.
This can be overridden by manually setting the environment variable `LLVM_PROFILE_FILE` to use a unique naming scheme.
This PR gives rustc a reasonable default to use when coverage instrumentation is enabled, similar to how the compiler already treats the `profraw` files generated when PGO is enabled.
The new naming scheme is set to `default_%m_%p.profraw` to ensure the uniqueness of each file being generated using [LLVMs special pattern strings](https://clang.llvm.org/docs/SourceBasedCodeCoverage.html#running-the-instrumented-program).
Today the compiler sets the default for PGO `profraw` files to `default_%m.profraw` to ensure a unique file for each run. The same can be done for the instrumented profile files generated via the `-C instrument-coverage` flag, which LLVM has API support for.
Linked Issue: https://github.com/rust-lang/rust/issues/100381
r? `@wesleywiser`
This obviates the patch that teaches LLVM internals about
`__rust_{re,de}alloc` functions by putting annotations directly in the IR
for the optimizer.
The sole test change is required to anchor FileCheck to the body of the
`box_uninitialized` method, so it doesn't see the `allocalign` on
`__rust_alloc` and get mad about the string `alloca` showing up. Since I
was there anyway, I added some checks on the attributes to prove the
right attributes got set.
While we're here, we also emit allocator attributes on
`__rust_alloc_zeroed`. This should allow LLVM to perform more
optimizations for zeroed blocks, and probably fixes #90032. [This
comment](https://github.com/rust-lang/rust/issues/24194#issuecomment-308791157)
mentions "weird UB-like behaviour with bitvec iterators in
rustc_data_structures" so we may need to back this change out if things
go wrong.
The new test cases require LLVM 15, so we copy them into LLVM
14-supporting versions, which we can delete when we drop LLVM 14.
Add support for LLVM ShadowCallStack.
LLVM's ShadowCallStack provides backward-edge control flow integrity protection by using a separate shadow stack to store and retrieve a function's return address.
LLVM currently only supports this for AArch64 targets. The x18 register is used to hold the pointer to the shadow stack, and therefore this only works on ABIs which reserve x18. Further details are available in the [LLVM ShadowCallStack](https://clang.llvm.org/docs/ShadowCallStack.html) docs.
# Usage
`-Zsanitizer=shadow-call-stack`
# Comments/Caveats
* Currently only enabled for the aarch64-linux-android target
* Requires the platform to define a runtime to initialize the shadow stack, see the [LLVM docs](https://clang.llvm.org/docs/ShadowCallStack.html) for more detail.
Allow disabling the thinLTO buffer to support the lld `lto-embed-bitcode` feature
Hello
This change fixes issue https://github.com/rust-lang/rust/issues/84395, in which passing `-lto-embed-bitcode=optimized` to lld when linking Rust code via linker-plugin-lto doesn't produce the expected result.
Instead of emitting a single unified module into the `.llvmbc` section of the linked ELF, it emits multiple submodules.
This is caused by rustc emitting the BC modules after running LLVM's `createWriteThinLTOBitcodePass` pass,
which in turn triggers a thinLTO linkage and causes the said issue.
This patch adds a compiler flag (`-Cemit-thin-lto=<bool>`) to select between running `createWriteThinLTOBitcodePass` and `createBitcodeWriterPass`.
Note that this pattern of selecting between those two passes is common inside LLVM code.
The default matches the old behavior.