debuginfo: add compiler option to allow compressed debuginfo sections
LLVM already supports emitting compressed debuginfo. In debuginfo=full builds, the debug section is often a large amount of data, and it typically compresses very well (3x is not unreasonable). We add a new knob to allow debuginfo to be compressed when the matching LLVM functionality is present. Like clang, if a known-but-disabled compression mechanism is requested, we disable compression and emit uncompressed debuginfo sections.
The API is different enough on older LLVMs that we just pretend the support is missing on LLVM older than 16.
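A minimal sketch of the fallback logic, assuming LLVM 16+'s `DebugCompressionType` and the `llvm::compression` availability probes; the function name is illustrative and the wiring into the target machine options is omitted:

```cpp
#include "llvm/MC/MCTargetOptions.h"
#include "llvm/Support/Compression.h"
#include "llvm/Target/TargetOptions.h"

// Pick a debug-section compression scheme only if LLVM was built with the
// matching library; otherwise fall back to uncompressed sections, like clang.
llvm::DebugCompressionType chooseDebugCompression(bool WantZlib, bool WantZstd) {
  if (WantZstd && llvm::compression::zstd::isAvailable())
    return llvm::DebugCompressionType::Zstd;
  if (WantZlib && llvm::compression::zlib::isAvailable())
    return llvm::DebugCompressionType::Zlib;
  return llvm::DebugCompressionType::None;
}
```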
lto: load bitcode sections by name
Upstream change
llvm/llvm-project@6b539f5eb8 changed how `isSectionBitcode` works: it now only respects `.llvm.lto` sections instead of also `.llvmbc`, which it says was never intended to be used for LTO. We instead load sections by name, and sniff for raw bitcode by hand.
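A hedged C++ sketch of that approach (names and error handling simplified; not the actual rustc code): walk the object's sections, match on the section name, and check the raw bitcode magic ourselves.

```cpp
#include "llvm/Object/ObjectFile.h"
#include "llvm/Support/Error.h"
#include <optional>

// Raw LLVM bitcode starts with the magic bytes 'B', 'C', 0xC0, 0xDE.
static bool looksLikeRawBitcode(llvm::StringRef Data) {
  return Data.size() >= 4 && Data[0] == 'B' && Data[1] == 'C' &&
         (unsigned char)Data[2] == 0xC0 && (unsigned char)Data[3] == 0xDE;
}

static std::optional<llvm::StringRef>
findBitcodeByName(const llvm::object::ObjectFile &Obj, llvm::StringRef Wanted) {
  for (const llvm::object::SectionRef &Sec : Obj.sections()) {
    llvm::Expected<llvm::StringRef> Name = Sec.getName();
    if (!Name) { llvm::consumeError(Name.takeError()); continue; }
    if (*Name != Wanted)
      continue;
    llvm::Expected<llvm::StringRef> Contents = Sec.getContents();
    if (!Contents) { llvm::consumeError(Contents.takeError()); continue; }
    if (looksLikeRawBitcode(*Contents))
      return *Contents;
  }
  return std::nullopt;
}
```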
This is an alternative approach to #115136, where we tried the same thing using the `object` crate, but it got too fraught to continue.
r? `@nikic`
`@rustbot` label: +llvm-main
`-Cllvm-args` usability improvement
Fixes #26338, fixes #115564
Two problems were found while playing with `-Cllvm-args`:
1. When `llvm.link-shared` is set to `false` in `config.toml`, output of `rustc -C llvm-args='--help-list-hidden'` doesn't contain `--emit-dwarf-unwind` and `--emulated-tls`. When it is set to `true`, `rustc -C llvm-args='--help-list-hidden'` emits `--emit-dwarf-unwind`, but `--emulated-tls` is still missing.
2. Setting `-Cllvm-args=--emit-dwarf-unwind=always` doesn't take any effect, but `-Cllvm-args=-machine-outliner-reruns=3` does work.
### 1
Adding `RegisterCodeGenFlags` to register codegen flags fixed the first problem. `rustc -C llvm-args='--help-list-hidden'` emits full codegen flags including `--emit-dwarf-unwind` and `--emulated-tls`.
### 2
Constructing `TargetOptions` from `InitTargetOptionsFromCodeGenFlags` in `LLVMRustCreateTargetMachine` fixed the second problem. `LLVMRustSetLLVMOptions` calls `ParseCommandLineOptions`, which parses the given `llvm-args`. For options like `machine-outliner-reruns`, it just works, since the codegen logic directly consumes the parsing result:
[machine-outliner-reruns register](0537f6354c/llvm/lib/CodeGen/MachineOutliner.cpp (L114))
[machine-outliner-reruns consumption](0537f6354c/llvm/lib/CodeGen/MachineOutliner.cpp (L1138))
But for flags defined in `TargetOptions` and `MCTargetOptions` to take effect, constructing them with `InitTargetOptionsFromCodeGenFlags` is essential; otherwise the parsing result is simply never consumed. Similar patterns can be observed in [llc](0537f6354c/llvm/tools/llc/llc.cpp (L494)), [lli](0537f6354c/llvm/tools/lli/lli.cpp (L517)), etc.
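A hedged C++ sketch of the two pieces involved (not the exact `PassWrapper.cpp` change): a static `llvm::codegen::RegisterCodeGenFlags` instance registers the codegen `cl::opt`s so `-Cllvm-args` can list and parse them, and `InitTargetOptionsFromCodeGenFlags` turns the parsed values into the `TargetOptions` the target machine is created with.

```cpp
#include "llvm/CodeGen/CommandFlags.h"
#include "llvm/Target/TargetOptions.h"
#include "llvm/TargetParser/Triple.h" // llvm/ADT/Triple.h on older LLVM

// Registering the flags once makes e.g. --emit-dwarf-unwind and --emulated-tls
// visible to --help-list-hidden and to ParseCommandLineOptions.
static llvm::codegen::RegisterCodeGenFlags CGF;

llvm::TargetOptions makeTargetOptions(const llvm::Triple &TheTriple) {
  // Start from whatever the parsed cl::opt values say; rustc-specific
  // overrides would be applied on top of this.
  return llvm::codegen::InitTargetOptionsFromCodeGenFlags(TheTriple);
}
```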
Add CL and CMD into PDB debug info
Partial fix for https://github.com/rust-lang/rust/issues/96475
The `Arg0` and `CommandLineArgs` fields of the `MCTargetOptions` C++ class are not set within bb548f9645/compiler/rustc_llvm/llvm-wrapper/PassWrapper.cpp (L378)
This causes LLVM to output neither the compiler path (cl) nor the arguments that were used when invoking it (cmd) in the PDB file.
This fix adds the missing information to the target machine so LLVM can use it.
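A sketch of the shape of the fix; the `Argv0`/`CommandLineArgs` field names reflect my understanding of LLVM's `MCTargetOptions` and should be treated as assumptions, not a quote of the actual change:

```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Target/TargetOptions.h"
#include <string>

// Fill in the MC-level options CodeView uses to emit the compiler path
// (LF_BUILDINFO "cl") and command line ("cmd") into the PDB.
void addBuildInfo(llvm::TargetOptions &Options, const char *RustcPath,
                  llvm::ArrayRef<std::string> Args) {
  Options.MCOptions.Argv0 = RustcPath;      // path of the invoking compiler
  Options.MCOptions.CommandLineArgs = Args; // the arguments it was given
}
```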
rustc_llvm: Link to `zlib` on dragonfly and solaris
On native builds `llvm-config` picks up `zlib` and this gets passed into
the rust build tools, but on cross builds `llvm-config` is explicitly
ignored as it contains information for the host system and cannot be
trusted to be accurate for the target system.
Both DragonFly and Solaris contain `zlib` in the base system, so this is
both a safe assumption and required for a successful cross build unless
`zlib` support is disabled in LLVM.
This is more or less in the same vein as rust-lang#75713 and rust-lang#75655.
Move a local to the `#if` block where it is used
For other cases (LLVM < 17), this was complaining under `-Wall`:
```
warning: llvm-wrapper/PassWrapper.cpp: In function ‘void LLVMRustPrintTargetCPUs(LLVMTargetMachineRef, const char*)’:
warning: llvm-wrapper/PassWrapper.cpp:311:26: warning: unused variable ‘MCInfo’ [-Wunused-variable]
warning: 311 | const MCSubtargetInfo *MCInfo = Target->getMCSubtargetInfo();
warning: | ^~~~~~
```
coverage: Don't convert filename/symbol strings to `CString` for FFI
LLVM APIs are usually perfectly happy to accept pointer/length strings, as long as we supply a suitable length value when creating a `StringRef` or `std::string`.
This lets us avoid quite a few intermediate `CString` copies during coverage codegen. It also lets us use an `IndexSet<Symbol>` (instead of an `IndexSet<CString>`) when building the deduplicated filename table.
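As an illustration (the entry-point name here is hypothetical, not one of the real coverage FFI functions), a ptr/len pair crosses the boundary and becomes a `StringRef` without any intermediate `CString`:

```cpp
#include "llvm/ADT/StringRef.h"
#include <cstddef>

// The Rust side can pass `str.as_ptr()` and `str.len()` directly; no NUL
// terminator or extra allocation is needed to build the StringRef.
extern "C" void exampleRustStrSink(const char *Ptr, size_t Len) {
  llvm::StringRef S(Ptr, Len);
  (void)S; // ... hand S to whichever LLVM API needs the string ...
}
```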
update llvm-wrapper include to silence deprecation warning
Includes of `include/llvm/Support/Host.h` now emit a deprecated warning: `warning: This header is deprecated, please use llvm/TargetParser/Host.h`.
I don't believe we are using this include.
I don't believe we need to bump the `download-ci-llvm` stamp since these warnings are emitted while building the `llvm-wrapper`.
r? ```@nikic```
CFI: Fix error compiling core with LLVM CFI enabled
Fix #90546 by filtering out global value function pointer types from the type tests, and adding the LowerTypeTests pass to the rustc LTO optimization pipelines.
Add hotness data to LLVM remarks
Slight improvement of https://github.com/rust-lang/rust/pull/113040. This makes sure that if PGO is used, remarks generated using `-Zremark-dir` will include the `Hotness` attribute.
r? `@tmiasko`
Filter out short-lived LLVM diagnostics before they reach the rustc handler
During profiling I saw remark passes being unconditionally enabled: for example `Machine Optimization Remark Emitter`.
The diagnostic remarks enabled by default are [from missed optimizations and opt analyses](https://github.com/rust-lang/rust/pull/113339#discussion_r1259480303). They are created by LLVM, passed to the diagnostic handler on the C++ side, emitted to rust, where they are unpacked, C++ strings are converted to rust, etc.
Then they are discarded in the vast majority of the time (i.e. unless some kind of `-Cremark` has enabled some of these passes' output to be printed).
These unneeded allocations are very short-lived, basically only lasting between the LLVM pass emitting them and the rust handler where they are discarded. So it doesn't hugely impact max-rss, and is only a slight reduction in instruction count (cachegrind reports a reduction between 0.3% and 0.5%) _on linux_. It's possible that targets without `jemalloc` or with a worse allocator, may optimize these less.
It is however significant in the aggregate, looking at the total number of allocated bytes:
- it's the biggest source of allocations according to dhat, on the benchmarks I've tried e.g. `syn` or `cargo`
- allocations on `syn` are reduced by 440MB, 17% (from 2440722647 bytes total, to 2030461328 bytes)
- allocations on `cargo` are reduced by 6.6GB, 19% (from 35371886402 bytes total, to 28723987743 bytes)
Some of these diagnostics objects [are allocated in LLVM](https://github.com/rust-lang/rust/pull/113339#discussion_r1252387484) *before* they're emitted to our diagnostic handler, where they'll be filtered out. So we could remove those in the future, but that will require changing a few LLVM call-sites upstream, so I left a FIXME.
This eliminates many short-lived allocations (e.g. 20% of the memory used
building cargo) that were made when unpacking the diagnostic and converting its
various C++ strings into Rust strings, just to be filtered out most of the time.
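A hedged sketch of the filtering idea (names are illustrative, not the exact rustc handler): decide inside the C++ diagnostic handler whether a remark is wanted at all, before any strings are converted and passed across the FFI boundary.

```cpp
#include "llvm/IR/DiagnosticInfo.h"

struct RemarkConfig {
  bool RemarksEnabled = false; // set when -Cremark / -Zremark-dir is in use
};

// Only forward optimization remarks when the user actually asked for them;
// everything else (errors, warnings, ...) is always forwarded.
static bool shouldForward(const llvm::DiagnosticInfo &DI,
                          const RemarkConfig &Cfg) {
  switch (DI.getKind()) {
  case llvm::DK_OptimizationRemark:
  case llvm::DK_OptimizationRemarkMissed:
  case llvm::DK_OptimizationRemarkAnalysis:
    return Cfg.RemarksEnabled;
  default:
    return true;
  }
}
```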
`getHostCPUName` calls into libkstat, but as of
LLVM 16.0.6 libLLVMTargetParser is not explicitly
linked against libkstat, causing builds to fail
due to undefined symbols.
See also: llvm/llvm-project#64186
Both GCC and Clang write by default a `.comment` section with compiler
information:
```txt
$ gcc -c -xc /dev/null && readelf -p '.comment' null.o
String dump of section '.comment':
[ 1] GCC: (GNU) 11.2.0
$ clang -c -xc /dev/null && readelf -p '.comment' null.o
String dump of section '.comment':
[ 1] clang version 14.0.1 (https://github.com/llvm/llvm-project.git c62053979489ccb002efe411c3af059addcb5d7d)
```
They also implement the `-Qn` flag to avoid doing so:
```txt
$ gcc -Qn -c -xc /dev/null && readelf -p '.comment' null.o
readelf: Warning: Section '.comment' was not dumped because it does not exist!
$ clang -Qn -c -xc /dev/null && readelf -p '.comment' null.o
readelf: Warning: Section '.comment' was not dumped because it does not exist!
```
So far, `rustc` only does it for WebAssembly targets and only
when debug info is enabled:
```txt
$ echo 'fn main(){}' | rustc --target=wasm32-unknown-unknown --emit=llvm-ir -Cdebuginfo=2 - && grep llvm.ident rust_out.ll
!llvm.ident = !{!27}
```
In the RFC part of this PR it was decided to always add
the information, which gets us closer to other popular compilers.
An opt-out flag like GCC and Clang may be added later on if deemed
necessary.
Implementation-wise, this covers both `ModuleLlvm::new()` and
`ModuleLlvm::new_metadata()` cases by moving the addition to
`context::create_module` and adds a few test cases.
ThinLTO also sees the `llvm.ident` named metadata duplicated (in
temporary outputs), so this deduplicates it like it is done for
`wasm.custom_sections`. The tests also check this duplication does
not take place.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
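A minimal sketch of the metadata addition described above (the exact producer string and the ThinLTO deduplication are omitted):

```cpp
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

// Record the producer as `!llvm.ident` module metadata; object emission turns
// this into the `.comment` section on ELF targets.
void addIdentMetadata(llvm::Module &M, llvm::StringRef Producer) {
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::NamedMDNode *Ident = M.getOrInsertNamedMetadata("llvm.ident");
  Ident->addOperand(llvm::MDNode::get(Ctx, llvm::MDString::get(Ctx, Producer)));
}
```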
Support `--print KIND=PATH` command line syntax
As is already done for `--emit KIND=PATH` and `-L KIND=PATH`.
In the discussion of #110785, it was pointed out that `--print KIND=PATH` is nicer than trying to apply the single global `-o` path to `--print`'s output, because in general there can be multiple print requests within a single rustc invocation, and anyway `-o` would already be used for a different meaning in the case of `link-args` and `native-static-libs`.
I am interested in using `--print cfg=PATH` in Buck2. Currently Buck2 works around the lack of support for `--print KIND=PATH` by [indirecting through a Python wrapper script](d43cf3a51a/prelude/rust/tools/get_rustc_cfg.py) to redirect rustc's stdout into the location dictated by the build system.
From skimming Cargo's usages of `--print`, it definitely seems like it would benefit from `--print KIND=PATH` too. Currently it is working around the lack of this by inserting `--crate-name=___ --print=crate-name` so that it can look for a line containing `___` as a delimiter between the two other pieces of `--print` information it actually cares about. This is commented as a "HACK" and "abuse". 31eda6f7c3/src/cargo/core/compiler/build_context/target_info.rs (L242) (FYI `@weihanglo` as you dealt with this recently in https://github.com/rust-lang/cargo/pull/11633.)
Mentioning reviewers active in #110785: `@fee1-dead` `@jyn514` `@bjorn3`
Resurrect: rustc_llvm: Add a -Z `print-codegen-stats` option to expose LLVM statistics.
This resurrects PR https://github.com/rust-lang/rust/pull/104000, which has sat idle for a while. And I want to see the effect of stack-move optimizations on LLVM (like https://reviews.llvm.org/D153453) :).
I have applied the changes requested by `@oli-obk` and `@nagisa` https://github.com/rust-lang/rust/pull/104000#discussion_r1014625377 and https://github.com/rust-lang/rust/pull/104000#discussion_r1014642482 in the latest commits.
r? `@oli-obk`
-----
LLVM has a neat [statistics](https://llvm.org/docs/ProgrammersManual.html#the-statistic-class-stats-option) feature that tracks how often optimizations kick in. It's very handy for optimization work. Since we expose the LLVM pass timings, I thought it made sense to expose the LLVM statistics too.
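For reference, a hedged sketch of the underlying LLVM hooks (not the rustc wrapper itself):

```cpp
#include "llvm/ADT/Statistic.h"
#include "llvm/Support/raw_ostream.h"

void runWithStats() {
  // Ask LLVM to collect statistics; without assertions (LLVM_ENABLE_STATS)
  // most STATISTIC() counters are compiled out, hence the empty table below.
  llvm::EnableStatistics(/*DoPrintOnExit=*/false);

  // ... run the optimization and codegen pipelines here ...

  llvm::PrintStatistics(llvm::errs());
}
```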
-----
(Edit: fix broken link)
(Edit2: fix segmentation fault and use malloc)
If `rustc` is built with
```toml
[llvm]
assertions = true
```
then you can see output like
```
rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
3 aa - Number of MayAlias results
193 aa - Number of MustAlias results
531 aa - Number of NoAlias results
...
```
And the current default build emits only
```
$ rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
$
```
It might be better to emit a message explaining that LLVM assertions are required, but I haven't found out how to do that yet...
CI: build CMake 3.20 to support LLVM 17
LLVM 17 will require CMake at least 3.20, so we have to go back to building our own CMake on the Linux x64 dist builder.
r? `@nikic`
Remove `LLVMRustCoverageHashCString`
Coverage has two FFI functions for computing the hash of a byte string. One takes a ptr/len pair (`LLVMRustCoverageHashByteArray`), and the other takes a NUL-terminated C string (`LLVMRustCoverageHashCString`).
But on closer inspection, the C string version is unnecessary. The calling-side code converts a Rust `&str` into a `CString`, and the C++ code then immediately turns it back into a ptr/len string before actually hashing it. So we can just call the ptr/len version directly instead.
---
This PR also fixes a bug in the C++ declaration of `LLVMRustCoverageHashByteArray`: the length parameter should be `size_t`, since that's what is declared and passed on the Rust side, and it's what `StringRef`'s constructor expects to receive on the callee side.
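A sketch of the surviving ptr/len entry point; the name is hypothetical, and the use of `IndexedInstrProf::ComputeHash` is my assumption about what the wrapper forwards to, not a quote of the actual implementation.

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/ProfileData/InstrProf.h"
#include <cstddef>
#include <cstdint>

extern "C" uint64_t exampleCoverageHashByteArray(const char *Bytes,
                                                 size_t NumBytes) {
  // size_t matches the usize passed from Rust and the length type that
  // StringRef's (pointer, length) constructor expects.
  return llvm::IndexedInstrProf::ComputeHash(llvm::StringRef(Bytes, NumBytes));
}
```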
Adds support for LLVM [SafeStack] which provides backward edge control
flow protection by separating the stack into two parts: data which is
only accessed in provable safe ways is allocated on the normal stack
(the "safe stack") and all other data is placed in a separate allocation
(the "unsafe stack").
SafeStack support is enabled by passing `-Zsanitizer=safestack`.
[SafeStack]: https://clang.llvm.org/docs/SafeStack.html
Remove the ThinLTO CU hack
This reverts #46722, commit e0ab5d5feb.
Since #111167, commit 10b69dde3f, we are
generating DWARF subprograms in a way that is meant to be more compatible
with LLVM's expectations, so hopefully we don't need this workaround
rewriting CUs anymore.
Expand the LLVM coverage of `--print target-cpus`
We've been relying on a custom patch to add `MCSubtargetInfo::getCPUTable`
for `rustc --print target-cpus`, and just printing that it's not supported
on external LLVM builds. LLVM `main` now has `getAllProcessorDescriptions`
that can replace ours, so now we try to use that. In addition, the fallback
path can at least print the native and default cpu options.
There were also some mismatches in the function signatures here between
`LLVM_RUSTLLVM` and otherwise; this is now mitigated by sharing these
functions and only using the C preprocessor to adjust the function bodies.
debuginfo: split method declaration and definition
When we're adding a method to a type DIE, we only want a DW_AT_declaration
there, because LLVM LTO can't unify type definitions when a child DIE is a
full subprogram definition. Now the subprogram definition gets added at the
CU level with a specification link back to the abstract declaration.
Both GCC and Clang write debuginfo this way for C++ class methods.
Fixes #109730.
Fixes #109934.
Fix printing native CPU on cross-compiled compiler.
If `rustc` is cross-compiled from a different host, then the "native" entry in `rustc --print=target-cpus` would not appear. There is a check in the printing code that avoids printing the "native" entry if the user has passed `--target`. However, that check was comparing the `--target` value with `LLVM_TARGET_TRIPLE`, which is the triple of the host that `rustc` was built on (the "build" target in Rust lingo), not the target it was being built for (the "host" in Rust lingo). This fixes it to use the target that LLVM was built for (which I'm pretty sure is the correct function to determine that).
This fixes the cpu listing for aarch64-apple-darwin which is built on CI using the x86_64-apple-darwin host.
Initial support for loongarch64-unknown-linux-gnu
Hi, we hope to add a new port of Rust for LoongArch.
LoongArch intro
LoongArch is a RISC style ISA which is independently designed by Loongson
Technology in China. It is divided into two versions, the 32-bit version (LA32)
and the 64-bit version (LA64). LA64 applications have application-level
backward binary compatibility with LA32 applications. LoongArch is composed of
a basic part (Loongson Base) and an expanded part. The expansion part includes
Loongson Binary Translation (LBT), Loongson VirtualiZation (LVZ), Loongson SIMD
EXtension (LSX) and Loongson Advanced SIMD EXtension (LASX).
Currently the LA464 processor core supports LoongArch ISA and the Loongson
3A5000 processor integrates 4 64-bit LA464 cores. LA464 is a four-issue 64-bit
high-performance processor core. It can be used as a single core for high-end
embedded and desktop applications, or as a basic processor core to form an
on-chip multi-core system for server and high-performance machine applications.
Documentations:
ISA:
https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html
ABI:
https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html
More docs can be found at:
https://loongson.github.io/LoongArch-Documentation/README-EN.html
Since last year, we have locally adapted two versions of Rust, 1.41 and 1.57, and completed testing locally.
I'm not sure whether I should submit all the patches at once, so I split them up; here is one of the commits.
`-Cdebuginfo=1` was never line tables only and can't be made so due to
backwards compatibility issues. This was clarified, and an option for
line tables only was added. Additionally, an option for line-directives
only was added, which is much needed for some targets. The debug info
options should now behave the same as clang's debug info options.
SymbolWrapper.cpp doesn't use std::optional or llvm::Optional, so this
patch removes the extraneous include. Note that llvm/ADT/Optional.h
has been deprecated upstream. This patch ensures that
SymbolWrapper.cpp continues to compile even after the upcoming removal
of Optional.h.
Remove legacy PM leftovers
This drops two leftovers of legacy PM usage:
* We don't need to initialize passes anymore.
* The pass listing was still using legacy PM passes. Replace it with the corresponding new PM listing.
llvm-wrapper: adapt for LLVM API change
No functional changes intended.
The LLVM commit e6b02214c6 added `TargetExtTyID` to the `TypeID` enum. This adapts `RustWrapper` accordingly.
Convert all the crates that have had their diagnostic migration
completed (except save_analysis because that will be deleted soon and
apfloat because of the licensing problem).
Add LLVM KCFI support to the Rust compiler
This PR adds LLVM Kernel Control Flow Integrity (KCFI) support to the Rust compiler. It initially provides forward-edge control flow protection for operating systems kernels for Rust-compiled code only by aggregating function pointers in groups identified by their return and parameter types. (See llvm/llvm-project@cff5bef.)
Forward-edge control flow protection for C or C++ and Rust -compiled code "mixed binaries" (i.e., for when C or C++ and Rust -compiled code share the same virtual address space) will be provided in later work as part of this project by identifying C char and integer type uses at the time types are encoded (see Type metadata in the design document in the tracking issue #89653).
LLVM KCFI can be enabled with -Zsanitizer=kcfi.
Thank you again, `@bjorn3,` `@eddyb,` `@nagisa,` and `@ojeda,` for all the help!
Co-authored-by: bjorn3 <17426603+bjorn3@users.noreply.github.com>
Throw error on failure in loading llvm-plugin
The following code silently ignores the error, as `LLVMRustSetLastError` only tracks one error at a time. At all other places where `LLVMRustSetLastError` is used, the code immediately returns.
251831ece9/compiler/rustc_llvm/llvm-wrapper/PassWrapper.cpp (L801-L804)
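A hedged sketch of the fix's shape: report the plugin-load failure and bail out right away instead of carrying on. `PassPlugin::Load` and the error plumbing are real LLVM APIs; the surrounding wrapper code is simplified.

```cpp
#include "llvm/Passes/PassBuilder.h"
#include "llvm/Passes/PassPlugin.h"
#include "llvm/Support/Error.h"

extern "C" void LLVMRustSetLastError(const char *); // provided by the wrapper

bool loadPluginOrFail(llvm::PassBuilder &PB, const char *Path) {
  llvm::Expected<llvm::PassPlugin> Plugin = llvm::PassPlugin::Load(Path);
  if (!Plugin) {
    LLVMRustSetLastError(llvm::toString(Plugin.takeError()).c_str());
    return false; // return immediately so the error is not silently dropped
  }
  Plugin->registerPassBuilderCallbacks(PB);
  return true;
}
```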
Pass 128-bit C-style enum enumerator values to LLVM
Pass the full 128 bits of C-style enum enumerators through to LLVM. This means that debuginfo for C-style repr128 enums is now emitted correctly for DWARF platforms (as compared to not being correctly emitted on any platform).
Tracking issue: #56071
ci: Upgrade dist-x86_64-netbsd to NetBSD 9.0
This is another step in toolchain upgrades for LLVM 16, which will need at least GCC 7.1.
Our previous NetBSD 8.0 cross-toolchain used its system GCC 5.5. While there are newer versions available in pkgsrc, I could not get those working for cross-compilation. Upgrading to NetBSD 9.0 gets us GCC 7.4, which is sufficient for now.
This will affect the compatibility of the build we ship for `x86_64-unknown-netbsd`, but others may still build their own from source if that is needed. It is expected that NetBSD 8 will reach EOL soon anyway, approximately one month after 10 is released, but there is no firm date for that.
LLVM 15 added `Optional::has_value`, and LLVM `main` (16) has deprecated
`hasValue`. However, its `explicit operator bool` does the same thing,
and was added long ago, so we can use that across our full LLVM range of
compatibility.
Remove support for legacy PM
This removes support for optimizing with LLVM's legacy pass manager, as well as the unstable `-Znew-llvm-pass-manager` option. We have been defaulting to the new PM since LLVM 13 (except for s390x that waited for 14), and LLVM 15 removed support altogether. The only place we still use the legacy PM is for writing the output file, just like `llc` does.
cc #74705
r? ``@nikic``
Add tier-3 support for powerpc64 and riscv64 openbsd
# powerpc64
- MCP for [powerpc64-unknown-openbsd tier-3 support](https://github.com/rust-lang/compiler-team/issues/551)
- only need to add spec definition in rustc_target
# riscv64
- MCP for [riscv64-unknown-openbsd tier-3 support](https://github.com/rust-lang/compiler-team/issues/552)
- add spec definition in rustc_target
- follow FreeBSD in avoiding linking with `libatomic`
Use object instead of LLVM for reading bitcode from rlibs
Together with changes I plan to make as part of https://github.com/rust-lang/rust/pull/97485 this will allow entirely removing usage of LLVM's archive reader and thus allow removing `archive_ro.rs` and `ArchiveWrapper.cpp`.
No functional changes intended.
LLVM commit 633f5663c3 removed `createWriteThinLTOBitcodePass`.
This adapts PassWrapper similarly to the example mentioned upstream: 633f5663c3.
Update the minimum external LLVM to 13
With this change, we'll have stable support for LLVM 13 through 15 (pending release).
For reference, the previous increase to LLVM 12 was #90175.
r? `@nagisa`
Add support for generating unique profraw files by default when using `-C instrument-coverage`
Currently, enabling the rustc flag `-C instrument-coverage` instruments the given crate and by default uses the naming scheme `default.profraw` for any instrumented profile files generated during the execution of a binary linked against this crate. This leads to multiple executed binaries overwriting one another's profiles, so that only the last executable run contains actual coverage results.
This can be overridden by manually setting the environment variable `LLVM_PROFILE_FILE` to use a unique naming scheme.
This PR adds support for a reasonable default for rustc to use when enabling coverage instrumentation, similar to how the Rust compiler treats generating these same `profraw` files when PGO is enabled.
The new naming scheme is set to `default_%m_%p.profraw` to ensure the uniqueness of each file being generated using [LLVM's special pattern strings](https://clang.llvm.org/docs/SourceBasedCodeCoverage.html#running-the-instrumented-program).
Today the compiler sets the default for PGO `profraw` files to `default_%m.profraw` to ensure a unique file for each run. The same can be done for the instrumented profile files generated via the `-C instrument-coverage` flag as well, which LLVM has API support for.
Linked Issue: https://github.com/rust-lang/rust/issues/100381
r? `@wesleywiser`
Fixes a warning:
warning: llvm-wrapper/RustWrapper.cpp:159:11: warning: enumeration values 'AllocatedPointer' and 'AllocAlign' not handled in switch [-Wswitch]
warning: switch (Kind) {
warning: ^
This was fallout from 130a1df71e.
This obviates the patch that teaches LLVM internals about
_rust_{re,de}alloc functions by putting annotations directly in the IR
for the optimizer.
The sole test change is required to anchor FileCheck to the body of the
`box_uninitialized` method, so it doesn't see the `allocalign` on
`__rust_alloc` and get mad about the string `alloca` showing up. Since I
was there anyway, I added some checks on the attributes to prove the
right attributes got set.
While we're here, we also emit allocator attributes on
__rust_alloc_zeroed. This should allow LLVM to perform more
optimizations for zeroed blocks, and probably fixes #90032. [This
comment](https://github.com/rust-lang/rust/issues/24194#issuecomment-308791157)
mentions "weird UB-like behaviour with bitvec iterators in
rustc_data_structures" so we may need to back this change out if things
go wrong.
The new test cases require LLVM 15, so we copy them into LLVM
14-supporting versions, which we can delete when we drop LLVM 14.
Add support for LLVM ShadowCallStack.
LLVM's ShadowCallStack provides backward-edge control flow integrity protection by using a separate shadow stack to store and retrieve a function's return address.
LLVM currently only supports this for AArch64 targets. The x18 register is used to hold the pointer to the shadow stack, and therefore this only works on ABIs which reserve x18. Further details are available in the [LLVM ShadowCallStack](https://clang.llvm.org/docs/ShadowCallStack.html) docs.
# Usage
`-Zsanitizer=shadow-call-stack`
# Comments/Caveats
* Currently only enabled for the aarch64-linux-android target
* Requires the platform to define a runtime to initialize the shadow stack, see the [LLVM docs](https://clang.llvm.org/docs/ShadowCallStack.html) for more detail.
Allow to disable thinLTO buffer to support lto-embed-bitcode lld feature
Hello
This change fixes issue https://github.com/rust-lang/rust/issues/84395, in which passing "-lto-embed-bitcode=optimized" to lld when linking rust code via linker-plugin-lto doesn't produce the expected result.
Instead of emitting a single unified module into an `.llvmbc` section of the linked ELF, it emits multiple submodules.
This is caused because rustc emits the BC modules after running llvm `createWriteThinLTOBitcodePass` pass.
Which in turn triggers a thinLTO linkage and causes the said issue.
This patch adds a compiler flag (`-Cemit-thin-lto=<bool>`) to select between running `createWriteThinLTOBitcodePass` and `createBitcodeWriterPass`.
Note this pattern of selecting between those 2 passes is common inside of LLVM code.
The default is to match the old behavior.
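A hedged sketch of the selection the flag controls, written against the new-pass-manager equivalents of the two writer passes; it mirrors the shape of the change rather than its exact code.

```cpp
#include "llvm/Bitcode/BitcodeWriterPass.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/IPO/ThinLTOBitcodeWriter.h"

void addBitcodeWriter(llvm::ModulePassManager &MPM, llvm::raw_ostream &OS,
                      bool EmitThinLto) {
  if (EmitThinLto)
    // ThinLTO module summaries: what linker-plugin ThinLTO normally wants.
    MPM.addPass(llvm::ThinLTOBitcodeWriterPass(OS, /*ThinLinkOS=*/nullptr));
  else
    // Plain bitcode: keeps -lto-embed-bitcode=optimized output as one module.
    MPM.addPass(llvm::BitcodeWriterPass(OS));
}
```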
Enable check-cfg in stage0
Now that the bootstrap cargo supports `rustc-check-cfg` we can now enable it with `-Zcheck-cfg=output` and use it in `rustc_llvm` to unblock `--check-cfg` support in stage0.
r? `@Mark-Simulacrum`
Remove branch target prologues from `#[naked] fn`
This patch hacks around rust-lang/rust#98768 for now by injecting appropriate attributes into the LLVM IR we emit for naked functions. I intend to pursue this upstream so that these attributes can be removed in general, but it's slow going wading through C++ for me.
Revert "Work around invalid DWARF bugs for fat LTO"
Since September, the toolchain has not been generating reliable DWARF
information for static variables when LTO is on. This has affected
projects in the embedded space where the use of LTO is typical. In our
case, it has kept us from bumping past the 2021-09-22 nightly toolchain
lest our debugger break. This has been a pretty dramatic regression for
people using debuggers and static variables. See #90357 for more info
and a repro case.
This commit is a mechanical revert of
d5de680e20 from PR #89041, which caused
the issue. (Note on that PR that the commit's author has requested it be
reverted.)
I have locally verified that this fixes #90357 by restoring the
functionality of both the repro case I posted on that bug, and debugger
behavior on real programs. There do not appear to be test cases for this
in the toolchain; if I've missed them, point me at 'em and I'll update
them.
Adds an option to control from the rustc CLI whether the resulting
".o" bitcode module files carry thinLTO info or regular LTO info.
Allows using "-lto-embed-bitcode=optimized" during linkage
correctly.
Signed-off-by: Ziv Dunkelman <ziv.dunkelman@nextsilicon.com>
This adds the typeid and `vcall_visibility` metadata to vtables when the
-Cvirtual-function-elimination flag is set.
The typeid is generated in the same way as for the
`llvm.type.checked.load` intrinsic from the trait_ref.
The offset that is added to the typeid is always 0. This is because LLVM
assumes that vtables are constructed according to the definition in the
Itanium ABI. This includes an "address point" of the vtable. In C++ this
is the offset in the vtable where information for RTTI is placed. Since
there is no RTTI information in Rust's vtables, this "address point" is
always 0. This "address point" in combination with the offset passed to
the `llvm.type.checked.load` intrinsic determines the final function
that should be loaded from the vtable in the
`WholeProgramDevirtualization` pass in LLVM. That's why the
`llvm.type.checked.load` intrinsics are generated with the typeid of the
trait, rather than with that of the function that is called. This
matches what `clang` does for C++.
The vcall_visibility metadata depends on three factors:
1. LTO level: Currently this is always fat LTO, because LLVM only
supports this optimization with fat LTO.
2. Visibility of the trait: If the trait is publicly visible, VFE
can only act on its vtables after linking.
3. Number of CGUs: if there is more than one CGU, also vtables with
restricted visibility could be seen outside of the CGU, so VFE can
only act on them after linking.
To reflect this, there are three visibility levels: Public, LinkageUnit,
and TranslationUnit.
To apply the optimization the `Virtual Function Elim` module flag has to
be set. To apply this optimization post-link the `LTOPostLink` module
flag has to be set.
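A sketch under my reading of LLVM's VFE interface (the rustc change itself goes through the usual codegen paths): attach `!type` metadata at the vtable's address point, which is offset 0 for Rust vtables, plus a `!vcall_visibility` level.

```cpp
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"

void annotateVtableForVfe(llvm::GlobalVariable &VTable,
                          llvm::StringRef TraitTypeId,
                          llvm::GlobalObject::VCallVisibility Vis) {
  llvm::LLVMContext &Ctx = VTable.getContext();
  // Offset 0: Rust vtables have no Itanium-style RTTI slot before the
  // function pointers, so the "address point" is the start of the global.
  VTable.addTypeMetadata(0, llvm::MDString::get(Ctx, TraitTypeId));
  VTable.setVCallVisibilityMetadata(Vis);
}
```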
In https://reviews.llvm.org/D125556 upstream changed sext() and zext()
to allow some no-op cases, which previously required use of the *OrSelf()
methods, which I assume is what was going on here. The *OrSelf() methods
got removed in https://reviews.llvm.org/D125559 after two weeks of
deprecation because they came with some bonus (probably-undesired)
behavior. Since the behavior of sext() and zext() changed slightly, I
kept the old *OrSelf() calls in LLVM 14 and earlier, and only use the
new version in LLVM 15.
r? @nikic
This new enum entry was introduced in https://reviews.llvm.org/D122268,
and if I'm reading correctly there's no case where we'd ever encounter
it in our uses of LLVM. To preserve the ability to compile this file
with -Werror -Wswitch we add an explicit case for this entry.
The majority of the code is only used by either rustbuild or
rustc_llvm's build script. This crate is compiled once for rustbuild and
once for every stage. This means that the majority of the code in this
crate is needlessly compiled multiple times. By moving only the code
actually used by the respective crates to rustbuild and rustc_llvm's
build script, this needless duplicate compilation is avoided.
Remove LLVM attribute removal
This was necessary before, because `declare_raw_fn` would always apply
the default optimization attributes to every declared function.
Then `attributes::from_fn_attrs` would have to remove the default
attributes in the case of, e.g. `#[optimize(speed)]` in a `-Os` build.
(see [`src/test/codegen/optimize-attr-1.rs`](03a8cc7df1/src/test/codegen/optimize-attr-1.rs (L33)))
However, every relevant callsite of `declare_raw_fn` (i.e. where we
actually generate code for the function, and not e.g. a call to an
intrinsic, where optimization attributes don't [?] matter)
calls `from_fn_attrs`, so we can remove the attribute setting
from `declare_raw_fn`, and rely on `from_fn_attrs` to apply the correct
attributes all at once.
r? `@ghost` (blocked on #94221)
`@rustbot` label S-blocked
Add MemTagSanitizer Support
Add support for the LLVM [MemTagSanitizer](https://llvm.org/docs/MemTagSanitizer.html).
On hardware which supports it (see caveats below), the MemTagSanitizer can catch bugs similar to AddressSanitizer and HardwareAddressSanitizer, but with lower overhead.
On a tag mismatch, a SIGSEGV is signaled with code SEGV_MTESERR / SEGV_MTEAERR.
# Usage
`-Zsanitizer=memtag -C target-feature="+mte"`
# Comments/Caveats
* MemTagSanitizer is only supported on AArch64 targets with hardware support
* Requires `-C target-feature="+mte"`
* LLVM MemTagSanitizer currently only performs stack tagging.
# TODO
* Tests
* Example
In https://reviews.llvm.org/D114543 the uwtable attribute gained a flag
so that we can ask for sync uwtables instead of async, as the former are
much cheaper. The default is async, so that's what I've done here, but I
left a TODO that we might be able to do better.
While in here I went ahead and dropped support for removing uwtable
attributes in rustc: we never did it, so I didn't write the extra C++
bridge code to make it work. Maybe I should have done the same thing
with the `sync|async` parameter but we'll see.
This doesn't handle `char` because it's a bit awkward to distinguish it
from u32 at this point in codegen.
Note that for some types (like `&Struct` and `&mut Struct`),
we already apply `dereferenceable`, which implies `noundef`,
so the IR does not change.
This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.
This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.
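A small sketch of the point being made (the flag key here is made up): spell out the merge behaviour when adding a module flag, so it matches what Clang uses for the same key.

```cpp
#include "llvm/IR/Module.h"

void addExampleModuleFlag(llvm::Module &M) {
  // If rustc and Clang picked different behaviours for the same key, LTO
  // merging would report a mismatch even when both modules store the same
  // value; making the behaviour explicit keeps the two sides in agreement.
  M.addModuleFlag(llvm::Module::Warning, "example-flag", 1);
}
```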
This was originally introduced in #10916 as a way to remove all landing
pads when performing LTO. However this is no longer necessary today
since rustc properly marks all functions and call-sites as nounwind
where appropriate.
In fact this is incorrect in the presence of `extern "C-unwind"` which
must create a landing pad when compiled with `-C panic=abort` so that
foreign exceptions are caught and properly turned into aborts.
RustWrapper: adapt to new AttributeMask API
Upstream LLVM change 9290ccc3c1a1 migrated attribute removal to use
AttributeMask instead of AttrBuilder, so we need to follow suit here.
r? ``@nagisa`` cc ``@nikic``
No functional changes intended.
The LLVM commit
ec501f15a8
removed the signed version of `createExpression`. This adapts the Rust
LLVM wrappers accordingly.
Mark drop calls in landing pads `cold` instead of `noinline`
Now that deferred inlining has been disabled in LLVM (#92110), this shouldn't cause catastrophic size blowup.
I confirmed that the test cases from https://github.com/rust-lang/rust/issues/41696#issuecomment-298696944 still compile quickly (<1s) after this change. ~Although note that I wasn't able to reproduce the original issue using a recent rustc/llvm with deferred inlining enabled, so those tests may no longer be representative. I was also unable to create a modified test case that reproduced the original issue.~ (edit: I reproduced it on CI by accident--the first commit timed out on the LLVM 12 builder, because I forgot to make it conditional on LLVM version)
r? `@nagisa`
cc `@arielb1` (this effectively reverts #42771 "mark calls in the unwind path as !noinline")
cc `@RalfJung` (fixes #46515)
edit: also fixes #87055
Add support for riscv64gc-unknown-freebsd
For https://doc.rust-lang.org/nightly/rustc/target-tier-policy.html#tier-3-target-policy:
* A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)
For all Rust targets on FreeBSD, it's [rust@FreeBSD.org](mailto:rust@FreeBSD.org).
* Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
Done.
* Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
Done
* Tier 3 targets may have unusual requirements to build or use, but must not create legal issues or impose onerous legal terms for the Rust project or for Rust developers or users.
Done.
* The target must not introduce license incompatibilities.
Done.
* Anything added to the Rust repository must be under the standard Rust license (MIT OR Apache-2.0).
Fine with me.
* The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the tidy tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
Done.
* If the target supports building host tools (such as rustc or cargo), those host tools must not depend on proprietary (non-FOSS) libraries, other than ordinary runtime libraries supplied by the platform and commonly used by other binaries built for the target. For instance, rustc built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
Done.
* Targets should not require proprietary (non-FOSS) components to link a functional binary or library.
Done.
* "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are not limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.
Fine with me.
* Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
Ok.
* This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.
Ok.
* Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.
std is implemented.
* The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running tests (even if they do not pass), the documentation must explain how to run tests for the target, using emulation if possible or dedicated hardware if necessary.
Building is possible the same way as other Rust on FreeBSD targets.
* Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via `@`) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
Ok.
* Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.
Ok.
* Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
Ok.
* In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.
Ok.
Add support for LLVM coverage mapping format versions 5 and 6
This PR cherry-picks Swatinem's initial commit in unsubmitted PR #90047.
My additional commit augments Swatinem's great starting point, but adds full support for LLVM
Coverage Mapping Format version 6, conditionally, if compiling with LLVM 13.
Version 6 requires adding the compilation directory when file paths are
relative, and since Rustc coverage maps use relative paths, we should
add the expected compilation directory entry.
Note, however, that with the compilation directory, coverage reports
from `llvm-cov show` can now report file names (when the report includes
more than one file) with the full absolute path to the file.
This would be a problem for test results, but the workaround (for the
rust coverage tests) is to include an additional `llvm-cov show`
parameter: `--compilation-dir=.`
Emit LLVM optimization remarks when enabled with `-Cremark`
The default diagnostic handler considers all remarks to be disabled by
default unless configured otherwise through LLVM internal flags:
`-pass-remarks`, `-pass-remarks-missed`, and `-pass-remarks-analysis`.
This behaviour makes `-Cremark` ineffective on its own.
Fix this by configuring a custom diagnostic handler that enables
optimization remarks based on the value of the `-Cremark` option, with
`-Cremark=all` enabling all remarks.
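A hedged sketch of the custom handler's shape (simplified to an all-or-nothing `-Cremark=all` switch; the real handler also supports per-pass selections):

```cpp
#include "llvm/ADT/StringRef.h"
#include "llvm/IR/DiagnosticHandler.h"
#include "llvm/IR/LLVMContext.h"
#include <memory>

// Report remarks as enabled based on -Cremark instead of deferring to the
// -pass-remarks* internal LLVM flags, which is what the default handler does.
class RemarkEnablingHandler : public llvm::DiagnosticHandler {
  bool AllRemarks;

public:
  explicit RemarkEnablingHandler(bool All) : AllRemarks(All) {}
  bool isAnyRemarkEnabled() const override { return AllRemarks; }
  bool isPassedOptRemarkEnabled(llvm::StringRef) const override {
    return AllRemarks;
  }
  bool isMissedOptRemarkEnabled(llvm::StringRef) const override {
    return AllRemarks;
  }
  bool isAnalysisRemarkEnabled(llvm::StringRef) const override {
    return AllRemarks;
  }
};

void installHandler(llvm::LLVMContext &Ctx, bool AllRemarks) {
  Ctx.setDiagnosticHandler(std::make_unique<RemarkEnablingHandler>(AllRemarks));
}
```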
Fixes #90924.
r? `@nikic`
LLVM has built-in heuristics for adding stack canaries to functions. These
heuristics can be selected with LLVM function attributes. This patch adds a
rustc option `-Z stack-protector={none,basic,strong,all}` which controls the use
of these attributes. This gives rustc the same stack smash protection support as
clang offers through options `-fno-stack-protector`, `-fstack-protector`,
`-fstack-protector-strong`, and `-fstack-protector-all`. The protection this can
offer is demonstrated in test/ui/abi/stack-protector.rs. This fills a gap in the
current list of rustc exploit
mitigations (https://doc.rust-lang.org/rustc/exploit-mitigations.html),
originally discussed in #15179.
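A sketch of how the levels map onto LLVM function attributes (my reading of the scheme; the rustc option plumbing is not shown):

```cpp
#include "llvm/IR/Function.h"

enum class StackProtectorLevel { None, Basic, Strong, All };

// The attribute names mirror clang's -fstack-protector family:
// ssp, sspstrong and sspreq respectively.
void applyStackProtector(llvm::Function &F, StackProtectorLevel Level) {
  switch (Level) {
  case StackProtectorLevel::Basic:
    F.addFnAttr(llvm::Attribute::StackProtect);
    break;
  case StackProtectorLevel::Strong:
    F.addFnAttr(llvm::Attribute::StackProtectStrong);
    break;
  case StackProtectorLevel::All:
    F.addFnAttr(llvm::Attribute::StackProtectReq);
    break;
  case StackProtectorLevel::None:
    break;
  }
}
```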
Stack smash protection adds runtime overhead and is therefore still off by
default, but now users have the option to trade performance for security as they
see fit. An example use case is adding Rust code in an existing C/C++ code base
compiled with stack smash protection. Without the ability to add stack smash
protection to the Rust code, the code base artifacts could be exploitable in
ways not possible if the code base remained pure C/C++.
Stack smash protection support is present in LLVM for almost all the current
tier 1/tier 2 targets: see
test/assembly/stack-protector/stack-protector-target-support.rs. The one
exception is nvptx64-nvidia-cuda. This patch follows clang's example, and adds a
warning message printed if stack smash protection is used with this target (see
test/ui/stack-protector/warn-stack-protector-unsupported.rs). Support for tier 3
targets has not been checked.
Since the heuristics are applied at the LLVM level, the heuristics are expected
to add stack smash protection to a fraction of functions comparable to C/C++.
Some experiments demonstrating how Rust code is affected by the different
heuristics can be found in
test/assembly/stack-protector/stack-protector-heuristics-effect.rs. There is
potential for better heuristics using Rust-specific safety information. For
example it might be reasonable to skip stack smash protection in functions which
transitively only use safe Rust code, or which uses only a subset of functions
the user declares safe (such as anything under `std.*`). Such alternative
heuristics could be added at a later point.
LLVM also offers a "safestack" sanitizer as an alternative way to guard against
stack smashing (see #26612). This could possibly also be included as a
stack-protection heuristic. An alternative is to add it as a sanitizer (#39699).
This is what clang does: safestack is exposed with option
`-fsanitize=safe-stack`.
The options are only supported by the LLVM backend, but as with other codegen
options it is visible in the main codegen option help menu. The heuristic names
"basic", "strong", and "all" are hopefully sufficiently generic to be usable in
other backends as well.
Reviewed-by: Nikita Popov <nikic@php.net>
Extra commits during review:
- [address-review] make the stack-protector option unstable
- [address-review] reduce detail level of stack-protector option help text
- [address-review] correct grammar in comment
- [address-review] use compiler flag to avoid merging functions in test
- [address-review] specify min LLVM version in fortanix stack-protector test
Only for Fortanix test, since this target specifically requests the
`--x86-experimental-lvi-inline-asm-hardening` flag.
- [address-review] specify required LLVM components in stack-protector tests
- move stack protector option enum closer to other similar option enums
- rustc_interface/tests: sort debug option list in tracking hash test
- add an explicit `none` stack-protector option
Revert "set LLVM requirements for all stack protector support test revisions"
This reverts commit a49b74f92a4e7d701d6f6cf63d207a8aff2e0f68.
`Module::getOrInsertGlobal` returns a `Constant*`, which is a super
class of `GlobalVariable`, but if the given type doesn't match an
existing declaration, it returns a bitcast of that global instead.
This causes UB when we pass that to `LLVMGetVisibility` which
unconditionally casts the opaque argument to a `GlobalValue*`.
Instead, we can do our own get-or-insert without worrying whether
existing types match exactly. It's not relevant when we're just trying
to get/set the linkage and visibility, and if types are needed we can
bitcast or error nicely from `rustc_codegen_llvm` instead.
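A sketch of the "do our own get-or-insert" idea: look the global up by name and only create it when missing, so the caller always holds a `GlobalVariable*` rather than a possibly-bitcast `Constant*`.

```cpp
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"

llvm::GlobalVariable *getOrInsertGlobalVar(llvm::Module &M,
                                           llvm::StringRef Name,
                                           llvm::Type *Ty) {
  // An existing declaration is fine whatever its type; for reading or setting
  // linkage and visibility the exact pointee type does not matter.
  if (llvm::GlobalVariable *GV = M.getNamedGlobal(Name))
    return GV;
  return new llvm::GlobalVariable(M, Ty, /*isConstant=*/false,
                                  llvm::GlobalValue::ExternalLinkage,
                                  /*Initializer=*/nullptr, Name);
}
```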
In LLVM 11 (https://reviews.llvm.org/D71059), the time trace profiler was
extended to support multiple threads.
`timeTraceProfilerInitialize` creates a thread local profiler instance.
When a thread finishes `timeTraceProfilerFinishThread` moves a thread
local instance into a global collection of instances. Finally when all
codegen work is complete `timeTraceProfilerWrite` writes data from the
current thread local instance and the instances in global collection
of instances.
Previously, the profiler was initialized on a single thread only. Since
this thread performs no code generation on its own, the resulting
profile was empty.
Update LLVM codegen to initialize & finish time trace profiler on each
code generation thread.
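A hedged sketch of the per-thread lifecycle described above; granularity and file names are placeholders.

```cpp
#include "llvm/Support/Error.h"
#include "llvm/Support/TimeProfiler.h"

void codegenThread() {
  // Each codegen worker gets its own thread-local profiler instance.
  llvm::timeTraceProfilerInitialize(/*TimeTraceGranularity=*/0, "rustc");

  // ... generate code for one module ...

  // Move this thread's data into the global collection before exiting.
  llvm::timeTraceProfilerFinishThread();
}

void writeMergedProfile() {
  // Called once all codegen threads are done: write the merged trace.
  if (llvm::Error E = llvm::timeTraceProfilerWrite("rustc.time-trace.json",
                                                   "fallback.json"))
    llvm::consumeError(std::move(E));
}
```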
Replace use of `static_nobundle` with `native_link_modifiers` within the Rust codebase
This fixes warnings when building Rust and running tests:
```
warning: library kind `static-nobundle` has been superseded by specifying `-bundle` on library kind `static`. Try `static:-bundle`
warning: `rustc_llvm` (lib) generated 2 warnings (1 duplicate)
```
Add -Z no-unique-section-names to reduce ELF header bloat.
This change adds a new compiler flag that can help reduce the size of ELF binaries that contain many functions.
By default, when enabling function sections (which is the default for most targets), the LLVM backend will generate different section names for each function. For example, a function `func` would generate a section called `.text.func`. Normally this is fine because the linker will merge all those sections into a single one in the binary. However, starting with [LLVM 12](https://github.com/llvm/llvm-project/commit/ee5d1a04), the backend will also generate unique section names for exception handling, resulting in thousands of `.gcc_except_table.*` sections ending up in the final binary because some linkers like LLD don't currently merge or strip these EH sections (see discussion [here](https://reviews.llvm.org/D83655)). This can bloat the ELF headers and string table significantly in binaries that contain many functions.
The new option is analogous to Clang's `-fno-unique-section-names`, and instructs LLVM to generate the same `.text` and `.gcc_except_table` section for each function, resulting in a smaller final binary.
The motivation for adding this new option was a binary of ours that ended up with so many ELF sections (over 65,000) that it broke some existing ELF tools, which couldn't handle that many sections.
Here's our old binary:
```
$ readelf --sections old.elf | head -1
There are 71746 section headers, starting at offset 0x2a246508:
$ readelf --sections old.elf | grep shstrtab
[71742] .shstrtab STRTAB 0000000000000000 2977204c ad44bb 00 0 0 1
```
That's an 11MB+ string table. Here's the new binary using this option:
```
$ readelf --sections new.elf | head -1
There are 43 section headers, starting at offset 0x29143ca8:
$ readelf --sections new.elf | grep shstrtab
[40] .shstrtab STRTAB 0000000000000000 29143acc 0001db 00 0 0 1
```
The whole binary size went down by over 20MB, which is quite significant.
This fixes warnings when building Rust and running tests:
```
warning: library kind `static-nobundle` has been superseded by specifying `-bundle` on library kind `static`. Try `static:-bundle`
warning: `rustc_llvm` (lib) generated 2 warnings (1 duplicate)
```
This change adds a new compiler flag that can help reduce the size of
ELF binaries that contain many functions.
By default, when enabling function sections (which is the default for most
targets), the LLVM backend will generate different section names for each
function. For example, a function "func" would generate a section called
".text.func". Normally this is fine because the linker will merge all those
sections into a single one in the binary. However, starting with LLVM 12
(llvm/llvm-project@ee5d1a0), the backend will
also generate unique section names for exception handling, resulting in
thousands of ".gcc_except_table.*" sections ending up in the final binary
because some linkers don't currently merge or strip these EH sections.
This can bloat the ELF headers and string table significantly in
binaries that contain many functions.
The new option is analogous to Clang's -fno-unique-section-names, and
instructs LLVM to generate the same ".text" and ".gcc_except_table"
section for each function, resulting in smaller object files and
potentially a smaller final binary.
Implement `#[link_ordinal(n)]`
Allows the use of `#[link_ordinal(n)]` with `#[link(kind = "raw-dylib")]`, so that Rust can link against DLLs that export symbols by ordinal rather than by name. As long as the ordinal matches, the name of the function in Rust is not required to match the name of the corresponding function in the exporting DLL.
Part of #58713.
Enable AutoFDO.
This largely involves implementing the options debug-info-for-profiling
and profile-sample-use and forwarding them on to LLVM.
AutoFDO can be used on x86-64 Linux like this:
rustc -O -Clink-arg='-Wl,--no-rosegment' -Cdebug-info-for-profiling main.rs -o main
perf record -b ./main
create_llvm_prof --binary=main --out=code.prof
rustc -O -Cprofile-sample-use=code.prof main.rs -o main2
Now `main2` will have feedback directed optimization applied to it.
The create_llvm_prof tool can be obtained from this github repository:
https://github.com/google/autofdo
The option -Clink-arg='-Wl,--no-rosegment' is necessary to avoid lld
putting an extra RO segment before the executable code, which would make
the binary silently incompatible with create_llvm_prof.
This largely involves implementing the options debug-info-for-profiling
and profile-sample-use and forwarding them on to LLVM.
AutoFDO can be used on x86-64 Linux like this:
rustc -O -Cdebug-info-for-profiling main.rs -o main
perf record -b ./main
create_llvm_prof --binary=main --out=code.prof
rustc -O -Cprofile-sample-use=code.prof main.rs -o main2
Now `main2` will have feedback directed optimization applied to it.
The create_llvm_prof tool can be obtained from this github repository:
https://github.com/google/autofdo
Fixes #64892.
No functional changes intended.
The LLVM commit
e463b69736
changed an argument of fatal_error_handler_t from std::string to char*.
This adapts RustWrapper accordingly.
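A sketch of the adaptation, assuming the post-change `fatal_error_handler_t` signature in `llvm/Support/ErrorHandling.h` (the handler body shown here is illustrative, not the actual RustWrapper code):
```cpp
#include "llvm/Support/ErrorHandling.h"
#include <cstdio>
#include <cstdlib>

// The reason is now passed as a C string instead of a std::string reference.
static void fatalErrorHandler(void *UserData, const char *Reason,
                              bool GenCrashDiag) {
  std::fprintf(stderr, "LLVM FATAL ERROR: %s\n", Reason);
  std::exit(101);
}

void installFatalErrorHandler() {
  llvm::install_fatal_error_handler(fatalErrorHandler, /*user_data=*/nullptr);
}
```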
thinLTOResolvePrevailingInModule became thinLTOFinalizeInModule and
gained the ability to propagate noRecurse and noUnwind function
attributes. I ran codegen tests with it both on and off, as the upstream
patch uses it in both modes, and the tests pass both ways. Given that,
it seemed reasonable to go ahead and let the propagation be enabled in
rustc, and see what happens. See https://reviews.llvm.org/D36850 for
more examples of how the new version of the function gets used.
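A sketch of the updated call site, assuming the declaration in `llvm/Transforms/IPO/FunctionImport.h` (the wrapper function here is illustrative):
```cpp
#include "llvm/IR/Module.h"
#include "llvm/IR/ModuleSummaryIndex.h"
#include "llvm/Transforms/IPO/FunctionImport.h"

void finalizeThinLTOModule(llvm::Module &M,
                           const llvm::GVSummaryMapTy &DefinedGlobals) {
  // Previously: thinLTOResolvePrevailingInModule(M, DefinedGlobals);
  // The extra flag lets LLVM propagate norecurse/nounwind attributes.
  llvm::thinLTOFinalizeInModule(M, DefinedGlobals, /*PropagateAttrs=*/true);
}
```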
LLVM change ab41eef9aca3 split MemorySanitizerPass into
MemorySanitizerPass for functions and ModuleMemorySanitizerPass for
modules. There is a related change for ThreadSanitizerPass, and since
we are using a ModulePassManager here, I only add the module flavor of
the pass on LLVM 14.
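A rough sketch of adding the module flavors under the LLVM 14 pass names (the helper and the default-constructed options are illustrative; the real code threads the sanitizer settings through from rustc):
```cpp
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Instrumentation/MemorySanitizer.h"
#include "llvm/Transforms/Instrumentation/ThreadSanitizer.h"

// Sketch: with a ModulePassManager, register the module-level sanitizer
// passes rather than the function-level ones.
void addSanitizerPasses(llvm::ModulePassManager &MPM) {
  llvm::MemorySanitizerOptions MSanOpts; // defaults; rustc fills these in
  MPM.addPass(llvm::ModuleMemorySanitizerPass(MSanOpts));
  MPM.addPass(llvm::ModuleThreadSanitizerPass());
}
```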
r? @nikic cc @nagisa
These were deleted in https://reviews.llvm.org/D108614, and in C++ I
definitely see the argument for their removal. I didn't try to
propagate the changes up into higher layers of rustc in this change
because my initial goal was to get rustc working against LLVM HEAD
promptly, but I'm happy to follow up with some refactoring to make the
API on the Rust side match the LLVM API more directly (though the way
the enum works in Rust makes the API less scary IMO).
r? @nagisa cc @nikic
The above-mentioned commit (part of the LLVM 14 development cycle)
removes a method that rustc uses somewhat extensively. We mostly switch
to lower-level methods that exist in all versions of LLVM we use, so no
new ifdef logic is required in most cases.
PassWrapper: adapt for LLVM 14 changes
These API changes appear to have all taken place in
https://reviews.llvm.org/D105007, which moved HWAddressSanitizerPass and
AddressSanitizerPass to only accept their options type as a ctor
argument instead of the sequence of bools etc. This required a couple of
parameter additions, which I made match the default prior to the
mentioned upstream LLVM change.
This patch restores rustc to building (though not quite passing all
tests, I've mailed other patches for those issues) against LLVM HEAD.
These API changes appear to have all taken place in
https://reviews.llvm.org/D105007, which moved HWAddressSanitizerPass and
AddressSanitizerPass to only accept their options type as a ctor
argument instead of the sequence of bools etc. This required a couple of
parameter additions, which I made match the default prior to the
mentioned upstream LLVM change.
This patch restores rustc to building (though not quite passing all
tests, I've mailed other patches for those issues) against LLVM HEAD.
PassWrapper: handle move of OptimizationLevel class out of PassBuilder
This is the first build break of the LLVM 14 cycle, and was caused by
https://reviews.llvm.org/D107025. Mercifully an easy fix.
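A minimal sketch of the fix, assuming a hypothetical mapping helper: after D107025 the class lives in its own header and is spelled `llvm::OptimizationLevel` rather than `llvm::PassBuilder::OptimizationLevel`.
```cpp
#include "llvm/Passes/OptimizationLevel.h"

// Map a rustc-style speed level onto the relocated LLVM class.
llvm::OptimizationLevel fromOptLevel(unsigned SpeedLevel) {
  switch (SpeedLevel) {
  case 0:
    return llvm::OptimizationLevel::O0;
  case 1:
    return llvm::OptimizationLevel::O1;
  case 2:
    return llvm::OptimizationLevel::O2;
  default:
    return llvm::OptimizationLevel::O3;
  }
}
```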
Rather than relying on `getPointerElementType()` from LLVM function
pointers, we now pass the function type explicitly when building `call`
or `invoke` instructions.
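A minimal sketch of the new call-building shape, assuming a small helper around `IRBuilder` (the helper itself is illustrative):
```cpp
#include "llvm/IR/IRBuilder.h"

// Sketch: the callee's FunctionType is supplied explicitly instead of being
// recovered from the pointer's pointee type.
llvm::Value *emitCall(llvm::IRBuilder<> &B, llvm::FunctionType *FnTy,
                      llvm::Value *Callee,
                      llvm::ArrayRef<llvm::Value *> Args) {
  return B.CreateCall(FnTy, Callee, Args);
}
```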