debuginfo: add compiler option to allow compressed debuginfo sections
LLVM already supports emitting compressed debuginfo. In debuginfo=full builds, the debug section is often a large amount of data, and it typically compresses very well (3x is not unreasonable). We add a new knob to allow debuginfo to be compressed when the matching LLVM functionality is present. Like clang, if a known-but-disabled compression mechanism is requested, we disable compression and emit uncompressed debuginfo sections.
The API is different enough on older LLVMs that we just pretend the support
is missing on LLVM older than 16.
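A minimal sketch of the LLVM-side shape of this, assuming LLVM 16+ and the `TargetOptions`-based setup in the llvm-wrapper (the helper name is illustrative, not the actual rustc code): if the requested scheme was compiled out of LLVM, fall back to uncompressed debuginfo.
```cpp
#include "llvm/Support/Compression.h"
#include "llvm/Target/TargetOptions.h"

// Illustrative helper: use the requested compression only if this LLVM build
// actually supports it, otherwise emit uncompressed debuginfo sections.
static void setDebugInfoCompression(llvm::TargetOptions &Options,
                                    llvm::DebugCompressionType Requested) {
  using llvm::DebugCompressionType;
  bool Available =
      (Requested == DebugCompressionType::Zlib &&
       llvm::compression::zlib::isAvailable()) ||
      (Requested == DebugCompressionType::Zstd &&
       llvm::compression::zstd::isAvailable());
  Options.CompressDebugSections =
      Available ? Requested : DebugCompressionType::None;
}
```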
lto: load bitcode sections by name
Upstream change
llvm/llvm-project@6b539f5eb8 changed how `isSectionBitcode` works: it now only respects `.llvm.lto` sections instead of also `.llvmbc`, which it says was never intended to be used for LTO. We instead load sections by name, and sniff for raw bitcode by hand.
This is an alternative approach to #115136, where we tried the same thing using the `object` crate, but it got too fraught to continue.
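A minimal sketch of that approach, assuming an already-parsed `llvm::object::ObjectFile` (the helper name is illustrative): look the section up by name and check for the raw bitcode magic bytes ourselves instead of relying on `isSectionBitcode`.
```cpp
#include "llvm/Object/ObjectFile.h"
#include "llvm/Support/Error.h"

// Illustrative helper: return the named section's contents if they start with
// the raw bitcode magic ('B' 'C' 0xC0 0xDE), otherwise an empty StringRef.
static llvm::StringRef findBitcodeByName(const llvm::object::ObjectFile &Obj,
                                         llvm::StringRef Name) {
  for (const llvm::object::SectionRef &Sec : Obj.sections()) {
    llvm::Expected<llvm::StringRef> SecName = Sec.getName();
    if (!SecName) {
      llvm::consumeError(SecName.takeError());
      continue;
    }
    if (*SecName != Name)
      continue;
    llvm::Expected<llvm::StringRef> Contents = Sec.getContents();
    if (!Contents) {
      llvm::consumeError(Contents.takeError());
      continue;
    }
    if (Contents->size() >= 4 && Contents->substr(0, 4) == "BC\xC0\xDE")
      return *Contents;
  }
  return llvm::StringRef();
}
```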
r? `@nikic`
`@rustbot` label: +llvm-main
`-Cllvm-args` usability improvement
fixes: #26338
fixes: #115564
Two problems were found while playing with `-Cllvm-args`:
1. When `llvm.link-shared` is set to `false` in `config.toml`, output of `rustc -C llvm-args='--help-list-hidden'` doesn't contain `--emit-dwarf-unwind` and `--emulated-tls`. When it is set to `true`, `rustc -C llvm-args='--help-list-hidden'` emits `--emit-dwarf-unwind`, but `--emulated-tls` is still missing.
2. Setting `-Cllvm-args=--emit-dwarf-unwind=always` doesn't take effect, but `-Cllvm-args=-machine-outliner-reruns=3` does work.
### 1
Adding `RegisterCodeGenFlags` fixed the first problem: `rustc -C llvm-args='--help-list-hidden'` now emits the full set of codegen flags, including `--emit-dwarf-unwind` and `--emulated-tls`.
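A minimal sketch of that part, using the usual static-initialization idiom (this is not the verbatim rustc patch): constructing a `RegisterCodeGenFlags` instance registers LLVM's codegen `cl::opt`s so they are visible to option parsing and `--help-list-hidden`.
```cpp
#include "llvm/CodeGen/CommandFlags.h"

// Constructing this object registers the codegen command-line flags
// (--emit-dwarf-unwind, --emulated-tls, ...) with LLVM's global option table,
// whether LLVM is linked statically or shared.
static llvm::codegen::RegisterCodeGenFlags RustCodeGenFlags;
```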
### 2
Constructing the `TargetOptions` with `InitTargetOptionsFromCodeGenFlags` in `LLVMRustCreateTargetMachine` fixed the second problem. `LLVMRustSetLLVMOptions` calls `ParseCommandLineOptions`, which parses the given `llvm-args`. For options like `machine-outliner-reruns` this just works, since the codegen logic directly consumes the parsing result:
[machine-outliner-reruns register](0537f6354c/llvm/lib/CodeGen/MachineOutliner.cpp (L114))
[machine-outliner-reruns consumption](0537f6354c/llvm/lib/CodeGen/MachineOutliner.cpp (L1138))
But for flags defined in `TargetOptions` and `MCTargetOptions` to take effect, constructing them with `InitTargetOptionsFromCodeGenFlags` is essential; otherwise the parsing result is simply never consumed. Similar patterns can be observed in [llc](0537f6354c/llvm/tools/llc/llc.cpp (L494)), [lli](0537f6354c/llvm/tools/lli/lli.cpp (L517)), etc.
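A minimal sketch of that part (the helper name is illustrative, not the actual rustc code): build the `TargetOptions` used for the `TargetMachine` from the already-parsed codegen flags rather than from a default-constructed value.
```cpp
#include "llvm/CodeGen/CommandFlags.h"
#include "llvm/Target/TargetOptions.h"
#include "llvm/TargetParser/Triple.h"

// Illustrative helper: after ParseCommandLineOptions has run, this picks up
// whatever was parsed for the TargetOptions/MCTargetOptions flags (such as
// --emit-dwarf-unwind and --emulated-tls) instead of silently ignoring them.
static llvm::TargetOptions makeTargetOptions(const llvm::Triple &TheTriple) {
  return llvm::codegen::InitTargetOptionsFromCodeGenFlags(TheTriple);
}
```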
Add CL and CMD to PDB debug info
Partial fix for https://github.com/rust-lang/rust/issues/96475
The `Arg0` and `CommandLineArgs` fields of the `MCTargetOptions` C++ class are not set within bb548f9645/compiler/rustc_llvm/llvm-wrapper/PassWrapper.cpp (L378)
As a result, LLVM outputs neither the compiler path (cl) nor the arguments that were used when invoking it (cmd) in the PDB file.
This fix adds the missing information to the target machine so LLVM can use it.
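A minimal sketch of that addition (the helper name is illustrative): pass the compiler path and invocation arguments through `MCTargetOptions` so LLVM's CodeView build info can include the cl and cmd entries in the PDB.
```cpp
#include "llvm/ADT/ArrayRef.h"
#include "llvm/Target/TargetOptions.h"
#include <string>

// Illustrative helper: record the invocation so LLVM's CodeView output can
// include the compiler path ("cl") and its arguments ("cmd") in the PDB.
static void setPdbBuildInfo(llvm::TargetOptions &Options, const char *Argv0,
                            llvm::ArrayRef<std::string> CommandLineArgs) {
  Options.MCOptions.Argv0 = Argv0;
  Options.MCOptions.CommandLineArgs = CommandLineArgs;
}
```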
rustc_llvm: Link to `zlib` on dragonfly and solaris
On native builds `llvm-config` picks up `zlib` and this gets passed into
the rust build tools, but on cross builds `llvm-config` is explicitly
ignored as it contains information for the host system and cannot be
trusted to be accurate for the target system.
Both DragonFly and Solaris contain `zlib` in the base system, so this is
both a safe assumption and required for a successful cross build unless
`zlib` support is disabled in LLVM.
This is more or less in the same vein as rust-lang#75713 and rust-lang#75655.
Move a local to the `#if` block where it is used
In the other case (LLVM < 17), the compiler was complaining under `-Wall`:
```
warning: llvm-wrapper/PassWrapper.cpp: In function ‘void LLVMRustPrintTargetCPUs(LLVMTargetMachineRef, const char*)’:
warning: llvm-wrapper/PassWrapper.cpp:311:26: warning: unused variable ‘MCInfo’ [-Wunused-variable]
warning: 311 | const MCSubtargetInfo *MCInfo = Target->getMCSubtargetInfo();
warning: | ^~~~~~
```
coverage: Don't convert filename/symbol strings to `CString` for FFI
LLVM APIs are usually perfectly happy to accept pointer/length strings, as long as we supply a suitable length value when creating a `StringRef` or `std::string`.
This lets us avoid quite a few intermediate `CString` copies during coverage codegen. It also lets us use an `IndexSet<Symbol>` (instead of an `IndexSet<CString>`) when building the deduplicated filename table.
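A minimal sketch of the idea, with a hypothetical wrapper entry point (the symbol name is not the actual rustc FFI function): a pointer/length pair is enough to build a `StringRef`, so the Rust caller can pass `&str` data directly without allocating a NUL-terminated `CString`.
```cpp
#include "llvm/ADT/StringRef.h"
#include <cstddef>

// Hypothetical FFI entry point for illustration: the Rust side passes
// (ptr, len) from a &str; no NUL terminator is needed to build a StringRef.
extern "C" void LLVMRustCoverageAddFilename(const char *Ptr, size_t Len) {
  llvm::StringRef Filename(Ptr, Len);
  // ... hand Filename to the coverage filename table ...
  (void)Filename;
}
```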
update llvm-wrapper include to silence deprecation warning
Includes of `include/llvm/Support/Host.h` now emit a deprecated warning: `warning: This header is deprecated, please use llvm/TargetParser/Host.h`.
I don't believe we are using this include.
I don't believe we need to bump the `download-ci-llvm` stamp since these warnings are emitted while building the `llvm-wrapper`.
r? `@nikic`
CFI: Fix error compiling core with LLVM CFI enabled
Fix #90546 by filtering out global value function pointer types from the type tests, and adding the LowerTypeTests pass to the rustc LTO optimization pipelines.
Add hotness data to LLVM remarks
Slight improvement of https://github.com/rust-lang/rust/pull/113040. This makes sure that if PGO is used, remarks generated using `-Zremark-dir` will include the `Hotness` attribute.
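A minimal sketch of the LLVM-side switch, assuming the backend owns an `LLVMContext` (the helper name is illustrative): when profile data is available, request hotness on diagnostics so serialized remarks carry a `Hotness` field.
```cpp
#include "llvm/IR/LLVMContext.h"

// Illustrative helper: only request hotness when PGO data exists, since the
// hotness values come from profile counts.
static void requestRemarkHotness(llvm::LLVMContext &Ctx, bool HasProfileData) {
  if (HasProfileData)
    Ctx.setDiagnosticsHotnessRequested(true);
}
```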
r? `@tmiasko`
Filter out short-lived LLVM diagnostics before they reach the rustc handler
During profiling I saw remark passes being unconditionally enabled: for example `Machine Optimization Remark Emitter`.
The diagnostic remarks enabled by default are [from missed optimizations and opt analyses](https://github.com/rust-lang/rust/pull/113339#discussion_r1259480303). They are created by LLVM, passed to the diagnostic handler on the C++ side, emitted to Rust, where they are unpacked, their C++ strings are converted to Rust strings, etc.
Then they are discarded the vast majority of the time (i.e. unless some kind of `-Cremark` has enabled some of these passes' output to be printed).
These unneeded allocations are very short-lived, basically only lasting between the LLVM pass emitting them and the rust handler where they are discarded. So it doesn't hugely impact max-rss, and is only a slight reduction in instruction count (cachegrind reports a reduction between 0.3% and 0.5%) _on linux_. It's possible that targets without `jemalloc`, or with a worse allocator, may handle these allocations less efficiently.
It is however significant in the aggregate, looking at the total number of allocated bytes:
- it's the biggest source of allocations according to dhat, on the benchmarks I've tried e.g. `syn` or `cargo`
- allocations on `syn` are reduced by 440MB, 17% (from 2440722647 bytes total, to 2030461328 bytes)
- allocations on `cargo` are reduced by 6.6GB, 19% (from 35371886402 bytes total, to 28723987743 bytes)
Some of these diagnostics objects [are allocated in LLVM](https://github.com/rust-lang/rust/pull/113339#discussion_r1252387484) *before* they're emitted to our diagnostic handler, where they'll be filtered out. So we could remove those in the future, but that will require changing a few LLVM call-sites upstream, so I left a FIXME.
This eliminates many short-lived allocations (e.g. 20% of the memory used
building cargo) that were made when unpacking the diagnostic and converting its
various C++ strings into Rust strings, just to be filtered out most of the time.
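A minimal sketch of the filtering idea, with illustrative names (this is not the actual rustc handler): drop remark diagnostics on the C++ side, before any of their strings are copied across the FFI boundary into Rust.
```cpp
#include "llvm/IR/DiagnosticHandler.h"
#include "llvm/IR/DiagnosticInfo.h"

struct FilteringDiagnosticHandler : llvm::DiagnosticHandler {
  bool RemarksEnabled = false; // would be set from -Cremark on the Rust side

  bool handleDiagnostics(const llvm::DiagnosticInfo &DI) override {
    // Discard optimization remarks early unless the user asked for them; this
    // avoids allocating and converting their message strings for Rust.
    if (DI.getSeverity() == llvm::DS_Remark && !RemarksEnabled)
      return true; // handled: nothing is forwarded
    // forwardToRust(DI);  // hypothetical FFI call, not shown
    return true;
  }
};
```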
Both GCC and Clang write a `.comment` section with compiler
information by default:
```txt
$ gcc -c -xc /dev/null && readelf -p '.comment' null.o
String dump of section '.comment':
[ 1] GCC: (GNU) 11.2.0
$ clang -c -xc /dev/null && readelf -p '.comment' null.o
String dump of section '.comment':
[ 1] clang version 14.0.1 (https://github.com/llvm/llvm-project.git c62053979489ccb002efe411c3af059addcb5d7d)
```
They also implement the `-Qn` flag to avoid doing so:
```txt
$ gcc -Qn -c -xc /dev/null && readelf -p '.comment' null.o
readelf: Warning: Section '.comment' was not dumped because it does not exist!
$ clang -Qn -c -xc /dev/null && readelf -p '.comment' null.o
readelf: Warning: Section '.comment' was not dumped because it does not exist!
```
So far, `rustc` only does it for WebAssembly targets and only
when debug info is enabled:
```txt
$ echo 'fn main(){}' | rustc --target=wasm32-unknown-unknown --emit=llvm-ir -Cdebuginfo=2 - && grep llvm.ident rust_out.ll
!llvm.ident = !{!27}
```
In the RFC part of this PR it was decided to always add
the information, which gets us closer to other popular compilers.
An opt-out flag like GCC and Clang may be added later on if deemed
necessary.
Implementation-wise, this covers both the `ModuleLlvm::new()` and
`ModuleLlvm::new_metadata()` cases by moving the addition into
`context::create_module`, and adds a few test cases.
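A minimal sketch of what that addition amounts to at the LLVM level (the helper name and version string are illustrative): attach the compiler identification string to the module as `!llvm.ident` named metadata, which the backend then emits into the `.comment` section on ELF targets.
```cpp
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
#include "llvm/IR/Module.h"

// Illustrative helper: add e.g. "rustc version X.Y.Z" as !llvm.ident.
static void addIdentMetadata(llvm::Module &M, llvm::StringRef Version) {
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::NamedMDNode *Ident = M.getOrInsertNamedMetadata("llvm.ident");
  llvm::Metadata *Ops[] = {llvm::MDString::get(Ctx, Version)};
  Ident->addOperand(llvm::MDNode::get(Ctx, Ops));
}
```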
ThinLTO also sees the `llvm.ident` named metadata duplicated (in
temporary outputs), so this deduplicates it like it is done for
`wasm.custom_sections`. The tests also check this duplication does
not take place.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Support `--print KIND=PATH` command line syntax
As is already done for `--emit KIND=PATH` and `-L KIND=PATH`.
In the discussion of #110785, it was pointed out that `--print KIND=PATH` is nicer than trying to apply the single global `-o` path to `--print`'s output, because in general there can be multiple print requests within a single rustc invocation, and anyway `-o` would already be used for a different meaning in the case of `link-args` and `native-static-libs`.
I am interested in using `--print cfg=PATH` in Buck2. Currently Buck2 works around the lack of support for `--print KIND=PATH` by [indirecting through a Python wrapper script](d43cf3a51a/prelude/rust/tools/get_rustc_cfg.py) to redirect rustc's stdout into the location dictated by the build system.
From skimming Cargo's usages of `--print`, it definitely seems like it would benefit from `--print KIND=PATH` too. Currently it works around the lack of this by inserting `--crate-name=___ --print=crate-name` so that it can look for a line containing `___` as a delimiter between the two other pieces of `--print` output it actually cares about. This is commented as a "HACK" and "abuse". 31eda6f7c3/src/cargo/core/compiler/build_context/target_info.rs (L242) (FYI `@weihanglo` as you dealt with this recently in https://github.com/rust-lang/cargo/pull/11633.)
Mentioning reviewers active in #110785: `@fee1-dead` `@jyn514` `@bjorn3`
Resurrect: rustc_llvm: Add a -Z `print-codegen-stats` option to expose LLVM statistics.
This resurrects PR https://github.com/rust-lang/rust/pull/104000, which has sat idle for a while, and I want to see the effect of stack-move optimizations in LLVM (like https://reviews.llvm.org/D153453) :).
I have applied the changes requested by `@oli-obk` and `@nagisa` https://github.com/rust-lang/rust/pull/104000#discussion_r1014625377 and https://github.com/rust-lang/rust/pull/104000#discussion_r1014642482 in the latest commits.
r? `@oli-obk`
-----
LLVM has a neat [statistics](https://llvm.org/docs/ProgrammersManual.html#the-statistic-class-stats-option) feature that tracks how often optimizations kick in. It's very handy for optimization work. Since we expose the LLVM pass timings, I thought it made sense to expose the LLVM statistics too.
-----
(Edit: fix broken link)
(Edit 2: fix segmentation fault and use malloc)
If `rustc` is built with
```toml
[llvm]
assertions = true
```
Then you can see output like this:
```
rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
3 aa - Number of MayAlias results
193 aa - Number of MustAlias results
531 aa - Number of NoAlias results
...
```
And the current default build emits only
```
$ rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
$
```
It might be better to emit a message explaining that LLVM assertions are required, but I haven't found how to do that yet...
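For reference, a minimal sketch of the LLVM hooks involved, with illustrative wrapper names (not the actual rustc entry points): turn statistics collection on before codegen and print the table afterwards.
```cpp
#include "llvm/ADT/Statistic.h"
#include "llvm/Support/raw_ostream.h"

// Illustrative wrappers around LLVM's statistics API.
static void enableLLVMStatistics() {
  // Start counting; in NDEBUG builds (no LLVM assertions) most STATISTIC
  // counters are compiled out, which is why the default build prints an
  // empty table.
  llvm::EnableStatistics(/*DoPrintOnExit=*/false);
}

static void printLLVMStatistics() {
  llvm::PrintStatistics(llvm::errs());
}
```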
CI: build CMake 3.20 to support LLVM 17
LLVM 17 will require CMake at least 3.20, so we have to go back to building our own CMake on the Linux x64 dist builder.
r? `@nikic`