mir opt + codegen: handle subtyping
Fixes #107205
The same issue arose in multiple places:
- mir opts: both copy and destination propagation
- codegen: assigning operands to locals (which also propagates values)
I changed codegen to always update the type in the operands used for locals, which should guard against any new occurrences of this bug going forward. I don't know how to make MIR optimizations more resilient here. Hopefully the added tests will be enough to detect any trivially wrong optimizations in the future.
Add trustzone and virtualization target features for aarch32.
These are LLVM target features which allow the `smc` and `hvc` instructions respectively to be used in inline assembly.
For non-incremental builds on Unix, currently all the thread names look
like `opt regex.f10ba03eb5ec7975-cgu.0`. But they are truncated by
`pthread_setname` to `opt regex.f10ba`, hiding the numeric suffix that
distinguishes them. This is really annoying when using a profiler like
Samply.
This commit changes these thread names to a form like `opt cgu.0`, which
is much better.
The codegen main loop has two bools, `codegen_done` and
`codegen_aborted`. There are only three valid combinations: `(false,
false)`, `(true, false)`, `(true, true)`.
This commit replaces them with a single tri-state enum, which makes
things clearer.
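For illustration, the two flags collapse into a single enum along these lines (the names here are assumptions, not necessarily the identifiers used in the commit):

```rust
enum CodegenState {
    // previously (codegen_done, codegen_aborted) == (false, false)
    Ongoing,
    // (true, false)
    Completed,
    // (true, true)
    Aborted,
}
```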
`Message` is an enum with multiple variants. Four of those variants map
directly onto the four variants of `WorkItemResult`. This commit reduces
those four `Message` variants to a single variant containing a
`WorkItemResult`. This requires increasing `WorkItemResult`'s visibility
to `pub(crate)`, but `WorkItem` and `Message` can also have
their visibility reduced to `pub(crate)`.
This change avoids some boilerplate enum translation code, and makes
`Message` easier to understand.
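Roughly, the shape of the change looks like this (a simplified sketch, not the exact rustc definitions):

```rust
pub(crate) enum WorkItemResult<B> {
    Compiled(B),
    NeedsLink(B),
    NeedsFatLto(B),
    NeedsThinLto(B),
}

pub(crate) enum Message<B> {
    // Previously four separate variants that mirrored `WorkItemResult`;
    // now a single variant carrying the result directly.
    WorkItem { result: WorkItemResult<B>, worker_id: usize },
    // ... other variants elided
}
```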
`Message` is an enum with multiple variants, for messages sent to the
coordinator thread. *Except* for `Message::CodegenItem`, which is
entirely disjoint, being for messages sent from the coordinator thread
to the main thread.
This commit moves `Message::CodegenItem` into a separate type,
`CguMessage`, which makes the code much clearer.
Like for rlibs, the paths on the linker command line need to be relative
paths if the sysroot was specified by the user to be a relative path.
Dylibs put the path in /LIBPATH instead of into the file path of the
library itself, so we rehome the libpath and adjust the rehoming function
to be able to support both use cases, rlibs and dylibs.
When the `--sysroot` is specified as relative to the current working
directory, the sysroot's rlibs should also be specified as relative
paths. Otherwise, the current working directory ends up in the
absolute paths, and in the linker command line. And the entire linker
command line appears in the PDB file generated by the MSVC linker.
When adding an rlib to the linker command line, if the rlib's canonical
path is in the sysroot's canonical path, then use the current sysroot
path + filename instead of the full absolute path to the rlib. This
means that when `--sysroot=foo` is specified, the linker command line
will contain `foo/rustlib/target/lib/lib*.rlib` instead of the full
absolute path to the same.
This addresses https://github.com/rust-lang/rust/issues/112586
Because tiny CGUs make compilation less efficient *and* result in worse
generated code.
We don't do this when the number of CGUs is explicitly given, because
there are times when the requested number is very important, as
described in some comments within the commit. So the commit also
introduces a `CodegenUnits` type that distinguishes between default
values and user-specified values.
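A rough sketch of that distinction (illustrative, not necessarily the exact definition):

```rust
enum CodegenUnits {
    // Chosen by the compiler; tiny CGUs may be merged away.
    Default(usize),
    // Explicitly requested via `-Ccodegen-units=N`; respected as-is.
    User(usize),
}

impl CodegenUnits {
    fn as_usize(&self) -> usize {
        match self {
            CodegenUnits::Default(n) | CodegenUnits::User(n) => *n,
        }
    }
}
```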
This change has a roughly neutral effect on walltimes across the
rustc-perf benchmarks; there are some speedups and some slowdowns. But
it has significant wins for most other metrics on numerous benchmarks,
including instruction counts, cycles, binary size, and max-rss. It also
reduces parallelism, which is good for reducing jobserver competition
when multiple rustc processes are running at the same time. It's smaller
benchmarks that benefit the most; larger benchmarks already have CGUs
that are all larger than the minimum size.
Here are some example before/after CGU sizes for opt builds.
- html5ever
- CGUs: 16, mean size: 1196.1, sizes: [3908, 2992, 1706, 1652, 1572,
1136, 1045, 948, 946, 938, 579, 471, 443, 327, 286, 189]
- CGUs: 4, mean size: 4396.0, sizes: [6706, 3908, 3490, 3480]
- libc
- CGUs: 12, mean size: 35.3, sizes: [163, 93, 58, 53, 37, 8, 2 (x6)]
- CGUs: 1, mean size: 424.0, sizes: [424]
- tt-muncher
- CGUs: 5, mean size: 1819.4, sizes: [8508, 350, 198, 34, 7]
- CGUs: 1, mean size: 9075.0, sizes: [9075]
Note that CGUs of size 100,000+ aren't unusual in larger programs.
Write to stdout if `-` is given as output file
With this PR, if `-o -` or `--emit KIND=-` is provided, output will be written to stdout instead. Binary output (those of type `obj`, `llvm-bc`, `link` and `metadata`) written this way will result in an error if stdout is a tty. Multiple output types going to stdout will trigger an error too, as they would all be mixed together.
This implements https://github.com/rust-lang/compiler-team/issues/431
The idea behind the changes is to introduce an `OutFileName` enum that represents the output - be it a real path or stdout - and to use this enum along the code paths that handle different output types.
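A rough sketch of that enum and how call sites branch on it (simplified, not the exact rustc code):

```rust
use std::path::PathBuf;

pub enum OutFileName {
    Real(PathBuf),
    Stdout,
}

impl OutFileName {
    // Call sites that previously assumed a real path now branch on this.
    pub fn is_stdout(&self) -> bool {
        matches!(self, OutFileName::Stdout)
    }
}
```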
Rollup of 9 pull requests
Successful merges:
- #112034 (Migrate `item_opaque_ty` to Askama)
- #112179 (Avoid passing --cpu-features when empty)
- #112309 (bootstrap: remove dependency `is-terminal`)
- #112388 (Migrate GUI colors test to original CSS color format)
- #112389 (Add a test for #105709)
- #112392 (Fix ICE for while loop with assignment condition with LHS place expr)
- #112394 (Remove accidental comment)
- #112396 (Track more diagnostics in `rustc_expand`)
- #112401 (Don't `use compile_error as print`)
r? `@ghost`
`@rustbot` modify labels: rollup
Removed use of iteration through a HashMap/HashSet in rustc_incremental and replaced with IndexMap/IndexSet
This allows the `#[allow(rustc::potential_query_instability)]` in rustc_incremental to be removed, moving towards fixing #84447 (although a LOT more modules have to be changed to fully resolve it). Only HashMaps/HashSets that are iterated through have been modified, although many structs and traits outside of rustc_incremental had to be modified as well, as they had fields/methods that involved a HashMap/HashSet that would be iterated through.
I'm making a PR for just 1 module changed to test for performance regressions and such, for future changes I'll either edit this PR to reflect additional modules being converted, or batch multiple modules of changes together and make a PR for each group of modules.
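For anyone unfamiliar with the difference, here is a minimal standalone example using the `indexmap` crate (the compiler uses its own `FxIndexMap` alias over the same idea): iteration follows insertion order, so it is deterministic across runs, unlike a `HashMap`.

```rust
use indexmap::IndexMap;

fn main() {
    let mut map: IndexMap<&str, u32> = IndexMap::new();
    map.insert("b", 2);
    map.insert("a", 1);
    for (k, v) in &map {
        // Always prints "b 2" then "a 1", regardless of hashing.
        println!("{k} {v}");
    }
}
```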
Force all native libraries to be statically linked when linking a static binary
Previously, `#[link]` without an explicit `kind = "static"` would confuse the linker and end up producing a dynamically linked library because of the `-Bdynamic` flag. However, this binary would not work correctly anyway, since it was linked with startup code for a static binary.
This PR solves this by forcing all native libraries to be statically linked when the output is a static binary that cannot link to dynamic libraries anyways.
Fixes #108878
Fixes #102993
If `-o -` or `--emit KIND=-` is provided, output will be written
to stdout instead. Binary output (`obj`, `llvm-bc`, `link` and
`metadata`) written this way will result in an error if stdout is
a tty. Multiple output types going to stdout will trigger an error
too, as they would all be mixed together.
Use `load`+`store` instead of `memcpy` for small integer arrays
I was inspired by #98892 to see whether, rather than making `mem::swap` do something smart in the library, we could update MIR assignments like `*_1 = *_2` to do something smarter than `memcpy` for sufficiently-small types that doing it inline is going to be better than a `memcpy` call in assembly anyway. After all, special code may help `mem::swap`, but if the "obvious" MIR can just result in the correct thing that helps everything -- other code like `mem::replace`, people doing it manually, and just passing around by value in general -- as well as makes MIR inlining happier since it doesn't need to deal with all the complicated library code if it just sees a couple assignments.
LLVM will turn the short, known-length `memcpy`s into direct instructions in the backend, but that's too late for it to be able to remove `alloca`s. In general, replacing `memcpy`s with typed instructions is hard in the middle-end -- even for `memcpy.inline` where it knows it won't be a function call -- [due to poison propagation issues](https://rust-lang.zulipchat.com/#narrow/stream/187780-t-compiler.2Fwg-llvm/topic/memcpy.20vs.20load-store.20for.20MIR.20assignments/near/360376712). So because we know more about the type invariants -- these are typed copies -- rustc can emit something more specific, allowing LLVM to `mem2reg` away the `alloca`s in some situations.
#52051 previously did something like this in the library for `mem::swap`, but it ended up regressing during enabling mir inlining (cbbf06b0cd), so this has been suboptimal on stable for ≈5 releases now.
The code in this PR is narrowly targeted at just integer arrays in LLVM, but works via a new method on the [`LayoutTypeMethods`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/trait.LayoutTypeMethods.html) trait, so specific backends based on cg_ssa can enable this for more situations over time, as we find them. I don't want to try to bite off too much in this PR, though. (Transparent newtypes and simple things like the 3×usize `String` would be obvious candidates for a follow-up.)
Codegen demonstrations: <https://llvm.godbolt.org/z/fK8hT9aqv>
Before:
```llvm
define void @swap_rgb48_old(ptr noalias nocapture noundef align 2 dereferenceable(6) %x, ptr noalias nocapture noundef align 2 dereferenceable(6) %y) unnamed_addr #1 {
%a.i = alloca [3 x i16], align 2
call void @llvm.lifetime.start.p0(i64 6, ptr nonnull %a.i)
call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 2 dereferenceable(6) %a.i, ptr noundef nonnull align 2 dereferenceable(6) %x, i64 6, i1 false)
tail call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 2 dereferenceable(6) %x, ptr noundef nonnull align 2 dereferenceable(6) %y, i64 6, i1 false)
call void @llvm.memcpy.p0.p0.i64(ptr noundef nonnull align 2 dereferenceable(6) %y, ptr noundef nonnull align 2 dereferenceable(6) %a.i, i64 6, i1 false)
call void @llvm.lifetime.end.p0(i64 6, ptr nonnull %a.i)
ret void
}
```
Note it going to stack:
```nasm
swap_rgb48_old: # @swap_rgb48_old
movzx eax, word ptr [rdi + 4]
mov word ptr [rsp - 4], ax
mov eax, dword ptr [rdi]
mov dword ptr [rsp - 8], eax
movzx eax, word ptr [rsi + 4]
mov word ptr [rdi + 4], ax
mov eax, dword ptr [rsi]
mov dword ptr [rdi], eax
movzx eax, word ptr [rsp - 4]
mov word ptr [rsi + 4], ax
mov eax, dword ptr [rsp - 8]
mov dword ptr [rsi], eax
ret
```
Now:
```llvm
define void @swap_rgb48(ptr noalias nocapture noundef align 2 dereferenceable(6) %x, ptr noalias nocapture noundef align 2 dereferenceable(6) %y) unnamed_addr #0 {
start:
%0 = load <3 x i16>, ptr %x, align 2
%1 = load <3 x i16>, ptr %y, align 2
store <3 x i16> %1, ptr %x, align 2
store <3 x i16> %0, ptr %y, align 2
ret void
}
```
still lowers to `dword`+`word` operations, but has no stack traffic:
```nasm
swap_rgb48: # @swap_rgb48
mov eax, dword ptr [rdi]
movzx ecx, word ptr [rdi + 4]
movzx edx, word ptr [rsi + 4]
mov r8d, dword ptr [rsi]
mov dword ptr [rdi], r8d
mov word ptr [rdi + 4], dx
mov word ptr [rsi + 4], cx
mov dword ptr [rsi], eax
ret
```
And as a demonstration that this isn't just `mem::swap`, here's a `mem::replace` on a small array (`replace` hasn't used `swap` since #83022), which used to be `memcpy`s; in LLVM the IR now changes to
```llvm
define void @replace_short_array(ptr noalias nocapture noundef sret([3 x i32]) dereferenceable(12) %0, ptr noalias noundef align 4 dereferenceable(12) %r, ptr noalias nocapture noundef readonly dereferenceable(12) %v) unnamed_addr #0 {
start:
%1 = load <3 x i32>, ptr %r, align 4
store <3 x i32> %1, ptr %0, align 4
%2 = load <3 x i32>, ptr %v, align 4
store <3 x i32> %2, ptr %r, align 4
ret void
}
```
but that lowers to reasonable `dword`+`qword` instructions still
```nasm
replace_short_array: # @replace_short_array
mov rax, rdi
mov rcx, qword ptr [rsi]
mov edi, dword ptr [rsi + 8]
mov dword ptr [rax + 8], edi
mov qword ptr [rax], rcx
mov rcx, qword ptr [rdx]
mov edx, dword ptr [rdx + 8]
mov dword ptr [rsi + 8], edx
mov qword ptr [rsi], rcx
ret
```
These tend to have special handling in a bunch of places anyway, so the variant helps remember that. And I think it's easier to grok than non-Scalar Aggregates sometimes being `Immediates` (like I got wrong and caused #109992). As a minor bonus, it means we don't need to generate poison LLVM values for them to pass around in `OperandValue::Immediate`s.
linker: Report linker flavors incompatible with the current target
The linker flavor is checked for target compatibility even if the linker is never used (e.g. we are producing an rlib).
If it causes trouble, we can move the check to `link.rs` so it will run if the linker (flavor) is actually used.
And also feature gate explicitly specifying linker flavors for tier 3 targets.
The next step is supporting all the internal linker flavors in user-visible interfaces (command line and json).
offset_of: don't require type to be `Sized`
Fixes #112051
~~The RFC [explicitly forbids](https://rust-lang.github.io/rfcs/3308-offset_of.html#limitations) non-`Sized` types, but it looks like only the fields being recursed into were checked. The sized check also seemed to have been completely missing for tuples~~
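For illustration, this is roughly the kind of code the change permits (a hypothetical sketch; assumes the nightly `offset_of` feature as it existed at the time):

```rust
#![feature(offset_of)]

use core::mem::offset_of;

#[repr(C)]
struct Header {
    len: usize,
    data: [u8], // unsized tail field makes `Header` itself unsized
}

fn main() {
    // Taking the offset of a *sized* field inside an unsized type now works.
    println!("{}", offset_of!(Header, len));
}
```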
change `BorrowKind::Unique` to be a mutating `PlaceContext`
Fixes #112056
I believe that `BorrowKind::Unique` is a footgun in general, so I added a FIXME and opened https://github.com/rust-lang/rust/issues/112072. This is a bit too involved for this PR though.
Optimize scalar and scalar pair representations loaded from ByRef in llvm
In https://github.com/rust-lang/rust/pull/105653 I noticed that we were generating suboptimal LLVM IR if we had a `ConstValue::ByRef` that could be represented by a `ScalarPair`. Before https://github.com/rust-lang/rust/pull/105653 this was probably rare, but after it, every slice will go down this suboptimal code path, which requires LLVM to untangle a bunch of indirections and translate static allocations that are only used once to read a scalar pair from.
Go through an intermediate pair of `cc` and `lld` hints instead of mapping CLI options to `LinkerFlavor` directly, and use the target's default linker flavor as a reference.
Each of `{D,Subd}iagnosticMessage::{Str,Eager}` has a comment:
```
// FIXME(davidtwco): can a `Cow<'static, str>` be used here?
```
This commit answers that question in the affirmative. It's not the most
compelling change ever, but it might be worth merging.
This requires changing the `impl<'a> From<&'a str>` impls to `impl
From<&'static str>`, which involves a bunch of knock-on changes that
require/result in call sites being a little more precise about exactly
what kind of string they use to create errors, and not just `&str`. This
will result in fewer unnecessary allocations, though this will not have
any notable perf effects given that these are error paths.
Note that I was lazy within Clippy, using `to_string` in a few places to
preserve the existing string imprecision. I could have used `impl
Into<{D,Subd}iagnosticMessage>` in various places as is done in the
compiler, but that would have required changes to *many* call sites
(mostly changing `&format("...")` to `format!("...")`) which didn't seem
worthwhile.
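As an illustrative sketch of the signature change (simplified types, not the actual rustc definitions):

```rust
use std::borrow::Cow;

enum DiagnosticMessage {
    Str(Cow<'static, str>),
    // ... other variants elided
}

// The impl now requires a `'static` str, so borrowed strings can be stored
// without allocating.
impl From<&'static str> for DiagnosticMessage {
    fn from(s: &'static str) -> Self {
        DiagnosticMessage::Str(Cow::Borrowed(s))
    }
}

impl From<String> for DiagnosticMessage {
    fn from(s: String) -> Self {
        DiagnosticMessage::Str(Cow::Owned(s))
    }
}
```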
Add support for LLVM SafeStack
Adds support for LLVM [SafeStack] which provides backward edge control
flow protection by separating the stack into two parts: data which is
only accessed in provable safe ways is allocated on the normal stack
(the "safe stack") and all other data is placed in a separate allocation
(the "unsafe stack").
SafeStack support is enabled by passing `-Zsanitizer=safestack`.
[SafeStack]: https://clang.llvm.org/docs/SafeStack.html
cc `@rcvalle` #39699
Fix linking Mac Catalyst by including LC_BUILD_VERSION in object files
Hello. My first rustc PR!
Issue #106021 prevents Rust code from being linked into Mac Catalyst applications. Apple's LD has started requiring object files to contain version information about the platform they were built for, such as:
* the "deployment target" (minimum supported OS version),
* the SDK version
* the type of the platform (macOS/iOS/catalyst/tvOS/watchOS all have a different number).
This is currently only enforced when building for Mac Catalyst.
Rust uses the `object` crate which added support for including this information starting with `0.31.0`. ~~I upgraded it along with `thorin-dwp` so that everything depends on 0.31.
Apparently 0.31 [pulls in](https://github.com/gimli-rs/object/issues/463) `ruzstd` due to a [new ELF standard](https://maskray.me/blog/2022-09-09-zstd-compressed-debug-sections) because its `compression` feature is enabled by thorin. If you find this objectionable, let me know what the best way to avoid pulling in those dependencies might be.~~
**(`object` upgraded in https://github.com/rust-lang/rust/pull/111413)**
I then added two commits:
* The first one adds very basic, hard-coded support for calling `set_macho_build_version` for `-macabi` (Catalyst) targets, where it claims deployment target of Catalyst 14.0 and SDK of 16.2.
* The second weaves the versioning through `rustc_target::spec::TargetOptions`, so that we can stick to specifying all target-related info in one place.
Kudos to ``@ara4n`` for writing [this gist](https://gist.github.com/ara4n/320a53ea768aba51afad4c9ed2168536).
Ensure Fluent messages are in alphabetical order
Fixes#111847
This adds a tidy check to ensure Fluent messages are in alphabetical order, as well as sorting all existing messages. I think the error could be worded better, would appreciate suggestions.
<details>
<summary>Script used to sort files</summary>
```py
import sys
import re
fn = sys.argv[1]
with open(fn, 'r') as f:
    data = f.read().split("\n")
chunks = []
cur = ""
for line in data:
    if re.match(r"^([a-zA-Z0-9_]+)\s*=\s*", line):
        chunks.append(cur)
        cur = ""
    cur += line + "\n"
chunks.append(cur)
chunks.sort()
with open(fn, 'w') as f:
    f.write(''.join(chunks).strip("\n\n") + "\n")
```
</details>
Support #[global_allocator] without the allocator shim
This makes it possible to use liballoc/libstd in combination with `--emit obj` if you use `#[global_allocator]`. This is what rust-for-linux uses right now, and systemd may use it in the future. Currently they have to depend on the exact implementation of the allocator shim to create one themselves, as `--emit obj` doesn't create an allocator shim.
Note that currently the allocator shim also defines the oom error handler, which is normally required too. Once `#![feature(default_alloc_error_handler)]` becomes the only option, this can be avoided. In addition when using only fallible allocator methods and either `--cfg no_global_oom_handling` for liballoc (like rust-for-linux) or `--gc-sections` no references to the oom error handler will exist.
To avoid this feature being insta-stable, you will have to define `__rust_no_alloc_shim_is_unstable` to avoid linker errors.
(Labeling this with both T-compiler and T-lang as it originally involved both an implementation detail and an insta-stable user-facing change. As noted above, the `__rust_no_alloc_shim_is_unstable` symbol requirement should prevent unintended dependence on this unstable feature.)
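For a crate built with `--emit obj` (so no allocator shim is generated), the setup looks roughly like this; the exact definition of the marker symbol here is an assumption based on the description above, not something taken verbatim from the PR:

```rust
use std::alloc::System;

#[global_allocator]
static GLOBAL: System = System;

// With no allocator shim, this opt-in marker must be provided somewhere to
// acknowledge the unstable behaviour and avoid a linker error.
#[allow(non_upper_case_globals)]
#[no_mangle]
static __rust_no_alloc_shim_is_unstable: u8 = 0;
```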
Use `Option::is_some_and` and `Result::is_ok_and` in the compiler
`.is_some_and(..)`/`.is_ok_and(..)` replace `.map_or(false, ..)` and `.map(..).unwrap_or(false)`, making the code more readable.
This PR is a sibling of https://github.com/rust-lang/rust/pull/111873#issuecomment-1561316515
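The rewrite pattern, for reference:

```rust
fn is_positive(x: Option<i32>) -> bool {
    // before: x.map_or(false, |v| v > 0)
    x.is_some_and(|v| v > 0)
}

fn parses_positive(s: &str) -> bool {
    // before: s.parse::<i32>().map(|v| v > 0).unwrap_or(false)
    s.parse::<i32>().is_ok_and(|v| v > 0)
}
```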
Preprocess and cache dominator tree
Preprocessing dominators has a very strong effect for https://github.com/rust-lang/rust/pull/111344.
That pass checks that assignments dominate their uses repeatedly. Using the unprocessed dominator tree caused a quadratic runtime (number of bbs x depth of the dominator tree).
This PR also caches the dominator tree and the pre-processed dominators in the MIR cfg cache.
Rebase of https://github.com/rust-lang/rust/pull/107157
cc `@tmiasko`
Don't assume that `-Bdynamic` is the default linker mode
In particular this is false when passing `-static` or `-static-pie` to the linker, which changes the default to `-Bstatic`. This PR ensures we explicitly initialize the correct mode when we first need it.
Fix local libs not included when printing native static libs
This PR fixes https://github.com/rust-lang/rust/issues/111643 by adding the local used libs to the printed `--print=native-static-libs` output.
It seems that `--print=native-static-libs` doesn't have any test, so I added one. It's very simple and doesn't even try to compile the result to a binary, as I don't know how to handle external library linking in CI. (Note that https://github.com/rust-lang/rust/blob/master/tests/run-make/staticlib-dylib-linkage/Makefile does compile to a binary.)
r? `@bjorn3`
Fix dependency tracking for debugger visualizers
This PR fixes dependency tracking for debugger visualizer files by changing the `debugger_visualizers` query to an `eval_always` query that scans the AST while it is still available. This way the set of visualizer files is already available when dep-info is emitted. Since the query is turned into an `eval_always` query, dependency tracking will now reliably detect changes to the visualizer script files themselves.
TODO:
- [x] perf.rlo
- [x] Needs a bit more documentation in some places
- [x] Needs regression test for the incr. comp. case
Fixes https://github.com/rust-lang/rust/issues/111226
Fixes https://github.com/rust-lang/rust/issues/111227
Fixes https://github.com/rust-lang/rust/issues/111295
r? `@wesleywiser`
cc `@gibbyfree`
Only depend on CFG_VERSION in rustc_interface
This avoids having to rebuild the whole compiler on each commit when `omit-git-hash = false`.
cc https://github.com/rust-lang/rust/issues/76720 - this won't fix it, and I'm not suggesting we turn this on by default, but it will make it less painful for people who do have `omit-git-hash` on as a workaround.
Support RISC-V unaligned-scalar-mem target feature
This adds `unaligned-scalar-mem` as an allowed RISC-V target feature. Some RISC-V cores support unaligned access to memory without trapping. On such cores, the compiler can significantly improve code size and performance when using functions like `core::ptr::read_unaligned::<u32>` by emitting a single load or store instruction with an unaligned address, rather than a long sequence of byte load/store/bitmanip instructions.
Enabling the `unaligned-scalar-mem` target feature allows LLVM to do this optimization.
Fixes #110883
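A minimal sketch of the kind of code that benefits, assuming the feature is enabled with `-C target-feature=+unaligned-scalar-mem` on a core that supports it:

```rust
/// Reads a `u32` from a possibly unaligned pointer. With
/// `unaligned-scalar-mem` enabled, LLVM may lower this to a single `lw`
/// instead of a long byte-load/shift sequence.
pub unsafe fn load_u32(p: *const u8) -> u32 {
    core::ptr::read_unaligned(p as *const u32)
}
```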
Remove misleading target feature aliases
Fixes #100752. This is a follow-up to #103750. These aliases could not be completely removed until rust-lang/stdarch#1355 landed.
cc `@Amanieu`
Allow MIR debuginfo to point to a variable's address
MIR optimizations currently do not operate on borrowed locals.
When enabling #106285, many borrows will be left as-is because they are used in debuginfo. This pass allows replacing this pattern directly in MIR debuginfo:
```rust
a => _1
_1 = &raw? mut? _2
```
becomes
```rust
a => &_2
// No statement to borrow _2.
```
This pass is implemented as a drive-by in the ReferencePropagation MIR pass.
This transformation allows following MIR opts to treat `_2` as an unborrowed local, and to optimize it as such, even in builds with debuginfo.
In codegen, when encountering `a => &..&_2`, we create a list of allocas:
```llvm
store ptr %_2.dbg.spill, ptr %a.ref0.dbg.spill
store ptr %a.ref0.dbg.spill, ptr %a.ref1.dbg.spill
...
call void @llvm.dbg.declare(metadata ptr %a.ref{n}.dbg.spill, /* ... */)
```
Caveat: this transformation loses the exact type; we do not differentiate whether `a` is an immutable reference, a mutable reference, or a raw pointer. Everything is declared as `*mut` to codegen. I'm not convinced this is a blocker.
Align unsized locals
Allocate extra space for unsized locals and manually align the storage, since `alloca` doesn't support dynamic alignment.
Fixes#71416.
Fixes#71695.
Introduce `DynSend` and `DynSync` auto trait for parallel compiler
part of parallel-rustc #101566
This PR introduces the `DynSend` / `DynSync` traits and the `FromDyn` / `IntoDyn` structures in `rustc_data_structures::marker`. `FromDyn` can dynamically check data structures for thread safety when switching to parallel environments (such as calling `par_for_each_in`). This happens only when `-Z threads > 1`, so it doesn't affect compile efficiency in single-threaded mode.
r? `@cjgillot`
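A very rough sketch of the idea (nightly-only, and not the actual rustc definitions):

```rust
#![feature(auto_traits)]

// Auto traits marking data that is *dynamically* safe to send/share across
// threads in the parallel compiler.
auto trait DynSend {}
auto trait DynSync {}

// Wrapper used when handing a value to a parallel context; constructing it is
// the point where thread safety can be asserted.
struct FromDyn<T>(T);

impl<T: DynSend> FromDyn<T> {
    fn from(value: T) -> Self {
        // The real implementation only performs its runtime assertion when
        // `-Z threads > 1`.
        FromDyn(value)
    }

    fn into_inner(self) -> T {
        self.0
    }
}

fn main() {
    let wrapped = FromDyn::from(42_u32);
    assert_eq!(wrapped.into_inner(), 42);
}
```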