Like #107044, this will let us track compatibility with LLVM 16 going
forward, especially after we eventually upgrade our own LLVM to the next
version.
This also drops `tidy` here and in `x86_64-gnu-llvm-15`, syncing with
that change in #106085.
Preserve argument indexes when inlining MIR
We store argument indexes on `VarDebugInfo`. Unlike the previous method of relying on the variable index to know whether a variable is an argument, this survives MIR inlining.
We also no longer check if `var.source_info.scope` is the outermost scope. When a function gets inlined, the arguments to the inner function will no longer be in the outermost scope. What we care about, though, is whether they were in the outermost scope prior to inlining, which we know by whether we assigned an argument index.
Fixes #83217
I considered using `Option<NonZeroU16>` instead of `Option<u16>` to store the index. I didn't because `TypeFoldable` isn't implemented for `NonZeroU16`, and because it looks like, due to padding, it currently wouldn't make any difference. But I indexed from 1 anyway because (a) it'll make it easier if it later becomes worthwhile to use a `NonZeroU16`, and (b) the arguments were previously indexed from 1, so it made for a smaller change.
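For concreteness, here's a hedged sketch of the shape of the change (a simplified stand-in for rustc's `mir::VarDebugInfo`; field names are illustrative and other fields are elided):

```rust
struct VarDebugInfo {
    /// Stand-in for rustc's interned `Symbol`.
    name: String,
    /// `Some(n)` if this variable is the `n`-th argument of the
    /// function, indexed from 1. Unlike scope-based detection, this
    /// survives MIR inlining.
    argument_index: Option<u16>,
    // source scope, debuginfo value, etc. elided
}
```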
This is my first PR on rust-lang/rust, so apologies if I've gotten anything not quite right.
These don't optimize with debug assertions. For one of them, this
is due to the new alignment checks; for the other, I'm not sure
what specifically blocks it.
Use SipHash-1-3 instead of SipHash-2-4 for StableHasher
Noticed this, and it seems easy and likely a perf win. IIUC we don't need DDoS resistance (just collision resistance), so ideally we'd have an even faster hash, but it's hard to beat this SipHash impl, since it's been so highly tuned for the interface.
It wouldn't surprise me if there's some subtle reason changing this sucks, as it's so obvious that it seems likely to have been tried already. Still, SipHash-1-3 still seems to have the guarantees `StableHasher` should need (and seemingly more), and is clearly less work. So it's worth a shot.
Not fully tested locally.
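For context, a sketch of the SipHash-c-d structure this trades on (illustrative only; it elides key setup and length padding, and is not the tuned rustc implementation). `C` mixing rounds run per 8-byte block and `D` at finalization, so 1-3 simply does less mixing work per byte than 2-4:

```rust
// One SIPROUND: the standard ARX mixing step from the SipHash paper.
fn sip_round(v: &mut [u64; 4]) {
    v[0] = v[0].wrapping_add(v[1]); v[1] = v[1].rotate_left(13);
    v[1] ^= v[0]; v[0] = v[0].rotate_left(32);
    v[2] = v[2].wrapping_add(v[3]); v[3] = v[3].rotate_left(16);
    v[3] ^= v[2];
    v[0] = v[0].wrapping_add(v[3]); v[3] = v[3].rotate_left(21);
    v[3] ^= v[0];
    v[2] = v[2].wrapping_add(v[1]); v[1] = v[1].rotate_left(17);
    v[1] ^= v[2]; v[2] = v[2].rotate_left(32);
}

// SipHash-C-D over pre-split 8-byte words, starting from a keyed state.
fn sip_hash<const C: usize, const D: usize>(mut v: [u64; 4], words: &[u64]) -> u64 {
    for &m in words {
        v[3] ^= m;
        for _ in 0..C { sip_round(&mut v); }
        v[0] ^= m;
    }
    v[2] ^= 0xff;
    for _ in 0..D { sip_round(&mut v); }
    v[0] ^ v[1] ^ v[2] ^ v[3]
}
```

SipHash-2-4 is then `sip_hash::<2, 4>` and SipHash-1-3 is `sip_hash::<1, 3>`.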
Validate `ignore` and `only` compiletest directives, and add human-readable ignore reasons
This PR adds strict validation for the `ignore` and `only` compiletest directives, failing if an unknown value is provided to them. Doing so uncovered 79 tests in `tests/ui` that had invalid directives, so this PR also fixes them.
Finally, this PR adds human-readable ignore reasons when tests are ignored due to `ignore` or `only` directives, like *"only executed when the architecture is aarch64"* or *"ignored when the operating system is windows"*. This was the original reason why I started working on this PR and #108659, as we need both of them for Ferrocene.
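For illustration, these are the kinds of headers the validation now checks (hypothetical test file, not one of the 79 fixed ones):

```rust
// only-aarch64
// ignore-windows
fn main() {}
```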
The PR is a draft because the code is extremely inefficient: it calls `rustc --print=cfg --target $target` for every rustc target (to gather the list of allowed ignore values), which on my system takes between 4s and 5s, and performs a lot of allocations of constant values. I'll fix both of them in the coming days.
r? `@ehuss`
Allow `transmute`s to produce `OperandValue`s instead of needing `alloca`s
LLVM can usually optimize these away, but especially for things like transmutes of newtypes it's silly to generate the `alloca`+`store`+`load` at all when it's actually a nop at the LLVM level.
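As a toy illustration (hypothetical example, not code from the PR) of where this matters:

```rust
#[repr(transparent)]
pub struct Wrapper(pub u32);

pub fn unwrap(w: Wrapper) -> u32 {
    // At the LLVM level this is a nop, so there's no reason to lower
    // it as alloca + store + load; it can just forward the operand.
    unsafe { std::mem::transmute(w) }
}
```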
`-Cdebuginfo=1` was never line-tables-only, and
can't become that due to backwards-compatibility concerns.
This was clarified, and an option for line tables only
was added. Additionally, an option for line-info
directives only was added, which is needed for
some targets. The debug info options should now
behave the same as Clang's debug info options.
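(For reference, the new values are spelled `-Cdebuginfo=line-tables-only` and `-Cdebuginfo=line-directives-only`, alongside the existing numeric levels.)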
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block using `MirPatch`. (If nobody else backs up this point, I'm checking this off as "not actually a good idea", because the code looks pretty clean to me after rearranging it a bit.)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
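Conceptually, the inserted check (with its panic message) behaves like this hypothetical helper; this is a sketch, not the MIR pass itself:

```rust
fn check_aligned<T>(ptr: *const T) {
    // The real pass obtains the address via `CastKind::Transmute`, so
    // the pointer is not exposed; `as usize` here is just illustration.
    let addr = ptr as usize;
    let align = std::mem::align_of::<T>();
    if addr % align != 0 {
        // Approximation of the new panic message.
        panic!("misaligned pointer dereference: address must be a multiple of {align} but is {addr:#x}");
    }
}
```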
r? `@oli-obk`
Upgrade to LLVM 16, again
Relative to the previous attempt in https://github.com/rust-lang/rust/pull/107224:
* Update to GCC 8.5 on dist-x86_64-linux, to avoid `std::optional` ABI incompatibility between libstdc++ 7 and 8.
* Cherry-pick 96df79af02.
* Cherry-pick 6fc670e5e3.
r? `@cuviper`
Use poison instead of undef
In cases where it is legal, we should prefer poison values over undef values.
This replaces undef with poison for aggregate construction and for uninhabited types. There are more places where we can likely use poison, but I wanted to stay conservative to start with.
In particular the aggregate case is important for newer LLVM versions, which are not able to handle an undef base value during early optimization due to poison-propagation concerns.
r? `@cuviper`
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
Remove the assume(!is_null) from Vec::as_ptr
At a guess, this code is leftover from when LLVM was worse at keeping track of the niche information here. In any case, we don't need it anymore: removing this `assume` doesn't get rid of the `nonnull` attribute on the return type.
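A minimal way to see the claim (illustrative snippet; check the emitted IR on godbolt):

```rust
// Even with the `assume` gone, the return here is still `nonnull` in
// LLVM IR, because `Vec`'s buffer pointer is stored as a `NonNull<T>`
// and the niche carries that information through.
pub fn demo(v: &Vec<i32>) -> *const i32 {
    v.as_ptr()
}
```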
Upgrade to LLVM 16
This updates Rust to LLVM 16. It also updates our host compiler for dist-x86_64-linux to LLVM 16. The reason for that is that BOLT from LLVM 15 is not capable of compiling LLVM 16 (https://github.com/llvm/llvm-project/issues/61114).
LLVM 16.0.0 has been [released](https://discourse.llvm.org/t/llvm-16-0-0-release/69326) on March 18, while Rust 1.70 will become stable on June 1.
Tested images: `dist-x86_64-linux`, `dist-riscv64-linux` (alt), `dist-x86_64-illumos`, `dist-various-1`, `dist-various-2`, `dist-powerpc-linux`, `wasm32`, `armhf-gnu`
Tested images until the usual IPv6 failures: `test-various`
inherit_overflow: adapt pattern to also work with v0 mangling
This test was failing under `new-symbol-mangling = true`. Adapt the pattern to work in both cases.
Related to #106002 from December.
Wrap the whole LocalInfo in ClearCrossCrate.
MIR contains a lot of information about locals. The primary purpose of this information is the quality of borrowck diagnostics.
This PR aims to drop this information after MIR analyses are finished, i.e. starting from post-cleanup runtime MIR.
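A hedged sketch of the mechanism involved (the real type lives in `rustc_middle::mir`):

```rust
// A value that is kept while the MIR is analyzed within its own crate
// and replaced by `Clear` afterwards, so diagnostic-only data never
// crosses the crate boundary.
enum ClearCrossCrate<T> {
    Clear,
    Set(T),
}
```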
Ensure `ptr::read` gets all the same LLVM `load` metadata that dereferencing does
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`. Trying to narrow it down, it seems that was because `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
The root cause is that `ptr::read` is currently implemented via the *untyped* `copy_nonoverlapping`, and thus the `load` doesn't get any type-aware metadata: no `noundef`, no `!range`. This PR solves that by lowering `ptr::read(p)` to `copy *p` in MIR, for which the backends already do the right thing.
Fortuitously, this also improves the IR we give to LLVM for things like `mem::replace`, and fixes a couple of long-standing bugs where `ptr::read` on `Copy` types was worse than `*`ing them.
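The motivating pattern, as a standalone snippet (illustrative):

```rust
use std::mem::MaybeUninit;

// With `ptr::read` lowered to a typed MIR copy, this load now carries
// `noundef` (plus range-style metadata where the type has any).
pub unsafe fn read_init(slot: &MaybeUninit<i32>) -> i32 {
    unsafe { slot.assume_init_read() }
}
```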
Zulip conversation: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Move.20array.3A.3AIntoIter.20to.20ManuallyDrop/near/341189936>
cc `@erikdesjardins` `@JakobDegen` `@workingjubilee` `@the8472`
Fixes #106369, fixes #73258
Move `Option::as_slice` to an always-sound implementation
This approach depends on CSE to not have any branches or selects when the guessed offset is correct -- which it always will be right now -- but to also be *sound* (just less efficient) if the layout algorithms change such that the guess is incorrect.
The codegen test confirms that CSE handles this as expected, leaving the optimal codegen.
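A simplified sketch of the trick (hypothetical code; the real implementation guesses the offset more carefully than "zero"):

```rust
pub fn as_slice_sketch<T>(opt: &Option<T>) -> &[T] {
    let ptr: *const T = match opt {
        Some(v) => v,
        // Guess: the payload sits at offset 0 (true whenever the niche
        // optimization applies). For the `None` arm the slice length is
        // 0, so any aligned non-null pointer is sound even if the guess
        // is wrong -- it's merely a missed optimization.
        None => (opt as *const Option<T>).cast(),
    };
    // When the guess matches the real layout, CSE merges the two arms
    // and the `match` disappears, leaving branch-free code.
    unsafe { std::slice::from_raw_parts(ptr, opt.is_some() as usize) }
}
```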
cc JakobDegen #108545
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`.
This turned out to be a more general problem: `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
This PR lowers `ptr::read(p)` to `copy *p` in MIR, which fortuitously also improves the IR we give to LLVM for things like `mem::replace`.
Use `nuw` when calculating slice lengths from `Range`s
An `assume` would definitely not be worth it, but since the flag is almost free we might as well tell LLVM this, especially on `_unchecked` calls where there's no obvious way for it to deduce it.
(Today neither safe nor unsafe indexing gets it: <https://rust.godbolt.org/z/G1jYT548s>)
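In Rust terms, the flag corresponds to computing the length with an unchecked subtraction (illustrative; `unchecked_sub` was still feature-gated at the time):

```rust
pub unsafe fn range_len(start: usize, end: usize) -> usize {
    // SAFETY: the caller guarantees `start <= end`, so the subtraction
    // cannot wrap -- exactly what `nuw` tells LLVM.
    unsafe { end.unchecked_sub(start) }
}
```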
Use `partial_cmp` to implement tuple `lt`/`le`/`ge`/`gt`
In today's implementation, `(A, B)::gt` contains calls to *both* `A::eq` *and* `A::gt`.
That's fine for primitives, but for things like `String`s it's kinda weird -- `(String, usize)::gt` has a call to both `bcmp` and `memcmp` (<https://rust.godbolt.org/z/7jbbPMesf>) because when `bcmp` says the `String`s aren't equal, it turns around and calls `memcmp` to find out which one's bigger.
This PR changes the implementation to instead implement `(A, …, C, Z)::gt` using `A::partial_cmp`, `…::partial_cmp`, `C::partial_cmp`, and `Z::gt`. (And analogously for `lt`, `le`, and `ge`.) That way expensive comparisons don't need to be repeated.
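For a two-element tuple, the new shape is roughly this (illustrative; the real code is macro-generated for all arities):

```rust
use std::cmp::Ordering;

// Earlier elements call `partial_cmp` exactly once each; only the last
// element calls `gt` itself.
fn tuple_gt<A: PartialOrd, Z: PartialOrd>(lhs: &(A, Z), rhs: &(A, Z)) -> bool {
    match lhs.0.partial_cmp(&rhs.0) {
        Some(Ordering::Equal) => lhs.1 > rhs.1,
        ordering => ordering == Some(Ordering::Greater),
    }
}
```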
Technically this is an observable change on stable, so I've marked it `needs-fcp` + `T-libs-api` and will
r? rust-lang/libs-api
I'm hoping that this will be non-controversial, however, since it's very similar to the observable changes that were made to the derives (#81384, #98655) -- like those, this only changes behaviour if a type overrode behaviour in a way inconsistent with the rules for the various traits involved.
(The first commit here is #108156, adding the codegen test, which I used to make sure this doesn't regress behaviour for primitives.)
Zulip conversation about this change: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/.60.3E.60.20on.20Tuples/near/328392927>.
Add support for QNX Neutrino to standard library
This change:
- adds standard library support for QNX Neutrino (7.1).
- upgrades `libc` to version `0.2.139`, which supports QNX Neutrino.
`@gh-tr`
⚠️ Backtraces on QNX require https://github.com/rust-lang/backtrace-rs/pull/507 which is not yet merged! (But everything else works without these changes) ⚠️
Tested mainly with an x86_64 virtual machine (see qnx-nto.md) and partially on aarch64 hardware (some tests fail due to constrained resources).
Merge two different equality specialization traits in `core`
Arrays and slices each had their own version of this, without a matching set of `impl`s.
Merge them into one (still-`pub(crate)`) `cmp::BytewiseEq` trait, so we can stop doing all these things twice.
And that means that the `[T]::eq` → `memcmp` specialization picks up a bunch of types where that previously only worked for arrays, so examples like <https://rust.godbolt.org/z/KjsG8MGGT> will use it now instead of emitting loops.
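The merged trait is roughly this shape (hedged sketch; the real one stays `pub(crate)` in `core::cmp`):

```rust
/// Marker: `==` on this type is equivalent to comparing the raw bytes.
///
/// SAFETY contract: only implement when `a == b` exactly matches
/// byte-for-byte equality of the representations (so one `memcmp`
/// specialization can serve both arrays and slices).
unsafe trait BytewiseEq<Rhs = Self>: PartialEq<Rhs> + Sized {}
```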
r? the8472
Name LLVM anonymous constants by a hash of their contents
This makes the names stable between different versions of a crate unlike the `AllocId` naming, making LLVM IR comparisons with `llvm-diff` more practical.
This adds the following functions:
* `Option<T>::as_slice(&self) -> &[T]`
* `Option<T>::as_slice_mut(&mut self) -> &mut [T]`
The `as_slice` and `as_slice_mut` functions benefit from an
optimization that makes them completely branch-free.
Note that the optimization's soundness hinges on the fact that either
the niche optimization makes the offset of the `Some(_)` contents zero,
or the memory layout of `Option<T>` is equal to that of
`Option<MaybeUninit<T>>`.
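Hypothetical usage (nightly; assuming the `option_as_slice` feature gate):

```rust
#![feature(option_as_slice)]

fn main() {
    let opt = Some(42);
    assert_eq!(opt.as_slice(), &[42]);
    assert_eq!(None::<i32>.as_slice(), &[]);
}
```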
Make codegen choose whether to emit overflow checks
ConstProp and DataflowConstProp currently have a specific code path not to propagate constants when they overflow. This is meant to have the correct behaviour when inlining from a crate with overflow checks (like `core`) into a crate compiled without.
This PR shifts the behaviour change to the `Assert(Overflow*)` MIR terminators: if the crate is compiled without overflow checks, just skip emitting the assertions. This is already what happens with `OverflowNeg`.
This allows ConstProp and DataflowConstProp to transform `CheckedBinaryOp(Add, u8::MAX, 1)` into `const (0, true)`, and let codegen ignore the `true`.
The interpreter is modified to conform to this behaviour.
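The constant in question is just checked-add semantics (quick check):

```rust
fn main() {
    // Wrapped value plus overflow flag -- the `const (0, true)` that
    // ConstProp can now materialize.
    assert_eq!(u8::MAX.overflowing_add(1), (0, true));
}
```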
Fixes #35310
Add `kernel-address` sanitizer support for freestanding targets
This PR adds support for KASan (kernel address sanitizer) instrumentation in freestanding targets. I included the minimal set of `x86_64-unknown-none`, `riscv64{imac, gc}-unknown-none-elf`, and `aarch64-unknown-none`, but there are likely other targets it can be added to (`linux_kernel_base.rs`?). KASan uses the address sanitizer attributes, but has the `CompileKernel` parameter set to `true` in the pass creation.
The operators are all overridden in full for tuples, so those parts pass easily, but they're worth pinning.
Going via `Ord::cmp`, though, doesn't optimize away for anything but `cmp`+`is_le`. So this leaves `FIXME`s in the tests for the others.
Update the minimum external LLVM to 14
With this change, we'll have stable support for LLVM 14 through 16 (pending release).
For reference, the previous increase to LLVM 13 was #100460.
Improve the `array::map` codegen
The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.
There are three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit as to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straightforward.
(No libs-api changes; everything should be completely implementation-detail-internal.)
It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before. And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM 16 than the previous subobject copies 🤞
r? `@thomcc`
As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
%array.i.i.i.i = alloca [64 x i32], align 4
%_3.sroa.5.i.i.i = alloca [65 x i32], align 4
%_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)
But with this PR it's only `sub rsp, 520`
```llvm
start:
%array.i.i.i.i.i.i = alloca [64 x i32], align 4
%array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```
Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
mov esi, 64
mov edi, 0
cmp rdx, 64
je .LBB0_3
lea rsi, [rdx + 1]
mov qword ptr [rsp + 784], rsi
mov r8d, dword ptr [rsp + 4*rdx + 528]
mov edi, 1
lea edx, [r8 + 2*r8]
lea r8d, [r8 + 4*rdx]
add r8d, 7
.LBB0_3:
test edi, edi
je .LBB0_11
mov dword ptr [rsp + 4*rcx + 272], r8d
cmp rsi, 64
jne .LBB0_6
xor r8d, r8d
mov edx, 64
test r8d, r8d
jne .LBB0_8
jmp .LBB0_11
.LBB0_6:
lea rdx, [rsi + 1]
mov qword ptr [rsp + 784], rdx
mov edi, dword ptr [rsp + 4*rsi + 528]
mov r8d, 1
lea esi, [rdi + 2*rdi]
lea edi, [rdi + 4*rsi]
add edi, 7
test r8d, r8d
je .LBB0_11
.LBB0_8:
mov dword ptr [rsp + 4*rcx + 276], edi
add rcx, 2
cmp rcx, 64
jne .LBB0_1
```
whereas with this PR it's unrolled and vectorized
```nasm
vpmulld ymm1, ymm0, ymmword ptr [rsp + 64]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 328], ymm1
vpmulld ymm1, ymm0, ymmword ptr [rsp + 96]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
Now that the compiler accepts the `-Z instrument-xray` option only when
targeting one of the supported targets, make sure not to run the
codegen tests where the compiler will fail.
As with other compiletests, we don't have access to internals,
so simply hardcode a list of supported architectures here.
Let's add at least some tests to verify that this option is accepted
and produces the expected LLVM attributes. More tests can be added later
with attribute support.
The code that consumes `PointerKind` (`adjust_for_rust_scalar` in `rustc_ty_utils`)
ended up using `PointerKind` variants to talk about Rust reference types (`&` and
`&mut`) anyway, making the old code structure quite confusing: one always had to
keep in mind which `PointerKind` corresponds to which type. So this changes
`PointerKind` to directly reflect the type.
This does not change behavior.
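A hedged sketch of the reshaped enum (simplified from `rustc_abi`; fields may differ in detail):

```rust
enum PointerKind {
    /// `&T`; `frozen` means no `UnsafeCell` behind the reference.
    SharedRef { frozen: bool },
    /// `&mut T`; `unpin` feeds the `noalias` decision.
    MutableRef { unpin: bool },
    /// `Box<T>`.
    Box { unpin: bool },
}
```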
Don't merge vtables when full debuginfo is enabled.
This PR makes the compiler not emit the `unnamed_addr` attribute for vtables when full debuginfo is enabled, so that they don't get merged even if they have the same contents. This allows debuggers to more reliably map from a dyn pointer to the self-type of a trait object by looking at the vtable's debuginfo.
The PR only changes the behavior of the LLVM backend as other backends don't emit vtable debuginfo (as far as I can tell).
The performance impact of this change should be small as [measured](https://github.com/rust-lang/rust/pull/103514#issuecomment-1290833854) in a previous PR.
...and remove it from `PointeeInfo`, which isn't meant for this.
There are still various places (marked with FIXMEs) that assume all pointers
have the same size and alignment. Fixing this requires parsing non-default
address spaces in the data layout string, which will be done in a followup.
Implement `alloc::vec::IsZero` for `Option<$NUM>` types
Fixes#106911
Mirrors the `NonZero$NUM` implementations with an additional `assert_zero_valid`.
`None::<i32>` doesn't strictly satisfy `IsZero`, but for the purpose of allocating we can produce more efficient codegen.
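The shape of these impls, hedged (the trait is a private helper in `alloc`, and the real impls also carry the `assert_zero_valid` guard):

```rust
/// Whether a value's byte representation is all zeros, so `vec![x; n]`
/// can take a zeroed-allocation fast path.
unsafe trait IsZero {
    fn is_zero(&self) -> bool;
}

// Relies on `None::<i32>` being represented as all-zero bytes in
// practice; the guard in the real impl asserts exactly that.
unsafe impl IsZero for Option<i32> {
    #[inline]
    fn is_zero(&self) -> bool {
        self.is_none()
    }
}
```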
Previously, `noundef` was only put on scalars with range validity invariants
like `bool`, where uninit was obviously invalid.
Since then, we have normatively declared all uninit primitives to be
undefined behavior and can therefore put `noundef` on them.
The remaining concern was the `mem::uninitialized` function, which caused
quite a lot of UB in the older parts of the ecosystem. This function now
doesn't return uninit values anymore, making users of it safe from this
change.
The only real sources of UB where people could encounter uninit
primitives are `MaybeUninit::uninit().assume_init()`, which has always
been clear in the docs about being UB, and heap allocations (like
reading from the spare capacity of a vec). This is hopefully rare enough
not to break anything.