Enable ScalarReplacementOfAggregates in optimized builds
Like MatchBranchSimplification, this pass is known to produce significant runtime improvements in Cranelift artifacts, and based on the perf runs here I believe the primary effect of this pass is to empower MatchBranchSimplification. On its own, ScalarReplacementOfAggregates has little effect on anything, but when this was rebased to include https://github.com/rust-lang/rust/pull/112001 we started seeing significant and majority-positive results.
Based on the fact that we see most of the regressions in debug builds (https://github.com/rust-lang/rust/pull/112002#issuecomment-1566270144) and some rather significant ones in cycles and wall time, I'm only enabling this in optimized builds at the moment.
These tend to have special handling in a bunch of places anyway, so the variant helps remember that. And I think it's easier to grok than non-Scalar Aggregates sometimes being `Immediate`s (which I got wrong, causing #109992). As a minor bonus, it means we don't need to generate poison LLVM values for them to pass around in `OperandValue::Immediate`s.
Only rewrite valtree-constants to patterns and keep other constants opaque
Now that we can reliably fall back to comparing constants with `PartialEq::eq` to the match scrutinee, we can
1. eagerly try to convert constants to valtrees
2. then deeply convert the valtree to a pattern
3. if the to-valtree conversion fails, create an "opaque constant" pattern.
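As a hedged illustration (the `PAIR` constant and `is_pair` function are invented for this example): an integer-only constant converts to a valtree and is destructured into a deep pattern, while a constant whose valtree conversion fails would instead be kept opaque and compared with `PartialEq::eq`.
```rust
// A constant of a purely structural type: convertible to a valtree,
// so the match below can be lowered as if the pattern were `(1, 2)`.
const PAIR: (u8, u8) = (1, 2);

fn is_pair(x: (u8, u8)) -> bool {
    match x {
        PAIR => true,
        _ => false,
    }
}
```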
This PR specifically avoids any behavioral changes or major cleanups. What we can now do as follow ups is
* move the two remaining call sites of `destructure_mir_constant` off that query
* make valtree to pattern conversion infallible
  * this needs to be done after careful analysis of the effects. There may be user-visible changes from that.
based on https://github.com/rust-lang/rust/pull/111768
Optimize scalar and scalar pair representations loaded from ByRef in LLVM
in https://github.com/rust-lang/rust/pull/105653 I noticed that we were generating suboptimal LLVM IR if we had a `ConstValue::ByRef` that could be represented by a `ScalarPair`. Before https://github.com/rust-lang/rust/pull/105653 this was probably rare, but after it, every slice goes down this suboptimal code path, which requires LLVM to untangle a bunch of indirections and to translate static allocations that are only used once into the scalar pair read from them.
`[T; N]::zip` is "eager" but most zips are mapped.
This causes poor optimization in generated code.
This is a fundamental design issue and "zip" is
"prime real estate" in terms of function names,
so let's free it up again.
Adds support for LLVM [SafeStack] which provides backward edge control
flow protection by separating the stack into two parts: data which is
only accessed in provably safe ways is allocated on the normal stack
(the "safe stack") and all other data is placed in a separate allocation
(the "unsafe stack").
SafeStack support is enabled by passing `-Zsanitizer=safestack`.
[SafeStack]: https://clang.llvm.org/docs/SafeStack.html
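Illustratively, enabling it looks like this (nightly only, since `-Z` flags are unstable):
```
rustc -Zsanitizer=safestack main.rs
```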
Support #[global_allocator] without the allocator shim
This makes it possible to use liballoc/libstd in combination with `--emit obj` if you use `#[global_allocator]`. This is what rust-for-linux uses right now, and systemd may use it in the future. Currently they have to depend on the exact implementation of the allocator shim to create one themselves, as `--emit obj` doesn't create an allocator shim.
Note that currently the allocator shim also defines the OOM error handler, which is normally required too. Once `#![feature(default_alloc_error_handler)]` becomes the only option, this can be avoided. In addition, when using only fallible allocator methods and either `--cfg no_global_oom_handling` for liballoc (like rust-for-linux) or `--gc-sections`, no references to the OOM error handler will exist.
To avoid this feature being insta-stable, you will have to define `__rust_no_alloc_shim_is_unstable` to avoid linker errors.
(Labeling this with both T-compiler and T-lang as it originally involved both an implementation detail and had an insta-stable user facing change. As noted above, the `__rust_no_alloc_shim_is_unstable` symbol requirement should prevent unintended dependence on this unstable feature.)
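For context, a minimal `#[global_allocator]` definition looks like this (stable Rust; shown to illustrate what this change lets you link without the shim):
```rust
use std::alloc::System;

// With an explicit global allocator, `--emit obj` users no longer need
// the compiler-generated allocator shim to satisfy the `__rust_*` symbols.
#[global_allocator]
static GLOBAL: System = System;

fn main() {
    let v = vec![1, 2, 3]; // allocates through `System`
    assert_eq!(v.len(), 3);
}
```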
CFI: Fix encode_region: unexpected ReEarlyBound(0, 'a)
Fixes #111515 and complements #106547 by adding support for encoding early bound regions and also excluding projections when transforming trait objects' traits into their identities before emitting type checks.
Fix some issues with folded AArch64 features
In #91608 the `fp` feature was removed for AArch64 and folded into the `neon` feature; however, disabling the `neon` feature doesn't actually disable the `fp` feature. If my understanding of that thread is correct, it should.
While doing this, I also noticed that disabling some features would disable features that it shouldn't. For instance, enabling `sve` will enable `neon`; however, disabling `sve` would then also disable `neon`, which I wouldn't expect.
cc `@workingjubilee`
When compiling with panic=abort (or using a target that doesn't have
unwinding support), the compiler adds the "nounwind" attribute to
functions. This results in different LLVM IR, with a `#NNN` attribute
group reference added after the function name:
```
tail call void @bar() #13, !dbg !467
attributes #13 = { nounwind }
```
...instead of:
```
tail call void @bar(), !dbg !467
```
This commit changes the matchers to swallow the #NNN, as it's not needed
for these specific tests.
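Schematically, the relaxed matcher looks something like this (a hedged sketch using FileCheck's `{{...}}` regex syntax, not the literal test contents):
```
// CHECK: tail call void @bar(){{( #[0-9]+)?}}, !dbg
```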
Rollup of 6 pull requests
Successful merges:
- #111461 (Fix symbol conflict diagnostic mistakenly being shown instead of missing crate diagnostic)
- #111579 (Also assume wrap-around discriminants in `as` MIR building)
- #111704 (Remove return type sized check hack from hir typeck)
- #111853 (Check opaques for mismatch during writeback)
- #111854 (rustdoc: clean up `settings.css`)
- #111860 (Don't ICE if method receiver fails to unify with `arbitrary_self_types`)
r? `@ghost`
`@rustbot` modify labels: rollup
[rustc_ty_utils] Treat `drop_in_place`'s *mut argument like &mut when adding LLVM attributes
This resurrects PR #103614, which has sat idle for a while.
This could probably use a new perf run, since we're on a new LLVM version now.
r? `@oli-obk`
cc `@RalfJung`
---
LLVM can make use of the `noalias` parameter attribute on the parameter to `drop_in_place` in areas like argument promotion. Because the Rust compiler fully controls the code for `drop_in_place`, it can soundly deduce parameter attributes on it.
In #103957, Miri was changed to retag `drop_in_place`'s argument as if it was `&mut`, matching this change.
Fix duplicate `arcinner_layout_for_value_layout` calls when using the uninit `Arc` constructors
What this fixes is the duplicate calls to `arcinner_layout_for_value_layout` seen here: https://godbolt.org/z/jr5Gxozhj
The issue was discovered alongside #111603 but is otherwise unrelated to the duplicate `alloca`s, which remain unsolved. Everything I tried to solve said main issue has failed.
As for the duplicate layout calculations, I also tried slapping `#[inline]` and `#[inline(always)]` on everything in sight but the only thing that worked in the end is to dedup the calls by hand.
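For reference, the affected constructors look like this (a sketch using the then-unstable `new_uninit` feature):
```rust
#![feature(new_uninit)]
use std::mem::MaybeUninit;
use std::sync::Arc;

fn main() {
    // `new_uninit` computes the ArcInner layout from the value layout;
    // before this fix, that computation was emitted twice.
    let mut a: Arc<MaybeUninit<u32>> = Arc::new_uninit();
    Arc::get_mut(&mut a).unwrap().write(5);
    let a: Arc<u32> = unsafe { a.assume_init() };
    assert_eq!(*a, 5);
}
```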
Rather than returning an array of features from `to_llvm_features`, return a structure that contains
the dependencies. This also contains metadata on how the features depend on each other to allow for
the correct enabling and disabling.
Some features that are tied together only make sense to be folded
together when enabling the feature. For example on AArch64 sve and
neon are tied together, however it doesn't make sense to disable neon
when disabling sve.
CFI: Fix encode_ty: unexpected Param(B/#1)
Fixes #111510 and complements #106547 by adding support for encoding type parameters and also by transforming trait objects' traits into their identities before emitting type checks.
The CHECK, -NOT, -SAME pattern ensures that the `CHECK-NOT: noalias`
is limited to only one line, and won't match unrelated lines further
down in the file.
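In sketch form (hedged, not the literal test contents), the idiom is:
```
// CHECK: drop_in_place
// CHECK-NOT: noalias
// CHECK-SAME: align 8
```
Because `CHECK-SAME` must match on the same line as the preceding `CHECK`, the `CHECK-NOT` can only reject text between those two matches.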
Explicit drop call added to preserve the `foo` argument name, since
names of unused arguments are not preserved.
We've done measurements with Miri and have determined that `noalias` shouldn't
break code. The requirements that allow us to add dereferenceable and align
have been long documented in the standard library documentation.
LLVM can make use of the `noalias` parameter attribute on the parameter to
`drop_in_place` in areas like argument promotion. Because the Rust compiler
fully controls the code for `drop_in_place`, it can soundly deduce parameter
attributes on it. In the case of a value that has a programmer-defined Drop
implementation, we know that the first thing `drop_in_place` will do is pass a
pointer to the object to `Drop::drop`. `Drop::drop` takes `&mut`, so it must be
guaranteed that there are no pointers to the object upon entering that
function. Therefore, it should be safe to mark `noalias` there.
With this patch, we mark `noalias` only when the type is a value with a
programmer-defined Drop implementation. This is possibly overly conservative,
but I thought that proceeding cautiously was best in this instance.
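As a hedged illustration, `Guard` below is a made-up type with a programmer-defined `Drop`, i.e., the case where the new attribute applies:
```rust
struct Guard(Vec<u8>);

impl Drop for Guard {
    // `drop_in_place::<Guard>` passes `&mut self` straight to this
    // function, so its `*mut Guard` argument must already be unaliased,
    // which is exactly what `noalias` encodes.
    fn drop(&mut self) {
        self.0.clear();
    }
}
```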
Stop checking for the absence of something that doesn't exist
A couple of codegen tests are doing
```
// CHECK-NOT: slice_index_len_fail
```
However, that function no longer exists: [the only places](https://github.com/search?q=repo%3Arust-lang%2Frust+slice_index_len_fail&type=code) it occurs in the repo are in those tests.
So this PR updates the tests to check for the absence of the functions that are actually used today to panic for out-of-bounds indexing.
allow mutating function args through `&raw const`
Fixes https://github.com/rust-lang/rust/issues/111502 by "turning off the sketchy optimization while we figure out if this is ok", like `@JakobDegen` said.
The first commit in this PR removes some suspicious looking logic from the same method, but should have no functional changes, since it doesn't modify the `context` outside of the method. Best reviewed commit by commit.
r? opsem
CFI: Fix SIGILL reached via trait objects
Fixes #106547 by transforming the concrete self into a reference to a trait object before emitting type metadata identifiers for trait methods.
You will need to add the following as a replacement for the old `__rust_*`
definitions when not using the alloc shim:
```rust
#[no_mangle]
static __rust_no_alloc_shim_is_unstable: u8 = 0;
```
vec-shrink-panik: update expectations to work on LLVM 17
For some reason, the called function is `cleanup` on LLVM 17 instead of `filter`.
r? `@Amanieu`
Remove some `assume`s from slice iterators that don't do anything
Because the start pointer in iterators is already a `NonNull`, we emit the appropriate `!nonnull` metadata when loading the pointer to tell LLVM that it's non-null.
Probably the best way to see that it's the metadata that's important (and not the `assume`) is to observe that LLVM actually *removes* the `assume` from the optimized IR: <https://rust.godbolt.org/z/KhE6G963n>.
(I also checked that, yes, the if-not-ZST `assume` on `end` is still doing something: it's how there's a `!nonnull` metadata on its load, even though it's an ordinary raw pointer. The codegen test added in this PR fails if the other `assume` is removed.)
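A hedged sketch of the relevant shape (field names simplified from the real `core::slice::Iter`):
```rust
use std::marker::PhantomData;
use std::ptr::NonNull;

// Because `ptr` is `NonNull<T>`, every load of it can carry `!nonnull`
// metadata, so a manual `assume(!ptr.is_null())` adds nothing.
struct Iter<'a, T> {
    ptr: NonNull<T>,
    end: *const T, // the if-not-ZST `assume` on this one still matters
    _marker: PhantomData<&'a T>,
}
```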
Rollup of 6 pull requests
Successful merges:
- #104070 (Prevent aborting guard from aborting the process in a forced unwind)
- #109410 (Introduce `AliasKind::Inherent` for inherent associated types)
- #111004 (Migrate `mir_transform` to translatable diagnostics)
- #111118 (Suggest struct when we get colon in fileds in enum)
- #111170 (Diagnostic args are still args if they're documented)
- #111354 (Fix miscompilation when calling default methods on `Future`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Add cross-language LLVM CFI support to the Rust compiler
This PR adds cross-language LLVM Control Flow Integrity (CFI) support to the Rust compiler by adding the `-Zsanitizer-cfi-normalize-integers` option to be used with Clang `-fsanitize-cfi-icall-normalize-integers` for normalizing integer types (see https://reviews.llvm.org/D139395).
It provides forward-edge control flow protection for "mixed binaries", i.e., when C- or C++-compiled code and Rust-compiled code share the same virtual address space. For more information about LLVM CFI and cross-language LLVM CFI support for the Rust compiler, see the design document in tracking issue #89653.
Cross-language LLVM CFI can be enabled with `-Zsanitizer=cfi` and `-Zsanitizer-cfi-normalize-integers`, and requires proper (i.e., non-rustc) LTO (`-Clinker-plugin-lto`).
Thank you again, `@bjorn3`, `@nikic`, `@samitolvanen`, and the Rust community for all the help!
Improve niche placement by trying two strategies and picking the better result
Fixes #104807, fixes #105371
Determining which sort order is better requires calculating the struct size (so we can calculate the niche offset). But that in turn depends on the field order, so happens after sorting. So the simple way to solve that is to run the whole thing twice and pick the better result.
The first commit is just code motion; the meat is in the later ones.
Add `intrinsics::transmute_unchecked`
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added with exactly these semantics, before the lang meeting (which I wasn't in) independently expressed interest in it.
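A minimal usage sketch (nightly, since `core_intrinsics` is unstable; `to_bytes` is invented for illustration):
```rust
#![feature(core_intrinsics)]
use std::intrinsics::transmute_unchecked;

// Unlike `transmute`, there is no size-equality check at type-check
// time; a size mismatch here would be UB rather than a compile error.
fn to_bytes(x: u32) -> [u8; 4] {
    unsafe { transmute_unchecked(x) }
}

fn main() {
    assert_eq!(to_bytes(u32::from_ne_bytes([1, 2, 3, 4])), [1, 2, 3, 4]);
}
```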
ci: add a runner for vanilla LLVM 16
Like #107044, this will let us track compatibility with LLVM 16 going
forward, especially after we eventually upgrade our own LLVM to the
next version.
This also drops `tidy` here and in `x86_64-gnu-llvm-15`, syncing with
that change in #106085.
tests: adapt for LLVM change 5b386b864c7619897c51a1da97d78f1cf6f3eff6
The above-mentioned change modified the output of thread-local.rs by changing some variable names. Rather than assume things get put in %0, we capture the variable so the test passes in both the old and new version.
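The capture uses FileCheck's `[[NAME:regex]]` substitution syntax, schematically (hedged, not the literal pattern from the test):
```
// CHECK: [[VAR:%.+]] = call
// CHECK: {{.*}}[[VAR]]
```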
Permit MIR inlining without #[inline]
I noticed that there are at least a handful of portable-simd functions that have no `#[inline]` but compile to an assign + return.
I locally benchmarked inlining thresholds between 0 and 50 in increments of 5, and 50 seems to be the best. Interesting. That didn't include check builds though, ~maybe perf will have something to say about that~.
Perf has little useful to say about this. We generally regress all the check builds, as best as I can tell, due to a number of small codegen changes in a particular hot function in the compiler. Probably this is because we've nudged the inlining outcomes all over, and uses of `#[inline(always)]`/`#[inline(never)]` might need to be adjusted.
Preserve argument indexes when inlining MIR
We store argument indexes on VarDebugInfo. Unlike the previous method of relying on the variable index to know whether a variable is an argument, this survives MIR inlining.
We also no longer check if var.source_info.scope is the outermost scope. When a function gets inlined, the arguments to the inner function will no longer be in the outermost scope. What we care about though is whether they were in the outermost scope prior to inlining, which we know by whether we assigned an argument index.
Fixes #83217
I considered using `Option<NonZeroU16>` instead of `Option<u16>` to store the index. I didn't because `TypeFoldable` isn't implemented for `NonZeroU16` and because it looks like, due to padding, it currently wouldn't make any difference. But I indexed from 1 anyway because (a) it'll make it easier if it later becomes worthwhile to use a `NonZeroU16`, and (b) the arguments were previously indexed from 1, so it made for a smaller change.
This is my first PR on rust-lang/rust, so apologies if I've gotten anything not quite right.
These don't optimize with debug assertions. For one of them, this
is due to the new alignment checks, for the other I'm not sure
what specifically blocks it.
Use SipHash-1-3 instead of SipHash-2-4 for StableHasher
Noticed this, and it seems easy and likely a perf win. IIUC we don't need DDoS resistance (just collision resistance), so we ideally would have an even faster hash, but it's hard to beat this SipHash impl here, since it's been so highly tuned for the interface.
It wouldn't surprise me if there's some subtle reason changing this sucks, as it's so obvious it seems likely to have been done. Still, SipHash-1-3 (one compression round per message block and three finalization rounds, versus two and four) seems to still have the guarantees StableHasher should need (and seemingly more), and is clearly less work. So it's worth a shot.
Not fully tested locally.
Validate `ignore` and `only` compiletest directive, and add human-readable ignore reasons
This PR adds strict validation for the `ignore` and `only` compiletest directives, failing if an unknown value is provided to them. Doing so uncovered 79 tests in `tests/ui` that had invalid directives, so this PR also fixes them.
Finally, this PR adds human-readable ignore reasons when tests are ignored due to `ignore` or `only` directives, like *"only executed when the architecture is aarch64"* or *"ignored when the operating system is windows"*. This was the original reason why I started working on this PR and #108659, as we need both of them for Ferrocene.
The PR is a draft because the code is extremely inefficient: it calls `rustc --print=cfg --target $target` for every rustc target (to gather the list of allowed ignore values), which on my system takes between 4s and 5s, and performs a lot of allocations of constant values. I'll fix both of them in the coming days.
r? `@ehuss`
Allow `transmute`s to produce `OperandValue`s instead of needing `alloca`s
LLVM can usually optimize these away, but especially for things like transmutes of newtypes it's silly to generate the `alloc`+`store`+`load` at all when it's actually a nop at LLVM level.
`-Cdebuginfo=1` was never line tables only and
can't be due to backwards compatibility issues.
This was clarified, and an option for line tables only
(`-Cdebuginfo=line-tables-only`) was added. Additionally an option for
line-info directives only (`-Cdebuginfo=line-directives-only`) was
added, which is needed for some targets. The debug info options should
now behave the same as Clang's debug info options.
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block, using `MirPatch`. (if nobody else backs up this point I'm checking this as "not actually a good idea" because the code looks pretty clean to me after rearranging it a bit)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
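Conceptually, the inserted check behaves like the following (a hedged sketch in surface Rust; the real check is built as MIR, and `read_checked` is invented for illustration):
```rust
unsafe fn read_checked<T>(ptr: *const T) -> T {
    // In MIR the address is obtained with a transmute-style cast, so
    // the pointer is not considered exposed.
    let addr = ptr as usize;
    debug_assert!(
        addr % std::mem::align_of::<T>() == 0,
        "misaligned pointer dereference"
    );
    ptr.read()
}
```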
r? `@oli-obk`
Upgrade to LLVM 16, again
Relative to the previous attempt in https://github.com/rust-lang/rust/pull/107224:
* Update to GCC 8.5 on dist-x86_64-linux, to avoid std::optional ABI-incompatibility between libstdc++ 7 and 8.
* Cherry-pick 96df79af02.
* Cherry-pick 6fc670e5e3.
r? `@cuviper`
Use poison instead of undef
In cases where it is legal, we should prefer poison values over undef values.
This replaces undef with poison for aggregate construction and for uninhabited types. There are more places where we can likely use poison, but I wanted to stay conservative to start with.
In particular the aggregate case is important for newer LLVM versions, which are not able to handle an undef base value during early optimization due to poison-propagation concerns.
r? `@cuviper`
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
Remove the assume(!is_null) from Vec::as_ptr
At a guess, this code is leftover from when LLVM was worse at keeping track of the niche information here. In any case, we don't need this anymore: removing this `assume` doesn't get rid of the `nonnull` attribute on the return type.
Upgrade to LLVM 16
This updates Rust to LLVM 16. It also updates our host compiler for dist-x86_64-linux to LLVM 16. The reason for that is that Bolt from LLVM 15 is not capable of compiling LLVM 16 (https://github.com/llvm/llvm-project/issues/61114).
LLVM 16.0.0 has been [released](https://discourse.llvm.org/t/llvm-16-0-0-release/69326) on March 18, while Rust 1.70 will become stable on June 1.
Tested images: `dist-x86_64-linux`, `dist-riscv64-linux` (alt), `dist-x86_64-illumos`, `dist-various-1`, `dist-various-2`, `dist-powerpc-linux`, `wasm32`, `armhf-gnu`
Tested images until the usual IPv6 failures: `test-various`
inherit_overflow: adapt pattern to also work with v0 mangling
This test was failing under new-symbol-mangling = true. Adapt pattern to work in both cases.
Related to #106002 from December.
Wrap the whole LocalInfo in ClearCrossCrate.
MIR contains a lot of information about locals. The primary purpose of this information is the quality of borrowck diagnostics.
This PR aims to drop this information after MIR analyses are finished, i.e., starting from post-cleanup runtime MIR.
Ensure `ptr::read` gets all the same LLVM `load` metadata that dereferencing does
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`. Trying to narrow it down, it seems that was because `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
The root cause is that `ptr::read` is currently implemented via the *untyped* `copy_nonoverlapping`, and thus the `load` doesn't get any type-aware metadata: no `noundef`, no `!range`. This PR solves that by lowering `ptr::read(p)` to `copy *p` in MIR, for which the backends already do the right thing.
Fortuitously, this also improves the IR we give to LLVM for things like `mem::replace`, and fixes a couple of long-standing bugs where `ptr::read` on `Copy` types was worse than `*`ing them.
Zulip conversation: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Move.20array.3A.3AIntoIter.20to.20ManuallyDrop/near/341189936>
cc `@erikdesjardins` `@JakobDegen` `@workingjubilee` `@the8472`
Fixes #106369, fixes #73258
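A minimal illustration of the pattern that now gets the full metadata (hedged; `first` is invented for this example):
```rust
use std::mem::MaybeUninit;

// `assume_init_read` bottoms out in `ptr::read`; with this PR that is
// a typed `copy *p` in MIR, so the emitted `load` can carry `noundef`
// (and range metadata for types like `bool` or `char`).
unsafe fn first(slot: &MaybeUninit<u32>) -> u32 {
    slot.assume_init_read()
}
```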
Move `Option::as_slice` to an always-sound implementation
This approach depends on CSE to not have any branches or selects when the guessed offset is correct -- which it always will be right now -- but to also be *sound* (just less efficient) if the layout algorithms change such that the guess is incorrect.
The codegen test confirms that CSE handles this as expected, leaving the optimal codegen.
cc JakobDegen #108545
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`.
Turned out to be a more general problem as `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
This PR lowers `ptr::read(p)` to `copy *p` in MIR, which fortuitously also improves the IR we give to LLVM for things like `mem::replace`.
Use `nuw` when calculating slice lengths from `Range`s
An `assume` would definitely not be worth it, but since the flag is almost free we might as well tell LLVM this, especially on `_unchecked` calls where there's no obvious way for it to deduce it.
(Today neither safe nor unsafe indexing gets it: <https://rust.godbolt.org/z/G1jYT548s>)
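For intuition, a hedged sketch of the arithmetic the flag annotates (`range_len` is invented; `usize::unchecked_sub` lowers to a `sub nuw` in LLVM):
```rust
/// Safety: the caller must guarantee `range.start <= range.end`; that
/// precondition is exactly what `nuw` ("no unsigned wrap") encodes.
unsafe fn range_len(range: std::ops::Range<usize>) -> usize {
    range.end.unchecked_sub(range.start)
}
```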
Use `partial_cmp` to implement tuple `lt`/`le`/`ge`/`gt`
In today's implementation, `(A, B)::gt` contains calls to *both* `A::eq` *and* `A::gt`.
That's fine for primitives, but for things like `String`s it's kinda weird -- `(String, usize)::gt` has a call to both `bcmp` and `memcmp` (<https://rust.godbolt.org/z/7jbbPMesf>) because when `bcmp` says the `String`s aren't equal, it turns around and calls `memcmp` to find out which one's bigger.
This PR changes the implementation to instead implement `(A, …, C, Z)::gt` using `A::partial_cmp`, `…::partial_cmp`, `C::partial_cmp`, and `Z::gt`. (And analogously for `lt`, `le`, and `ge`.) That way expensive comparisons don't need to be repeated.
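In sketch form (hedged; the real implementation is macro-generated over all arities), the new shape for a pair is:
```rust
use std::cmp::Ordering;

// `A::partial_cmp` is called once, instead of `A::eq` plus `A::gt`.
fn tuple_gt<A: PartialOrd, Z: PartialOrd>(a: &(A, Z), b: &(A, Z)) -> bool {
    match a.0.partial_cmp(&b.0) {
        Some(Ordering::Equal) => a.1 > b.1,
        Some(Ordering::Greater) => true,
        _ => false,
    }
}
```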
Technically this is an observable change on stable, so I've marked it `needs-fcp` + `T-libs-api` and will
r? rust-lang/libs-api
I'm hoping that this will be non-controversial, however, since it's very similar to the observable changes that were made to the derives (#81384, #98655) -- like those, this only changes behaviour if a type overrode behaviour in a way inconsistent with the rules for the various traits involved.
(The first commit here is #108156, adding the codegen test, which I used to make sure this doesn't regress behaviour for primitives.)
Zulip conversation about this change: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/.60.3E.60.20on.20Tuples/near/328392927>.
Add support for QNX Neutrino to standard library
This change:
- adds standard library support for QNX Neutrino (7.1).
- upgrades `libc` to version `0.2.139` which supports QNX Neutrino
`@gh-tr`
⚠️ Backtraces on QNX require https://github.com/rust-lang/backtrace-rs/pull/507 which is not yet merged! (But everything else works without these changes) ⚠️
Tested mainly with a x86_64 virtual machine (see qnx-nto.md) and partially with an aarch64 hardware (some tests fail due to constrained resources).
Merge two different equality specialization traits in `core`
Arrays and slices each had their own version of this, without a matching set of `impl`s.
Merge them into one (still-`pub(crate)`) `cmp::BytewiseEq` trait, so we can stop doing all these things twice.
And that means that the `[T]::eq` → `memcmp` specialization picks up a bunch of types where that previously only worked for arrays, so examples like <https://rust.godbolt.org/z/KjsG8MGGT> will use it now instead of emitting loops.
r? the8472
Name LLVM anonymous constants by a hash of their contents
This makes the names stable between different versions of a crate unlike the `AllocId` naming, making LLVM IR comparisons with `llvm-diff` more practical.
This adds the following functions:
* `Option<T>::as_slice(&self) -> &[T]`
* `Option<T>::as_mut_slice(&mut self) -> &mut [T]`
The `as_slice` and `as_mut_slice` functions benefit from an
optimization that makes them completely branch-free.
Note that the optimization's soundness hinges on the fact that either
the niche optimization makes the offset of the `Some(_)` contents zero,
or the memory layout of `Option<T>` is equal to that of
`Option<MaybeUninit<T>>`.
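Usage, for reference (with the names as they ended up in std):
```rust
fn main() {
    let some = Some(5);
    assert_eq!(some.as_slice(), &[5]);
    assert!(None::<i32>.as_slice().is_empty());
}
```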
Make codegen choose whether to emit overflow checks
ConstProp and DataflowConstProp currently have a specific code path not to propagate constants when they overflow. This is meant to have the correct behaviour when inlining from a crate with overflow checks (like `core`) into a crate compiled without.
This PR shifts the behaviour change to the `Assert(Overflow*)` MIR terminators: if the crate is compiled without overflow checks, just skip emitting the assertions. This is already what happens with `OverflowNeg`.
This allows ConstProp and DataflowConstProp to transform `CheckedBinaryOp(Add, u8::MAX, 1)` into `const (0, true)`, and let codegen ignore the `true`.
The interpreter is modified to conform to this behaviour.
Fixes #35310
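As a concrete illustration (hedged sketch):
```rust
// `a + b` lowers to MIR `CheckedBinaryOp` plus an `Assert(Overflow*)`
// terminator. After this PR the terminator stays in MIR (so it can be
// const-propagated), and codegen simply skips emitting it when the
// crate is compiled without overflow checks.
pub fn add(a: u8, b: u8) -> u8 {
    a + b
}
```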
Add `kernel-address` sanitizer support for freestanding targets
This PR adds support for KASan (kernel address sanitizer) instrumentation in freestanding targets. I included the minimal set of `x86_64-unknown-none`, `riscv64{imac, gc}-unknown-none-elf`, and `aarch64-unknown-none` but there's likely other targets it can be added to. (`linux_kernel_base.rs`?) KASan uses the address sanitizer attributes but has the `CompileKernel` parameter set to `true` in the pass creation.
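An illustrative invocation (hedged; the sanitizer value is from this PR and the target is one of those listed above):
```
rustc -Zsanitizer=kernel-address --target x86_64-unknown-none foo.rs
```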
The operators are all overridden in full for tuples, so those parts pass easily, but they're worth pinning.
Going via `Ord::cmp`, though, doesn't optimize away for anything but `cmp`+`is_le`. So this leaves `FIXME`s in the tests for the others.
Update the minimum external LLVM to 14
With this change, we'll have stable support for LLVM 14 through 16 (pending release).
For reference, the previous increase to LLVM 13 was #100460.
Improve the `array::map` codegen
The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.
There's three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit as to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straight-forward.
(No libs-api changes; everything should be completely implementation-detail-internal.)
It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before. And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM 16 than the previous subobject copies 🤞
r? `@thomcc`
As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
%array.i.i.i.i = alloca [64 x i32], align 4
%_3.sroa.5.i.i.i = alloca [65 x i32], align 4
%_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)
But with this PR it's only `sub rsp, 520`
```llvm
start:
%array.i.i.i.i.i.i = alloca [64 x i32], align 4
%array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```
Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
mov esi, 64
mov edi, 0
cmp rdx, 64
je .LBB0_3
lea rsi, [rdx + 1]
mov qword ptr [rsp + 784], rsi
mov r8d, dword ptr [rsp + 4*rdx + 528]
mov edi, 1
lea edx, [r8 + 2*r8]
lea r8d, [r8 + 4*rdx]
add r8d, 7
.LBB0_3:
test edi, edi
je .LBB0_11
mov dword ptr [rsp + 4*rcx + 272], r8d
cmp rsi, 64
jne .LBB0_6
xor r8d, r8d
mov edx, 64
test r8d, r8d
jne .LBB0_8
jmp .LBB0_11
.LBB0_6:
lea rdx, [rsi + 1]
mov qword ptr [rsp + 784], rdx
mov edi, dword ptr [rsp + 4*rsi + 528]
mov r8d, 1
lea esi, [rdi + 2*rdi]
lea edi, [rdi + 4*rsi]
add edi, 7
test r8d, r8d
je .LBB0_11
.LBB0_8:
mov dword ptr [rsp + 4*rcx + 276], edi
add rcx, 2
cmp rcx, 64
jne .LBB0_1
```
whereas with this PR it's unrolled and vectorized
```nasm
vpmulld ymm1, ymm0, ymmword ptr [rsp + 64]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 328], ymm1
vpmulld ymm1, ymm0, ymmword ptr [rsp + 96]
vpaddd ymm1, ymm1, ymm2
vmovdqu ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
Now that the compiler accepts the `-Z instrument-xray` option only when
targeting one of the supported targets, make sure to not run the
codegen tests where the compiler will fail.
Like with other compiletests, we don't have access to internals,
so simply hardcode a list of supported architectures here.
Let's add at least some tests to verify that this option is accepted
and produces expected LLVM attributes. More tests can be added later
with attribute support.
The code that consumes PointerKind (`adjust_for_rust_scalar` in rustc_ty_utils)
ended up using PointerKind variants to talk about Rust reference types (& and
&mut) anyway, making the old code structure quite confusing: one always had to
keep in mind which PointerKind corresponds to which type. So this changes
PointerKind to directly reflect the type.
This does not change behavior.
Don't merge vtables when full debuginfo is enabled.
This PR makes the compiler not emit the `unnamed_addr` attribute for vtables when full debuginfo is enabled, so that they don't get merged even if they have the same contents. This allows debuggers to more reliably map from a dyn pointer to the self-type of a trait object by looking at the vtable's debuginfo.
The PR only changes the behavior of the LLVM backend as other backends don't emit vtable debuginfo (as far as I can tell).
The performance impact of this change should be small as [measured](https://github.com/rust-lang/rust/pull/103514#issuecomment-1290833854) in a previous PR.
...and remove it from `PointeeInfo`, which isn't meant for this.
There are still various places (marked with FIXMEs) that assume all pointers
have the same size and alignment. Fixing this requires parsing non-default
address spaces in the data layout string, which will be done in a followup.
Implement `alloc::vec::IsZero` for `Option<$NUM>` types
Fixes #106911
Mirrors the `NonZero$NUM` implementations with an additional `assert_zero_valid`.
`None::<i32>` doesn't strictly satisfy `IsZero`, but for the purpose of allocating we can produce more efficient codegen.
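For intuition, a hedged sketch of the codegen this unlocks:
```rust
fn main() {
    // The all-zero bit pattern is a valid `None::<i32>` (zero tag, payload
    // bytes irrelevant), so this can become a single zeroed allocation
    // instead of a fill loop.
    let v: Vec<Option<i32>> = vec![None; 1024];
    assert!(v.iter().all(Option::is_none));
}
```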
Previously, it was only put on scalars with range validity invariants
like `bool`, where uninit was obviously invalid.
Since then, we have normatively declared all uninit primitives to be
undefined behavior and can therefore put `noundef` on them.
The remaining concern was the `mem::uninitialized` function, which
caused quite a lot of UB in the older parts of the ecosystem. This
function now doesn't return uninit values anymore, making users of it
safe from this change.
The only real sources of UB where people could encounter uninit
primitives are `MaybeUninit::uninit().assume_init()`, which has always
been clear in the docs about being UB, and heap allocations (like
reading from the spare capacity of a vec). This is hopefully rare
enough to not break anything.
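The long-documented UB case referred to above looks like this (hedged sketch; do not write this):
```rust
use std::mem::MaybeUninit;

fn main() {
    // Always UB for primitives; the `noundef` attribute now reflects that.
    let _x: u32 = unsafe { MaybeUninit::uninit().assume_init() };
}
```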