This code for recalculating `mcdc_bitmap_bytes` doesn't provide any benefit,
because its result won't have changed from the value in `FunctionCoverageInfo`
that was computed during the MIR instrumentation pass.
Rollup of 5 pull requests
Successful merges:
- #124615 (coverage: Further simplify extraction of mapping info from MIR)
- #124778 (Fix parse error message for meta items)
- #124797 (Refactor float `Primitive`s to a separate `Float` type)
- #124888 (Migrate `run-make/rustdoc-output-path` to rmake)
- #124957 (Make `Ty::builtin_deref` just return a `Ty`)
r? `@ghost`
`@rustbot` modify labels: rollup
Refactor float `Primitive`s to a separate `Float` type
Now that there are 4 of them, it makes sense to refactor `F16`, `F32`, `F64` and `F128` out of `Primitive` and into a separate `Float` type (like integers already are). This allows patterns like `F16 | F32 | F64 | F128` to be simplified into `Float(_)`, and is consistent with `ty::FloatTy`.
As a side effect, this PR also makes the `Ty::primitive_size` method work with `f16` and `f128`.
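A rough sketch of the resulting shape (illustrative only; the variants and field types are simplified stand-ins for the real rustc definitions):
```rust
// Floats get their own type, mirroring how integers already have one.
enum Float {
    F16,
    F32,
    F64,
    F128,
}

enum Primitive {
    Int(u8, bool), // stand-in for rustc's `Int(Integer, bool)`
    Float(Float),
    Pointer,
}

fn is_float(p: &Primitive) -> bool {
    // `F16 | F32 | F64 | F128` collapses into a single `Float(_)` arm.
    matches!(p, Primitive::Float(_))
}
```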
Tracking issue: #116909
`@rustbot` label +F-f16_and_f128
codegen: memmove/memset cannot be non-temporal
non-temporal memset is not a thing.
And for memmove, since the LLVM backend doesn't support this, surely we don't need it in the GCC backend.
LLVM has updated data layouts to specify `Fn32` on 64-bit ARM to avoid
C++ accidentally underaligning functions when trying to comply with
member function ABIs.
This should only affect Rust in cases where we had a similar bug (I
don't believe we have one), but our data layout must match to generate
code.
As a compatibility adaptation, if LLVM is not yet at version 19, `Fn32`
is stripped from the data layout.
See llvm/llvm-project#90415
coverage: Clean up creation of MC/DC condition bitmaps
This PR improves the code for creating and initializing [MC/DC](https://en.wikipedia.org/wiki/Modified_condition/decision_coverage) condition bitmap variables, as introduced by #123409 and modified by #124255.
- The condition bitmap variables are now created eagerly at the start of per-function codegen, via a new `init_coverage` method in `CoverageInfoBuilderMethods`. This avoids having to retroactively create the bitmaps while doing codegen for an individual coverage statement.
- As a result, we can now create and initialize those bitmaps using existing safe APIs, instead of having to perform our own unsafe call to `llvm::LLVMBuildAlloca`.
- This PR also tweaks the way we count the number of condition bitmaps needed, by tracking the total number of bitmaps needed (max depth + 1), instead of only tracking the maximum depth. This reduces the potential for subtle off-by-one confusion.
Stabilize the size of incr comp object file names
The current implementation does not produce stable-length paths, and we create the paths in a way that makes our allocation behavior nondeterministic. I think `@eddyb` fixed a number of other cases like this in the past, and this PR fixes another one. Whether that actually matters I have no idea, but we still have bimodal behavior in rustc-perf and the non-uniformity in `find` and `ls` was bothering me.
I've also removed the truncation of the mangled CGU names. Before this PR incr comp paths look like this:
```
target/debug/incremental/scratch-38izrrq90cex7/s-gux6gz0ow8-1ph76gg-ewe1xj434l26w9up5bedsojpd/261xgo1oqnd90ry5.o
```
And after, they look like this:
```
target/debug/incremental/scratch-035omutqbfkbw/s-gux6borni0-16r3v1j-6n64tmwqzchtgqzwwim5amuga/55v2re42sztc8je9bva6g8ft3.o
```
On the one hand, I'm sure this will break some people's builds because they're on Windows and only a few bytes from the path length limit. But if we're that seriously worried about the length of our file names, I have some other ideas on how to make them smaller. And last time I deleted some hash truncations from the compiler, there was a huge drop in the number of incremental compilation ICEs that were reported: https://github.com/rust-lang/rust/pull/110367
---
Upon further reading, this PR actually fixes a bug. This comment says the CGU names are supposed to be a fixed-length hash, and before this PR they aren't: ca7d34efa9/compiler/rustc_monomorphize/src/partitioning.rs (L445-L448)
Because this now always takes place at the start of the function, we can just
use the normal `alloca` method and then initialize each bitmap immediately.
This patch also moves bitmap setup out of the `mcdc_parameters` method, because
there is no longer any particular reason for it to be there.
Remove many `#[macro_use] extern crate foo` items
This requires the addition of more `use` items, which often make the code more verbose. But they also make the code easier to read, because `#[macro_use]` obscures where macros are defined.
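A minimal sketch of the before/after pattern, with a hypothetical crate-local macro standing in for the real crates touched by this PR:
```rust
mod my_macros {
    macro_rules! debug_msg {
        ($msg:expr) => {
            println!("[debug] {}", $msg)
        };
    }
    // Re-export so the macro can be imported like any other item.
    pub(crate) use debug_msg;
}

// Before: `#[macro_use] extern crate my_macros;` made `debug_msg!` available
// everywhere, with nothing at the use site saying where it came from.
// After: an explicit import documents the macro's origin.
use crate::my_macros::debug_msg;

fn main() {
    debug_msg!("explicit imports make macro origins easy to find");
}
```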
r? `@fee1-dead`
MCDC coverage: support nested decision coverage
#123409 provided the initial MCDC coverage implementation.
As referenced in #124144, it does not currently support "nested" decisions, like the following example:
```rust
fn nested_if_in_condition(a: bool, b: bool, c: bool) {
if a && if b || c { true } else { false } {
say("yes");
} else {
say("no");
}
}
```
Note that there is an if-expression (`if b || c ...`) embedded inside a boolean expression in the decision of an outer if-expression.
This PR proposes a workaround for these cases by introducing a decision context stack, and by handling several `temporary condition bitmaps` instead of just one.
When instrumenting boolean expressions, if the current node is a leaf condition (i.e. not a `||`/`&&` logical operator nor a `!` not operator), we insert a new decision context, such that if there are more boolean expressions inside the condition, they are handled as separate expressions.
On the codegen LLVM side, we allocate as many `temp_cond_bitmap`s as necessary to handle the maximum encountered decision depth.
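A hedged sketch of the decision-context-stack idea; the names and fields below are illustrative, not the actual rustc data structures:
```rust
struct DecisionCtx {
    // Index of the temporary condition bitmap this decision writes to.
    depth: u16,
}

#[derive(Default)]
struct MCDCState {
    // One entry per decision currently being instrumented; a decision nested
    // inside a condition pushes a new entry.
    decision_stack: Vec<DecisionCtx>,
    // Deepest nesting level observed so far, if any decision was seen.
    max_depth_seen: Option<u16>,
}

impl MCDCState {
    // Entering a decision (e.g. the `b || c` nested inside `a && if b || c { .. }`)
    // pushes a context whose depth selects a temporary condition bitmap.
    fn push_decision(&mut self) -> u16 {
        let depth = self.decision_stack.len() as u16;
        self.decision_stack.push(DecisionCtx { depth });
        self.max_depth_seen = Some(self.max_depth_seen.map_or(depth, |d| d.max(depth)));
        depth
    }

    fn pop_decision(&mut self) {
        self.decision_stack.pop();
    }

    // Codegen allocates one temporary bitmap per nesting level: max depth + 1.
    fn condition_bitmaps_needed(&self) -> u16 {
        self.max_depth_seen.map_or(0, |d| d + 1)
    }
}
```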
Add decision_depth field to TVBitmapUpdate/CondBitmapUpdate statements
Add decision_depth field to BcbMappingKinds MCDCBranch and MCDCDecision
Add decision_depth field to MCDCBranchSpan and MCDCDecisionSpan
Set writable and dead_on_unwind attributes for sret arguments
Set the `writable` and `dead_on_unwind` attributes for `sret` arguments. This allows call slot optimization to remove more memcpy's.
See https://llvm.org/docs/LangRef.html#parameter-attributes for the specification of these attributes. In short, the statement we're making here is that:
* The return slot is writable.
* The return slot will not be read if the function unwinds.
Fixes https://github.com/rust-lang/rust/issues/90595.
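A hedged illustration (not code from this PR) of the kind of call that benefits: `make_big` returns through a hidden `sret` pointer, and with `writable` and `dead_on_unwind` on that pointer, LLVM's call slot optimization may let it write straight into `dest.payload` instead of filling a temporary and then copying it over.
```rust
pub struct Big {
    bytes: [u8; 1024],
}

pub struct Holder {
    pub payload: Big,
}

#[inline(never)]
pub fn make_big() -> Big {
    Big { bytes: [0u8; 1024] }
}

pub fn refill(dest: &mut Holder) {
    // The returned value is written via an `sret` argument; the new attributes
    // tell LLVM that slot is writable and unread on unwind.
    dest.payload = make_big();
}
```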
Stop using LLVM struct types for alloca
The alloca type has no semantic meaning, only the size (and alignment, but we specify it explicitly) matter. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout. It is likely that a future LLVM version will change to an untyped alloca representation.
Split out from #121577.
r? `@ghost`
Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics)
This implements https://github.com/rust-lang/compiler-team/issues/693 minus what was implemented in #123226.
Note: I decided to _not_ change `shl`/... builder methods, as it just doesn't seem worth it.
r? ``@scottmcm``
[cleanup] [llvm backend] Prevent creating the same `Instance::mono` multiple times
Just a little thing I came across while going through the code.
r? ```@oli-obk```
Add support for Arm64EC to the Standard Library
Adds the final pieces so that the standard library can be built for arm64ec-pc-windows-msvc (initially added in #119199)
* Bumps `windows-sys` to 0.56.0, which adds support for Arm64EC.
* Correctly set the `isEC` parameter for LLVM's `writeArchive` function.
* Add `#![feature(asm_experimental_arch)]` to library crates where Arm64EC inline assembly is used, as it is currently unstable.
This makes sure that &[] is just as efficient as indirecting through
unsafe code (from_raw_parts). No new stable guarantee is intended about
whether or not we do this, this is just an optimization.
Co-authored-by: Ralf Jung <post@ralfj.de>
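A hedged illustration of the two spellings being compared above (not code from the commit):
```rust
use std::ptr::NonNull;
use std::slice;

// The straightforward literal...
pub fn empty_literal() -> &'static [u8] {
    &[]
}

// ...should now codegen just as well as the manual dangling-pointer version.
pub fn empty_via_raw_parts() -> &'static [u8] {
    // SAFETY: any non-null, well-aligned pointer is valid for a zero-length slice.
    unsafe { slice::from_raw_parts(NonNull::<u8>::dangling().as_ptr(), 0) }
}
```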
Add the missing inttoptr when we ptrtoint in ptr atomics
Ralf noticed this here: https://github.com/rust-lang/rust/pull/122220#discussion_r1535172094
Our previous codegen forgot to add the cast back to integer type. The code compiles anyway, because of course all locals are in-memory to start with, so previous codegen would do the integer atomic, store the integer to a local, then load a pointer from that local. Which is definitely _not_ what we wanted: That's an integer-to-pointer transmute, so all pointers returned by these `AtomicPtr` methods didn't have provenance. Yikes.
Here's the IR for `AtomicPtr::fetch_byte_add` on 1.76: https://godbolt.org/z/8qTEjeraY
```llvm
define noundef ptr @atomicptr_fetch_byte_add(ptr noundef nonnull align 8 %a, i64 noundef %v) unnamed_addr #0 !dbg !7 {
start:
  %0 = alloca ptr, align 8, !dbg !12
  %val = inttoptr i64 %v to ptr, !dbg !12
  call void @llvm.lifetime.start.p0(i64 8, ptr %0), !dbg !28
  %1 = ptrtoint ptr %val to i64, !dbg !28
  %2 = atomicrmw add ptr %a, i64 %1 monotonic, align 8, !dbg !28
  store i64 %2, ptr %0, align 8, !dbg !28
  %self = load ptr, ptr %0, align 8, !dbg !28
  call void @llvm.lifetime.end.p0(i64 8, ptr %0), !dbg !28
  ret ptr %self, !dbg !33
}
```
r? `@RalfJung`
cc `@nikic`
Make `PlaceRef` and `OperandValue::Ref` share a common `PlaceValue` type
Both `PlaceRef` and `OperandValue::Ref` need the triple of the backend pointer immediate, the optional backend metadata for DSTs, and the actual alignment of the place (since it can differ from the ABI alignment).
This PR introduces a new `PlaceValue` type for those three values, leaving [`PlaceRef`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/place/struct.PlaceRef.html) with the `TyAndLayout` and a `PlaceValue`, just like how [`OperandRef`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/operand/struct.OperandRef.html) is a `TyAndLayout` and an `OperandValue`.
This means that various places that use `Ref`s as places can just pass the `PlaceValue` along, like in the below excerpt from the diff:
```diff
match operand.val {
- OperandValue::Ref(ptr, meta, align) => {
- debug_assert_eq!(meta, None);
+ OperandValue::Ref(source_place_val) => {
+ debug_assert_eq!(source_place_val.llextra, None);
debug_assert!(matches!(operand_kind, OperandValueKind::Ref));
- let fake_place = PlaceRef::new_sized_aligned(ptr, cast, align);
+ let fake_place = PlaceRef { val: source_place_val, layout: cast };
Some(bx.load_operand(fake_place).val)
}
```
There's more refactoring that I'd like to do after this, but I wanted to stop the PR here where it's hopefully easy (albeit probably not quick) to review since I tried to keep every change line-by-line clear. (Most are just adding `.val` to get to a field.)
You can also go commit-at-a-time if you'd like. Each passed tidy and the codegen tests on my machine (though I didn't run the cg_gcc ones).
I added this back in 111999, but I no longer think it's a good idea
- It had to get scaled back to only power-of-two things to not break a bunch of targets
- LLVM seems to be getting better at memcpy removal anyway
- Introducing vector instructions has seemed to sometimes (115515) make autovectorization worse
So this removes it from the codegen crates entirely, and instead just tries to use <https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/builder/trait.BuilderMethods.html#method.typed_place_copy> instead of direct `memcpy` so things will still use load/store for immediates.
sanitizers: Create the rustc_sanitizers crate
Create the `rustc_sanitizers` crate and move the source code for the CFI and KCFI sanitizers to it. The tracking issue for reviewing and moving sanitizers into a compiler crate is #123619. This is part of our work to organize and stabilize support for the sanitizers. (See our roadmap at https://hackmd.io/@rcvalle/S1Ou9K6H6.)
Create the rustc_sanitizers crate and move the source code for the CFI
and KCFI sanitizers to it.
Co-authored-by: David Wood <agile.lion3441@fuligin.ink>
The actual ABI implication here is that in some cases the values
are required to be "consecutive", i.e. must either all be passed
in registers or all on stack (without padding).
Adjust the code to either use Uniform::new() or Uniform::consecutive()
depending on which behavior is needed.
Then, when lowering this in LLVM, skip the [1 x i128] to i128
simplification if is_consecutive is set. i128 is the only case
I'm aware of where this is problematic right now. If we find
other cases, we can extend this (either based on target information
or possibly just by not simplifying for is_consecutive entirely).
When passing a 16 (or higher) aligned struct by value on ppc64le,
it needs to be passed as an array of `i128` rather than an array
of `i64`. This will force the use of an even starting register.
For the case of a 16 byte struct with alignment 16 it is important
that `[1 x i128]` is used instead of `i128` -- apparently, the
latter will get treated similarly to `[2 x i64]`, not exhibiting
the correct ABI. Add a `force_array` flag to `Uniform` to support
this.
The relevant clang code can be found here:
fe2119a7b0/clang/lib/CodeGen/Targets/PPC.cpp (L878-L884)
fe2119a7b0/clang/lib/CodeGen/Targets/PPC.cpp (L780-L784)
I think the corresponding psABI wording is this:
> Fixed size aggregates and unions passed by value are mapped to as
> many doublewords of the parameter save area as the value uses in
> memory. Aggregates and unions are aligned according to their
> alignment requirements. This may result in doublewords being
> skipped for alignment.
In particular the last sentence.
Fixes https://github.com/rust-lang/rust/issues/122767.
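A hedged illustration (not a test from this PR) of the kind of argument affected: on powerpc64le, passing this 16-byte-aligned struct by value forces the use of an even starting register, which the `[1 x i128]` lowering with `is_consecutive` expresses.
```rust
#[repr(C, align(16))]
pub struct Aligned16 {
    pub lo: u64,
    pub hi: u64,
}

#[no_mangle]
pub extern "C" fn sum_fields(x: u32, v: Aligned16) -> u64 {
    // The leading `u32` argument makes the placement of `v` in the parameter
    // save area observable, so a wrong lowering changes the ABI.
    v.lo + v.hi + x as u64
}
```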
CFI: Restore typeid_for_instance default behavior
Restore typeid_for_instance default behavior of performing self type erasure, since it's the most common case and what it does most of the time. Using concrete self (or not performing self type erasure) is for assigning a secondary type id, and secondary type ids are only assigned when they're unique and to methods, and also are only tested for when methods are used as function pointers.
Add aarch64-apple-visionos and aarch64-apple-visionos-sim tier 3 targets
Introduces `aarch64-apple-visionos` and `aarch64-apple-visionos-sim` as tier 3 targets. This allows native development for the Apple Vision Pro's visionOS platform.
This work has been tracked in https://github.com/rust-lang/compiler-team/issues/642. There is a corresponding `libc` change https://github.com/rust-lang/libc/pull/3568 that is not required for merge.
Ideally we would be able to incorporate [this change](https://github.com/gimli-rs/object/pull/626) to the `object` crate, but the author has stated that a release will not be cut for quite a while. Therefore, the two locations that would reference the xrOS constant from `object` are hardcoded to their MachO values of 11 and 12, accompanied by TODOs to mark the code as needing change. I am open to suggestions on what to do here to get this checked in.
# Tier 3 Target Policy
At this tier, the Rust project provides no official support for a target, so we place minimal requirements on the introduction of targets.
> A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)
See [src/doc/rustc/src/platform-support/apple-visionos.md](e88379034a/src/doc/rustc/src/platform-support/apple-visionos.md)
> Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
> * Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
> * If possible, use only letters, numbers, dashes and underscores for the name. Periods (.) are known to cause issues in Cargo.
This naming scheme matches `$ARCH-$VENDOR-$OS-$ABI`, which matches the iOS Apple Silicon simulator (`aarch64-apple-ios-sim`) and other Apple targets.
> Tier 3 targets may have unusual requirements to build or use, but must not
create legal issues or impose onerous legal terms for the Rust project or for
Rust developers or users.
> - The target must not introduce license incompatibilities.
> - Anything added to the Rust repository must be under the standard Rust license (`MIT OR Apache-2.0`).
> - The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the `tidy` tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
> - Compiling, linking, and emitting functional binaries, libraries, or other code for the target (whether hosted on the target itself or cross-compiling from another target) must not depend on proprietary (non-FOSS) libraries. Host tools built for the target itself may depend on the ordinary runtime libraries supplied by the platform and commonly used by other applications built for the target, but those libraries must not be required for code generation for the target; cross-compilation to the target must not require such libraries at all. For instance, `rustc` built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
> - "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are *not* limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.
This contribution is fully available under the standard Rust license with no additional legal restrictions whatsoever. This PR does not introduce any new dependency less permissive than the Rust license policy.
The new targets do not depend on proprietary libraries.
> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.
This new target mirrors the standard library for watchOS and iOS, with minor divergences.
> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running binaries, or running tests (even if they do not pass), the documentation must explain how to run such binaries or tests for the target, using emulation if possible or dedicated hardware if necessary.
Documentation is provided in [src/doc/rustc/src/platform-support/apple-visionos.md](e88379034a/src/doc/rustc/src/platform-support/apple-visionos.md)
> Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
> * This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.
> Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via @) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
> * Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.
> Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
> * In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.
I acknowledge these requirements and intend to ensure that they are met.
This target does not touch any existing tier 2 or tier 1 targets and should not break any other targets.
Restore typeid_for_instance default behavior of performing self type
erasure, since it's the most common case and what it does most of the
time. Using concrete self (or not performing self type erasure) is for
assigning a secondary type id, and secondary type ids are only assigned
when they're unique and to methods, and also are only tested for when
methods are used as function pointers.
Rollup of 9 pull requests
Successful merges:
- #121546 (Error out of layout calculation if a non-last struct field is unsized)
- #122448 (Port hir-tree run-make test to ui test)
- #123212 (CFI: Change type transformation to use TypeFolder)
- #123218 (Add test for getting parent HIR for synthetic HIR node)
- #123324 (match lowering: make false edges more precise)
- #123389 (Avoid panicking unnecessarily on startup)
- #123397 (Fix diagnostic for qualifier in extern block)
- #123431 (Stabilize `proc_macro_byte_character` and `proc_macro_c_str_literals`)
- #123439 (coverage: Remove useless constants)
r? `@ghost`
`@rustbot` modify labels: rollup
coverage: Remove useless constants
After #122972 and #123419, these constants don't serve any useful purpose, so get rid of them.
`@rustbot` label +A-code-coverage
coverage: Correctly report and check LLVM's coverage mapping version
I was puzzled by the fact that the LLVM 18 update (#120055) didn't need to modify this version check, despite the fact that LLVM 18 uses a newer version of the coverage mapping format.
This turned out to be because we were inappropriately hard-coding a specific version (`Version6`) in the C++ wrapper, instead of using `CovMapVersion::CurrentVersion` to reflect the version actually used by LLVM on our behalf.
This PR fixes that, and also changes the Rust-side version check to accept the new coverage mapping version used by LLVM 18, since the necessary compatibility work has already been done.
---
### Quick history of `LLVMRustCoverageMappingVersion`:
- Originally it returned LLVM's `coverage::CovMapVersion::CurrentVersion`, as intended. The Rust-side code would verify it, and also embed it as the actual coverage version number in the output binary.
- At some point it was changed to a hard-coded value, to work around a (now-irrelevant) compatibility issue. This was incorrect (but mostly benign), because the override should have been performed on the Rust side instead, after verifying LLVM's value.
- Later contributors dutifully updated the hard-coded value, because they didn't have enough context to identify the problem.
- With this PR, it once again returns LLVM's current coverage version number, and the Rust-side code checks it against an expected range. We don't override the result, but we do indicate where that override should occur if it ever becomes necessary.
CFI: Support function pointers for trait methods
Adds support for both CFI and KCFI for function pointers to trait methods by attaching both concrete and abstract types to functions.
KCFI does this through generation of a `ReifyShim` on any function pointer for a method that could go into a vtable, and keeping this separate from `ReifyShim`s that are *intended* for vtable use by setting a `ReifyReason` on them.
CFI does this by setting both the concrete and abstract type on every instance.
This should land after #123024 or a similar PR, as it diverges the implementation of CFI vs KCFI.
r? `@compiler-errors`
Rename `expose_addr` to `expose_provenance`
`expose_addr` is a bad name, an address is just a number and cannot be exposed. The operation is actually about the provenance of the pointer.
This PR thus changes the name of the method to `expose_provenance` without changing its return type. There is sufficient precedence for returning a useful value from an operation that does something else without the name indicating such, e.g. [`Option::insert`](https://doc.rust-lang.org/nightly/std/option/enum.Option.html#method.insert) and [`MaybeUninit::write`](https://doc.rust-lang.org/nightly/std/mem/union.MaybeUninit.html#method.write).
Returning the address is merely convenient, not a fundamental part of the operation. This is implied by the fact that integers do not have provenance since
```rust
let addr = ptr.addr();
ptr.expose_provenance();
let new = ptr::with_exposed_provenance(addr);
```
must behave exactly like
```rust
let addr = ptr.expose_provenance();
let new = ptr::with_exposed_provenance(addr);
```
as the result of `ptr.expose_provenance()` and `ptr.addr()` is the same integer. Therefore, this PR removes the `#[must_use]` annotation on the function and updates the documentation to reflect the important part.
~~An alternative name would be `expose_provenance`. I'm not at all opposed to that, but it makes a stronger implication than we might want that the provenance of the pointer returned by `ptr::with_exposed_provenance`[^1] is the same as that what was exposed, which is not yet specified as such IIUC. IMHO `expose` does not make that connection.~~
A previous version of this PR suggested `expose` as name, libs-api [decided on](https://github.com/rust-lang/rust/pull/122964#issuecomment-2033194319) `expose_provenance` to keep the symmetry with `with_exposed_provenance`.
CC `@RalfJung`
r? libs-api
[^1]: I'm using the new name for `from_exposed_addr` suggested by #122935 here.
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance
As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).
The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)
The new name nicely matches `ptr::without_provenance`.
Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for where we got to recently that made it ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches. Those differences are irrelevant at the rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
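A hedged sketch of the sort of 3-field struct involved (illustrative; the actual test type may differ):
```rust
use std::cmp::Ordering;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
pub struct MultiField(pub i16, pub u32, pub i64);

pub fn demo(a: &MultiField, b: &MultiField) -> Ordering {
    // With `BinOp::Cmp`, each field comparison in the derived impl lowers to a
    // single MIR statement instead of an inlined branch cascade.
    a.cmp(b)
}
```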
---
r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
Use the `Align` type when parsing alignment attributes
Use the `Align` type in `rustc_attr::parse_alignment`, removing the need to call `Align::from_bytes(...).unwrap()` later in the compilation process.
Simplify trim-paths feature by merging all debuginfo options together
This PR simplifies the trim-paths feature by merging all debuginfo options together, as described in https://github.com/rust-lang/rust/issues/111540#issuecomment-1994010274.
And also do some correctness fixes found during the review.
cc `@weihanglo`
r? `@michaelwoerister`
CFI: Fix methods as function pointer cast
Fix casting between methods and function pointers by assigning a secondary type id to methods with their concrete self so they can be used as function pointers.
This was split off from #116404.
cc `@compiler-errors` `@workingjubilee`
Fix casting between methods and function pointers by assigning a
secondary type id to methods with their concrete self so they can be
used as function pointers.
Make `TyCtxt::coroutine_layout` take coroutine's kind parameter
For coroutines that come from coroutine-closures (i.e. async closures), we may have two kinds of bodies stored in the coroutine; one that takes the closure's captures by reference, and one that takes the captures by move.
These currently have identical layouts, but if we do any optimization for these layouts that are related to the upvars, then they will diverge -- e.g. https://github.com/rust-lang/rust/pull/120168#discussion_r1536943728.
This PR relaxes the assertion I added in #121122, and instead makes the `TyCtxt::coroutine_layout` method take the `coroutine_kind_ty` argument from the coroutine, which will allow us to differentiate these by-move and by-ref bodies.
coverage: Re-enable `UnreachablePropagation` for coverage builds
This is a sequence of 3 related changes:
- Clean up the existing code that scans for unused functions
- Detect functions that were instrumented for coverage, but have had all their coverage statements removed by later MIR transforms (e.g. `UnreachablePropagation`)
- Re-enable `UnreachablePropagation` in coverage builds
Because we now detect functions that have lost their coverage statements, and treat them as unused, we don't need to worry about `UnreachablePropagation` removing all of those statements. This is demonstrated by `tests/coverage/unreachable.rs`.
Fixes #116171.
If a function was instrumented for coverage, but all of its coverage statements
have been removed by later MIR transforms, it should be treated as "unused"
even if the compiler generates an unreachable stub for it.
Unbox and unwrap the contents of `StatementKind::Coverage`
The payload of coverage statements was historically a structure with several fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace `Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid accidentally bloating it in the future.
``@rustbot`` label +A-code-coverage
We already use `Instance` at declaration sites when available to glean
additional information about possible abstractions of the type in use.
This does the same when possible at callsites as well.
The primary purpose of this change is to allow CFI to alter how it
generates type information for indirect calls through `Virtual`
instances.
The payload of coverage statements was historically a structure with several
fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace
`Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid
accidentally bloating it in the future.
Some of the marker statements used by coverage are added during MIR building
for use by the InstrumentCoverage pass (during analysis), and are not needed
afterwards.
various clippy fixes
We need to keep the order of the given clippy lint rules before passing them.
Since clap doesn't offer any useful interface for this purpose out of the box,
we have to handle it manually.
Additionally, this PR makes `-D` rules work as expected. Previously, lint rules were limited to `-W`. Enabling `-D` made clippy complain about numerous lines in the tree, all of which have been resolved in this PR as well.
Fixes #121481
cc `@matthiaskrgr`
Update the minimum external LLVM to 17
With this change, we'll have stable support for LLVM 17 and 18.
For reference, the previous increase to LLVM 16 was #117947.
LLVM's default bad-alloc handler may throw if exceptions are enabled,
and `operator new` isn't hooked at all by default. Now we register our
own handler that prints a message similar to fatal errors, then aborts.
We also call the function that registers the C++ `std::new_handler`.
coverage: Initial support for branch coverage instrumentation
(This is a review-ready version of the changes that were drafted in #118305.)
This PR adds support for branch coverage instrumentation, gated behind the unstable flag value `-Zcoverage-options=branch`. (Coverage instrumentation must also be enabled with `-Cinstrument-coverage`.)
During THIR-to-MIR lowering (MIR building), if branch coverage is enabled, we collect additional information about branch conditions and their corresponding then/else blocks. We inject special marker statements into those blocks, so that the `InstrumentCoverage` MIR pass can reliably identify them even after the initially-built MIR has been simplified and renumbered.
The rest of the changes are mostly just plumbing needed to gather up the information that was collected during MIR building, and include it in the coverage metadata that we embed in the final binary.
Note that `llvm-cov show` doesn't print branch coverage information in its source views by default; that needs to be explicitly enabled with `--show-branches=count` or similar.
---
The current implementation doesn't have any support for instrumenting `if let` or let-chains. I think it's still useful without that, and adding it would be non-trivial, so I'm happy to leave that for future work.
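A hedged example (not a test from this PR) of code whose branches become visible when built with `-Cinstrument-coverage -Zcoverage-options=branch` and inspected via `llvm-cov show --show-branches=count`:
```rust
fn describe(n: i32) -> &'static str {
    if n > 0 {
        "positive"
    } else {
        "non-positive"
    }
}

fn main() {
    // Only the true arm of `n > 0` is taken here, so branch coverage should
    // report the false arm as never executed.
    println!("{}", describe(5));
}
```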
coverage: Remove or migrate all unstable values of `-Cinstrument-coverage`
(This PR was substantially overhauled from its original version, which migrated all of the existing unstable values intact.)
This PR takes the three nightly-only values that are currently accepted by `-Cinstrument-coverage`, completely removes two of them (`except-unused-functions` and `except-unused-generics`), and migrates the third (`branch`) over to a newly-introduced unstable flag `-Zcoverage-options`.
I have a few motivations for wanting to do this:
- It's unclear whether anyone actually uses the `except-unused-*` values, so this serves as an opportunity to either remove them, or prompt existing users to object to their removal.
- After #117199, the stable values of `-Cinstrument-coverage` treat it as a boolean-valued flag, so having nightly-only extra values feels out-of-place.
- Nightly-only values also require extra ad-hoc code to make sure they aren't accidentally exposed to stable users.
- The new system allows multiple different settings to be toggled independently, which isn't possible in the current single-value system.
- The new system makes it easier to introduce new behaviour behind an unstable toggle, and then gather nightly-user feedback before possibly making it the default behaviour for all users.
- The new system also gives us a convenient place to put relatively-narrow options that won't ever be the default, but that nightly users might still want access to.
- It's likely that we will eventually want to give stable users more fine-grained control over coverage instrumentation. The new flag serves as a prototype of what that stable UI might eventually look like.
The `branch` option is a placeholder that currently does nothing. It will be used by #122322 to opt into branch coverage instrumentation.
---
I see `-Zcoverage-options` as something that will exist more-or-less indefinitely, though individual sub-options might come and go as appropriate. I think there will always be some demand for nightly-only toggles, so I don't see `-Zcoverage-options` itself ever being stable, though we might eventually stabilize something similar to it.
Only generate a ptrtoint in AtomicPtr codegen when absolutely necessary
This special case was added in this PR: https://github.com/rust-lang/rust/pull/77611 in response to this error message:
```
Intrinsic has incorrect argument type!
void ({}*)* @llvm.ppc.cfence.p0sl_s
in function rust_oom
LLVM ERROR: Broken function found, compilation aborted!
[RUSTC-TIMING] std test:false 20.161
error: could not compile `std`
```
But when I tried searching for more information about that intrinsic I found this: https://github.com/llvm/llvm-project/issues/55983 which is a report of someone hitting this same error and a fix was landed in LLVM, 2 years after the above Rust PR.
Ensure nested allocations in statics neither get deduplicated nor duplicated
This PR generates new `DefId`s for nested allocations in static items and feeds all the right queries to make the compiler believe these are regular `static` items. I chose this design, because all other designs are fragile and make the compiler horribly complex for such a niche use case.
At present this wrecks incremental compilation performance *in case nested allocations exist* (because any query creating a `DefId` will be recomputed and never loaded from the cache). This will be resolved later in https://github.com/rust-lang/rust/pull/115613 . All other statics are unaffected by this change and will not have performance regressions (heh, famous last words)
This PR contains various smaller refactorings that can be pulled out into separate PRs. It is best reviewed commit-by-commit. The last commit is where the actual magic happens.
r? `@RalfJung` on the const interner and engine changes
fixes https://github.com/rust-lang/rust/issues/79738
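A hedged illustration (not a test from this PR) of what a "nested allocation" looks like: the `&AtomicU32::new(0)` is an anonymous allocation that exists only as part of the static's value, and this PR gives such allocations their own `DefId`s so the compiler treats them like regular `static` items.
```rust
use std::sync::atomic::{AtomicU32, Ordering};

// The referent of `COUNTER` is a nested allocation.
pub static COUNTER: &AtomicU32 = &AtomicU32::new(0);

fn main() {
    // If the nested allocation were duplicated across users, increments could
    // land in different copies of the counter; if unrelated allocations were
    // deduplicated, distinct statics could unexpectedly alias.
    COUNTER.fetch_add(1, Ordering::Relaxed);
    println!("{}", COUNTER.load(Ordering::Relaxed));
}
```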
Fix 32-bit overflows in LLVM composite constants
Inspired by #121868. Fixes unsoundness created when constructing constant arrays, strings, and structs with 2^32 or more elements on x86_64. This introduces copies of a few LLVM functions that have their signatures updated to use `size_t` in place of `unsigned int`. Alternatively, we could just add overflow checks and disallow huge composite constants. That introduces less code, but maybe a huge static block of memory is useful in embedded/no-OS situations?
Remove the unused `field_remapping` field from `TypeLowering`
The `field_remapping` field of `TypeLowering` has been unused since #121665. This PR removes it, then replaces the `TypeLowering` struct with its only remaining member `&'ll Type`.
std support for wasm32 panic=unwind
Tracking issue: #118168
This adds std support for `-Cpanic=unwind` on wasm, and with it slightly more fleshed out rustc support. Now, the stable default is still panic=abort without exception-handling, but if you `-Zbuild-std` with `RUSTFLAGS=-Cpanic=unwind`, you get wasm exception-handling try/catch blocks in the binary:
```rust
#[no_mangle]
pub fn foo_bar(x: bool) -> *mut u8 {
    let s = Box::<str>::from("hello");
    maybe_panic(x);
    Box::into_raw(s).cast()
}

#[inline(never)]
#[no_mangle]
fn maybe_panic(x: bool) {
    if x {
        panic!("AAAAA");
    }
}
```
```wat
;; snip...
(try $label$5
  (do
    (call $maybe_panic
      (local.get $0)
    )
    (br $label$1)
  )
  (catch_all
    (global.set $__stack_pointer
      (local.get $1)
    )
    (call $__rust_dealloc
      (local.get $2)
      (i32.const 5)
      (i32.const 1)
    )
    (rethrow $label$5)
  )
)
;; snip...
```
Stop using LLVM struct types for byval/sret
For `byval` and `sret`, the type has no semantic meaning, only the size matters\*†. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout.
\*: The alignment would matter, if we didn't explicitly specify it. From what I can tell, we always specified the alignment for `sret`; for `byval`, we didn't until #112157.
†: For `byval`, the hidden copy may be impacted by padding in the LLVM struct type, i.e. padding bytes may not be copied. (I'm not sure if this is done today, but I think it would be legal.) But we manually pad our LLVM struct types specifically to avoid there ever being LLVM-visible padding, so that shouldn't be an issue.
Split out from #121577.
r? `@nikic`
Add asm goto support to `asm!`
Tracking issue: #119364
This PR implements asm-goto support, using the syntax described in "future possibilities" section of [RFC2873](https://rust-lang.github.io/rfcs/2873-inline-asm.html#asm-goto).
Currently I have only implemented the `label` part, not the `fallthrough` part (i.e. fallthrough is implicit). This doesn't reduce the expressiveness though, since you can use label-break to get arbitrary control flow, or simply set a value and rely on the jump threading optimisation to get the desired control flow. I can add that later if deemed necessary.
r? ``@Amanieu``
cc ``@ojeda``
Introduces the `arm64ec-pc-windows-msvc` target for building Arm64EC ("Emulation Compatible") binaries for Windows.
For more information about Arm64EC see <https://learn.microsoft.com/en-us/windows/arm/arm64ec>.
Tier 3 policy:
> A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)
I will be the maintainer for this target.
> Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
Target uses the `arm64ec` architecture to match LLVM and MSVC, and the `-pc-windows-msvc` suffix to indicate that it targets Windows via the MSVC environment.
> Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
Target name exactly specifies the type of code that will be produced.
> If possible, use only letters, numbers, dashes and underscores for the name. Periods (.) are known to cause issues in Cargo.
Done.
> Tier 3 targets may have unusual requirements to build or use, but must not create legal issues or impose onerous legal terms for the Rust project or for Rust developers or users.
> The target must not introduce license incompatibilities.
Uses the same dependencies, requirements and licensing as the other `*-pc-windows-msvc` targets.
> Anything added to the Rust repository must be under the standard Rust license (MIT OR Apache-2.0).
Understood.
> The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the tidy tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
> Compiling, linking, and emitting functional binaries, libraries, or other code for the target (whether hosted on the target itself or cross-compiling from another target) must not depend on proprietary (non-FOSS) libraries. Host tools built for the target itself may depend on the ordinary runtime libraries supplied by the platform and commonly used by other applications built for the target, but those libraries must not be required for code generation for the target; cross-compilation to the target must not require such libraries at all. For instance, rustc built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
> "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are not limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.
Uses the same dependencies, requirements and licensing as the other `*-pc-windows-msvc` targets.
> Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
> This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.
Understood, I am not a member of the Rust team.
> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.
Both `core` and `alloc` are supported.
Support for `std` depends on making changes to the standard library, `stdarch` and `backtrace`, which cannot be done yet as the bootstrapping compiler raises a warning ("unexpected `cfg` condition value") for `target_arch = "arm64ec"`.
> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running binaries, or running tests (even if they do not pass), the documentation must explain how to run such binaries or tests for the target, using emulation if possible or dedicated hardware if necessary.
Documentation is provided in src/doc/rustc/src/platform-support/arm64ec-pc-windows-msvc.md
> Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via @) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
> Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.
> Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
> In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.
Understood.
Rollup of 5 pull requests
Successful merges:
- #120761 (Add initial support for DataFlowSanitizer)
- #121622 (Preserve same vtable pointer when cloning raw waker, to fix Waker::will_wake)
- #121716 (match lowering: Lower bindings in a predictable order)
- #121731 (Now that inlining, mir validation and const eval all use reveal-all, we won't be constraining hidden types here anymore)
- #121841 (`f16` and `f128` step 2: intrinsics)
r? `@ghost`
`@rustbot` modify labels: rollup
`f16` and `f128` step 2: intrinsics
Continuation of https://github.com/rust-lang/rust/pull/121728, another portion of https://github.com/rust-lang/rust/pull/114607.
This PR adds `f16` and `f128` intrinsics, and hooks them up to both HIR and LLVM. This is all still unexposed to the frontend, which will probably be the next step. Also update Itanium mangling per `@rcvalle's` comment in https://github.com/rust-lang/rust/pull/121728/files#r1506570300, and fix a typo from step 1.
Once these types are usable in code, I will add the codegen tests from #114607 (codegen is passing on that branch)
This does add more `unimplemented!`s to Clippy, but I still don't think we can do better until library support is added.
r? `@compiler-errors`
cc `@Nilstrieb`
`@rustbot` label +T-compiler +F-f16_and_f128
Adds initial support for DataFlowSanitizer to the Rust compiler. It
currently supports `-Zsanitizer-dataflow-abilist`. Additional options
for it can be passed to LLVM command line argument processor via LLVM
arguments using `llvm-args` codegen option (e.g.,
`-Cllvm-args=-dfsan-combine-pointer-labels-on-load=false`).
Add stubs in IR and ABI for `f16` and `f128`
This is the very first step toward the changes in https://github.com/rust-lang/rust/pull/114607 and the [`f16` and `f128` RFC](https://rust-lang.github.io/rfcs/3453-f16-and-f128.html). It adds the types to `rustc_type_ir::FloatTy` and `rustc_abi::Primitive`, and just propagates those out as `unimplemented!` stubs where necessary.
These types do not parse yet so there is no feature gate, and it should be okay to use `unimplemented!`.
The next steps will probably be AST support with parsing and the feature gate.
r? `@compiler-errors`
cc `@Nilstrieb` suggested breaking the PR up in https://github.com/rust-lang/rust/pull/120645#issuecomment-1925900572
Remove useless lifetime of ArchiveBuilder
`trait ArchiveBuilder<'a>` has a seemingly useless lifetime `'a`, so I remove it. If this is intentional, please reject this PR.
```rust
pub trait ArchiveBuilder<'a> {
    fn add_file(&mut self, path: &Path);
    fn add_archive(
        &mut self,
        archive: &Path,
        skip: Box<dyn FnMut(&str) -> bool + 'static>,
    ) -> io::Result<()>;
    fn build(self: Box<Self>, output: &Path) -> bool;
}
```
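A hedged sketch of the same trait after the change, with the unused lifetime removed:
```rust
use std::io;
use std::path::Path;

pub trait ArchiveBuilder {
    fn add_file(&mut self, path: &Path);
    fn add_archive(
        &mut self,
        archive: &Path,
        skip: Box<dyn FnMut(&str) -> bool + 'static>,
    ) -> io::Result<()>;
    fn build(self: Box<Self>, output: &Path) -> bool;
}
```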
rename 'try' intrinsic to 'catch_unwind'
The intrinsic has nothing to do with `try` blocks, and corresponds to the stable `catch_unwind` function, so this makes a lot more sense IMO.
Also rename Miri's special function while we are at it, to reflect the level of abstraction it works on: it's an unwinding mechanism, on which Rust implements panics.
Add newtypes for bool fields/params/return types
Fixed all the cases of this found with some simple searches for `*/ bool` and `bool /*`; probably many more remain.
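A hedged illustration of the general pattern; the names below are hypothetical, not the ones introduced by this PR:
```rust
// Before: a call like `emit_object(true, false)` is easy to get backwards.
// After: each flag has its own type, so arguments can no longer be swapped silently.
#[derive(Clone, Copy)]
pub struct Optimize(pub bool);

#[derive(Clone, Copy)]
pub struct EmitDebugInfo(pub bool);

pub fn emit_object(optimize: Optimize, debug_info: EmitDebugInfo) {
    if optimize.0 { /* run optimizations */ }
    if debug_info.0 { /* emit debuginfo */ }
}

fn main() {
    emit_object(Optimize(true), EmitDebugInfo(false));
}
```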
Improve codegen diagnostic handling
Clarify the workings of the temporary `Diagnostic` type used to send diagnostics from codegen threads to the main thread.
r? `@estebank`
First, introduce a typedef `DiagnosticArgMap`.
Second, make the `args` field public, and remove the `args` getter and
`replace_args` setter. These were necessary previously because the getter
had a `#[allow(rustc::potential_query_instability)]` attribute, but that
was removed in #120931 when the args were changed from `FxHashMap` to
`FxIndexMap`. (All the other `Diagnostic` fields are public.)
Add "algebraic" fast-math intrinsics, based on fast-math ops that cannot return poison
Setting all of LLVM's fast-math flags makes our fast-math intrinsics very dangerous, because some inputs are UB. This set of flags permits common algebraic transformations, but according to the [LangRef](https://llvm.org/docs/LangRef.html#fastmath), only the flags `nnan` (no nans) and `ninf` (no infs) can produce poison.
This also uses the algebraic float ops to fix https://github.com/rust-lang/rust/issues/120720
cc `@orlp`
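A minimal sketch of how the new intrinsics can be used (assuming the `fadd_algebraic`/`fmul_algebraic` names and the `core_intrinsics` feature gate; treat the details as illustrative):
```rust
#![feature(core_intrinsics)]
use std::intrinsics::{fadd_algebraic, fmul_algebraic};

// A dot product the optimizer may reassociate and vectorize, without the UB
// that the `*_fast` intrinsics have on NaN or infinite inputs.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter()
        .zip(b)
        .fold(0.0, |acc, (&x, &y)| fadd_algebraic(acc, fmul_algebraic(x, y)))
}

fn main() {
    assert_eq!(dot(&[1.0, 2.0], &[3.0, 4.0]), 11.0);
}
```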
errors: only eagerly translate subdiagnostics
Subdiagnostics don't need to be lazily translated, they can always be eagerly translated. Eager translation is slightly more complex as we need to have a `DiagCtxt` available to perform the translation, which involves slightly more threading of that context.
This slight increase in complexity should enable later simplifications - like passing `DiagCtxt` into `AddToDiagnostic` and moving Fluent messages into the diagnostic structs rather than having them in separate files (working on that was what led to this change).
r? ```@nnethercote```
Implement intrinsics with fallback bodies
fixes #93145 (though we can port many more intrinsics)
cc #63585
The way this works is that the backend logic for generating custom code for intrinsics has been made fallible. The only failure path is "this intrinsic is unknown". The `Instance` (that was `InstanceDef::Intrinsic`) then gets converted to `InstanceDef::Item`, which represents the fallback body. A regular function call to that body is then codegenned. This is currently implemented for
* codegen_ssa (so llvm and gcc)
* codegen_cranelift
other backends will need to adjust, but they can just keep doing what they were doing if they prefer (though adding new intrinsics to the compiler will then require them to implement them, instead of getting the fallback body).
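As a rough sketch of the shape this enables (the attribute spelling below matches how libcore later writes such intrinsics; whether it matches this PR exactly is an assumption):
```rust
#![feature(rustc_attrs)]

// An intrinsic declared with an ordinary Rust body. A backend that knows the
// intrinsic can emit its own special code for it; a backend that doesn't just
// codegens a regular call to the body below.
#[rustc_intrinsic]
pub const fn likely(b: bool) -> bool {
    b // fallback: behaves like the identity function, only the hint is lost
}
```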
cc `@scottmcm` `@WaffleLapkin`
### todo
* [ ] miri support
* [x] default intrinsic name to name of function instead of requiring it to be specified in attribute
* [x] make sure that the bodies are always available (must be collected for metadata)
Subdiagnostics don't need to be lazily translated, they can always be
eagerly translated. Eager translation is slightly more complex as we need
to have a `DiagCtxt` available to perform the translation, which involves
slightly more threading of that context.
This slight increase in complexity should enable later simplifications -
like passing `DiagCtxt` into `AddToDiagnostic` and moving Fluent messages
into the diagnostic structs rather than having them in separate files
(working on that was what led to this change).
Signed-off-by: David Wood <david@davidtw.co>
LLVM 18 requires the evex512 feature to allow use of zmm registers.
LLVM automatically sets it when using a generic CPU, but not when
`-C target-cpu` is specified. This will result either in backend
legalization crashes, or code unexpectedly using ymm instead of
zmm registers.
For now, make sure that `avx512*` features imply `evex512`. Long
term we'll probably have to deal with the AVX10 mess somehow.
Build DebugInfo for async closures
The test is pretty bare, because I don't really know how to write debuginfo tests. I'd like to land this first, and then flesh it out correctly once it's no longer ICEing on master (which breaks people's ability to test using async closures).
r? oli-obk cc `@rust-lang/wg-debugging` (if any of y'all want to help me write a more fleshed out async closures test)
improve normalization of `Pointee::Metadata`
This PR makes it so that `<Wrapper<Tail> as Pointee>::Metadata` is normalized to `<Tail as Pointee>::Metadata` if we don't know `Wrapper<Tail>: Sized`. With that, the trait solver can prove projection predicates like `<Wrapper<Tail> as Pointee>::Metadata == <Tail as Pointee>::Metadata`, which makes it possible to use the metadata APIs to cast between the tail and the wrapper:
```rust
#![feature(ptr_metadata)]
use std::ptr::{self, Pointee};
fn cast_same_meta<T: ?Sized, U: ?Sized>(ptr: *const T) -> *const U
where
    T: Pointee<Metadata = <U as Pointee>::Metadata>,
{
    let (thin, meta) = ptr.to_raw_parts();
    ptr::from_raw_parts(thin, meta)
}

struct Wrapper<T: ?Sized>(T);

fn cast_to_wrapper<T: ?Sized>(ptr: *const T) -> *const Wrapper<T> {
    cast_same_meta(ptr)
}
```
Previously, this failed to compile:
```
error[E0271]: type mismatch resolving `<Wrapper<T> as Pointee>::Metadata == <T as Pointee>::Metadata`
--> src/lib.rs:16:5
|
15 | fn cast_to_wrapper<T: ?Sized>(ptr: *const T) -> *const Wrapper<T> {
| - found this type parameter
16 | cast_same_meta(ptr)
| ^^^^^^^^^^^^^^ expected `Wrapper<T>`, found type parameter `T`
|
= note: expected associated type `<Wrapper<T> as Pointee>::Metadata`
found associated type `<T as Pointee>::Metadata`
= note: an associated type was expected, but a different one was found
```
(Yes, you can already do this with `as` casts. But using functions is so much ✨ *safer* ✨, because you can't change the metadata by accident.)
---
This PR essentially changes the built-in impls of `Pointee` from this:
```rust
// before
impl Pointee for u8 {
    type Metadata = ();
}

impl Pointee for [u8] {
    type Metadata = usize;
}

// ...

impl Pointee for Wrapper<u8> {
    type Metadata = ();
}

impl Pointee for Wrapper<[u8]> {
    type Metadata = usize;
}

// ...

// This impl is only selected if `T` is a type parameter or unnormalizable projection or opaque type.
fallback impl<T: ?Sized> Pointee for Wrapper<T>
where
    Wrapper<T>: Sized
{
    type Metadata = ();
}

// This impl is only selected if `T` is a type parameter or unnormalizable projection or opaque type.
fallback impl<T /*: Sized */> Pointee for T {
    type Metadata = ();
}
```
to this:
```rust
// after
impl Pointee for u8 {
    type Metadata = ();
}

impl Pointee for [u8] {
    type Metadata = usize;
}

// ...

impl<T: ?Sized> Pointee for Wrapper<T> {
    // in the old solver this will instead project to the "deep" tail directly,
    // e.g. `Wrapper<Wrapper<T>>::Metadata = T::Metadata`
    type Metadata = <T as Pointee>::Metadata;
}

// ...

// This impl is only selected if `T` is a type parameter or unnormalizable projection or opaque type.
fallback impl<T /*: Sized */> Pointee for T {
    type Metadata = ();
}
```
Invert diagnostic lints.
That is, change `diagnostic_outside_of_impl` and `untranslatable_diagnostic` from `allow` to `deny`, because more than half of the compiler has been converted to use translated diagnostics.
This commit removes more `deny` attributes than it adds `allow` attributes, which proves that this change is warranted.
r? ````@davidtwco````
That is, change `diagnostic_outside_of_impl` and
`untranslatable_diagnostic` from `allow` to `deny`, because more than
half of the compiler has be converted to use translated diagnostics.
This commit removes more `deny` attributes than it adds `allow`
attributes, which proves that this change is warranted.
Rollup of 8 pull requests
Successful merges:
- #120484 (Avoid ICE when is_val_statically_known is not of a supported type)
- #120516 (pattern_analysis: cleanup manual impls)
- #120517 (never patterns: It is correct to lower `!` to `_`.)
- #120523 (Improve `io::Read::read_buf_exact` error case)
- #120528 (Store SHOULD_CAPTURE as AtomicU8)
- #120529 (Update data layouts in custom target tests for LLVM 18)
- #120531 (Remove a bunch of `has_errors` checks that have no meaningful or the wrong effect)
- #120533 (Correct paths for hexagon-unknown-none-elf platform doc)
r? `@ghost`
`@rustbot` modify labels: rollup
Avoid ICE when is_val_statically_known is not of a supported type
2 ICE with 1 stone!
1. Implement `llvm.is.constant.ptr` to avoid the first ICE in the linked issue.
2. Return `false` when the argument is not one of `i*`/`f*`/`ptr` to avoid the second ICE.
fixes #120480
- `emitted_at` isn't used outside the crate.
- `code` and `messages` are public fields, so there's no point having
  trivial getters/setters for them.
- `suggestions` is public, so the comment about "functionality on
`Diagnostic`" isn't needed.
llvm: change data layout bug to an error and make it trigger more
Fixes#33446.
Don't skip the inconsistent data layout check for custom LLVMs or non-built-in targets.
With #118708, all targets will have a simple test that would trigger this error if LLVM's data layouts do change - so data layouts would be corrected during the LLVM upgrade. Therefore, with builtin targets, this error won't happen with our LLVM because each target will have been confirmed to work. With non-builtin targets, this error is probably useful to have because you can change the data layout in your target and if it is wrong then that could lead to bugs.
When using a custom LLVM, the same justification makes sense for non-builtin targets as with our LLVM, the user can update their target to match their LLVM and that's probably a good thing to do. However, with a custom LLVM, the user cannot change the builtin target data layouts if they don't match - though given that the compiler's data layout is used for layout computation and a bunch of other things - you could get some bugs because of the mismatch and probably want to know about that. I'm not sure if this is something that people do and is okay, but I doubt it?
`CFG_LLVM_ROOT` was also always set during local development with `download-ci-llvm` so this bug would never trigger locally.
In #33446, two points are raised:
- In the issue itself, changing this from a `bug!` to a proper error is what is suggested, by using `isCompatibleDataLayout` from LLVM, but that function still just does the same thing that we do and checks for equality, so I've avoided the additional code necessary to do that FFI call.
- `@Mark-Simulacrum` suggests a different check is necessary to maintain backwards compatibility with old LLVM versions. I don't know how often this comes up, but we can do that with some simple string manipulation + LLVM version checks as happens already for LLVM 17 just above this diff.
Replacement of #114390: Add new intrinsic `is_var_statically_known` and optimize pow for powers of two
This adds a new intrinsic `is_val_statically_known` that lowers to [`@llvm.is.constant.*`](https://llvm.org/docs/LangRef.html#llvm-is-constant-intrinsic). It also applies the intrinsic in the `int_pow` methods to recognize and optimize the idiom `2isize.pow(x)`. See #114390 for more discussion.
While I have extended the scope of the power of two optimization from #114390, I haven't added any new uses for the intrinsic. That can be done in later pull requests.
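A minimal sketch of the idea behind the pow optimization (not the actual libcore code): once the compiler can tell the base is a constant power of two, the whole computation reduces to a shift.
```rust
// base == 2^k implies base.pow(exp) == 2^(k * exp); None signals overflow.
fn pow_if_power_of_two(base: u32, exp: u32) -> Option<u32> {
    if !base.is_power_of_two() {
        return None;
    }
    base.trailing_zeros()
        .checked_mul(exp)
        .and_then(|shift| 1u32.checked_shl(shift))
}

fn main() {
    assert_eq!(pow_if_power_of_two(2, 10), Some(1024));
    assert_eq!(pow_if_power_of_two(2, 40), None); // would overflow u32
}
```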
Note: When testing or using the library, be sure to use `--stage 1` or higher. Otherwise, the intrinsic will be a noop and the doctests will be skipped. If you are trying out edits, you may be interested in [`--keep-stage 0`](https://rustc-dev-guide.rust-lang.org/building/suggested.html#faster-builds-with---keep-stage).
Fixes #47234
Resolves #114390
`@Centri3`
Fix overflow check
Make MIRI choose the path randomly and rename the intrinsic
Add back test
Add miri test and make it operate on `ptr`
Define `llvm.is.constant` for primitives
Update MIRI comment and fix test in stage2
Add const eval test
Clarify that both branches must have the same side effects
guaranteed non guarantee
use immediate type instead
Co-Authored-By: Ralf Jung <post@ralfj.de>
With https://reviews.llvm.org/D86310 LLVM now has i128 aligned to
16-bytes on x86 based platforms. This will be in LLVM-18. This patch
updates all our spec targets to be 16-byte aligned, and removes the
alignment when speaking to older LLVM.
This results in Rust overaligning things relative to LLVM on older LLVMs.
This alignment change was discussed in rust-lang/compiler-team#683
See #54341 for additional information about why this is happening and
where this will be useful in the future.
This *does not* stabilize `i128`/`u128` for FFI.
Don't skip the inconsistent data layout check for custom LLVMs.
With #118708, all targets will have a simple test that would trigger this
check if LLVM's data layouts do change - so data layouts would be
corrected during the LLVM upgrade. Therefore, with builtin targets, this
check won't trigger with our LLVM because each target will have been
confirmed to work. With non-builtin targets, this check is probably
useful to have because you can change the data layout in your target and
if it's wrong then that could lead to bugs.
When using a custom LLVM, the same justification makes sense for
non-builtin targets as with our LLVM, the user can update their target to
match their LLVM and that's probably a good thing to do. However, with
a custom LLVM, the user cannot change the builtin target data layouts if
they don't match - though given that the compiler's data layout is used
for layout computation and a bunch of other things - you could get some
bugs because of the mismatch and probably want to know about that.
`CFG_LLVM_ROOT` was also always set during local development with
`download-ci-llvm` so this bug would never trigger locally.
Signed-off-by: David Wood <david@davidtw.co>
`is_force_warn` is only possible for diagnostics with `Level::Warning`,
but it is currently stored in `Diagnostic::code`, which every diagnostic
has.
This commit:
- removes the boolean `DiagnosticId::Lint::is_force_warn` field;
- adds a `ForceWarning` variant to `Level`.
Benefits:
- The common `Level::Warning` case now has no arguments, replacing
lots of `Warning(None)` occurrences.
- `rustc_session::lint::Level` and `rustc_errors::Level` are more
similar, both having `ForceWarning` and `Warning`.
In #119606 I added them and used a `_mv` suffix, but that wasn't great.
A `with_` prefix has three different existing uses.
- Constructors, e.g. `Vec::with_capacity`.
- Wrappers that provide an environment to execute some code, e.g.
`with_session_globals`.
- Consuming chaining methods, e.g. `Span::with_{lo,hi,ctxt}`.
The third case is exactly what we want, so this commit changes
`DiagnosticBuilder::foo_mv` to `DiagnosticBuilder::with_foo`.
Thanks to @compiler-errors for the suggestion.
Add -Zuse-sync-unwind
Currently Rust uses async unwind by default, but async unwind brings non-negligible size overhead; it would be nice to allow users to choose sync unwind instead.
In addition, async unwind currently prevents LLVM from generating compact unwind for MachO, so anyone who wants compact unwind for MachO also needs this flag.
Separate immediate and in-memory ScalarPair representation
Currently, we assume that ScalarPair is always represented using a two-element struct, both as an immediate value and when stored in memory.
This currently works fairly well, but runs into problems with https://github.com/rust-lang/rust/pull/116672, where a ScalarPair involving an i128 type can no longer be represented as a two-element struct in memory. For example, the tuple `(i32, i128)` needs to be represented in-memory as `{ i32, [3 x i32], i128 }` to satisfy alignment requirements. Using `{ i32, i128 }` instead will result in the second element being stored at the wrong offset (prior to LLVM 18).
Resolve this issue by no longer requiring that the immediate and in-memory type for ScalarPair are the same. The in-memory type will now look the same as for normal struct types (and will include padding filler and similar), while the immediate type stays a simple two-element struct type. This also means that booleans in immediate ScalarPair are now represented as i1 rather than i8, just like we do everywhere else.
The core change here is to llvm_type (which now treats ScalarPair as a normal struct) and immediate_llvm_type (which returns the two-element struct that llvm_type used to produce). The rest is fixing things up to no longer assume these are the same. In particular, this switches places that try to get pointers to the ScalarPair elements to use byte-geps instead of struct-geps.
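To make the padding requirement concrete, here is an illustrative check (using a `#[repr(C)]` struct rather than a tuple, on a target where `i128` is 16-byte aligned, e.g. x86-64 with the newer data layout):
```rust
use std::mem::{align_of, offset_of, size_of};

#[repr(C)]
struct Pair {
    a: i32,
    b: i128,
}

fn main() {
    // With 16-byte-aligned i128 this prints 16 / 16 / 32: `b` cannot live at
    // offset 4, so 12 bytes of padding follow `a`, mirroring the
    // `{ i32, [3 x i32], i128 }` in-memory type described above.
    println!("align_of::<i128>() = {}", align_of::<i128>());
    println!("offset_of!(Pair, b) = {}", offset_of!(Pair, b));
    println!("size_of::<Pair>()   = {}", size_of::<Pair>());
}
```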
Support reg_addr register class in s390x inline assembly
In s390x, `r0` cannot be used as an address register (it is evaluated as zero in an address context).
Therefore, currently, in assembly involving memory accesses, `r0` must be [marked as clobbered](1a1155653a/src/arch/s390x.rs (L58)) or [explicitly used to a non-address](1a1155653a/src/arch/s390x.rs (L135)), or an address register must be used explicitly, to prevent `r0` from being allocated as the address register.
This patch adds a register class for allocating general-purpose registers, except `r0`, to make it easier to use address registers. (powerpc already has a register class (reg_nonzero) for a similar purpose.)
This is identical to the `a` constraint in LLVM and GCC:
https://llvm.org/docs/LangRef.html#supported-constraint-code-list
> a: A 32, 64, or 128-bit integer address register (excludes R0, which in an address context evaluates as zero).
https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html
> a
> Address register (general purpose register except r0)
cc ``@uweigand``
r? ``@Amanieu``
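A minimal sketch of how the new class might be used (s390x-only; the assembly template is illustrative):
```rust
#[cfg(target_arch = "s390x")]
unsafe fn load_word(addr: *const u32) -> u32 {
    use std::arch::asm;
    let value: u32;
    // `reg_addr` guarantees the base register is address-capable, i.e. never
    // `r0`, so the `0(...)` addressing mode behaves as intended.
    asm!(
        "l {value}, 0({addr})",
        value = out(reg) value,
        addr = in(reg_addr) addr,
        options(nostack, readonly),
    );
    value
}
```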
Because it's redundant w.r.t. `Diagnostic::is_lint`, which is present
for every diagnostic level.
`struct_lint_level_impl` was the only place that set the `Error` field
to `true`, and it's also the only place that calls
`Diagnostic::is_lint()` to set the `is_lint` field.
coverage: Avoid a query stability hazard in `function_coverage_map`
When #118865 started enforcing the `rustc::potential_query_instability` lint in `rustc_codegen_llvm`, it added an exemption for this site, arguing that the entries are only used to create a list of filenames that is later sorted.
However, the list of entries also gets traversed when creating the function coverage records in LLVM IR, which may be sensitive to hash-based ordering.
This patch therefore changes `function_coverage_map` to use `FxIndexMap`, which should avoid hash-based instability by iterating in insertion order.
cc ``@Enselic``
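As a small illustration of the ordering guarantee (using the `indexmap` crate directly; rustc's `FxIndexMap` is the same map type with the Fx hasher plugged in):
```rust
use indexmap::IndexMap;

fn main() {
    let mut map: IndexMap<&str, u32> = IndexMap::new();
    map.insert("lib.rs", 2);
    map.insert("main.rs", 1);

    // Iteration follows insertion order, not hash order, so code that walks
    // the map is deterministic across runs.
    let keys: Vec<_> = map.keys().copied().collect();
    assert_eq!(keys, ["lib.rs", "main.rs"]);
}
```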
`Diagnostic` has 40 methods that return `&mut Self` and could be
considered setters. Four of them have a `set_` prefix. This doesn't seem
necessary for a type that implements the builder pattern. This commit
removes the `set_` prefixes on those four methods.
When #118865 started enforcing the `rustc::potential_query_instability` lint in
`rustc_codegen_llvm`, it added an exemption for this site, arguing that the
entries are only used to create a list of filenames that is later sorted.
However, the list of entries also gets traversed when creating the function
coverage records in LLVM IR, which may be sensitive to hash-based ordering.
This patch therefore changes `function_coverage_map` to use `FxIndexMap`, which
should avoid hash-based instability by iterating in insertion order.
This involves lots of breaking changes. There are two big changes that
force us to update our code. The first is that bitflag types no longer
automatically implement the normal derive traits, so we need to derive
them manually.
Additionally, bitflags now have a hidden inner type by default, which
breaks our custom derives. The bitflags docs recommend using the impl
form in these cases, which I did.
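A minimal sketch of that impl form (bitflags 2.x, with illustrative flag names):
```rust
use bitflags::bitflags;

// The struct is defined by hand so we control the derives ourselves...
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct Flags(u8);

bitflags! {
    // ...and the macro only generates the flag constants and bit operations.
    impl Flags: u8 {
        const A = 1 << 0;
        const B = 1 << 1;
    }
}

fn main() {
    let f = Flags::A | Flags::B;
    assert!(f.contains(Flags::A));
}
```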