rustc_codegen_llvm: add support for writing summary bitcode
Typical uses of ThinLTO don't have any use for this as a standalone file, but distributed ThinLTO uses this to make the linker phase more efficient. With clang you'd do something like `clang -flto=thin -fthin-link-bitcode=foo.indexing.o -c foo.c` and then get both foo.o (full of bitcode) and foo.indexing.o (just the summary or index part of the bitcode). That's then usable by a two-stage linking process that's more friendly to distributed build systems like Bazel, which is why I'm working on this area.
I talked some to `@teresajohnson` about naming in this area, as things seem to be a little confused between various blog posts and build systems. "bitcode index" and "bitcode summary" tend to be a little too ambiguous, and she tends to use "thin link bitcode" and "minimized bitcode" (which matches the descriptions in LLVM). Since the clang option is thin-link-bitcode, I went with that to try and not add a new spelling in the world.
Per `@dtolnay`, you can work around the lack of this by using `lld --thinlto-index-only` to do the indexing on regular .o files of bitcode, but that is a bit wasteful on actions when we already have all the information in rustc and could just write out the matching minimized bitcode. I didn't test that at all in our infrastructure, because by the time I learned that I already had this patch largely written.
Rollup of 5 pull requests
Successful merges:
- #124615 (coverage: Further simplify extraction of mapping info from MIR)
- #124778 (Fix parse error message for meta items)
- #124797 (Refactor float `Primitive`s to a separate `Float` type)
- #124888 (Migrate `run-make/rustdoc-output-path` to rmake)
- #124957 (Make `Ty::builtin_deref` just return a `Ty`)
r? `@ghost`
`@rustbot` modify labels: rollup
Refactor float `Primitive`s to a separate `Float` type
Now that there are four of them, it makes sense to refactor `F16`, `F32`, `F64` and `F128` out of `Primitive` and into a separate `Float` type (as integers already are). This allows patterns like `F16 | F32 | F64 | F128` to be simplified into `Float(_)`, and is consistent with `ty::FloatTy`.
As a side effect, this PR also makes the `Ty::primitive_size` method work with `f16` and `f128`.
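A minimal sketch of the resulting shape (simplified; `Integer` and `AddressSpace` here are stand-ins for the real `rustc_abi` types):
```rust
pub struct Integer;      // stand-in for rustc_abi::Integer
pub struct AddressSpace; // stand-in for rustc_abi::AddressSpace

pub enum Float { F16, F32, F64, F128 }

pub enum Primitive {
    Int(Integer, /* signed */ bool),
    Float(Float), // was four separate F16/F32/F64/F128 variants
    Pointer(AddressSpace),
}

// Matching all float widths now collapses to a single arm:
fn is_float(p: &Primitive) -> bool {
    matches!(p, Primitive::Float(_))
}
```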
Tracking issue: #116909
`@rustbot` label +F-f16_and_f128
Stabilize the size of incr comp object file names
The current implementation does not produce stable-length paths, and we create the paths in a way that makes our allocation behavior nondeterministic. I think `@eddyb` fixed a number of other cases like this in the past, and this PR fixes another one. Whether that actually matters I have no idea, but we still have bimodal behavior in rustc-perf and the non-uniformity in `find` and `ls` was bothering me.
I've also removed the truncation of the mangled CGU names. Before this PR incr comp paths look like this:
```
target/debug/incremental/scratch-38izrrq90cex7/s-gux6gz0ow8-1ph76gg-ewe1xj434l26w9up5bedsojpd/261xgo1oqnd90ry5.o
```
And after, they look like this:
```
target/debug/incremental/scratch-035omutqbfkbw/s-gux6borni0-16r3v1j-6n64tmwqzchtgqzwwim5amuga/55v2re42sztc8je9bva6g8ft3.o
```
On the one hand, I'm sure this will break some people's builds because they're on Windows and only a few bytes from the path length limit. But if we're that seriously worried about the length of our file names, I have some other ideas on how to make them smaller. And last time I deleted some hash truncations from the compiler, there was a huge drop in the number of incremental compilation ICEs that were reported: https://github.com/rust-lang/rust/pull/110367
---
Upon further reading, this PR actually fixes a bug. This comment says the CGU names are supposed to be a fixed-length hash, and before this PR they aren't: ca7d34efa9/compiler/rustc_monomorphize/src/partitioning.rs (L445-L448)
Use `tcx.types.unit` instead of `Ty::new_unit(tcx)`
I don't think there is any need for the function, given that we can just access `.types` directly, as with all the other primitives?
Stop using LLVM struct types for alloca
The alloca type has no semantic meaning; only the size matters (and the alignment, but we specify that explicitly). Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout. It is likely that a future LLVM version will change to an untyped alloca representation.
Split out from #121577.
r? `@ghost`
Introduce perma-unstable `wasm-c-abi` flag
Now that `wasm-bindgen` v0.2.88 supports the spec-compliant C ABI, the idea is to switch to that in a future version of Rust. In the meantime it would be good to let people test and play around with it.
This PR introduces a new perma-unstable `-Zwasm-c-abi` compiler flag, which switches to the new spec-compliant C ABI when targeting `wasm32-unknown-unknown`.
Alternatively, we could also stabilize this and then deprecate it when we switch. I will leave this to the Rust maintainers to decide.
This is a companion PR to #117918, but they could be merged independently.
MCP: https://github.com/rust-lang/compiler-team/issues/703
Tracking issue: https://github.com/rust-lang/rust/issues/122532
Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for where we got to recently that made it ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches. Those differences are irrelevant at the rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
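For illustration, here is one of the many equivalent lowerings a backend might pick for the new three-way compare (a sketch in Rust terms, not code from this PR):
```rust
use std::cmp::Ordering;

// Branchless candidate: relies on Ordering's discriminants being
// Less = -1, Equal = 0, Greater = 1.
fn three_way(a: i32, b: i32) -> Ordering {
    match (a > b) as i8 - (a < b) as i8 {
        -1 => Ordering::Less,
        0 => Ordering::Equal,
        _ => Ordering::Greater,
    }
}
```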
---
r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
Unbox and unwrap the contents of `StatementKind::Coverage`
The payload of coverage statements was historically a structure with several fields, so it was boxed to avoid bloating `StatementKind`.
Now that the payload is a single relatively-small enum, we can replace `Box<Coverage>` with just `CoverageKind`.
This patch also adds a size assertion for `StatementKind`, to avoid accidentally bloating it in the future.
`@rustbot` label +A-code-coverage
We already use `Instance` at declaration sites when available to glean
additional information about possible abstractions of the type in use.
This does the same when possible at callsites as well.
The primary purpose of this change is to allow CFI to alter how it
generates type information for indirect calls through `Virtual`
instances.
Stabilize associated type bounds (RFC 2289)
This PR stabilizes associated type bounds, which were laid out in [RFC 2289]. This gives us a shorthand to express nested type bounds that would otherwise need to be expressed with nested `impl Trait` or broken into several `where` clauses.
### What are we stabilizing?
We're stabilizing the associated item bounds syntax, which allows us to put bounds in associated type position within other bounds, i.e. `T: Trait<Assoc: Bounds...>`. See [RFC 2289] for motivation.
In all positions, the associated type bound syntax expands into a set of two (or more) bounds, and never anything else (see the "How does this differ[...]" section for more info).
Associated type bounds are stabilized in four positions:
* **`where` clauses (and APIT)** - This is equivalent to breaking up the bound into two (or more) `where` clauses. For example, `where T: Trait<Assoc: Bound>` is equivalent to `where T: Trait, <T as Trait>::Assoc: Bound`.
* **Supertraits** - Similar to above, `trait CopyIterator: Iterator<Item: Copy> {}`. This is almost equivalent to breaking up the bound into two (or more) `where` clauses; however, the bound on the associated item is implied whenever the trait is used. See #112573/#112629.
* **Associated type item bounds** - This allows constraining the *nested* rigid projections that are associated with a trait's associated types. e.g. `trait Trait { type Assoc: Trait2<Assoc2: Copy>; }`.
* **Opaque item bounds (RPIT, TAIT)** - This allows constraining associated types that are associated with the opaque without having to *name* the opaque. For example, `impl Iterator<Item: Copy>` defines an iterator whose item is `Copy` without having to actually name that item bound.
The latter three are not expressible in surface Rust (though for associated type item bounds, this will change in #120752, which I don't believe should block this PR), so this does represent a slight expansion of what can be expressed in trait bounds.
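As a quick illustration of the `where`-clause case (a sketch; on a toolchain with the feature available, both forms expand to the same set of clauses):
```rust
use std::fmt::Debug;

// Shorthand with an associated type bound:
fn print_all<I>(iter: I)
where
    I: Iterator<Item: Debug>,
{
    for item in iter {
        println!("{item:?}");
    }
}

// Equivalent expansion into two where-clauses:
fn print_all_expanded<I>(iter: I)
where
    I: Iterator,
    <I as Iterator>::Item: Debug,
{
    for item in iter {
        println!("{item:?}");
    }
}
```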
### How does this differ from the RFC?
Compared to the RFC, the current implementation *always* desugars associated type bounds to sets of `ty::Clause`s internally. Specifically, it does *not* introduce a position-dependent desugaring as laid out in [RFC 2289], and in particular:
* It does *not* desugar to anonymous associated items in associated type item bounds.
* It does *not* desugar to nested RPITs in RPIT bounds, nor nested TAITs in TAIT bounds.
This position-dependent desugaring laid out in the RFC existed simply to side-step limitations of the trait solver, which have mostly been fixed in #120584. The desugaring laid out in the RFC also added unnecessary complication to the design of the feature, and introduced its own limitations; for example:
* Conditionally lowering to nested `impl Trait` in certain positions such as RPIT and TAIT means that we inherit the limitations of RPIT/TAIT, namely lack of support for higher-ranked opaque inference. See this code example: https://github.com/rust-lang/rust/pull/120752#issuecomment-1979412531.
* Introducing anonymous associated types makes traits no longer object safe, since anonymous associated types are not nameable, and all associated types must be named in `dyn` types.
This last point motivates why this PR is *not* stabilizing support for associated type bounds in `dyn` types, e.g, `dyn Assoc<Item: Bound>`. Why? Because `dyn` types need to have *concrete* types for all associated items, this would necessitate a distinct lowering for associated type bounds, which seems both complicated and unnecessary compared to just requiring the user to write `impl Trait` themselves. See #120719.
### Implementation history:
Limited to the significant behavioral changes, fixes, and relevant PRs; ping me if I left something out:
* #57428
* #108063
* #110512
* #112629
* #120719
* #120584

Closes #52662
[RFC 2289]: https://rust-lang.github.io/rfcs/2289-associated-type-bounds.html
Add asm goto support to `asm!`
Tracking issue: #119364
This PR implements asm-goto support, using the syntax described in "future possibilities" section of [RFC2873](https://rust-lang.github.io/rfcs/2873-inline-asm.html#asm-goto).
Currently I have only implemented the `label` part, not the `fallthrough` part (i.e. fallthrough is implicit). This doesn't reduce the expressiveness, though, since you can use label-break to get arbitrary control flow, or simply set a value and rely on the jump threading optimisation to get the desired control flow. I can add that later if deemed necessary.
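A minimal sketch of the `label` syntax (assuming the `asm_goto` feature gate from the tracking issue and an x86_64 target; untested here):
```rust
#![feature(asm_goto)]
use std::arch::asm;

fn main() {
    unsafe {
        // The asm unconditionally jumps to the Rust block operand;
        // without the jump, execution would fall through past the asm!.
        asm!("jmp {}", label {
            println!("jumped out of the asm");
        });
    }
}
```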
r? `@Amanieu`
cc `@ojeda`
Add stubs in IR and ABI for `f16` and `f128`
This is the very first step toward the changes in https://github.com/rust-lang/rust/pull/114607 and the [`f16` and `f128` RFC](https://rust-lang.github.io/rfcs/3453-f16-and-f128.html). It adds the types to `rustc_type_ir::FloatTy` and `rustc_abi::Primitive`, and just propagates those out as `unimplemented!` stubs where necessary.
These types do not parse yet so there is no feature gate, and it should be okay to use `unimplemented!`.
The next steps will probably be AST support with parsing and the feature gate.
r? `@compiler-errors`
cc `@Nilstrieb` suggested breaking the PR up in https://github.com/rust-lang/rust/pull/120645#issuecomment-1925900572
Remove useless lifetime of ArchiveBuilder
`trait ArchiveBuilder<'a>` has a seemingly useless lifetime `'a`, so I removed it. If this is intentional, please reject this PR.
```rust
pub trait ArchiveBuilder<'a> {
    fn add_file(&mut self, path: &Path);
    fn add_archive(
        &mut self,
        archive: &Path,
        skip: Box<dyn FnMut(&str) -> bool + 'static>,
    ) -> io::Result<()>;
    fn build(self: Box<Self>, output: &Path) -> bool;
}
```
Add "algebraic" fast-math intrinsics, based on fast-math ops that cannot return poison
Setting all of LLVM's fast-math flags makes our fast-math intrinsics very dangerous, because some inputs are UB. This set of flags permits common algebraic transformations, but according to the [LangRef](https://llvm.org/docs/LangRef.html#fastmath), only the flags `nnan` (no nans) and `ninf` (no infs) can produce poison.
And this uses the algebraic float ops to fix https://github.com/rust-lang/rust/issues/120720
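A hedged usage sketch, assuming the intrinsics follow the `f*_algebraic` naming pattern:
```rust
#![feature(core_intrinsics)]
use std::intrinsics::{fadd_algebraic, fmul_algebraic};

// Reassociation is permitted, so the backend may vectorize this
// reduction; NaN/inf inputs still produce ordinary float results
// rather than poison/UB.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter()
        .zip(b)
        .fold(0.0, |acc, (&x, &y)| fadd_algebraic(acc, fmul_algebraic(x, y)))
}
```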
cc `@orlp`
errors: only eagerly translate subdiagnostics
Subdiagnostics don't need to be lazily translated, they can always be eagerly translated. Eager translation is slightly more complex as we need to have a `DiagCtxt` available to perform the translation, which involves slightly more threading of that context.
This slight increase in complexity should enable later simplifications - like passing `DiagCtxt` into `AddToDiagnostic` and moving Fluent messages into the diagnostic structs rather than having them in separate files (working on that was what led to this change).
r? `@nnethercote`
Invert diagnostic lints.
That is, change `diagnostic_outside_of_impl` and `untranslatable_diagnostic` from `allow` to `deny`, because more than half of the compiler has been converted to use translated diagnostics.
This commit removes more `deny` attributes than it adds `allow` attributes, which proves that this change is warranted.
r? `@davidtwco`
Because it's almost always static.
This makes `impl IntoDiagnosticArg for DiagnosticArgValue` trivial,
which is nice.
There are a few diagnostics constructed in
`compiler/rustc_mir_build/src/check_unsafety.rs` and
`compiler/rustc_mir_transform/src/errors.rs` that now need symbols
converted to `String` with `to_string` instead of `&str` with `as_str`,
but that's no big deal, and worth it for the simplifications elsewhere.
Replacement of #114390: Add new intrinsic `is_val_statically_known` and optimize pow for powers of two
This adds a new intrinsic `is_val_statically_known` that lowers to [`@llvm.is.constant.*`](https://llvm.org/docs/LangRef.html#llvm-is-constant-intrinsic). It also applies the intrinsic in the int_pow methods to recognize and optimize the idiom `2isize.pow(x)`. See #114390 for more discussion.
While I have extended the scope of the power of two optimization from #114390, I haven't added any new uses for the intrinsic. That can be done in later pull requests.
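A rough sketch of how the intrinsic can gate a fast path (hypothetical usage, not the exact library code; overflow handling elided):
```rust
#![feature(core_intrinsics)]
use std::intrinsics::is_val_statically_known;

fn pow(base: usize, exp: u32) -> usize {
    // If the backend can prove `base` is the constant 2, lower to a
    // shift; otherwise fall back to the generic pow. (The unsafe block
    // may be unnecessary depending on how the intrinsic is declared.)
    if unsafe { is_val_statically_known(base) } && base == 2 {
        1usize << exp
    } else {
        base.pow(exp)
    }
}
```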
Note: When testing or using the library, be sure to use `--stage 1` or higher. Otherwise, the intrinsic will be a noop and the doctests will be skipped. If you are trying out edits, you may be interested in [`--keep-stage 0`](https://rustc-dev-guide.rust-lang.org/building/suggested.html#faster-builds-with---keep-stage).
Fixes #47234
Resolves #114390
`@Centri3`
Support reg_addr register class in s390x inline assembly
In s390x, `r0` cannot be used as an address register (it is evaluated as zero in an address context).
Therefore, currently, in assembly involving memory accesses, `r0` must be [marked as clobbered](1a1155653a/src/arch/s390x.rs (L58)), [explicitly used for a non-address value](1a1155653a/src/arch/s390x.rs (L135)), or an address register must be used explicitly, to prevent `r0` from being allocated as the register for the address.
This patch adds a register class for allocating general-purpose registers, except `r0`, to make it easier to use address registers. (powerpc already has a register class (reg_nonzero) for a similar purpose.)
This is identical to the `a` constraint in LLVM and GCC:
https://llvm.org/docs/LangRef.html#supported-constraint-code-list
> a: A 32, 64, or 128-bit integer address register (excludes R0, which in an address context evaluates as zero).
https://gcc.gnu.org/onlinedocs/gcc/Machine-Constraints.html
> a
> Address register (general purpose register except r0)
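A hypothetical illustration (s390x only, untested; assumes a toolchain where s390x inline assembly and the new class are available):
```rust
use std::arch::asm;

// With `in(reg_addr)`, the compiler will never pick r0 for the
// address operand, so no manual clobber of r0 is needed.
unsafe fn load_u64(ptr: *const u64) -> u64 {
    let val: u64;
    // `lg` loads 64 bits from the address `0(addr)`.
    asm!("lg {val}, 0({addr})", val = out(reg) val, addr = in(reg_addr) ptr);
    val
}
```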
cc `@uweigand`
r? `@Amanieu`
`Diagnostic` has 40 methods that return `&mut Self` and could be
considered setters. Four of them have a `set_` prefix. This doesn't seem
necessary for a type that implements the builder pattern. This commit
removes the `set_` prefixes on those four methods.
And make all hand-written `IntoDiagnostic` impls generic, by using
`DiagnosticBuilder::new(dcx, level, ...)` instead of e.g.
`dcx.struct_err(...)`.
This means the `create_*` functions are the source of the error level.
This change will let us remove `struct_diagnostic`.
Note: `#[rustc_lint_diagnostics]` is added to `DiagnosticBuilder::new`; it's necessary to pass diagnostics tests now that it's used in `into_diagnostic` functions.
Add emulated TLS support
This is a reopen of https://github.com/rust-lang/rust/pull/96317. Many Android devices still only use 128 pthread keys, so using emutls can be helpful.
Currently LLVM uses emutls by default for some targets (such as Android and OpenBSD), but Rust does not use it, because `has_thread_local` is false.
This commit has some changes to allow users to enable emutls:
1. add `-Zhas-thread-local` flag to specify that std uses `#[thread_local]` instead of pthread key.
2. when using emutls, decorate symbol names to find thread local symbol correctly.
3. change `-Zforce-emulated-tls` to `-Ztls-model=emulated` to explicitly specify whether to generate emutls.
r? `@Amanieu`
compile-time evaluation: detect writes through immutable pointers
This has two motivations:
- it unblocks https://github.com/rust-lang/rust/pull/116745 (and therefore takes a big step towards `const_mut_refs` stabilization), because we can now detect if the memory that we find in `const` can be interned as "immutable"
- it would detect the UB that was uncovered in https://github.com/rust-lang/rust/pull/117905, which was caused by accidental stabilization of `copy` functions in `const` that can only be called with UB
When UB is detected, we emit a future-compat warn-by-default lint. This is not a breaking change, so completely in line with [the const-UB RFC](https://rust-lang.github.io/rfcs/3016-const-ub.html), meaning we don't need t-lang FCP here. I made the lint immediately show up for dependencies since it is nearly impossible to even trigger this lint without `const_mut_refs` -- the accidentally stabilized `copy` functions are the only way this can happen, so the crates that popped up in #117905 are the only causes of such UB (in the code that crater covers), and the three cases of UB that we know about have all been fixed in their respective crates already.
The way this is implemented is by making use of the fact that our interpreter is already generic over the notion of provenance. For CTFE we now use the new `CtfeProvenance` type which is conceptually an `AllocId` plus a boolean `immutable` flag (but packed for a more efficient representation). This means we can mark a pointer as immutable when it is created as a shared reference. The flag will be propagated to all pointers derived from this one. We can then check the immutable flag on each write to reject writes through immutable pointers.
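A minimal sketch of the kind of code the check rejects (illustrative; in practice the known cases all went through the accidentally const-stabilized `copy` functions, and `const_mut_refs` is assumed here):
```rust
#![feature(const_mut_refs)]

const BAD: i32 = unsafe {
    let x = 0i32;
    // The shared reference gives the pointer immutable provenance...
    let p = &x as *const i32 as *mut i32;
    // ...so this write is detected and triggers the future-compat lint.
    *p = 1;
    x
};
```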
I just hope perf works out.
Currently we always do this:
```
use rustc_fluent_macro::fluent_messages;
...
fluent_messages! { "./example.ftl" }
```
But there is no need, we can just do this everywhere:
```
rustc_fluent_macro::fluent_messages! { "./example.ftl" }
```
which is shorter.
The `fluent_messages!` macro produces uses of
`crate::{D,Subd}iagnosticMessage`, which means that every crate using
the macro must have this import:
```
use rustc_errors::{DiagnosticMessage, SubdiagnosticMessage};
```
This commit changes the macro to instead use
`rustc_errors::{D,Subd}iagnosticMessage`, which avoids the need for the
imports.
This fixes the changes brought to codegen tests when effect params are
added to libcore, by not attempting to monomorphize functions that get
the host param by being `const fn`.
codegen_llvm/llvm_type: avoid matching on the Rust type
This `match` is highly suspicious. Looking at `scalar_llvm_type_at` I think it makes no difference. But if it were to make a difference that would be a huge problem, since it doesn't look through `repr(transparent)`!
Cc `@eddyb` `@bjorn3`
Use the same DISubprogram for each instance of the same inlined function within a caller
# Issue Details:
The call to `panic` within a function like `Option::unwrap` is translated to LLVM as a `tail call` (as it will never return); when multiple calls to the same function like this are inlined, LLVM will notice the common `tail call` block (i.e., loading the same panic string + location info and then calling `panic`) and merge them together.
When merging these instructions together, LLVM will also attempt to merge the debug locations as well, but this fails (i.e., debug info is dropped) as Rust emits a new `DISubprogram` at each inline site thus LLVM doesn't recognize that these are actually the same function and so thinks that there isn't a common debug location.
As an example of this when building for x86_64 Windows (note the lack of `.cv_loc` before the call to `panic`, thus it will be attributed to the same line at the `addq` instruction):
```
.cv_loc 0 1 23 0 # src\lib.rs:23:0
addq $40, %rsp
retq
leaq .Lalloc_f570dea0a53168780ce9a91e67646421(%rip), %rcx
leaq .Lalloc_629ace53b7e5b76aaa810d549cc84ea3(%rip), %r8
movl $43, %edx
callq _ZN4core9panicking5panic17h12e60b9063f6dee8E
int3
```
# Fix Details:
Cache the `DISubprogram` emitted for each inlined function instance within a caller so that this can be reused if that instance is encountered again, this also requires caching the `DILexicalBlock` and `DIVariable` objects to avoid creating duplicates.
After this change the above assembly now looks like:
```
.cv_loc 0 1 23 0 # src\lib.rs:23:0
addq $40, %rsp
retq
.cv_inline_site_id 5 within 0 inlined_at 1 0 0
.cv_inline_site_id 6 within 5 inlined_at 1 12 0
.cv_loc 6 2 935 0 # library\core\src\option.rs:935:0
leaq .Lalloc_5f55955de67e57c79064b537689facea(%rip), %rcx
leaq .Lalloc_e741d4de8cb5801e1fd7a6c6795c1559(%rip), %r8
movl $43, %edx
callq _ZN4core9panicking5panic17hde1558f32d5b1c04E
int3
```
Use `unstable_target_features` when checking inline assembly
This is necessary to properly validate register classes even when the relevant target feature name is still unstable.
Add a new `compare_bytes` intrinsic instead of calling `memcmp` directly
As discussed in #113435, this lets the backends be the place that can have the "don't call the function if n == 0" logic, if it's needed for the target. (I didn't actually *add* those checks, though, since as I understood it we didn't actually need them on known targets?)
Doing this also let me make it `const` (unstable), which I don't think `extern "C" fn memcmp` can be.
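A hedged sketch of calling the new intrinsic (memcmp-style contract as described above):
```rust
#![feature(core_intrinsics)]
use std::intrinsics::compare_bytes;

// Three-way comparison of the common prefix: negative, zero, or
// positive for less / equal / greater, like memcmp.
fn cmp_prefix(a: &[u8], b: &[u8]) -> i32 {
    let n = a.len().min(b.len());
    // SAFETY: both pointers are valid for reads of `n` bytes.
    unsafe { compare_bytes(a.as_ptr(), b.as_ptr(), n) }
}
```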
cc `@RalfJung` `@Amanieu`
cleanup: remove pointee types
This can't be merged until the oldest LLVM version we support uses opaque pointers, which will be the case after #114148. (Also note `-Cllvm-args="-opaque-pointers=0"` can technically be used in LLVM 15, though I don't think we should support that configuration.)
I initially hoped this would provide some minor perf win, but in https://github.com/rust-lang/rust/pull/105412#issuecomment-1341224450 it had very little impact, so this is only valuable as a cleanup.
As a followup, this will enable #96242 to be resolved.
r? `@ghost`
`@rustbot` label S-blocked
Reuse `codegen_ssa` monomorphization errors in `codegen_gcc`
Removes monomorphization errors duplication by reusing the ones defined in `codegen_ssa`.
Also updates `expected_simd` errors usage in `codegen_gcc` by assuming we want to treat those parameters as translatable. See 7a888fb56e
Prototype: Add unstable `-Z reference-niches` option
MCP: rust-lang/compiler-team#641
Relevant RFC: rust-lang/rfcs#3204
This prototype adds a new `-Z reference-niches` option, controlling the range of valid bit-patterns for reference types (`&T` and `&mut T`), thereby enabling new enum niching opportunities. Like `-Z randomize-layout`, this setting is crate-local; as such, references to built-in types (primitives, tuples, ...) are not affected.
The possible settings are (here, `MAX` denotes the all-1 bit-pattern):
| `-Z reference-niches=` | Valid range |
|:---:|:---:|
| `null` (the default) | `1..=MAX` |
| `size` | `1..=(MAX - size)` |
| `align` | `align..=MAX.align_down_to(align)` |
| `size,align` | `align..=(MAX - size).align_down_to(align)` |
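To make the payoff concrete, a sketch of the extra niching this enables (sizes illustrative, x86_64 assumed):
```rust
use std::mem::size_of;

fn main() {
    // The null niche covers a single None, so nesting needs a tag today:
    println!("{}", size_of::<Option<&u64>>());         // 8
    println!("{}", size_of::<Option<Option<&u64>>>()); // 16 by default
    // With -Z reference-niches=align, bit-patterns below the alignment
    // are also invalid for &u64, so the nested Option can fit in 8 bytes.
}
```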
------
This is very WIP, and I'm not sure the approach I've taken here is the best one, but stage 1 tests pass locally; I believe this is in a good enough state to unleash this upon unsuspecting 3rd-party code, and see what breaks.
Resurrect: rustc_llvm: Add a -Z `print-codegen-stats` option to expose LLVM statistics.
This resurrects PR https://github.com/rust-lang/rust/pull/104000, which has sat idle for a while. And I want to see the effect of stack-move optimizations on LLVM (like https://reviews.llvm.org/D153453) :).
I have applied the changes requested by `@oli-obk` and `@nagisa` https://github.com/rust-lang/rust/pull/104000#discussion_r1014625377 and https://github.com/rust-lang/rust/pull/104000#discussion_r1014642482 in the latest commits.
r? `@oli-obk`
-----
LLVM has a neat [statistics](https://llvm.org/docs/ProgrammersManual.html#the-statistic-class-stats-option) feature that tracks how often optimizations kick in. It's very handy for optimization work. Since we expose the LLVM pass timings, I thought it made sense to expose the LLVM statistics too.
-----
(Edit: fixed a broken link.)
(Edit 2: fixed a segmentation fault and used malloc.)
If `rustc` is built with
```toml
[llvm]
assertions = true
```
Then you can see output like
```
rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
3 aa - Number of MayAlias results
193 aa - Number of MustAlias results
531 aa - Number of NoAlias results
...
```
And the current default build emits only
```
$ rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
$
```
It might be better to emit a message explaining that LLVM assertions are required, but I haven't found a way to do that yet...
These tend to have special handling in a bunch of places anyway, so the variant helps remember that. And I think it's easier to grok than non-Scalar Aggregates sometimes being `Immediates` (like I got wrong and caused #109992). As a minor bonus, it means we don't need to generate poison LLVM values for them to pass around in `OperandValue::Immediate`s.
Optimize scalar and scalar pair representations loaded from ByRef in llvm
In https://github.com/rust-lang/rust/pull/105653 I noticed that we were generating suboptimal LLVM IR if we had a `ConstValue::ByRef` that could be represented by a `ScalarPair`. Before https://github.com/rust-lang/rust/pull/105653 this was probably rare, but after it, every slice will go down this suboptimal code path, which requires LLVM to untangle a bunch of indirections and translate static allocations that are only used once to read a scalar pair from.
Support #[global_allocator] without the allocator shim
This makes it possible to use liballoc/libstd in combination with `--emit obj` if you use `#[global_allocator]`. This is what rust-for-linux uses right now and systemd may use in the future. Currently they have to depend on the exact implementation of the allocator shim to create one themselves, as `--emit obj` doesn't create an allocator shim.
Note that currently the allocator shim also defines the oom error handler, which is normally required too. Once `#![feature(default_alloc_error_handler)]` becomes the only option, this can be avoided. In addition when using only fallible allocator methods and either `--cfg no_global_oom_handling` for liballoc (like rust-for-linux) or `--gc-sections` no references to the oom error handler will exist.
To avoid this feature being insta-stable, you will have to define `__rust_no_alloc_shim_is_unstable` to avoid linker errors.
(Labeling this with both T-compiler and T-lang as it originally involved both an implementation detail and had an insta-stable user facing change. As noted above, the `__rust_no_alloc_shim_is_unstable` symbol requirement should prevent unintended dependence on this unstable feature.)
You will need to add the following as a replacement for the old `__rust_*` definitions when not using the alloc shim:
```rust
#[no_mangle]
static __rust_no_alloc_shim_is_unstable: u8 = 0;
```
This makes it possible to use liballoc/libstd in combination with
`--emit obj` if you use `#[global_allocator]`. Making it work for the
default libstd allocator would require weak functions, which are not
well supported on all systems.
Add cross-language LLVM CFI support to the Rust compiler
This PR adds cross-language LLVM Control Flow Integrity (CFI) support to the Rust compiler by adding the `-Zsanitizer-cfi-normalize-integers` option to be used with Clang `-fsanitize-cfi-icall-normalize-integers` for normalizing integer types (see https://reviews.llvm.org/D139395).
It provides forward-edge control flow protection for "mixed binaries" of C or C++ and Rust-compiled code (i.e., for when C or C++ and Rust-compiled code share the same virtual address space). For more information about LLVM CFI and cross-language LLVM CFI support for the Rust compiler, see the design document in tracking issue #89653.
Cross-language LLVM CFI can be enabled with `-Zsanitizer=cfi` and `-Zsanitizer-cfi-normalize-integers`, and requires proper (i.e., non-rustc) LTO (i.e., `-Clinker-plugin-lto`).
Thank you again, `@bjorn3`, `@nikic`, `@samitolvanen`, and the Rust community for all the help!
Report allocation errors as panics
OOM is now reported as a panic but with a custom payload type (`AllocErrorPanicPayload`) which holds the layout that was passed to `handle_alloc_error`.
This should be reviewed one commit at a time:
- The first commit adds `AllocErrorPanicPayload` and changes allocation errors to always be reported as panics.
- The second commit removes `#[alloc_error_handler]` and the `alloc_error_hook` API.
ACP: https://github.com/rust-lang/libs-team/issues/192
Closes #51540
Closes #51245
Fluent, with all the icu4x it brings in, takes quite some time to
compile. `fluent_messages!` is only needed in further downstream rustc
crates, but is blocking more upstream crates like `rustc_index`. By
splitting it out, we allow `rustc_macros` to be compiled earlier, which
speeds up `x check compiler` by about 5 seconds (and even more after the
needless dependency on `serde_json` is removed from
`rustc_data_structures`).
In cases where it is legal, we should prefer poison values over
undef values.
This replaces undef with poison for aggregate construction and
for uninhabited types. There are more places where we can likely
use poison, but I wanted to stay conservative to start with.
In particular the aggregate case is important for newer LLVM
versions, which are not able to handle an undef base value during
early optimization due to poison-propagation concerns.
This makes it easier to open the messages file while developing on features.
The commit was the result of automated changes:
```
for p in compiler/rustc_*; do mv $p/locales/en-US.ftl $p/messages.ftl; rmdir $p/locales; done
for p in compiler/rustc_*; do sed -i "s#\.\./locales/en-US.ftl#../messages.ftl#" $p/src/lib.rs; done
```
Add `round_ties_even` to `f32` and `f64`
Tracking issue: #96710
Redux of #82273. See also #55107
Adds a new method, `round_ties_even`, to `f32` and `f64`, that rounds the float to the nearest integer, rounding halfway cases to the number with an even least significant bit. Uses the `roundeven` LLVM intrinsic to do this.
Of the five IEEE 754 rounding modes, this is the only one that doesn't already have a round-to-integer function exposed by Rust (the others are `round`, `floor`, `ceil`, and `trunc`). Ties-to-even is also the rounding mode used for int-to-float and float-to-float `as` casts, as well as float arithmetic operations. So not having an explicit rounding method for it seems like an oversight.
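A few concrete cases (assuming the method lands under the proposed name):
```rust
fn main() {
    // Halfway cases go to the even neighbor, unlike round():
    assert_eq!(2.5f64.round_ties_even(), 2.0);
    assert_eq!(3.5f64.round_ties_even(), 4.0);
    assert_eq!((-2.5f64).round_ties_even(), -2.0);
    // Non-halfway cases round to the nearest integer as usual:
    assert_eq!(2.4f64.round_ties_even(), 2.0);
}
```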
Bikeshed: this PR currently uses `round_ties_even` for the name of the method. But maybe `round_ties_to_even` is better, or `round_even`, or `round_to_even`?
(This is a large commit. The changes to
`compiler/rustc_middle/src/ty/context.rs` are the most important ones.)
The current naming scheme is a mess, with a mix of `_intern_`, `intern_`
and `mk_` prefixes, with little consistency. In particular, in many
cases it's easy to use an iterator interner when a (preferable) slice
interner is available.
The guiding principles of the new naming system:
- No `_intern_` prefixes.
- The `intern_` prefix is for internal operations.
- The `mk_` prefix is for external operations.
- For cases where there is a slice interner and an iterator interner,
the former is `mk_foo` and the latter is `mk_foo_from_iter`.
Also, `slice_interners!` and `direct_interners!` can now be `pub` or
non-`pub`, which helps enforce the internal/external operations
division.
It's not perfect, but I think it's a clear improvement.
The following lists show everything that was renamed.
slice_interners
- const_list
  - mk_const_list -> mk_const_list_from_iter
  - intern_const_list -> mk_const_list
- substs
  - mk_substs -> mk_substs_from_iter
  - intern_substs -> mk_substs
  - check_substs -> check_and_mk_substs (this is a weird one)
- canonical_var_infos
  - intern_canonical_var_infos -> mk_canonical_var_infos
- poly_existential_predicates
  - mk_poly_existential_predicates -> mk_poly_existential_predicates_from_iter
  - intern_poly_existential_predicates -> mk_poly_existential_predicates
  - _intern_poly_existential_predicates -> intern_poly_existential_predicates
- predicates
  - mk_predicates -> mk_predicates_from_iter
  - intern_predicates -> mk_predicates
  - _intern_predicates -> intern_predicates
- projs
  - intern_projs -> mk_projs
- place_elems
  - mk_place_elems -> mk_place_elems_from_iter
  - intern_place_elems -> mk_place_elems
- bound_variable_kinds
  - mk_bound_variable_kinds -> mk_bound_variable_kinds_from_iter
  - intern_bound_variable_kinds -> mk_bound_variable_kinds

direct_interners
- region
  - intern_region (unchanged)
- const
  - mk_const_internal -> intern_const
- const_allocation
  - intern_const_alloc -> mk_const_alloc
- layout
  - intern_layout -> mk_layout
- adt_def
  - intern_adt_def -> mk_adt_def_from_data (unusual case, hard to avoid)
  - alloc_adt_def(!) -> mk_adt_def
- external_constraints
  - intern_external_constraints -> mk_external_constraints

Other
- type_list
  - mk_type_list -> mk_type_list_from_iter
  - intern_type_list -> mk_type_list
- tup
  - mk_tup -> mk_tup_from_iter
  - intern_tup -> mk_tup
Remove type-traversal trait aliases
#107924 moved the type traversal (folding and visiting) traits into the type library, but created trait aliases in `rustc_middle` to minimise both the API churn for trait consumers and the arising boilerplate. As mentioned in that PR, an alternative approach of defining subtraits with blanket implementations of the respective supertraits was also considered at that time but was ruled out as not adding much value.
Unfortunately, it has since emerged that rust-analyzer has difficulty with these trait aliases at present, resulting in a degraded contributor experience (see the recent [r-a has become useless](https://rust-lang.zulipchat.com/#narrow/stream/182449-t-compiler.2Fhelp/topic/r-a.20has.20become.20useless) topic on the #t-compiler/help Zulip stream).
This PR removes the trait aliases, and accordingly the underlying type library traits are now used directly; they are parameterised by `TyCtxt<'tcx>` rather than just the `'tcx` lifetime, and imports have been updated to reflect the fact that the trait aliases' explicitly named traits are no longer automatically brought into scope. These changes also roll back the (no-longer-required) workarounds to #107747 that were made in b409329c62.
Since this PR is just a find+replace together with the changes necessary for compilation & tidy to pass, it's currently just one mega-commit. Let me know if you'd like it broken up.
r? `@oli-obk`
Extend `CodegenBackend` trait with a function returning the translation
resources from the codegen backend, which can be added to the complete
list of resources provided to the emitter.
Signed-off-by: David Wood <david.wood@huawei.com>
Instead of loading the Fluent resources for every crate in
`rustc_error_messages`, each crate generates typed identifiers for its
own diagnostics and creates a static which are pulled together in the
`rustc_driver` crate and provided to the diagnostic emitter.
Signed-off-by: David Wood <david.wood@huawei.com>