bootstrap major change detection implementation
The `changelog-seen` setting and `bootstrap/CHANGELOG.md` have not been functional for many years: we often make major/breaking changes but never update the changelog file or bump `changelog-seen`. This PR adds an alternative method for tracking major or breaking changes and informing developers when such changes occur.
Example output when bootstrap detects a major change:
![image](https://github.com/rust-lang/rust/assets/39852038/ee802dfa-a02b-488b-a433-f853ce079b8a)
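A minimal sketch of the idea, with illustrative names and a made-up comparison rule rather than bootstrap's actual code: keep a list of significant changes and report every entry newer than the id recorded in the developer's local config.
```rust
// Illustrative sketch only; field names and the ">" comparison are assumptions.
struct ChangeInfo {
    id: usize,
    severity: &'static str,
    summary: &'static str,
}

const CHANGES: &[ChangeInfo] = &[
    ChangeInfo { id: 1, severity: "warning", summary: "example breaking change" },
    ChangeInfo { id: 2, severity: "info", summary: "example non-breaking change" },
];

fn main() {
    // In bootstrap this would come from the developer's local config instead.
    let last_seen_id = 1;
    for change in CHANGES.iter().filter(|c| c.id > last_seen_id) {
        println!("[{}] change {}: {}", change.severity, change.id, change.summary);
    }
}
```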
This fixes the changes that codegen tests would otherwise need when effect
params are added to libcore, by not attempting to monomorphize functions
that only get the host param by virtue of being `const fn`.
codegen_llvm/llvm_type: avoid matching on the Rust type
This `match` is highly suspicious. Looking at `scalar_llvm_type_at` I think it makes no difference. But if it were to make a difference that would be a huge problem, since it doesn't look through `repr(transparent)`!
Cc `@eddyb` `@bjorn3`
Use the same DISubprogram for each instance of the same inlined function within a caller
# Issue Details:
The call to `panic` within a function like `Option::unwrap` is translated to LLVM as a `tail call` (as it will never return). When multiple calls to the same function like this are inlined, LLVM will notice the common `tail call` block (i.e., loading the same panic string + location info and then calling `panic`) and merge them together.
When merging these instructions together, LLVM will also attempt to merge the debug locations, but this fails (i.e., the debug info is dropped) because Rust emits a new `DISubprogram` at each inline site; LLVM therefore doesn't recognize that these are actually the same function, and so concludes that there is no common debug location.
As an example, when building for x86_64 Windows (note the lack of a `.cv_loc` directive before the call to `panic`, so it is attributed to the same line as the `addq` instruction):
```
.cv_loc 0 1 23 0 # src\lib.rs:23:0
addq $40, %rsp
retq
leaq .Lalloc_f570dea0a53168780ce9a91e67646421(%rip), %rcx
leaq .Lalloc_629ace53b7e5b76aaa810d549cc84ea3(%rip), %r8
movl $43, %edx
callq _ZN4core9panicking5panic17h12e60b9063f6dee8E
int3
```
# Fix Details:
Cache the `DISubprogram` emitted for each inlined function instance within a caller so that it can be reused if that instance is encountered again. This also requires caching the `DILexicalBlock` and `DIVariable` objects to avoid creating duplicates.
After this change the above assembly now looks like:
```
.cv_loc 0 1 23 0 # src\lib.rs:23:0
addq $40, %rsp
retq
.cv_inline_site_id 5 within 0 inlined_at 1 0 0
.cv_inline_site_id 6 within 5 inlined_at 1 12 0
.cv_loc 6 2 935 0 # library\core\src\option.rs:935:0
leaq .Lalloc_5f55955de67e57c79064b537689facea(%rip), %rcx
leaq .Lalloc_e741d4de8cb5801e1fd7a6c6795c1559(%rip), %r8
movl $43, %edx
callq _ZN4core9panicking5panic17hde1558f32d5b1c04E
int3
```
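A rough sketch of the caching approach described under "Fix Details", with illustrative names rather than the actual rustc/LLVM debuginfo types: the per-caller map hands back the same scope for repeated inlined instances of the same callee.
```rust
use std::collections::HashMap;
use std::hash::Hash;

// Illustrative types only, not the real codegen_llvm data structures.
struct PerCallerDebugContext<Instance, Scope> {
    inlined_function_scopes: HashMap<Instance, Scope>,
}

impl<Instance: Hash + Eq, Scope: Clone> PerCallerDebugContext<Instance, Scope> {
    fn scope_for_inlined_instance(
        &mut self,
        instance: Instance,
        create: impl FnOnce() -> Scope,
    ) -> Scope {
        // Reuse the DISubprogram-equivalent created the first time this
        // callee instance was inlined into the caller.
        self.inlined_function_scopes
            .entry(instance)
            .or_insert_with(create)
            .clone()
    }
}

fn main() {
    let mut cx = PerCallerDebugContext { inlined_function_scopes: HashMap::new() };
    let a = cx.scope_for_inlined_instance("Option::unwrap", || "subprogram#1");
    let b = cx.scope_for_inlined_instance("Option::unwrap", || "subprogram#2");
    assert_eq!(a, b); // the second inline site reuses the cached scope
}
```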
Use `unstable_target_features` when checking inline assembly
This is necessary to properly validate register classes even when the relevant target feature name is still unstable.
Add a new `compare_bytes` intrinsic instead of calling `memcmp` directly
As discussed in #113435, this lets the backends be the place that can have the "don't call the function if n == 0" logic, if it's needed for the target. (I didn't actually *add* those checks, though, since as I understood it we didn't actually need them on known targets?)
Doing this also let me make it `const` (unstable), which I don't think `extern "C" fn memcmp` can be.
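For reference, a safe model of the memcmp-style semantics the new intrinsic is meant to expose; the real intrinsic operates on raw pointers and a byte count, so this sketch only illustrates the comparison behaviour, including the `n == 0` case mentioned above.
```rust
// Model only: mirrors memcmp's sign convention on equal-length byte slices.
fn compare_bytes_model(left: &[u8], right: &[u8]) -> i32 {
    debug_assert_eq!(left.len(), right.len());
    for (a, b) in left.iter().zip(right) {
        if a != b {
            // The sign of the result reflects the first differing byte, like memcmp.
            return i32::from(*a) - i32::from(*b);
        }
    }
    0
}

fn main() {
    assert_eq!(compare_bytes_model(b"abc", b"abc"), 0);
    assert!(compare_bytes_model(b"abc", b"abd") < 0);
    assert_eq!(compare_bytes_model(b"", b""), 0); // the n == 0 case discussed above
}
```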
cc `@RalfJung` `@Amanieu`
cleanup: remove pointee types
This can't be merged until the oldest LLVM version we support uses opaque pointers, which will be the case after #114148. (Also note `-Cllvm-args="-opaque-pointers=0"` can technically be used in LLVM 15, though I don't think we should support that configuration.)
I initially hoped this would provide some minor perf win, but in https://github.com/rust-lang/rust/pull/105412#issuecomment-1341224450 it had very little impact, so this is only valuable as a cleanup.
As a followup, this will enable #96242 to be resolved.
r? `@ghost`
`@rustbot` label S-blocked
Reuse `codegen_ssa` monomorphization errors in `codegen_gcc`
Removes duplicated monomorphization errors by reusing the ones defined in `codegen_ssa`.
Also updates the usage of the `expected_simd` errors in `codegen_gcc`, assuming we want to treat those parameters as translatable. See 7a888fb56e
Prototype: Add unstable `-Z reference-niches` option
MCP: rust-lang/compiler-team#641
Relevant RFC: rust-lang/rfcs#3204
This prototype adds a new `-Z reference-niches` option, controlling the range of valid bit-patterns for reference types (`&T` and `&mut T`), thereby enabling new enum niching opportunities. Like `-Z randomize-layout`, this setting is crate-local; as such, references to built-in types (primitives, tuples, ...) are not affected.
The possible settings are (here, `MAX` denotes the all-1 bit-pattern):
| `-Z reference-niches=` | Valid range |
|:---:|:---:|
| `null` (the default) | `1..=MAX` |
| `size` | `1..=(MAX - size)` |
| `align` | `align..=MAX.align_down_to(align)` |
| `size,align` | `align..=(MAX - size).align_down_to(align)` |
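For a sense of the layout optimizations this could unlock, here is a small illustrative program: the `Option` assertion holds today thanks to the null niche, while whether multi-variant enums like `Tagged` can stay pointer-sized depends on the chosen `-Z reference-niches` mode and the target.
```rust
use std::mem::size_of;

#[allow(dead_code)]
enum Tagged<'a> {
    Ref(&'a u64),
    A,
    B,
}

fn main() {
    // The null niche already makes `Option<&u64>` pointer-sized today.
    assert_eq!(size_of::<Option<&u64>>(), size_of::<&u64>());
    // With alignment niches, the low bits of an aligned reference also become
    // invalid bit-patterns, so enums like `Tagged` could avoid a separate tag.
    println!("size_of::<Tagged>() = {}", size_of::<Tagged<'static>>());
}
```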
------
This is very WIP, and I'm not sure the approach I've taken here is the best one, but stage 1 tests pass locally; I believe this is in a good enough state to unleash this upon unsuspecting 3rd-party code, and see what breaks.
Resurrect: rustc_llvm: Add a -Z `print-codegen-stats` option to expose LLVM statistics.
This resurrects PR https://github.com/rust-lang/rust/pull/104000, which has sat idle for a while. And I want to see the effect of stack-move optimizations on LLVM (like https://reviews.llvm.org/D153453) :).
I have applied the changes requested by `@oli-obk` and `@nagisa` https://github.com/rust-lang/rust/pull/104000#discussion_r1014625377 and https://github.com/rust-lang/rust/pull/104000#discussion_r1014642482 in the latest commits.
r? `@oli-obk`
-----
LLVM has a neat [statistics](https://llvm.org/docs/ProgrammersManual.html#the-statistic-class-stats-option) feature that tracks how often optimizations kick in. It's very handy for optimization work. Since we expose the LLVM pass timings, I thought it made sense to expose the LLVM statistics too.
-----
(Edit: fix broken link)
(Edit2: fix segmentation fault and use malloc)
If `rustc` is built with
```toml
[llvm]
assertions = true
```
Then you can see output like:
```
rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
3 aa - Number of MayAlias results
193 aa - Number of MustAlias results
531 aa - Number of NoAlias results
...
```
And the current default build emits only
```
$ rustc +stage1 -Z print-codegen-stats -C opt-level=3 tmp.rs
===-------------------------------------------------------------------------===
... Statistics Collected ...
===-------------------------------------------------------------------------===
$
```
It might be better to emit a message saying that LLVM assertions are required for this output, but I haven't found how to do that yet...
Remove `box_free` lang item
This PR removes the `box_free` lang item, replacing it with `Box`'s `Drop` impl. Box dropping is still slightly magic because the contained value is still dropped by the compiler.
These tend to have special handling in a bunch of places anyway, so the variant helps remember that. And I think it's easier to grok than non-Scalar Aggregates sometimes being `Immediates` (like I got wrong and caused #109992). As a minor bonus, it means we don't need to generate poison LLVM values for them to pass around in `OperandValue::Immediate`s.
Optimize scalar and scalar pair representations loaded from ByRef in llvm
In https://github.com/rust-lang/rust/pull/105653 I noticed that we were generating suboptimal LLVM IR if we had a `ConstValue::ByRef` that could be represented by a `ScalarPair`. Before https://github.com/rust-lang/rust/pull/105653 this was probably rare, but after it, every slice goes down this suboptimal code path, which requires LLVM to untangle a bunch of indirections and to translate static allocations that are only used once to read a scalar pair from.
Support #[global_allocator] without the allocator shim
This makes it possible to use liballoc/libstd in combination with `--emit obj` if you use `#[global_allocator]`. This is what rust-for-linux uses right now and systemd may use in the future. Currently they have to depend on the exact implementation of the allocator shim to create one themselves, as `--emit obj` doesn't create an allocator shim.
Note that currently the allocator shim also defines the OOM error handler, which is normally required too. Once `#![feature(default_alloc_error_handler)]` becomes the only option, this can be avoided. In addition, when using only fallible allocator methods and either `--cfg no_global_oom_handling` for liballoc (as rust-for-linux does) or `--gc-sections`, no references to the OOM error handler will exist.
To keep this feature from being insta-stable, you have to define `__rust_no_alloc_shim_is_unstable` yourself to avoid linker errors.
(Labeling this with both T-compiler and T-lang as it originally involved both an implementation detail and an insta-stable user-facing change. As noted above, the `__rust_no_alloc_shim_is_unstable` symbol requirement should prevent unintended dependence on this unstable feature.)
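For context, the user-facing side this interacts with is the standard `#[global_allocator]` mechanism (nothing new in this PR); a crate like the following can, with this change, be built with `--emit obj` without relying on the compiler-generated allocator shim.
```rust
use std::alloc::{GlobalAlloc, Layout, System};

// A trivial global allocator that forwards to the system allocator.
struct MyAlloc;

unsafe impl GlobalAlloc for MyAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: MyAlloc = MyAlloc;

fn main() {
    let v = vec![1u8, 2, 3]; // allocates through `MyAlloc`
    assert_eq!(v.len(), 3);
}
```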
You will need to add the following as a replacement for the old `__rust_*` definitions when not using the alloc shim:
```rust
#[no_mangle]
static __rust_no_alloc_shim_is_unstable: u8 = 0;
```
This makes it possible to use liballoc/libstd in combination with
`--emit obj` if you use `#[global_allocator]`. Making it work for the
default libstd allocator would require weak functions, which are not
well supported on all systems.
Add cross-language LLVM CFI support to the Rust compiler
This PR adds cross-language LLVM Control Flow Integrity (CFI) support to the Rust compiler by adding the `-Zsanitizer-cfi-normalize-integers` option to be used with Clang `-fsanitize-cfi-icall-normalize-integers` for normalizing integer types (see https://reviews.llvm.org/D139395).
It provides forward-edge control flow protection for C- or C++- and Rust-compiled code "mixed binaries" (i.e., for when C- or C++- and Rust-compiled code share the same virtual address space). For more information about LLVM CFI and cross-language LLVM CFI support for the Rust compiler, see the design document in the tracking issue #89653.
Cross-language LLVM CFI can be enabled with `-Zsanitizer=cfi` and `-Zsanitizer-cfi-normalize-integers`, and requires proper (i.e., non-rustc) LTO (i.e., `-Clinker-plugin-lto`).
Thank you again, ``@bjorn3,`` ``@nikic,`` ``@samitolvanen,`` and the Rust community for all the help!
Report allocation errors as panics
OOM is now reported as a panic but with a custom payload type (`AllocErrorPanicPayload`) which holds the layout that was passed to `handle_alloc_error`.
This should be reviewed one commit at a time:
- The first commit adds `AllocErrorPanicPayload` and changes allocation errors to always be reported as panics.
- The second commit removes `#[alloc_error_handler]` and the `alloc_error_hook` API.
ACP: https://github.com/rust-lang/libs-team/issues/192
Closes #51540
Closes #51245
Fluent, with all the icu4x it brings in, takes quite some time to
compile. `fluent_messages!` is only needed in further downstream rustc
crates, but is blocking more upstream crates like `rustc_index`. By
splitting it out, we allow `rustc_macros` to be compiled earlier, which
speeds up `x check compiler` by about 5 seconds (and even more after the
needless dependency on `serde_json` is removed from
`rustc_data_structures`).
Add inline assembly support for m68k
I believe this should be correct, to the extent I understand the logic around inline assembly. M68k is fairly straightforward here, other than having separate address registers.
Initial support for loongarch64-unknown-linux-gnu
Hi, we hope to add a new port of Rust for LoongArch.
LoongArch intro
LoongArch is a RISC-style ISA independently designed by Loongson Technology
in China. It is divided into two versions, the 32-bit version (LA32) and the
64-bit version (LA64). LA64 applications have application-level backward
binary compatibility with LA32 applications. LoongArch is composed of a basic
part (Loongson Base) and an expanded part. The expanded part includes Loongson
Binary Translation (LBT), Loongson VirtualiZation (LVZ), Loongson SIMD
EXtension (LSX) and Loongson Advanced SIMD EXtension (LASX).
Currently the LA464 processor core supports the LoongArch ISA and the Loongson
3A5000 processor integrates four 64-bit LA464 cores. LA464 is a four-issue
64-bit high-performance processor core. It can be used as a single core for
high-end embedded and desktop applications, or as a basic processor core to
form an on-chip multi-core system for server and high-performance machine
applications.
Documentation:
ISA:
https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html
ABI:
https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html
More docs can be found at:
https://loongson.github.io/LoongArch-Documentation/README-EN.html
Since last year, we have locally ported two versions of Rust, 1.41 and 1.57, and completed testing.
I'm not sure whether I should submit all the patches at once, so I split them up and here is one of the commits.
In cases where it is legal, we should prefer poison values over
undef values.
This replaces undef with poison for aggregate construction and
for uninhabited types. There are more places where we can likely
use poison, but I wanted to stay conservative to start with.
In particular the aggregate case is important for newer LLVM
versions, which are not able to handle an undef base value during
early optimization due to poison-propagation concerns.
This makes it easier to open the messages file while developing on features.
The commit was the result of automated changes:
```
for p in compiler/rustc_*; do mv $p/locales/en-US.ftl $p/messages.ftl; rmdir $p/locales; done
for p in compiler/rustc_*; do sed -i "s#\.\./locales/en-US.ftl#../messages.ftl#" $p/src/lib.rs; done
```
Add `round_ties_even` to `f32` and `f64`
Tracking issue: #96710
Redux of #82273. See also #55107
Adds a new method, `round_ties_even`, to `f32` and `f64`, that rounds the float to the nearest integer, rounding halfway cases to the number with an even least significant bit. Uses the `roundeven` LLVM intrinsic to do this.
Of the five IEEE 754 rounding modes, this is the only one that doesn't already have a round-to-integer function exposed by Rust (the others are `round`, `floor`, `ceil`, and `trunc`). Ties-to-even is also the rounding mode used for int-to-float and float-to-float `as` casts, as well as float arithmetic operations. So not having an explicit rounding method for it seems like an oversight.
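A quick illustration of the behaviour, contrasted with `round` (usable once the method is available; at the time of this PR it sits behind the tracking issue's feature gate):
```rust
fn main() {
    // Halfway cases go to the even neighbor, unlike `round`,
    // which rounds halfway cases away from zero.
    assert_eq!(2.5_f64.round_ties_even(), 2.0);
    assert_eq!(3.5_f64.round_ties_even(), 4.0);
    assert_eq!((-2.5_f64).round_ties_even(), -2.0);
    assert_eq!(2.5_f64.round(), 3.0);
}
```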
Bikeshed: this PR currently uses `round_ties_even` for the name of the method. But maybe `round_ties_to_even` is better, or `round_even`, or `round_to_even`?
(This is a large commit. The changes to
`compiler/rustc_middle/src/ty/context.rs` are the most important ones.)
The current naming scheme is a mess, with a mix of `_intern_`, `intern_`
and `mk_` prefixes, with little consistency. In particular, in many
cases it's easy to use an iterator interner when a (preferable) slice
interner is available.
The guiding principles of the new naming system:
- No `_intern_` prefixes.
- The `intern_` prefix is for internal operations.
- The `mk_` prefix is for external operations.
- For cases where there is a slice interner and an iterator interner,
the former is `mk_foo` and the latter is `mk_foo_from_iter`.
Also, `slice_interners!` and `direct_interners!` can now be `pub` or
non-`pub`, which helps enforce the internal/external operations
division.
It's not perfect, but I think it's a clear improvement.
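As a toy illustration of the convention (deliberately simplified; not rustc's actual interner code):
```rust
// A minimal stand-in for an interner, showing the naming split described above.
struct Interner {
    type_lists: Vec<Vec<u32>>,
}

impl Interner {
    // External operation: the preferred slice interner gets the plain `mk_` name.
    fn mk_type_list(&mut self, tys: &[u32]) -> usize {
        self.intern_type_list(tys.to_vec())
    }

    // External operation: the iterator-taking variant is explicitly suffixed.
    fn mk_type_list_from_iter(&mut self, tys: impl IntoIterator<Item = u32>) -> usize {
        self.intern_type_list(tys.into_iter().collect())
    }

    // Internal operation: `intern_` prefix.
    fn intern_type_list(&mut self, tys: Vec<u32>) -> usize {
        self.type_lists.push(tys);
        self.type_lists.len() - 1
    }
}

fn main() {
    let mut interner = Interner { type_lists: Vec::new() };
    assert_eq!(interner.mk_type_list(&[1, 2]), 0);
    assert_eq!(interner.mk_type_list_from_iter(3u32..5), 1);
}
```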
The following lists show everything that was renamed.
slice_interners
- const_list
- mk_const_list -> mk_const_list_from_iter
- intern_const_list -> mk_const_list
- substs
- mk_substs -> mk_substs_from_iter
- intern_substs -> mk_substs
- check_substs -> check_and_mk_substs (this is a weird one)
- canonical_var_infos
- intern_canonical_var_infos -> mk_canonical_var_infos
- poly_existential_predicates
- mk_poly_existential_predicates -> mk_poly_existential_predicates_from_iter
- intern_poly_existential_predicates -> mk_poly_existential_predicates
- _intern_poly_existential_predicates -> intern_poly_existential_predicates
- predicates
- mk_predicates -> mk_predicates_from_iter
- intern_predicates -> mk_predicates
- _intern_predicates -> intern_predicates
- projs
- intern_projs -> mk_projs
- place_elems
- mk_place_elems -> mk_place_elems_from_iter
- intern_place_elems -> mk_place_elems
- bound_variable_kinds
- mk_bound_variable_kinds -> mk_bound_variable_kinds_from_iter
- intern_bound_variable_kinds -> mk_bound_variable_kinds
direct_interners
- region
- intern_region (unchanged)
- const
- mk_const_internal -> intern_const
- const_allocation
- intern_const_alloc -> mk_const_alloc
- layout
- intern_layout -> mk_layout
- adt_def
- intern_adt_def -> mk_adt_def_from_data (unusual case, hard to avoid)
- alloc_adt_def(!) -> mk_adt_def
- external_constraints
- intern_external_constraints -> mk_external_constraints
Other
- type_list
- mk_type_list -> mk_type_list_from_iter
- intern_type_list -> mk_type_list
- tup
- mk_tup -> mk_tup_from_iter
- intern_tup -> mk_tup
Remove type-traversal trait aliases
#107924 moved the type traversal (folding and visiting) traits into the type library, but created trait aliases in `rustc_middle` to minimise both the API churn for trait consumers and the arising boilerplate. As mentioned in that PR, an alternative approach of defining subtraits with blanket implementations of the respective supertraits was also considered at that time but was ruled out as not adding much value.
Unfortunately, it has since emerged that rust-analyzer has difficulty with these trait aliases at present, resulting in a degraded contributor experience (see the recent [r-a has become useless](https://rust-lang.zulipchat.com/#narrow/stream/182449-t-compiler.2Fhelp/topic/r-a.20has.20become.20useless) topic on the #t-compiler/help Zulip stream).
This PR removes the trait aliases, and accordingly the underlying type library traits are now used directly; they are parameterised by `TyCtxt<'tcx>` rather than just the `'tcx` lifetime, and imports have been updated to reflect the fact that the trait aliases' explicitly named traits are no longer automatically brought into scope. These changes also roll back the (no longer required) workarounds to #107747 that were made in b409329c62.
Since this PR is just a find+replace together with the changes necessary for compilation & tidy to pass, it's currently just one mega-commit. Let me know if you'd like it broken up.
r? `@oli-obk`
Extend `CodegenBackend` trait with a function returning the translation
resources from the codegen backend, which can be added to the complete
list of resources provided to the emitter.
Signed-off-by: David Wood <david.wood@huawei.com>
Instead of loading the Fluent resources for every crate in
`rustc_error_messages`, each crate generates typed identifiers for its
own diagnostics and creates a static which are pulled together in the
`rustc_driver` crate and provided to the diagnostic emitter.
Signed-off-by: David Wood <david.wood@huawei.com>
...and remove it from `PointeeInfo`, which isn't meant for this.
There are still various places (marked with FIXMEs) that assume all pointers
have the same size and alignment. Fixing this requires parsing non-default
address spaces in the data layout string, which will be done in a followup.
Use struct types during codegen in fewer places
This makes it easier to use cg_ssa from a backend like Cranelift that doesn't have any struct types at all. After this PR struct types are still used for function arguments and return values. Removing those usages is harder but should still be doable.
Add LLVM KCFI support to the Rust compiler
This PR adds LLVM Kernel Control Flow Integrity (KCFI) support to the Rust compiler. It initially provides forward-edge control flow protection for operating system kernels for Rust-compiled code only, by aggregating function pointers in groups identified by their return and parameter types. (See llvm/llvm-project@cff5bef.)
Forward-edge control flow protection for C- or C++- and Rust-compiled code "mixed binaries" (i.e., for when C- or C++- and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project by identifying C char and integer type uses at the time types are encoded (see Type metadata in the design document in the tracking issue #89653).
LLVM KCFI can be enabled with `-Zsanitizer=kcfi`.
Thank you again, `@bjorn3,` `@eddyb,` `@nagisa,` and `@ojeda,` for all the help!
Co-authored-by: bjorn3 <17426603+bjorn3@users.noreply.github.com>
This ensures that the error is printed even for unused variables,
as well as unifying the handling between the LLVM and GCC backends.
This also fixes unusual behavior around exported Rust-defined variables
with linkage attributes. With the previous behavior, it appears to be
impossible to define such a variable such that it can actually be imported
and used by another crate. This is because on the importing side, the
variable is required to be a pointer, but on the exporting side, the
type checker rejects static variables of pointer type because they do
not implement `Sync`. Even if it were possible to import such a type, it
appears that code generation on the importing side would add an unexpected
additional level of pointer indirection, which would break type safety.
This highlighted that the semantics of linkage on Rust-defined variables
is different to linkage on foreign items. As such, we now model the
difference with two different codegen attributes: linkage for Rust-defined
variables, and import_linkage for foreign items.
This change gives semantics to the test
src/test/ui/linkage-attr/auxiliary/def_illtyped_external.rs which was
previously expected to fail to compile. Therefore, convert it into a
test that is expected to successfully compile.
The update to the GCC backend is speculative and untested.
Prefer doc comments over `//`-comments in compiler
Doc comments are generally nicer: they show up in the documentation, they are shown in IDEs when you hover other mentions of items, etc. Thus it makes sense to use them instead of `//`-comments.
Improve generating Custom entry function
This commit is aimed at making compiler-generated entry functions (basically just C `main` right now) more generic so other targets can do similar things for custom entry points. This was initially implemented as part of https://github.com/rust-lang/rust/pull/100316.
Currently, this moves the entry function name and calling convention to the target spec.
Signed-off-by: Ayush Singh <ayushsingh1325@gmail.com>
Fix some misleading target feature aliases
This is the first half of a fix for #100752. It looks like these aliases were added in #78361 and slipped under the radar, as these features are not AVX512. These features _do_ add AVX512 instructions when used _in combination_ with AVX512F, but without AVX512F, these features still provide 128-bit and 256-bit vector instructions. A user might be misled into thinking these features imply AVX512F (which is true of the actual AVX512 features). This PR allows using the names as defined by LLVM, which matches Intel documentation.
A future PR should change the `std::arch` intrinsics to use these names, and finally remove these aliases from rustc.
r? ```@workingjubilee```
cc ```@Amanieu```
For the next commit, `FunctionCx::codegen_*_terminator` need to take a
`&mut Bx` instead of consuming a `Bx`. This triggers a cascade of
similar changes across multiple functions. The resulting code is more
concise and replaces many `&mut bx` expressions with `bx`.
Perform simple scalar replacement of aggregates (SROA) MIR opt
This is a re-open of https://github.com/rust-lang/rust/pull/85796
I copied the debuginfo implementation (first commit) from `@eddyb's` own SROA PR.
This pass replaces plain field accesses by simple locals when possible.
To be eligible, the replaced locals:
- must not be enums or unions;
- must not be used whole;
- must not have their address taken.
The storage and deinit statements are duplicated on each created local.
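A conceptual before/after of what the pass does, expressed in surface Rust rather than MIR:
```rust
struct Pair {
    a: u32,
    b: u32,
}

fn before(x: u32, y: u32) -> u32 {
    // A local aggregate whose fields are only accessed individually...
    let p = Pair { a: x, b: y };
    p.a + p.b
}

fn after(x: u32, y: u32) -> u32 {
    // ...can be replaced by one local per field.
    let p_a = x;
    let p_b = y;
    p_a + p_b
}

fn main() {
    assert_eq!(before(1, 2), after(1, 2));
}
```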
cc `@tmiasko` who reviewed the former version of this PR.
interpret: support for per-byte provenance
Also factors the provenance map into its own module.
The third commit does the same for the init mask. I can move it in a separate PR if you prefer.
Fixes https://github.com/rust-lang/miri/issues/2181
r? `@oli-obk`
Moved the `type_array` function to the `rustc_codegen_ssa::BaseTypeMethods` trait.
This allows using the normal `alloca` function to create arrays, as suggested in
https://github.com/rust-lang/rust/pull/104022.
Signed-off-by: Ayush Singh <ayushsingh1325@gmail.com>
The new implementation doesn't use weak lang items and instead changes
`#[alloc_error_handler]` to an attribute macro just like
`#[global_allocator]`.
The attribute will generate the `__rg_oom` function which is called by
the compiler-generated `__rust_alloc_error_handler`. If no `__rg_oom`
function is defined in any crate then the compiler shim will call
`__rdl_oom` in the alloc crate which will simply panic.
This also fixes link errors with `-C link-dead-code` with
`default_alloc_error_handler`: `__rg_oom` was previously defined in the
alloc crate and would attempt to reference the `oom` lang item, even if
it didn't exist. This worked as long as `__rg_oom` was excluded from
linking since it was not called.
This is a prerequisite for the stabilization of
`default_alloc_error_handler` (#102318).
Migrate rustc_codegen_gcc to SessionDiagnostics
As part of #100717 this PR migrates diagnostics to `SessionDiagnostics` for the `rustc_codegen_gcc` crate.
``@rustbot`` label +A-translation
Move the cast_float_to_int fallback code to GCC
Now that we require at least LLVM 13, that codegen backend is always
using its intrinsic `fptosi.sat` and `fptoui.sat` conversions, so it
doesn't need the manual implementation. However, the GCC backend still
needs it, so we can move all of that code down there.
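For context, these are the saturating float-to-int `as`-cast semantics that the `fptosi.sat`/`fptoui.sat` intrinsics (or the manual fallback now living in the GCC backend) implement:
```rust
fn main() {
    // Out-of-range and NaN inputs saturate/clamp instead of being UB.
    assert_eq!(f32::INFINITY as u8, 255);
    assert_eq!(-1.5_f32 as u8, 0);
    assert_eq!(f32::NAN as u8, 0);
    assert_eq!(300.7_f32 as u8, 255);
}
```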
interpret: make read-pointer-as-bytes a CTFE-only error with extra information
Next step in the reaction to https://github.com/rust-lang/rust/issues/99923. Also teaches Miri to implicitly strip provenance in more situations when transmuting pointers to integers, which fixes https://github.com/rust-lang/miri/issues/2456.
Pointer-to-int transmutation during CTFE now produces a message like this:
```
= help: this code performed an operation that depends on the underlying bytes representing a pointer
= help: the absolute address of a pointer is not known at compile-time, so such operations are not supported
```
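A minimal example of the kind of const code this is about; it is rejected during const evaluation with help text like the above (exact wording may differ):
```rust
// This constant fails to evaluate: the pointer's absolute address is not
// known at compile time, so it cannot be turned into an integer.
const ADDR: usize = unsafe { std::mem::transmute::<&i32, usize>(&42) };

fn main() {
    println!("{ADDR}");
}
```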
r? ``@oli-obk``
Add pointer masking convenience functions
This PR adds the following public API:
```rust
impl<T: ?Sized> *const T {
fn mask(self, mask: usize) -> *const T;
}
impl<T: ?Sized> *mut T {
fn mask(self, mask: usize) -> *const T;
}
// mod intrinsics
fn mask<T>(ptr: *const T, mask: usize) -> *const T
```
This is equivalent to `ptr.map_addr(|a| a & mask)` but also uses a cool LLVM intrinsic.
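A small nightly sketch of the intended use, assuming the `ptr_mask` feature gate added here; the common application is stripping tag bits stored in a pointer's low alignment bits.
```rust
#![feature(ptr_mask)]
use std::mem::align_of;

fn main() {
    let x = 42u64;
    let ptr: *const u64 = &x;
    // Clear the low alignment bits; `ptr` is already aligned, so this is a
    // no-op here, but in real code those bits might carry a tag to strip.
    let masked = ptr.mask(!(align_of::<u64>() - 1));
    assert_eq!(masked, ptr);
}
```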
Proposed in https://github.com/rust-lang/rust/pull/95643#issuecomment-1121562352
cc `@Gankra` `@scottmcm` `@RalfJung`
r? rust-lang/libs-api
Currently they try to be very precise. But they are wrong, i.e. they
don't match what's happening in the loop below. This code isn't hot
enough for it to matter that much.
Because `PassMode::Cast` is by far the largest variant, but is
relatively rare.
This requires making `PassMode` not impl `Copy`, and `Clone` is no
longer necessary. This causes lots of sigil adjusting, but nothing very
notable.
This avoids monomorphizing all linker code for each codegen backend and
will allow passing in extra information to the archive builder from the
codegen backend.
Enable raw-dylib for bin crates
Fixes #93842
When `raw-dylib` is used in a `bin` crate, we need to collect all of the `raw-dylib` functions, generate the import library and add that to the linker command line.
I also changed the tests so that 1) the C++ DLLs are created after the Rust DLLs, so there is no chance of accidentally using them in the Rust linking process, and 2) generating import libraries is disabled when building with MSVC.
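For reference, this is the kind of `bin`-crate usage that now works (Windows-only; at the time of this PR the `raw_dylib` feature gate was still required):
```rust
// Bind directly to kernel32.dll without needing an import library on disk.
#[link(name = "kernel32", kind = "raw-dylib")]
extern "system" {
    fn GetCurrentProcessId() -> u32;
}

fn main() {
    println!("pid = {}", unsafe { GetCurrentProcessId() });
}
```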
Add fine-grained LLVM CFI support to the Rust compiler
This PR improves the LLVM Control Flow Integrity (CFI) support in the Rust compiler by providing forward-edge control flow protection for Rust-compiled code only, by aggregating function pointers in groups identified by their return and parameter types.
Forward-edge control flow protection for C- or C++- and Rust-compiled code "mixed binaries" (i.e., for when C- or C++- and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project by identifying C char and integer type uses at the time types are encoded (see Type metadata in the design document in the tracking issue https://github.com/rust-lang/rust/issues/89653).
LLVM CFI can be enabled with `-Zsanitizer=cfi` and requires LTO (i.e., `-Clto`).
Thank you again, `@eddyb,` `@nagisa,` `@pcc,` and `@tmiasko` for all the help!
make vtable pointers entirely opaque
This implements the scheme discussed in https://github.com/rust-lang/unsafe-code-guidelines/issues/338: vtable pointers should be considered entirely opaque and not even readable by Rust code, similar to function pointers.
- We have a new kind of `GlobalAlloc` that symbolically refers to a vtable.
- Miri uses that kind of allocation when generating a vtable.
- The codegen backends, upon encountering such an allocation, call `vtable_allocation` to obtain an actually dataful allocation for this vtable.
- We need new intrinsics to obtain the size and align from a vtable (for some `ptr::metadata` APIs), since direct accesses are UB now.
I had to touch quite a bit of code that I am not very familiar with, so some of this might not make much sense...
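For a sense of the user-facing surface backed by the new intrinsics, here is a nightly sketch using the `ptr::metadata` APIs mentioned above:
```rust
#![feature(ptr_metadata)]
use std::fmt::Debug;
use std::ptr;

fn main() {
    let value: &dyn Debug = &1234_u64;
    // Size/align queries on a trait object's vtable now go through intrinsics
    // instead of reading the (now opaque) vtable allocation directly.
    let meta = ptr::metadata(value as *const dyn Debug);
    assert_eq!(meta.size_of(), std::mem::size_of::<u64>());
    assert_eq!(meta.align_of(), std::mem::align_of::<u64>());
}
```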
r? `@oli-obk`
Keep unstable target features for asm feature checking
Inline assembly uses the target features to determine which registers
are available on the current target. However it needs to be able to
access unstable target features for this.
Fixes #99071
Use less string interning
This removes string interning in a couple of places where doing so won't result in perf improvements. I also switched one place to use pre-interned symbols.
Remove the source archive functionality of ArchiveWriter
We now build archives through strictly additive means rather than taking an existing archive and potentially subtracting parts. This is simpler and makes it easier to swap out the archive writer in https://github.com/rust-lang/rust/pull/97485.
Make `std::mem::needs_drop` accept `?Sized`
This change attempts to make `needs_drop` work with types like `[u8]` and `str`.
This enables code in types like `Arc<T>` that was not possible before, such as https://github.com/rust-lang/rust/pull/97676.
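A small example of what the relaxed bound allows:
```rust
use std::mem::needs_drop;

fn main() {
    // Unsized types can now be queried directly.
    assert!(!needs_drop::<[u8]>());
    assert!(!needs_drop::<str>());
    assert!(needs_drop::<[String]>());
}
```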
Add the intrinsic
```
declare {i8*, i1} @llvm.type.checked.load(i8* %ptr, i32 %offset, metadata %type)
```
This is used in the VFE optimization when lowering loads of functions
from vtables to LLVM IR. The `metadata` is used to map the function to
all vtables this function could belong to. This ensures that functions
from vtables that might be used somewhere won't get removed.
It looks like the last time had left some remaining `cfg`s -- which made me think
that the stage0 bump was actually successful. This brings us to a released 1.62
beta, though.
asm: Add a kreg0 register class on x86 which includes k0
Previously we only exposed a kreg register class which excludes the k0
register since it can't be used in many instructions. However k0 is a
valid register and we need to have a way of marking it as clobbered for
clobber_abi.
Fixes #94977
Allow self-profiler to only record potentially costly arguments when argument recording is turned on
As discussed [on zulip](https://rust-lang.zulipchat.com/#narrow/stream/247081-t-compiler.2Fperformance/topic/Identifying.20proc-macro.20slowdowns/near/277304909) with `@wesleywiser,` I'd like to record proc-macro expansions in the self-profiler, with some detailed data (per-expansion spans for example, to follow #95473).
At the same time, I'd also like to avoid doing expensive things when tracking a generic activity's arguments, if they were not specifically opted into the event filter mask, to allow the self-profiler to be used in hotter contexts.
This PR tries to offer:
- a way to ensure the argument-recording closure is only called in that situation, so that potentially costly arguments can still be recorded when needed, with the additional requirement that, if possible, it offers a way to record non-owned data without adding many `generic_activity_with_arg_{...}`-style methods. This led to the single `generic_activity_with_arg_recorder` entry point, whose closure parameter offers the new methods and can build costly arguments without disturbing the profiled piece of code (see the sketch after this list).
- some facilities/patterns for recording more rustc-specific data in this situation, without making `rustc_data_structures` (where the self-profiler is defined) depend on other rustc crates and causing circular dependencies: in particular, spans. They are quite tricky to turn into strings (if the default `Debug` impl output does not match the context one needs them for), and since I'd also like to avoid the allocation there when arg recording is turned off today, that has turned into another flexibility requirement for the API in this PR (separating the span-specific recording into an extension trait). **edit**: I've removed this from the PR so that it's easier to review, and opened https://github.com/rust-lang/rust/pull/95739.
- allow for extensibility in the future: other ways to record arguments, or additional data attached to them could be added in the future (e.g. recording the argument's name as well as its data).
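Here is a toy sketch of the closure-based pattern described in the first bullet (names are illustrative, not the actual rustc API): the argument-building closure only runs when argument recording is enabled, so potentially expensive argument construction costs nothing otherwise.
```rust
struct EventArgRecorder<'a> {
    args: &'a mut Vec<String>,
}

impl EventArgRecorder<'_> {
    fn record_arg(&mut self, arg: String) {
        self.args.push(arg);
    }
}

struct Profiler {
    record_args: bool,
    events: Vec<(String, Vec<String>)>,
}

impl Profiler {
    fn generic_activity_with_arg_recorder(
        &mut self,
        name: &str,
        record: impl FnOnce(&mut EventArgRecorder<'_>),
    ) {
        let mut args = Vec::new();
        if self.record_args {
            // Only pay for argument construction when the event filter asks for it.
            record(&mut EventArgRecorder { args: &mut args });
        }
        self.events.push((name.to_string(), args));
    }
}

fn main() {
    let mut profiler = Profiler { record_args: true, events: Vec::new() };
    profiler.generic_activity_with_arg_recorder("expand_proc_macro", |recorder| {
        recorder.record_arg(format!("my_macro at {}:{}", "lib.rs", 10));
    });
    assert_eq!(profiler.events[0].1.len(), 1);
}
```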
Some areas where I'd love feedback:
- the API and names: the `EventArgRecorder` and its method, for example, as well as the verbosity that comes from the increased flexibility.
- whether I should convert the existing `generic_activity_with_arg{s}` to just forward to `generic_activity_with_arg_recorder` + `recorder.record_arg` (or remove them altogether? probably not): I've used the new API in the one simple case I could find of allocating for an arg that may not be recorded, and the rest don't seem costly.
- [x] whether this API should panic if no arguments were recorded by the user-provided closure (like this PR currently does: it seems like an error to use an API dedicated to recording arguments but not call the methods to do so), or whether it should just record a generic activity without arguments.
- whether the `record_arg` function should be `#[inline(always)]`, like the `generic_activity_*` functions.
As mentioned, r? `@wesleywiser` following our recent discussion.
Implement -Z oom=panic
This PR removes the `#[rustc_allocator_nounwind]` attribute on `alloc_error_handler` which allows it to unwind with a panic instead of always aborting. This is then used to implement `-Z oom=panic` as per RFC 2116 (tracking issue #43596).
Perf and binary size tests show negligible impact.
This commit makes `AdtDef` use `Interned`. Much of the commit is tedious
changes to introduce getter functions. The interesting changes are in
`compiler/rustc_middle/src/ty/adt.rs`.
Clarify `Layout` interning.
`Layout` is another type that is sometimes interned, sometimes not, and
we always use references to refer to it so we can't take any advantage
of the uniqueness properties for hashing or equality checks.
This commit renames `Layout` as `LayoutS`, and then introduces a new
`Layout` that is a newtype around an `Interned<LayoutS>`. It also
interns more layouts than before. Previously layouts within layouts
(via the `variants` field) were never interned, but now they are. Hence
the lifetime on the new `Layout` type.
Unlike other interned types, these ones are in `rustc_target` instead of
`rustc_middle`. This reflects the existing structure of the code, which
does layout-specific stuff in `rustc_target` while `TyAndLayout` is
generic over the `Ty`, allowing the type-specific stuff to occur in
`rustc_middle`.
The commit also adds a `HashStable` impl for `Interned`, which was
needed. It hashes the contents, unlike the `Hash` impl which hashes the
pointer.
r? `@fee1-dead`
Currently some `Allocation`s are interned, some are not, and it's very
hard to tell at a use point which is which.
This commit introduces `ConstAllocation` for the known-interned ones,
which makes the division much clearer. `ConstAllocation::inner()` is
used to get the underlying `Allocation`.
In some places it's natural to use an `Allocation`, in some it's natural
to use a `ConstAllocation`, and in some places there's no clear choice.
I've tried to make things look as nice as possible, while generally
favouring `ConstAllocation`, which is the type that embodies more
information. This does require quite a few calls to `inner()`.
The commit also tweaks how `PartialOrd` works for `Interned`. The
previous code was too clever by half, building on `T: Ord` to make the
code shorter. That caused problems with deriving `PartialOrd` and `Ord`
for `ConstAllocation`, so I changed it to build on `T: PartialOrd`,
which is slightly more verbose but much more standard and avoided the
problems.
Direct users towards using Rust target feature names in CLI
This PR consists of a couple of changes on how we handle target features.
In particular there is a bug-fix wherein we avoid passing through features that aren't prefixed by `+` or `-` to LLVM. These appear to be causing LLVM to assert, which is pretty poor behaviour (and also makes it pretty clear we expect feature names to be prefixed).
The other commit, which I anticipate to be somewhat more controversial, outputs a warning when users specify an LLVM-specific, or otherwise unknown, feature name on the CLI. In those situations we request users to either replace it with a known Rust feature name (e.g. `bmi` -> `bmi1`) or file a feature request. I've a couple motivations for this: first of all, if users are specifying these features on the command line, I'm pretty confident there is also a need for these features to be usable via the `#[cfg(target_feature)]` machinery. And second, we're growing a fair number of backends recently, and having the ability to provide some sort of unified-ish interface in this place seems pretty useful to me.
Sponsored by: standard.ai
Partially move cg_ssa towards using a single builder
Not all codegen backends can handle hopping between blocks well. For example, Cranelift requires blocks to be terminated before switching to building a new block. Rust-gpu requires a `RefCell` to allow hopping between blocks, and cg_gcc currently has a buggy implementation of hopping between blocks. This PR reduces the number of cases where cg_ssa switches between blocks before they are finished and mostly fixes the block hopping in cg_gcc. (~~only `scalar_to_backend` doesn't handle it correctly yet in cg_gcc~~ fixed that one.)
`@antoyo` please review the cg_gcc changes.
The field is also renamed from `ident` to `name`. In most cases,
we don't actually need the `Span`. A new `ident` method is added
to `VariantDef` and `FieldDef`, which constructs the full `Ident`
using `tcx.def_ident_span()`. This method is used in the cases
where we actually need an `Ident`.
This makes incremental compilation properly track changes
to the `Span`, without all of the invalidations caused by storing
a `Span` directly via an `Ident`.