Fix codegen bug in "ptx-kernel" abi related to arg passing
I found a codegen bug in the nvptx ABI: arguments are passed as pointers ([see comment](https://github.com/rust-lang/rust/issues/38788#issuecomment-1048999928)), which is neither what the [ptx-interoperability doc](https://docs.nvidia.com/cuda/ptx-writers-guide-to-interoperability/) specifies nor how C/C++ does it. It will also almost always fail in practice, since device and host use different memory spaces on most hardware.
This PR fixes the bug and adds tests for passing structs to ptx kernels.
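For illustration, here is a minimal sketch (not taken from the PR's tests) of the kind of kernel signature affected; it assumes a nightly toolchain with the `abi_ptx` feature and the `nvptx64-nvidia-cuda` target, and all names are made up:

```rust
#![feature(abi_ptx)]
#![no_std]

#[repr(C)]
pub struct Pair {
    pub a: u32,
    pub b: u32,
}

// With the fix, `pair` is passed by value in the kernel's `.param` space, as
// the PTX interoperability guide specifies, instead of behind a pointer into
// host memory that the device cannot dereference.
#[no_mangle]
pub extern "ptx-kernel" fn sum_pair(pair: Pair, out: *mut u32) {
    unsafe { *out = pair.a + pair.b };
}

// Required because this is a #![no_std] library crate.
#[panic_handler]
fn panic(_: &core::panic::PanicInfo) -> ! {
    loop {}
}
```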
I observed that all nvptx assembly tests were [marked as ignore a long time ago](https://github.com/rust-lang/rust/pull/59752#issuecomment-501713428). I'm not sure whether the new one should be marked as ignore as well: it passed on my machine, but it might fail if ptx-linker is missing on the server. I guess this is outside the scope of this PR and should be looked at in a separate issue/PR.
I only fixed the nvptx64-nvidia-cuda target and not the potential code paths for the non-existent 32-bit target. Even though 32-bit nvptx is not a supported target, there is still some code under the hood supporting codegen for 32-bit PTX. I was advised to create an MCP to find out whether this code should be removed or updated.
Perhaps ``@RDambrosio016`` would be interested in taking a quick look at this.
asm: Add a kreg0 register class on x86 which includes k0
Previously we only exposed a kreg register class, which excludes the k0
register since it can't be used in many instructions. However, k0 is a
valid register and we need a way of marking it as clobbered for
clobber_abi.
Fixes #94977
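As a hedged sketch (not code from this PR) of what the new class is for: the main consequence is that `clobber_abi("C")` can now mark k0, like the other AVX-512 mask registers, as clobbered across a call.

```rust
use std::arch::asm;

extern "C" fn helper() {}

pub unsafe fn call_via_asm() {
    // clobber_abi("C") expands to clobbers for every register the C ABI treats
    // as call-clobbered; on x86-64 with AVX-512 that set includes k0, which is
    // only expressible once a register class containing k0 exists.
    asm!("call {}", sym helper, clobber_abi("C"));
}
```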
allow large Size again
This basically reverts most of https://github.com/rust-lang/rust/pull/80042, and instead does the panic in `bits()` with a `#[cold]` function to make sure it does not get inlined.
https://github.com/rust-lang/rust/pull/80042 added a comment about an invariant ("The top 3 bits are ALWAYS zero") that is not actually enforced, and if it were enforced that would be a problem for https://github.com/rust-lang/rust/pull/95388. So I think we should not have that invariant, and I adjusted the code accordingly.
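As a rough sketch of the pattern (illustrative names, not the actual compiler code), the panic is moved into an outlined `#[cold]` helper so the common path of `bits()` stays small and inlinable:

```rust
#[derive(Copy, Clone)]
pub struct Size {
    raw: u64, // size in bytes
}

// Outlined, never-inlined panic path: callers of `bits()` only pay for a call
// in the (practically unreachable) overflow case.
#[cold]
#[inline(never)]
fn bits_overflow_panic(bytes: u64) -> ! {
    panic!("Size::bits: {} bytes in bits doesn't fit in u64", bytes)
}

impl Size {
    #[inline]
    pub fn bits(self) -> u64 {
        // Multiplying by 8 can overflow for very large sizes; the `#[cold]`
        // helper keeps the panic machinery out of this inlined fast path.
        self.raw
            .checked_mul(8)
            .unwrap_or_else(|| bits_overflow_panic(self.raw))
    }
}
```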
r? `@oli-obk` Cc `@sivadeilra`
ARMv6K Horizon OS has_thread_local support
cc. ```@ian-h-chamberlain```
cc. ```@AzureMarker```
Being an ARM target, it has always had built-in support for `#[thread_local]`. This PR comes in just now because we were testing `std::thread` support with `thread_local_dtor`s. This will hopefully be the last PR for the target specification, unless more features are needed as time goes on.
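For context, a minimal sketch (not from this PR) of what `has_thread_local` enables, assuming a nightly toolchain with the `thread_local` feature:

```rust
#![feature(thread_local)]

use std::thread;

// Each OS thread gets its own copy of this static.
#[thread_local]
static mut COUNTER: u32 = 0;

fn bump() -> u32 {
    // No synchronization needed: every thread sees only its own COUNTER.
    unsafe {
        COUNTER += 1;
        COUNTER
    }
}

fn main() {
    assert_eq!(bump(), 1);
    // The spawned thread starts from its own zero-initialized COUNTER.
    let other = thread::spawn(bump).join().unwrap();
    assert_eq!(other, 1);
}
```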
Fold aarch64 feature +fp into +neon
Arm's FEAT_FP and FEAT_AdvSIMD describe the same thing on AArch64:
The Neon unit, which handles both floating point and SIMD instructions.
Moreover, a configuration for AArch64 must include both or neither.
Arm says "entirely proprietary" toolchains may omit floating point:
https://developer.arm.com/documentation/102374/0101/Data-processing---floating-point
In the Programmer's Guide for Armv8-A, Arm says AArch64 can have
both FP and Neon or neither in custom implementations:
https://developer.arm.com/documentation/den0024/a/AArch64-Floating-point-and-NEON
In "Bare metal boot code for Armv8-A", enabling Neon and FP
is just disabling the same trap flag:
https://developer.arm.com/documentation/dai0527/a
In an unlikely future where "Neon and FP" become unrelated,
we can add "[+-]fp" as its own feature flag.
Until then, we can simplify programming with Rust on AArch64 by
folding both into "[+-]neon", which is valid as it supersets both.
"[+-]neon" is retained for niche uses such as firmware, kernels,
"I just hate floats", and so on.
I am... pretty sure no one is relying on this.
An argument could be made that, as we are not an "entirely proprietary" toolchain, we should not support AArch64 without floats at all. I think that's a bit excessive. However, I want to recognize the intent: programming for AArch64 should be simplified where possible. For x86-64, programmers regularly set up illegal feature configurations because they are hard to understand (see https://github.com/rust-lang/rust/issues/89586). And per the above notes, plus the discussion in https://github.com/rust-lang/rust/issues/86941, there should be no real use cases for leaving these features split: the two should in fact always go together.
- Fixes rust-lang/rust#95002.
- Fixes rust-lang/rust#95064.
- Fixes rust-lang/rust#95122.
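As a hedged illustration (not part of the PR) of what the fold means for users, a single `neon` check now covers both scalar floating point and Advanced SIMD on AArch64:

```rust
#[cfg(target_arch = "aarch64")]
fn describe_fpu() -> &'static str {
    if cfg!(target_feature = "neon") {
        // The default for AArch64 targets: FP and Advanced SIMD are both usable.
        "FP + Neon available"
    } else {
        // Built with e.g. `-C target-feature=-neon` for firmware/kernel-style code.
        "soft-float only"
    }
}
```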
`Layout` is another type that is sometimes interned, sometimes not, and
we always use references to refer to it so we can't take any advantage
of the uniqueness properties for hashing or equality checks.
This commit renames `Layout` as `LayoutS`, and then introduces a new
`Layout` that is a newtype around an `Interned<LayoutS>`. It also
interns more layouts than before. Previously layouts within layouts
(via the `variants` field) were never interned, but now they are. Hence
the lifetime on the new `Layout` type.
Unlike other interned types, these ones are in `rustc_target` instead of
`rustc_middle`. This reflects the existing structure of the code, which
does layout-specific stuff in `rustc_target` while `TyAndLayout` is
generic over the `Ty`, allowing the type-specific stuff to occur in
`rustc_middle`.
The commit also adds a `HashStable` impl for `Interned`, which was
needed. It hashes the contents, unlike the `Hash` impl which hashes the
pointer.
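For readers unfamiliar with the pattern, here is a standalone sketch (simplified, outside the compiler; `Interned` below is a stand-in for the real `rustc_data_structures` type) of a newtype wrapping an interned value so that equality and `Hash` can use the pointer:

```rust
use std::hash::{Hash, Hasher};

// Simplified stand-in for the compiler's layout data.
#[derive(PartialEq, Eq, Hash)]
struct LayoutS {
    size: u64,
    align: u64,
    // ... fields like `variants`, `abi`, etc.
}

// Wraps a reference to a value that is guaranteed to be interned (allocated
// exactly once), so pointer identity implies value equality.
struct Interned<'a, T>(&'a T);

impl<'a, T> Copy for Interned<'a, T> {}
impl<'a, T> Clone for Interned<'a, T> {
    fn clone(&self) -> Self {
        *self
    }
}
impl<'a, T> PartialEq for Interned<'a, T> {
    fn eq(&self, other: &Self) -> bool {
        std::ptr::eq(self.0, other.0) // sound only because values are interned
    }
}
impl<'a, T> Eq for Interned<'a, T> {}
impl<'a, T> Hash for Interned<'a, T> {
    fn hash<H: Hasher>(&self, state: &mut H) {
        (self.0 as *const T).hash(state) // hash the pointer, not the contents
    }
}

// The new `Layout` is a cheap, Copy handle around the interned data; the
// lifetime comes from the reference inside `Interned`.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
struct Layout<'a>(Interned<'a, LayoutS>);
```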
add address sanitizer for android
We have been using asan to debug the rust/cpp/c mixed android application in production for months: recompile the rust library with a patched rustc, and everything just works fine. The patch is really small thanks to `@nagisa`'s refactoring in https://github.com/rust-lang/rust/pull/81866
r? `@nagisa`
Add well known values to `--check-cfg` implementation
This pull-request adds well known values for the well known names via `--check-cfg=values()`.
[RFC 3013: Checking conditional compilation at compile time](https://rust-lang.github.io/rfcs/3013-conditional-compilation-checking.html#checking-conditional-compilation-at-compile-time) doesn't define this at all, but this seems a nice improvement.
The activation is done by an empty `values()` (new syntax), similar to `names()`, except that `names(foo)` also activates well known names while `values(aa, "aa", "kk")` would not.
As stated, this uses a different activation logic because well known values for the well known names are not always sufficient.
In fact this is problematic for every `target_*` cfg because of non-builtin targets, as the current implementation uses the built-in targets to create the list of well known values.
The implementation is straightforward: first we gather (if necessary) all the values (lazily or not), and then we apply them.
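A hedged sketch of the intended effect (the invocation in the comment assumes the unstable `--check-cfg` syntax described above and may differ from the final interface):

```rust
// rustc --check-cfg 'names()' --check-cfg 'values()' -Z unstable-options lib.rs
//
// With well known names and values enabled, a misspelled value of a built-in
// cfg is reported even though the condition itself parses fine.

#[cfg(target_os = "linux")] // ok: "linux" is a well known value of `target_os`
pub fn on_linux() {}

#[cfg(target_os = "linxu")] // warns: "linxu" is not a well known value of `target_os`
pub fn typo() {}
```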
r? ```@petrochenkov```
ARM: Only allow using d16-d31 with asm! when supported by the target
Support can be determined by checking for the "d32" LLVM feature.
r? `@nagisa`
The previous approach of checking for the reserve-r9 target feature
didn't actually work because LLVM only sets this feature very late when
initializing the per-function subtarget.
Adopt let else in more places
Continuation of #89933, #91018, #91481, #93046, #93590, #94011.
I have extended my clippy lint to also recognize tuple passing and match statements. The diff caused by fixing it is well over a thousand lines, so I split it up into multiple pull requests to make reviewing easier. This is the biggest of these PRs and handles the changes outside of rustdoc, rustc_typeck, rustc_const_eval, and rustc_trait_selection, which were handled in PRs #94139, #94142, #94143, #94144.
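As a rough illustration of the kind of change in the diff (a made-up example, not taken from the compiler), a `match` that only extracts a value and otherwise diverges becomes a `let ... else`:

```rust
use std::collections::HashMap;

fn lookup(map: &HashMap<String, u32>, key: &str) -> u32 {
    // Before:
    //
    //     let value = match map.get(key) {
    //         Some(v) => v,
    //         None => return 0,
    //     };
    //
    // After: the happy path stays unindented and the fallback is explicit.
    let Some(value) = map.get(key) else {
        return 0;
    };
    *value
}
```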
asm: Allow the use of r8-r14 as clobbers on Thumb1
Previously these were entirely disallowed, except for r11 which was allowed by accident.
cc `@hudson-ayers`