Commit Graph

1886 Commits

Author SHA1 Message Date
Ralf Jung
95582e6fcb codegen: memmove/memset cannot be non-temporal 2024-05-09 18:59:00 +02:00
Matthew Maurer
4d397d33da Adjust 64-bit ARM data layouts for LLVM update
LLVM has updated data layouts to specify `Fn32` on 64-bit ARM to avoid
C++ accidentally underaligning functions when trying to comply with
member function ABIs.

This should only affect Rust in cases where we had a similar bug (I
don't believe we have one), but our data layout must match to generate
code.

As a compatibility adaptation, if LLVM is not yet version 19, `Fn32`
is stripped from the data layout.

See llvm/llvm-project#90415
2024-05-06 16:53:17 +00:00
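The compatibility shim can be pictured as a small rewrite of the target's data-layout string. The helper below is a hypothetical, self-contained sketch of that idea (the function name and the example layout string are made up for illustration); the real adjustment lives in rustc's target/LLVM glue.

```rust
// Hypothetical sketch, not the actual rustc code: drop the `Fn32` component
// from a target data-layout string when the linked LLVM is older than 19
// and would reject it.
fn adjust_data_layout(data_layout: &str, llvm_major: u32) -> String {
    if llvm_major >= 19 {
        return data_layout.to_string();
    }
    // Data-layout components are '-'-separated; remove any `Fn...` entry.
    data_layout
        .split('-')
        .filter(|component| !component.starts_with("Fn"))
        .collect::<Vec<_>>()
        .join("-")
}

fn main() {
    // The layout string itself is made up for the example.
    let dl = "e-m:e-i64:64-i128:128-n32:64-S128-Fn32";
    assert_eq!(adjust_data_layout(dl, 18), "e-m:e-i64:64-i128:128-n32:64-S128");
    assert_eq!(adjust_data_layout(dl, 19), dl);
}
```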
bors
befabbc9e5 Auto merge of #124675 - matthiaskrgr:rollup-x6n79ua, r=matthiaskrgr
Rollup of 7 pull requests

Successful merges:

 - #122492 (Implement ptr_as_ref_unchecked)
 - #123815 (Fix cannot usage in time.rs)
 - #124059 (default_alloc_error_hook: explain difference to default __rdl_oom in alloc)
 - #124510 (Add raw identifier in a typo suggestion)
 - #124555 (coverage: Clean up creation of MC/DC condition bitmaps)
 - #124593 (Describe and use CStr literals in CStr and CString docs)
 - #124630 (CI: remove `env-x86_64-apple-tests` YAML anchor)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-05-03 19:46:04 +00:00
Matthias Krüger
613bccc4ca
Rollup merge of #124555 - Zalathar:init-coverage, r=nnethercote
coverage: Clean up creation of MC/DC condition bitmaps

This PR improves the code for creating and initializing [MC/DC](https://en.wikipedia.org/wiki/Modified_condition/decision_coverage) condition bitmap variables, as introduced by #123409 and modified by #124255.

- The condition bitmap variables are now created eagerly at the start of per-function codegen, via a new `init_coverage` method in `CoverageInfoBuilderMethods`. This avoids having to retroactively create the bitmaps while doing codegen for an individual coverage statement.
- As a result, we can now create and initialize those bitmaps using existing safe APIs, instead of having to perform our own unsafe call to `llvm::LLVMBuildAlloca`.
- This PR also tweaks the way we count the number of condition bitmaps needed, by tracking the total number of bitmaps needed (max depth + 1), instead of only tracking the maximum depth. This reduces the potential for subtle off-by-one confusion.
2024-05-03 20:33:46 +02:00
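To make the eager-setup idea above concrete, here is a minimal stand-alone model (not the rustc `CoverageInfoBuilderMethods` API): all condition bitmaps are allocated and zeroed once at function start, and later coverage statements only index into them by decision depth.

```rust
// Minimal stand-alone model (not the rustc API) of eager bitmap setup:
// every MC/DC condition bitmap is created and zeroed at function start,
// so individual coverage statements only update existing state.
struct FunctionCoverage {
    condition_bitmaps: Vec<u32>, // one bitmap per decision depth
}

impl FunctionCoverage {
    // Analogous in spirit to the new `init_coverage` hook: runs once,
    // up front, before any coverage statement is lowered.
    fn init_coverage(num_condition_bitmaps: usize) -> Self {
        Self { condition_bitmaps: vec![0; num_condition_bitmaps] }
    }

    // A later `CondBitmapUpdate`-style statement just indexes by depth.
    fn update_condition_bitmap(&mut self, decision_depth: usize, condition: u32, value: bool) {
        if value {
            self.condition_bitmaps[decision_depth] |= 1 << condition;
        }
    }
}

fn main() {
    // A maximum decision depth of 2 means three bitmaps (max depth + 1).
    let mut coverage = FunctionCoverage::init_coverage(3);
    coverage.update_condition_bitmap(0, 1, true);
    assert_eq!(coverage.condition_bitmaps[0], 0b10);
}
```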
bors
0d7b2fb797 Auto merge of #123441 - saethlin:fixed-len-file-names, r=oli-obk
Stabilize the size of incr comp object file names

The current implementation does not produce stable-length paths, and we create the paths in a way that makes our allocation behavior nondeterministic. I think `@eddyb` fixed a number of other cases like this in the past, and this PR fixes another one. Whether that actually matters I have no idea, but we still have bimodal behavior in rustc-perf and the non-uniformity in `find` and `ls` was bothering me.

I've also removed the truncation of the mangled CGU names. Before this PR incr comp paths look like this:
```
target/debug/incremental/scratch-38izrrq90cex7/s-gux6gz0ow8-1ph76gg-ewe1xj434l26w9up5bedsojpd/261xgo1oqnd90ry5.o
```
And after, they look like this:
```
target/debug/incremental/scratch-035omutqbfkbw/s-gux6borni0-16r3v1j-6n64tmwqzchtgqzwwim5amuga/55v2re42sztc8je9bva6g8ft3.o
```

On the one hand, I'm sure this will break some people's builds because they're on Windows and only a few bytes from the path length limit. But if we're that seriously worried about the length of our file names, I have some other ideas on how to make them smaller. And last time I deleted some hash truncations from the compiler, there was a huge drop in the number of incremental compilation ICEs that were reported: https://github.com/rust-lang/rust/pull/110367

---

Upon further reading, this PR actually fixes a bug. This comment says the CGU names are supposed to be a fixed-length hash, and before this PR they aren't: ca7d34efa9/compiler/rustc_monomorphize/src/partitioning.rs (L445-L448)
2024-05-03 17:41:48 +00:00
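A hedged illustration of what "stable-length" means in practice: render the hash at a fixed width so every object-file stem has the same number of characters. The zero-padded hex encoding below is an assumption for the sketch, not the base-N alphabet rustc actually uses.

```rust
// Illustrative only: fixed-length names fall out of rendering the hash at a
// fixed width instead of trimming leading zeros. Zero-padded hex is an
// assumption here, not the encoding rustc actually uses.
fn stable_file_stem(cgu_hash: u128) -> String {
    format!("{cgu_hash:032x}") // always 32 characters, whatever the value
}

fn main() {
    assert_eq!(stable_file_stem(0x1f).len(), 32);
    assert_eq!(stable_file_stem(u128::MAX).len(), 32);
}
```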
Waffle Lapkin
698d7a031e Inline & delete Ty::new_unit, since it's just a field access 2024-05-02 17:49:23 +02:00
Zalathar
de972b7321 coverage: Replace max_decision_depth with num_condition_bitmaps
This clearly distinguishes individual decision-depth indices from the total
number of condition bitmaps to allocate.
2024-05-01 09:55:22 +10:00
Zalathar
0b3a47900e coverage: Set up MC/DC bitmaps without additional unsafe code
Because this now always takes place at the start of the function, we can just
use the normal `alloca` method and then initialize each bitmap immediately.

This patch also moves bitmap setup out of the `mcdc_parameters` method, because
there is no longer any particular reason for it to be there.
2024-05-01 09:55:22 +10:00
Zalathar
52d608b560 coverage: Eagerly do start-of-function codegen for coverage 2024-05-01 09:06:53 +10:00
Matthias Krüger
784316eadc
Rollup merge of #124511 - nnethercote:rm-extern-crates, r=fee1-dead
Remove many `#[macro_use] extern crate foo` items

This requires the addition of more `use` items, which often make the code more verbose. But they also make the code easier to read, because `#[macro_use]` obscures where macros are defined.

r? `@fee1-dead`
2024-04-30 15:04:08 +02:00
bors
7a58674259 Auto merge of #124255 - RenjiSann:renji/mcdc-nested-expressions, r=Zalathar
MCDC coverage: support nested decision coverage

#123409 provided the initial MCDC coverage implementation.

As referenced in #124144, it does not currently support "nested" decisions, like the following example:

```rust
fn nested_if_in_condition(a: bool, b: bool, c: bool) {
    if a && if b || c { true } else { false } {
        say("yes");
    } else {
        say("no");
    }
}
```

Note that there is an if-expression (`if b || c ...`) embedded inside a boolean expression in the decision of an outer if-expression.

This PR proposes a workaround for these cases by introducing a decision context stack and by handling several `temporary condition bitmaps` instead of just one.
When instrumenting boolean expressions, if the current node is a leaf condition (i.e. not a `||`/`&&` logical operator nor a `!` not operator), we insert a new decision context, such that if there are more boolean expressions inside the condition, they are handled as separate expressions.

On the codegen LLVM side, we allocate as many `temp_cond_bitmap`s as necessary to handle the maximum encountered decision depth.
2024-04-29 11:54:49 +00:00
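The decision-context-stack idea can be modeled in a few lines. The types below are toy stand-ins (not the compiler's `MCDCDecisionCtx` internals); the point is that entering a nested decision pushes a context, the stack depth selects which temporary condition bitmap to use, and the number of bitmaps needed is the maximum depth plus one.

```rust
// Toy model of the decision context stack (stand-in types, not the
// compiler's): nested decisions push a context, and the stack depth picks
// which temporary condition bitmap a leaf condition writes into.
struct MCDCDecisionCtx; // per-decision bookkeeping would live here

#[derive(Default)]
struct MCDCState {
    decision_stack: Vec<MCDCDecisionCtx>,
}

impl MCDCState {
    fn enter_decision(&mut self) -> usize {
        let decision_depth = self.decision_stack.len();
        self.decision_stack.push(MCDCDecisionCtx);
        decision_depth // doubles as the condition-bitmap index
    }

    fn exit_decision(&mut self) {
        self.decision_stack.pop();
    }
}

fn num_condition_bitmaps(max_decision_depth: usize) -> usize {
    max_decision_depth + 1 // depth is zero-based, hence the +1
}

fn main() {
    let mut state = MCDCState::default();
    let outer = state.enter_decision(); // the outer `a && (...)` decision
    let inner = state.enter_decision(); // the nested `b || c` decision
    assert_eq!((outer, inner), (0, 1));
    state.exit_decision();
    state.exit_decision();
    assert_eq!(num_condition_bitmaps(1), 2);
}
```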
Dorian Péron
60ca9b6e29 mcdc-coverage: Get decision_depth from THIR lowering
Use decision context stack to handle nested decisions:
- Introduce MCDCDecisionCtx
- Use a stack of MCDCDecisionCtx to handle nested decisions
2024-04-29 09:13:40 +00:00
Dorian Péron
ae8c023983 mcdc-coverage: Add decision_depth field in structs
Add decision_depth field to TVBitmapUpdate/CondBitmapUpdate statements
Add decision_depth field to BcbMappingKinds MCDCBranch and MCDCDecision
Add decision_depth field to MCDCBranchSpan and MCDCDecisionSpan
2024-04-29 09:13:40 +00:00
Dorian Péron
3c2f48ede9 mcdc-coverage: Add possibility for codegen llvm to handle several condition bitmaps 2024-04-29 08:41:15 +00:00
Nicholas Nethercote
4814fd0a4b Remove extern crate rustc_macros from numerous crates. 2024-04-29 10:21:54 +10:00
bors
284f94f9c0 Auto merge of #121298 - nikic:writable, r=cuviper
Set writable and dead_on_unwind attributes for sret arguments

Set the `writable` and `dead_on_unwind` attributes for `sret` arguments. This allows call slot optimization to remove more memcpy's.

See https://llvm.org/docs/LangRef.html#parameter-attributes for the specification of these attributes. In short, the statement we're making here is that:

 * The return slot is writable.
 * The return slot will not be read if the function unwinds.

Fixes https://github.com/rust-lang/rust/issues/90595.
2024-04-25 04:31:56 +00:00
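For context, here is an assumed Rust example of the pattern these attributes help with: a function whose return value is passed indirectly through an `sret` pointer. With the slot known to be `writable` and `dead_on_unwind`, LLVM's call slot optimization can construct the value directly in the caller's slot instead of copying from a temporary.

```rust
// Assumed example: `Big` is returned indirectly via an `sret` pointer on
// common ABIs. Marking that pointer `writable` and `dead_on_unwind` lets
// call slot optimization build the value in the caller's return slot
// instead of writing a temporary and then copying it.
pub struct Big {
    pub words: [u64; 32],
}

#[inline(never)]
pub fn make_big(seed: u64) -> Big {
    let mut words = [0u64; 32];
    for (i, w) in words.iter_mut().enumerate() {
        *w = seed.wrapping_add(i as u64);
    }
    Big { words }
}

fn main() {
    // `b`'s storage is what the hidden sret argument points at.
    let b = make_big(7);
    assert_eq!(b.words[3], 10);
}
```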
Nikita Popov
3695af697e Set writable and dead_on_unwind attributes for sret arguments 2024-04-25 11:43:47 +09:00
bors
29a56a3b1c Auto merge of #122053 - erikdesjardins:alloca, r=nikic
Stop using LLVM struct types for alloca

The alloca type has no semantic meaning, only the size (and alignment, but we specify it explicitly) matter. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout. It is likely that a future LLVM version will change to an untyped alloca representation.

Split out from #121577.

r? `@ghost`
2024-04-24 03:00:44 +00:00
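A small illustrative sketch of the before/after shape of the alloca types (plain string formatting, not codegen code): the struct type carried layout information LLVM didn't need, while `[N x i8]` plus an explicit alignment says exactly what is required.

```rust
// Plain string formatting, not codegen code: just contrasts the textual IR
// types before and after the change described above.
fn struct_alloca_type(field_types: &[&str]) -> String {
    format!("{{ {} }}", field_types.join(", "))
}

fn byte_array_alloca_type(size_in_bytes: u64) -> String {
    format!("[{size_in_bytes} x i8]")
}

fn main() {
    // Before: the alloca type leaned on LLVM's struct layout.
    println!("%x = alloca {}, align 8", struct_alloca_type(&["i32", "i64"]));
    // After: only the byte size matters; alignment stays explicit.
    println!("%x = alloca {}, align 8", byte_array_alloca_type(16));
}
```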
Matthias Krüger
918304b190
Rollup merge of #124003 - WaffleLapkin:dellvmization, r=scottmcm,RalfJung,antoyo
Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics)

This implements https://github.com/rust-lang/compiler-team/issues/693 minus what was implemented in #123226.

Note: I decided to _not_ change `shl`/... builder methods, as it just doesn't seem worth it.

r? ``@scottmcm``
2024-04-23 20:17:51 +02:00
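The signature change can be sketched with ordinary functions (these are illustrative, not the real `core::intrinsics` declarations): bit-counting operations now return `u32` regardless of the operand type, matching the stable wrappers such as `u64::leading_zeros`.

```rust
// Illustrative signatures, not the real `core::intrinsics` items: the
// result type moves from the operand type to a plain `u32`, matching the
// stable wrappers such as `u64::leading_zeros`.
fn ctlz_old(x: u64) -> u64 {
    x.leading_zeros() as u64 // old shape: widened back to the operand type
}

fn ctlz_new(x: u64) -> u32 {
    x.leading_zeros() // new shape: `u32`, like the stable re-export
}

fn main() {
    assert_eq!(ctlz_old(1), 63);
    assert_eq!(ctlz_new(1), 63);
}
```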
Guillaume Gomez
1a12ec41e9
Rollup merge of #124178 - GuillaumeGomez:llvm-backend, r=oli-obk
[cleanup] [llvm backend] Prevent creating the same `Instance::mono` multiple times

Just a little thing I came across while going through the code.

r? ```@oli-obk```
2024-04-22 20:25:58 +02:00
Ben Kimock
6ee3713b08 Stabilize the size of incr comp object file names 2024-04-22 10:50:07 -04:00
Guillaume Gomez
d34be935ec Prevent creating the same Instance::mono multiple times 2024-04-19 23:13:58 +02:00
zhuyunxing
439dbfa1ec coverage. Lowering MC/DC statements to llvm-ir 2024-04-20 00:34:40 +08:00
zhuyunxing
cf6b6cb2b4 coverage. Generate Mappings of decisions and conditions for MC/DC 2024-04-19 17:09:26 +08:00
bors
c5de414865 Auto merge of #123144 - dpaoliello:arm64eclib, r=GuillaumeGomez,ChrisDenton,wesleywiser
Add support for Arm64EC to the Standard Library

Adds the final pieces so that the standard library can be built for arm64ec-pc-windows-msvc (initially added in #119199)

* Bumps `windows-sys` to 0.56.0, which adds support for Arm64EC.
* Correctly set the `isEC` parameter for LLVM's `writeArchive` function.
* Add `#![feature(asm_experimental_arch)]` to library crates where Arm64EC inline assembly is used, as it is currently unstable.
2024-04-18 12:22:52 +00:00
Matthias Krüger
90013ff5ad
Rollup merge of #124090 - durin42:llvm-19-riscv-feature, r=cuviper
llvm: update riscv target feature to match LLVM 19

In llvm/llvm-project@9067070d91 they ended up largely reverting
llvm/llvm-project@e817966718. This means the change we did in
rust-lang/rust@b378059e6b is now only correct for LLVM 18...so we have to adjust again.

``@rustbot`` label: +llvm-main
2024-04-18 08:37:49 +02:00
Augie Fackler
22b704bac4 llvm: update riscv target feature to match LLVM 19
In llvm/llvm-project@9067070d91 they ended up largely reverting
llvm/llvm-project@e817966718. This means the change we did in
rust-lang/rust@b378059e6b is now only correct for LLVM 18...so we have
to adjust again.

@rustbot label: +llvm-main
2024-04-17 16:15:24 -04:00
Mark Rousskov
649e80184b Codegen ZSTs without an allocation
This makes sure that &[] is just as efficient as indirecting through
unsafe code (from_raw_parts). No new stable guarantee is intended about
whether or not we do this; it is just an optimization.

Co-authored-by: Ralf Jung <post@ralfj.de>
2024-04-16 21:13:21 -04:00
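A usage-level example of the equivalence the commit is about (an illustration, not a new guarantee): the empty-slice literal and the manual `from_raw_parts` construction should compile to the same thing.

```rust
// Usage illustration only (no new stability guarantee implied): both ways
// of producing an empty slice are expected to be equally cheap.
use std::ptr::NonNull;

fn empty_literal() -> &'static [u64] {
    &[]
}

fn empty_from_raw_parts() -> &'static [u64] {
    // SAFETY: a dangling but well-aligned pointer with length 0 is a valid slice.
    unsafe { std::slice::from_raw_parts(NonNull::<u64>::dangling().as_ptr(), 0) }
}

fn main() {
    assert_eq!(empty_literal(), empty_from_raw_parts());
    assert!(empty_literal().is_empty());
}
```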
Maybe Waffle
ceead1bda6 Change intrinsic types to use u32 instead of T to match stable reexports 2024-04-16 11:53:04 +00:00
Daniel Paoliello
32f5ca4be7 Add support for Arm64EC to the Standard Library 2024-04-15 16:05:16 -07:00
bors
5dcb678ad8 Auto merge of #122917 - saethlin:atomicptr-to-int, r=nikic
Add the missing inttoptr when we ptrtoint in ptr atomics

Ralf noticed this here: https://github.com/rust-lang/rust/pull/122220#discussion_r1535172094

Our previous codegen forgot to add the cast back to integer type. The code compiles anyway, because of course all locals are in-memory to start with, so previous codegen would do the integer atomic, store the integer to a local, then load a pointer from that local. Which is definitely _not_ what we wanted: That's an integer-to-pointer transmute, so all pointers returned by these `AtomicPtr` methods didn't have provenance. Yikes.

Here's the IR for `AtomicPtr::fetch_byte_add` on 1.76: https://godbolt.org/z/8qTEjeraY
```llvm
define noundef ptr @atomicptr_fetch_byte_add(ptr noundef nonnull align 8 %a, i64 noundef %v) unnamed_addr #0 !dbg !7 {
start:
  %0 = alloca ptr, align 8, !dbg !12
  %val = inttoptr i64 %v to ptr, !dbg !12
  call void @llvm.lifetime.start.p0(i64 8, ptr %0), !dbg !28
  %1 = ptrtoint ptr %val to i64, !dbg !28
  %2 = atomicrmw add ptr %a, i64 %1 monotonic, align 8, !dbg !28
  store i64 %2, ptr %0, align 8, !dbg !28
  %self = load ptr, ptr %0, align 8, !dbg !28
  call void @llvm.lifetime.end.p0(i64 8, ptr %0), !dbg !28
  ret ptr %self, !dbg !33
}
```

r? `@RalfJung`
cc `@nikic`
2024-04-15 08:07:47 +00:00
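A hedged usage sketch of the pointer atomics in question. It uses the stable `fetch_update` plus `wrapping_byte_add` combination rather than the unstable `fetch_byte_add` shown in the IR above, but the property at stake is the same: the pointer read back out of the atomic must retain its provenance to be usable.

```rust
// Hedged sketch using stable APIs (`fetch_update` + `wrapping_byte_add`)
// rather than the unstable `fetch_byte_add` from the IR above; the point
// is that the returned pointer keeps its provenance and stays usable.
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let mut buf = [0u8; 8];
    let shared = AtomicPtr::new(buf.as_mut_ptr());

    // Atomically advance the stored pointer by 4 bytes, getting the old value back.
    let old = shared
        .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |p| {
            Some(p.wrapping_byte_add(4))
        })
        .unwrap();

    // `old` still carries provenance for `buf`, so writing through it is fine.
    unsafe { *old = 1 };
    assert_eq!(buf[0], 1);
}
```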
Matthias Krüger
f4f644182b
Rollup merge of #123775 - scottmcm:place-val, r=cjgillot
Make `PlaceRef` and `OperandValue::Ref` share a common `PlaceValue` type

Both `PlaceRef` and `OperandValue::Ref` need the triple of the backend pointer immediate, the optional backend metadata for DSTs, and the actual alignment of the place (since it can differ from the ABI alignment).

This PR introduces a new `PlaceValue` type for those three values, leaving [`PlaceRef`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/place/struct.PlaceRef.html) with the `TyAndLayout` and a `PlaceValue`, just like how [`OperandRef`](https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/mir/operand/struct.OperandRef.html) is a `TyAndLayout` and an `OperandValue`.

This means that various places that use `Ref`s as places can just pass the `PlaceValue` along, like in the below excerpt from the diff:
```diff
        match operand.val {
-            OperandValue::Ref(ptr, meta, align) => {
-                debug_assert_eq!(meta, None);
+            OperandValue::Ref(source_place_val) => {
+                debug_assert_eq!(source_place_val.llextra, None);
                debug_assert!(matches!(operand_kind, OperandValueKind::Ref));
-                let fake_place = PlaceRef::new_sized_aligned(ptr, cast, align);
+                let fake_place = PlaceRef { val: source_place_val, layout: cast };
                Some(bx.load_operand(fake_place).val)
            }
```

There's more refactoring that I'd like to do after this, but I wanted to stop the PR here where it's hopefully easy (albeit probably not quick) to review since I tried to keep every change line-by-line clear.  (Most are just adding `.val` to get to a field.)

You can also go commit-at-a-time if you'd like.  Each passed tidy and the codegen tests on my machine (though I didn't run the cg_gcc ones).
2024-04-12 04:38:21 +02:00
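The shape of the refactor can be summarized with a self-contained sketch (field names follow the diff; the alignment stand-in and placeholder layout type are assumptions): `PlaceValue` bundles the pointer, optional DST metadata, and actual alignment, and `PlaceRef` pairs it with a layout just as `OperandRef` pairs `OperandValue` with one.

```rust
#![allow(dead_code)] // shape-only sketch; several fields are illustrative

// Stand-ins for the compiler's types: `V` is the backend value type and
// `Align`/`TyAndLayout` are placeholders so the sketch compiles on its own.
type Align = u64;
struct TyAndLayout;

struct PlaceValue<V> {
    llval: V,           // backend pointer immediate
    llextra: Option<V>, // optional metadata for DSTs (length, vtable, ...)
    align: Align,       // actual alignment, which may differ from the ABI alignment
}

// `PlaceRef` pairs the value triple with a layout, mirroring how
// `OperandRef` pairs `OperandValue` with a layout.
struct PlaceRef<V> {
    val: PlaceValue<V>,
    layout: TyAndLayout,
}

fn main() {
    let place = PlaceRef {
        val: PlaceValue { llval: std::ptr::null::<u8>(), llextra: None, align: 8 },
        layout: TyAndLayout,
    };
    assert!(place.val.llextra.is_none());
}
```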
Erik Desjardins
f4426c189f use [N x i8] for alloca types 2024-04-11 21:42:35 -04:00
bors
05ccc49a44 Auto merge of #123507 - dpaoliello:arm64ecasm, r=Amanieu
Add support for Arm64EC inline assembly (as unstable)

Compiler support for Arm64EC assembly mostly reuses the existing AArch64 support, except that it needs to block registers that are not permitted: <https://learn.microsoft.com/en-us/windows/arm/arm64ec-abi#register-mapping-and-blocked-registers>

For assembly authors there are several caveats and differences that need to be considered, I've provided documentation for this as part of the "Standard Library Support" PR: <https://github.com/rust-lang/rust/pull/123144/files#diff-6b08532480943c8b82f5dbda7ee1521afa74c9f626466aeb308dfa6956397edd>

r? rust-lang/compiler
2024-04-11 07:15:04 +00:00
Scott McMurray
d0ae76848a Add load/store helpers that take PlaceValue 2024-04-11 00:10:10 -07:00
Scott McMurray
3596098823 Put PlaceValue into OperandValue::Ref, rather than 3 tuple fields 2024-04-11 00:10:10 -07:00
Scott McMurray
89502e584b Make PlaceRef hold a PlaceValue for the non-layout fields (like OperandRef does) 2024-04-11 00:10:10 -07:00
Daniel Paoliello
2e44d29460 Add support for Arm64EC inline assembly 2024-04-10 10:06:44 -07:00
bors
c2239bca5b Auto merge of #123185 - scottmcm:more-typed-copy, r=compiler-errors
Remove my `scalar_copy_backend_type` optimization attempt

I added this back in https://github.com/rust-lang/rust/pull/111999 , but I no longer think it's a good idea
- It had to get scaled back to only power-of-two things to not break a bunch of targets
- LLVM seems to be getting better at memcpy removal anyway
- Introducing vector instructions has seemed to sometimes (https://github.com/rust-lang/rust/pull/115515#issuecomment-1750069529) make autovectorization worse

So this removes it from the codegen crates entirely, and instead just tries to use <https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/builder/trait.BuilderMethods.html#method.typed_place_copy> instead of direct `memcpy` so things will still use load/store when a type isn't `OperandValue::Ref`.
2024-04-10 16:32:41 +00:00
Matthias Krüger
2ddf984594
Rollup merge of #123612 - kxxt:riscv-target-abi, r=jieyouxu,nikic,DianQK
Set target-abi module flag for RISC-V targets

Fixes cross-language LTO on RISC-V targets (Fixes #121924)
2024-04-10 04:27:40 +02:00
Scott McMurray
b5376ba601 Remove my scalar_copy_backend_type optimization attempt
I added this back in 111999, but I no longer think it's a good idea
- It had to get scaled back to only power-of-two things to not break a bunch of targets
- LLVM seems to be getting better at memcpy removal anyway
- Introducing vector instructions has seemed to sometimes (115515) make autovectorization worse

So this removes it from the codegen crates entirely, and instead just tries to use <https://doc.rust-lang.org/nightly/nightly-rustc/rustc_codegen_ssa/traits/builder/trait.BuilderMethods.html#method.typed_place_copy> instead of direct `memcpy` so things will still use load/store for immediates.
2024-04-09 08:51:32 -07:00
Levi Zim
33db20978e Pass value and valueLen to create a StringRef
Instead of creating a cstring.

Co-authored-by: LoveSy <shana@zju.edu.cn>
2024-04-09 08:53:11 +02:00
Matthias Krüger
b809c4264b
Rollup merge of #123620 - rcvalle:rust-create-rustc-sanitizers, r=davidtwco
sanitizers: Create the rustc_sanitizers crate

Create the `rustc_sanitizers` crate and move the source code for the CFI and KCFI sanitizers to it. The tracking issue for reviewing and moving sanitizers into a compiler crate is #123619. This is part of our work to organize and stabilize support for the sanitizers. (See our roadmap at https://hackmd.io/@rcvalle/S1Ou9K6H6.)
2024-04-09 06:02:21 +02:00
kxxt
f19c48e7a8 Set target-abi module flag for RISC-V targets
Fixes cross-language LTO on RISC-V targets (Fixes #121924)
2024-04-09 05:25:51 +02:00
Ramon de C Valle
1f0f2c4007 sanitizers: Create the rustc_sanitizers crate
Create the rustc_sanitizers crate and move the source code for the CFI
and KCFI sanitizers to it.

Co-authored-by: David Wood <agile.lion3441@fuligin.ink>
2024-04-08 12:05:41 -07:00
Nikita Popov
1b7342b411 force_array -> is_consecutive
The actual ABI implication here is that in some cases the values
are required to be "consecutive", i.e. must either all be passed
in registers or all on stack (without padding).

Adjust the code to either use Uniform::new() or Uniform::consecutive()
depending on which behavior is needed.

Then, when lowering this in LLVM, skip the [1 x i128] to i128
simplification if is_consecutive is set. i128 is the only case
I'm aware of where this is problematic right now. If we find
other cases, we can extend this (either based on target information
or possibly just by not simplifying for is_consecutive entirely).
2024-04-08 11:31:43 +09:00
Nikita Popov
009280c5e3 Fix argument ABI for overaligned structs on ppc64le
When passing a 16 (or higher) aligned struct by value on ppc64le,
it needs to be passed as an array of `i128` rather than an array
of `i64`. This will force the use of an even starting register.

For the case of a 16 byte struct with alignment 16 it is important
that `[1 x i128]` is used instead of `i128` -- apparently, the
latter will get treated similarly to `[2 x i64]`, not exhibiting
the correct ABI. Add a `force_array` flag to `Uniform` to support
this.

The relevant clang code can be found here:
fe2119a7b0/clang/lib/CodeGen/Targets/PPC.cpp (L878-L884)
fe2119a7b0/clang/lib/CodeGen/Targets/PPC.cpp (L780-L784)

I think the corresponding psABI wording is this:

> Fixed size aggregates and unions passed by value are mapped to as
> many doublewords of the parameter save area as the value uses in
> memory. Aggregates and unions are aligned according to their
> alignment requirements. This may result in doublewords being
> skipped for alignment.

In particular the last sentence.

Fixes https://github.com/rust-lang/rust/issues/122767.
2024-04-08 11:15:36 +09:00
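An assumed example (not from the PR's test suite) of the kind of type this affects: a 16-byte, 16-byte-aligned struct passed by value on ppc64le must be lowered as `[1 x i128]` so it starts at an even doubleword/register boundary.

```rust
// Assumed example of the affected case: 16 bytes of data with 16-byte
// alignment, passed by value through the C ABI on ppc64le.
#[repr(C, align(16))]
pub struct Overaligned {
    pub a: u64,
    pub b: u64,
}

#[inline(never)]
pub extern "C" fn sum(x: Overaligned) -> u64 {
    // Under the fixed ABI the argument is lowered as `[1 x i128]`, forcing
    // an even starting register, rather than as `[2 x i64]` or a bare `i128`.
    x.a + x.b
}

fn main() {
    assert_eq!(sum(Overaligned { a: 2, b: 3 }), 5);
}
```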
Michael Baikov
691e953da6 Save/restore more items in cache with incremental compilation 2024-04-06 10:59:24 -04:00
Guillaume Gomez
5ceac29123
Rollup merge of #123487 - rcvalle:rust-cfi-restore-typeid-for-instance, r=compiler-errors
CFI: Restore typeid_for_instance default behavior

Restore typeid_for_instance's default behavior of performing self type erasure, since that is the most common case and what it does most of the time. Using the concrete self type (i.e. not performing self type erasure) is for assigning a secondary type id; secondary type ids are only assigned to methods when they are unique, and are only tested for when methods are used as function pointers.
2024-04-05 22:33:27 +02:00
Guillaume Gomez
74a5bc6c9e
Rollup merge of #121419 - agg23:xrOS-pr, r=davidtwco
Add aarch64-apple-visionos and aarch64-apple-visionos-sim tier 3 targets

Introduces `aarch64-apple-visionos` and `aarch64-apple-visionos-sim` as tier 3 targets. This allows native development for the Apple Vision Pro's visionOS platform.

This work has been tracked in https://github.com/rust-lang/compiler-team/issues/642. There is a corresponding `libc` change https://github.com/rust-lang/libc/pull/3568 that is not required for merge.

Ideally we would be able to incorporate [this change](https://github.com/gimli-rs/object/pull/626) into the `object` crate, but the author has stated that a release will not be cut for quite a while. Therefore, the two locations that would reference the xrOS constant from `object` are hardcoded to their MachO values of 11 and 12, accompanied by TODOs to mark the code as needing change. I am open to suggestions on what to do here to get this checked in.

# Tier 3 Target Policy

At this tier, the Rust project provides no official support for a target, so we place minimal requirements on the introduction of targets.

> A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)

See [src/doc/rustc/src/platform-support/apple-visionos.md](e88379034a/src/doc/rustc/src/platform-support/apple-visionos.md)

> Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
> * Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
> * If possible, use only letters, numbers, dashes and underscores for the name. Periods (.) are known to cause issues in Cargo.

This naming scheme follows `$ARCH-$VENDOR-$OS-$ABI`, which matches the iOS Apple Silicon simulator (`aarch64-apple-ios-sim`) and other Apple targets.

> Tier 3 targets may have unusual requirements to build or use, but must not
  create legal issues or impose onerous legal terms for the Rust project or for
  Rust developers or users.
>  - The target must not introduce license incompatibilities.
>  - Anything added to the Rust repository must be under the standard Rust license (`MIT OR Apache-2.0`).
>  - The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the `tidy` tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
>  - Compiling, linking, and emitting functional binaries, libraries, or other code for the target (whether hosted on the target itself or cross-compiling from another target) must not depend on proprietary (non-FOSS) libraries. Host tools built for the target itself may depend on the ordinary runtime libraries supplied by the platform and commonly used by other applications built for the target, but those libraries must not be required for code generation for the target; cross-compilation to the target must not require such libraries at all. For instance, `rustc` built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
> - "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are *not* limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.

This contribution is fully available under the standard Rust license with no additional legal restrictions whatsoever. This PR does not introduce any new dependency less permissive than the Rust license policy.

The new targets do not depend on proprietary libraries.

> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.

This new target mirrors the standard library for watchOS and iOS, with minor divergences.

> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running binaries, or running tests (even if they do not pass), the documentation must explain how to run such binaries or tests for the target, using emulation if possible or dedicated hardware if necessary.

Documentation is provided in [src/doc/rustc/src/platform-support/apple-visionos.md](e88379034a/src/doc/rustc/src/platform-support/apple-visionos.md)

> Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
> * This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.

> Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via `@`) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
> * Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.

> Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
> * In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.

I acknowledge these requirements and intend to ensure that they are met.

This target does not touch any existing tier 2 or tier 1 targets and should not break any other targets.
2024-04-05 22:33:25 +02:00