This commit rewrites a number of wasm-centric `run-make` tests to use
`rmake.rs` and the `wasm32-wasip1` target instead of
`wasm32-unknown-unknown`. Testing no longer requires Node.js and now
uses the `wasmparser` crate from crates.io to parse outputs and power
assertions.
Misc improvements to non-local defs lint implementation
This PR is a collection of small improvements I found when I [needlessly tried](https://www.github.com/rust-lang/rust/pull/120393#issuecomment-1971787475) to fix a "perf-regression" in the lint implementation.
I recommend looking at each commit individually.
Optimize `Symbol::integer` by utilizing in-place formatting
This PR optimizes `Symbol::integer` by using `itoa`'s in-place formatting instead of going through a dynamically allocated `String` and the `format!` machinery.
<details>
For some context: I was profiling `rustc --check-cfg` with callgrind, and due to the way we currently set up all the targets, we end up calling `Symbol::integer` multiple times for each target. Using `itoa` reduced the number of instructions.
</details>
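For reference, the in-place formatting pattern looks roughly like the sketch below (illustrative only, assuming the `itoa` crate as a dependency; this is not the actual `Symbol::integer` code):

```rust
// Illustrative sketch: format an integer on the stack with `itoa`
// instead of allocating a `String` via `format!`.
fn integer_symbol_name(n: u64) -> String {
    let mut buf = itoa::Buffer::new(); // stack-allocated scratch buffer
    let s: &str = buf.format(n);       // in-place formatting, no heap allocation
    // In rustc the `&str` would be handed straight to `Symbol::intern`;
    // copying it out here just keeps the example self-contained.
    s.to_owned()
}
```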
Introduces the `arm64ec-pc-windows-msvc` target for building Arm64EC ("Emulation Compatible") binaries for Windows.
For more information about Arm64EC see <https://learn.microsoft.com/en-us/windows/arm/arm64ec>.
Tier 3 policy:
> A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)
I will be the maintainer for this target.
> Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
Target uses the `arm64ec` architecture to match LLVM and MSVC, and the `-pc-windows-msvc` suffix to indicate that it targets Windows via the MSVC environment.
> Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
Target name exactly specifies the type of code that will be produced.
> If possible, use only letters, numbers, dashes and underscores for the name. Periods (.) are known to cause issues in Cargo.
Done.
> Tier 3 targets may have unusual requirements to build or use, but must not create legal issues or impose onerous legal terms for the Rust project or for Rust developers or users.
> The target must not introduce license incompatibilities.
Uses the same dependencies, requirements and licensing as the other `*-pc-windows-msvc` targets.
> Anything added to the Rust repository must be under the standard Rust license (MIT OR Apache-2.0).
Understood.
> The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the tidy tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
> Compiling, linking, and emitting functional binaries, libraries, or other code for the target (whether hosted on the target itself or cross-compiling from another target) must not depend on proprietary (non-FOSS) libraries. Host tools built for the target itself may depend on the ordinary runtime libraries supplied by the platform and commonly used by other applications built for the target, but those libraries must not be required for code generation for the target; cross-compilation to the target must not require such libraries at all. For instance, rustc built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
> "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are not limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.
Uses the same dependencies, requirements and licensing as the other `*-pc-windows-msvc` targets.
> Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
> This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.
Understood, I am not a member of the Rust team.
> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.
Both `core` and `alloc` are supported.
Support for `std` depends on making changes to the standard library, `stdarch` and `backtrace`, which cannot be done yet because the bootstrapping compiler raises a warning ("unexpected `cfg` condition value") for `target_arch = "arm64ec"`.
> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running binaries, or running tests (even if they do not pass), the documentation must explain how to run such binaries or tests for the target, using emulation if possible or dedicated hardware if necessary.
Documentation is provided in `src/doc/rustc/src/platform-support/arm64ec-pc-windows-msvc.md`.
> Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via @) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
> Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.
> Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
> In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.
Understood.
Leverage `anstyle-svg`, as `cargo` does now, to emit `.svg` files
instead of `.stderr` files for tests that explicitly enable color
output. This will make reviewing changes to the graphical output of
tests much more human friendly.
Introduce `run-make` V2 infrastructure, a `run_make_support` library and port over 2 tests as example
## Preface
See [issue #40713: Switch run-make tests from Makefiles to rust](https://github.com/rust-lang/rust/issues/40713) for more context.
## Basic Description of `run-make` V2
`run-make` V2 aims to eliminate the dependency on `make` and `Makefile`s for building `run-make`-style tests. Makefiles are replaced by *recipes* (`rmake.rs`). The current implementation runs `run-make` V2 tests in 3 steps:
1. We build the support library `run_make_support` which the `rmake.rs` recipes depend on as a tool lib.
2. We build the recipe `rmake.rs` and link in the support library.
3. We run the recipe to build and run the tests.
`rmake.rs` is basically a replacement for `Makefile`, and allows running arbitrary Rust code. The support library is built using cargo, and so can depend on external crates if desired.
The infrastructure implemented by this PR is very barebones, and is the minimally required infrastructure needed to build, run and pass the two example `run-make` tests ported over to the new infrastructure.
### Example `run-make` V2 test
```rs
// ignore-tidy-linelength

extern crate run_make_support;

use std::path::PathBuf;

use run_make_support::{aux_build, rustc};

fn main() {
    aux_build()
        .arg("--emit=metadata")
        .arg("stable.rs")
        .run();

    let mut stable_path = PathBuf::from(env!("TMPDIR"));
    stable_path.push("libstable.rmeta");

    let output = rustc()
        .arg("--emit=metadata")
        .arg("--extern")
        .arg(&format!("stable={}", &stable_path.to_string_lossy()))
        .arg("main.rs")
        .run();

    let stderr = String::from_utf8_lossy(&output.stderr);
    let version = include_str!(concat!(env!("S"), "/src/version"));
    let expected_string = format!("stable since {}", version.trim());
    assert!(stderr.contains(&expected_string));
}
```
## Follow Up Work
- [ ] Adjust rustc-dev-guide docs
rustc: Fix wasm64 metadata object files
It looks like LLD detects whether object files are 32- or 64-bit based on whether any memories are present, and it additionally rejects 32-bit objects during a 64-bit link. Previously, metadata objects did not contain any memories, which led LLD to conclude they were 32-bit objects and broke 64-bit wasm targets.
This commit fixes that by ensuring that for 64-bit targets a memory is present, so LLD detects the object as 64-bit. Additionally, this commit moves away from a hand-crafted wasm encoder to the `wasm-encoder` crate on crates.io, as the generated object file is growing in complexity.
Closes #121460
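For illustration, emitting such a metadata object with `wasm-encoder` looks roughly like the sketch below. The `.rustc` section name is illustrative, and the memory section is omitted here because the exact `MemoryType` fields vary across `wasm-encoder` versions; in the real fix a 64-bit memory is what lets LLD classify the object as wasm64.

```rust
use wasm_encoder::{CustomSection, Module};

// Rough sketch (not the actual rustc code): build a wasm module whose only
// payload is a custom section carrying the rustc metadata bytes.
fn metadata_object(metadata: &[u8]) -> Vec<u8> {
    let mut module = Module::new();
    module.section(&CustomSection {
        name: ".rustc".into(),
        data: metadata.into(),
    });
    module.finish()
}
```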
add platform-specific function to get the error number for HermitOS
Extending `std` to get the last error number for HermitOS.
HermitOS is a tier 3 platform, and this PR only changes files which are related to that tier 3 platform.
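A minimal sketch of what such a platform-specific accessor tends to look like in std's `sys` layer; the `sys_get_errno` symbol below is a hypothetical stand-in, not necessarily the real HermitOS ABI name:

```rust
// Hypothetical sketch, not the actual std/hermit-abi code.
extern "C" {
    // Stand-in name for the HermitOS call returning the last error number.
    fn sys_get_errno() -> i32;
}

pub fn errno() -> i32 {
    // SAFETY: the (hypothetical) ABI call takes no arguments and only reads
    // per-thread error state.
    unsafe { sys_get_errno() }
}
```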
Split rustc_type_ir to avoid rustc_ast from depending on it
unblocks #121576
and resolves a FIXME in `rustc_ast`'s `Cargo.toml`
The new crate is tiny, but it will get bigger in #121576
Unify dylib loading between proc macros and codegen backends
As a bonus, this makes the errors when failing to load a proc macro more informative, to match the backend loading errors. In addition it makes it slightly easier to patch rustc to work on platforms that don't support dynamic linking, like wasm.
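rustc loads both kinds of dylibs through the `libloading` crate; a stand-alone sketch of a unified helper is shown below. The name and the `String` error type are illustrative, not the actual rustc signature, which reports a structured diagnostic instead.

```rust
use libloading::Library;
use std::path::Path;

// Illustrative "load a dylib or report a readable error" helper.
pub fn load_dylib(path: &Path) -> Result<Library, String> {
    // SAFETY: loading a library runs its initialization routines, which is
    // why `Library::new` is unsafe.
    unsafe { Library::new(path) }
        .map_err(|err| format!("couldn't load `{}`: {err}", path.display()))
}
```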
The goal of this commit is to remove warnings using LLVM tip-of-tree
`wasm-ld`. In llvm/llvm-project#78658 the `wasm-ld` LLD driver no longer
looks at archive indices and instead looks at all the objects in
archives. Previously `lib.rmeta` files were simply raw rustc metadata
bytes, not wasm objects, meaning that `wasm-ld` would emit a warning
indicating so.
WebAssembly targets previously passed `--fatal-warnings` to `wasm-ld` by
default, which meant that if Rust updated to LLVM 18 then all wasm
targets would stop working. That immediate blocker was resolved in
rust-lang/rust#120278, which removed `--fatal-warnings` and enabled a
theoretical update to LLVM 18 for wasm targets. The current state is
ok enough for now because rustc squashes all linker output by default if
the link doesn't fail; for example, with LLVM 18 rustc hides all the
`wasm-ld` warnings about `lib.rmeta` files. That isn't a pressing issue
because the information stays hidden, but it risks becoming annoying: if
another linker error were to happen, the output would be cluttered with
these unrelated warnings that couldn't be fixed.
Thus, this PR comes into the picture. Its goal is to resolve these
warnings by using the WebAssembly object file format on wasm targets
instead of raw rustc metadata. When I first implemented the
rlib-in-objects scheme in #84449, I remember concluding either that
`wasm-ld` would include the metadata in the output or that we didn't
have to do anything there at all. I think I was wrong on both counts:
`wasm-ld` does not include the metadata in the final output unless the
object is referenced, and we do actually need to do something to resolve
these warnings.
This PR updates the object file format containing rustc metadata on
WebAssembly targets to be an actual WebAssembly file. This enables the
`wasm` feature of the `object` crate to be able to read the custom
section in the same manner as other platforms, but currently `object`
doesn't support writing wasm object files so a handwritten encoder is
used instead.
The only caveat I know of with this is that if `wasm-ld` does indeed
look at the object file then the metadata will be included in the final
output. I believe the only thing that could cause that at this time is
`--whole-archive` which I don't think is passed for rlibs. I would
clarify that I'm not 100% certain about this, however.
tidy: reduce allocs
this reduces allocs in tidy from (dhat output)
```
==31349== Total: 1,365,199,543 bytes in 4,774,213 blocks
==31349== At t-gmax: 10,975,708 bytes in 66,093 blocks
==31349== At t-end: 2,880,947 bytes in 12,332 blocks
==31349== Reads: 5,210,008,956 bytes
==31349== Writes: 1,280,920,127 bytes
```
to
```
==66633== Total: 791,565,538 bytes in 3,503,144 blocks
==66633== At t-gmax: 10,914,511 bytes in 65,997 blocks
==66633== At t-end: 395,531 bytes in 941 blocks
==66633== Reads: 4,249,388,949 bytes
==66633== Writes: 814,119,580 bytes
```
<del>by wrapping regex and updating `ignore` (the effect is probably not only from `ignore`; I didn't measure)</del>
also moves one more regex into `Lazy` to reduce regex rebuilds.
yes, `once_cell` would be better, but ...
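The `Lazy` regex pattern referred to above looks roughly like this (shown with `once_cell::sync::Lazy` and an example pattern; tidy's actual `Lazy` wrapper and regexes may differ):

```rust
use once_cell::sync::Lazy;
use regex::Regex;

// Compiled once on first use and reused afterwards, instead of being
// rebuilt on every call.
static ERROR_CODE_RE: Lazy<Regex> = Lazy::new(|| Regex::new(r"E\d{4}").unwrap());

fn mentions_error_code(line: &str) -> bool {
    ERROR_CODE_RE.is_match(line)
}
```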
this reduces from
```
==31349== Total: 1,365,199,543 bytes in 4,774,213 blocks
==31349== At t-gmax: 10,975,708 bytes in 66,093 blocks
==31349== At t-end: 2,880,947 bytes in 12,332 blocks
==31349== Reads: 5,210,008,956 bytes
==31349== Writes: 1,280,920,127 bytes
```
to
```
==47796== Total: 821,467,407 bytes in 3,955,595 blocks
==47796== At t-gmax: 10,976,209 bytes in 66,100 blocks
==47796== At t-end: 2,944,016 bytes in 12,490 blocks
==47796== Reads: 4,788,959,023 bytes
==47796== Writes: 975,493,639 bytes
```
miropt-test-tools: remove regex usage
this removes regex usage and slightly refactors ext stripping in one case
Update mdbook to 0.4.37
This updates mdbook to 0.4.37.
Changelog: https://github.com/rust-lang/mdBook/blob/master/CHANGELOG.md#mdbook-0437
The primary change is the update to pulldown-cmark which has a large number of markdown parsing changes. There shouldn't be any significant changes to the rendering of any of the books (I have posted some PRs to fix some minor issues to the ones that were affected).
The existing regex-based HTML parsing was just too primitive to
correctly handle HTML content. Some books have legitimate `href="…"`
text which should not be validated because it is part of the text, not
actual HTML.
Actually abort in -Zpanic-abort-tests
When a test fails in panic=abort, it can be useful to have a debugger or other tooling hook into the `abort()` call for debugging. Doing this some other way would require it to hard code details of Rust's panic machinery.
There's no reason we couldn't have done this in the first place; using a single exit code for "success" or "failed" was just simpler. Now we are aware of the special exit codes for POSIX and Windows platforms, logging a special error if an unrecognized code is used on those platforms, and falling back to just "failure" on other platforms.
This continues to account for `#[should_panic]` inside the test process itself, so there's no risk of misinterpreting a random call to `abort()` as an expected panic. Any exit code besides `TR_OK` is logged as a test failure.
As an added benefit, this would allow us to support panic=immediate_abort (but not `#[should_panic]`), without noise about unexpected exit codes when a test fails.
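A rough sketch of the exit-code handling described above; the constant names besides `TR_OK` and all the values are illustrative, not the exact ones used by libtest:

```rust
// Illustrative constants: the real codes live in libtest.
const TR_OK: i32 = 50;
const TR_FAILED: i32 = 51;

fn interpret_test_exit(code: Option<i32>) -> Result<(), String> {
    match code {
        Some(c) if c == TR_OK => Ok(()),
        Some(c) if c == TR_FAILED => Err("test failed".into()),
        // Unrecognized codes are logged as failures rather than being
        // silently treated as a pass.
        Some(c) => Err(format!("test failed with unexpected exit code {c}")),
        // E.g. the process aborted via SIGABRT on POSIX platforms.
        None => Err("test process terminated by a signal".into()),
    }
}
```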
Fixes footnote handling in rustdoc
Fixes #100638.
You can now declare footnotes like this:
```rust
//! Reference to footnotes A[^1], B[^2] and C[^3].
//!
//! [^1]: Footnote A.
//! [^2]: Footnote B.
//! [^3]: Footnote C.
```
r? `@notriddle`
Error codes are integers, but `String` is used everywhere to represent
them. Gross!
This commit introduces `ErrCode`, an integral newtype for error codes,
replacing `String`. It also introduces a constant for every error code,
e.g. `E0123`, and removes the `error_code!` macro. The constants are
imported wherever used with `use rustc_errors::codes::*`.
With the old code, we have three different ways to specify an error code
at a use point:
```
error_code!(E0123) // macro call
struct_span_code_err!(dcx, span, E0123, "msg"); // bare ident arg to macro call
#[diag(name, code = "E0123")] // string
struct Diag;
```
With the new code, they all use the `E0123` constant.
```
E0123 // constant
struct_span_code_err!(dcx, span, E0123, "msg"); // constant
#[diag(name, code = E0123)] // constant
struct Diag;
```
The commit also changes the structure of the error code definitions:
- `rustc_error_codes` now just defines a higher-order macro listing the
used error codes and nothing else.
- Because that's now the only thing in the `rustc_error_codes` crate, I
moved it into the `lib.rs` file and removed the `error_codes.rs` file.
- `rustc_errors` uses that macro to define everything, e.g. the error
code constants and the `DIAGNOSTIC_TABLES`. This is in its new
`codes.rs` file.
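A rough sketch of the shape of the new type (not the exact rustc definitions, where the constants are generated by the higher-order macro mentioned above):

```rust
// Integral newtype replacing `String` error codes.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct ErrCode(u32);

impl std::fmt::Display for ErrCode {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "E{:04}", self.0) // renders as `E0123`
    }
}

// In rustc these constants come from the macro in `rustc_error_codes`;
// hand-written here for illustration.
pub const E0123: ErrCode = ErrCode(123);
```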
Avoid code generation for ThinVec<Diagnostic>'s destructor in the query system
This prevents 2 instances of the destructor of `ThinVec<Diagnostic>` from being included in `execute_job`. It also outlines the cold branch in `store_side_effects` / `store_side_effects_for_anon_node`.
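The "outline the cold branch" part uses the standard trick sketched below (illustrative types and names; the real code operates on `ThinVec<Diagnostic>` inside the query system):

```rust
// Keep the hot caller small: the rarely-taken, destructor-heavy path is
// moved into a separate function that the compiler won't inline.
#[cold]
#[inline(never)]
fn store_side_effects_slow(diagnostics: Vec<String>) {
    // ... record the diagnostics; the Vec's destructor code lives here ...
    drop(diagnostics);
}

fn store_side_effects(diagnostics: Vec<String>) {
    if diagnostics.is_empty() {
        return; // hot path: nothing to record
    }
    store_side_effects_slow(diagnostics);
}
```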
pat_analysis: Don't rely on contiguous `VariantId`s outside of rustc
Today's pattern_analysis uses `BitSet` and `IndexVec` on the provided enum variant ids, which only makes sense if these ids count the variants from 0. In rust-analyzer, the variant ids are global interning ids, which would make `BitSet` and `IndexVec` ridiculously wasteful. In this PR I add some shims to use `FxHashSet`/`FxHashMap` instead outside of rustc.
r? ```@compiler-errors```
Use `zip_eq` to enforce that things being zipped have equal sizes
Some `zip`s are best enforced to be equal, since size mismatches suggest deeper bugs in the compiler.
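For reference, this is the `itertools` adapter being used; unlike `zip`, it panics if the two iterators have different lengths:

```rust
use itertools::Itertools;

fn main() {
    let params = ["x", "y", "z"];
    let args = [1, 2, 3];
    // With plain `zip`, a length mismatch would silently truncate; with
    // `zip_eq` it panics, surfacing the underlying bug.
    for (param, arg) in params.iter().zip_eq(args.iter()) {
        println!("{param} = {arg}");
    }
}
```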
Exhaustiveness: remove the need for arena-allocation within the algorithm
After https://github.com/rust-lang/rust/pull/119688, exhaustiveness checking doesn't need access to the arena anymore. This simplifies the lifetime story and makes it compile on stable without the extra dependency.
r? `@compiler-errors`
next solver: provisional cache
this adds the cache removed in #115843. However, it should now correctly track whether a provisional result depends on an inductive or coinductive stack.
While working on this, I was using the following doc: https://hackmd.io/VsQPjW3wSTGUSlmgwrDKOA. I don't think it's too helpful to understanding this, but am somewhat hopeful that the inline comments are more useful.
There are quite a few future perf improvements here. Given that this is already very involved I don't believe it is worth it (for now). While working on this PR one of my few attempts to significantly improve perf ended up being unsound again because I was not careful enough ✨
r? `@compiler-errors`
annotate-snippets: update to 0.10
Ports `annotate-snippets` to 0.10, temporarily duplicating versions; the other crates that still depend on 0.9 are `ui_test` and `rustfmt`.
The standard library's `std::sync::mpsc` is basically a crossbeam channel,
and it will definitely suffice for the use case here. This drops the
crossbeam dependency from librustc_driver.
This involves lots of breaking changes. Two big changes force updates on
our side. The first is that the bitflags types no longer automatically
implement the normal derive traits, so we need to derive them manually.
Additionally, bitflags now have a hidden inner type by default, which
breaks our custom derives. The bitflags docs recommend using the impl
form in these cases, which I did.
Make exhaustiveness usable outside of rustc
With this PR, `rustc_pattern_analysis` compiles on stable (with the `stable` feature)! `rust-analyzer` will be able to use it to provide match-related diagnostics and refactors.
Two questions:
- Should I name the feature `nightly` instead of `rustc` for consistency with other crates? `rustc` makes more sense imo.
- `typed-arena` is an optional dependency but tidy made me add it to the allow-list anyway. Can I avoid that somehow?
r? `@compiler-errors`
Add instance evaluation and methods to read an allocation in StableMIR
The instance evaluation is needed to handle intrinsics such as `type_id` and `type_name`.
Since we now use Allocation to represent all evaluated constants, provide a few methods to help process the data inside an allocation.
I've also started to add a structured way to get information about the compilation target machine. For now, I've only added information needed to process an allocation.
r? ``````@ouz-a``````
Use `unwinding` crate for unwinding on Xous platform
This patch adds support for using [unwinding](https://github.com/nbdd0121/unwinding) on platforms where libunwind isn't viable. An example of such a platform is `riscv32imac-unknown-xous-elf`.
### Background
The Rust project maintains a fork of LLVM at [llvm-project](https://github.com/rust-lang/llvm-project/), where it applies patches on top of upstream LLVM. This mostly seems to be for unwinding support for the SGX project, and there may be other patches that I'm unaware of.
There is a lot of machinery in the build system to support compiling `libunwind` on other platforms, and I needed to add additional patches to llvm in order to add support for Xous.
Rather than continuing down this path, it seemed much easier to use a Rust-based library. The `unwinding` crate by `@nbdd0121` fits this description perfectly.
### Future work
This could potentially replace the custom patches for `libunwind` on other platforms such as SGX, and could enable unwinding support on many more exotic platforms.
### Anti-goals
This is not designed to replace `libunwind` on tier-one platforms or those where unwinding support already exists. There is already a well-established approach for unwinding there. Instead, this aims to enable unwinding on new platforms where C++ code may be difficult to compile.
Report errors in jobserver inherited through environment variables
This PR attempts to catch situations where a jobserver exists but is not being inherited.
r? `@petrochenkov`
Call FileEncoder::finish in rmeta encoding
Fixes https://github.com/rust-lang/rust/issues/117254
The bug here was that rmeta encoding never called FileEncoder::finish. Now it does. Most of the changes here are needed to support that, since rmeta encoding wants to finish _then_ access the File in the encoder, so finish can't move out.
I tried adding a `cfg(debug_assertions)` exploding Drop impl to FileEncoder that checked for finish being called before dropping, but fatal errors cause unwinding so this isn't really possible. If we encounter a fatal error with a dirty FileEncoder, the Drop impl ICEs even though the implementation is correct. If we try to paper over that by wrapping FileEncoder in ManuallyDrop then that just erases the fact that Drop automatically checks that we call finish on all paths.
I also changed the name of DepGraph::encode to DepGraph::finish_encoding, because that's what it does and it makes the fact that it is the path to FileEncoder::finish less confusing.
r? `@WaffleLapkin`
Update windows-bindgen and define `INVALID_HANDLE_VALUE` ourselves
We generate bindings to the Windows API via the `windows-bindgen` crate, which is ultimately what's also used to generate the `windows-sys` and `windows` crates. However, there currently is some custom sauce just for std which makes it a bit different from the vanilla bindings. I would love for us to reduce and eventually remove the differences entirely so that std is using the exact same bindings as everyone else. Maybe in the future we can even just have a normal dependency on `windows-sys`.
This PR removes one of those special things. Our definition of `INVALID_HANDLE_VALUE` relies on an experimental nightly feature for strict provenance, so let's bring that back in house. The PR also excludes `INVALID_HANDLE_VALUE` from the codegen step, though that isn't strictly necessary as we override it in any case.
This PR also updates windows-bindgen to 0.52.0.
By default, `newtype_index!` types get a default `Encodable`/`Decodable`
impl. You can opt out of this with `custom_encodable`. Opting out is the
opposite of how Rust normally works with autogenerated (derived) impls.
This commit inverts the behaviour, replacing `custom_encodable` with
`encodable` which opts into the default `Encodable`/`Decodable` impl.
Only 23 of the 59 `newtype_index!` occurrences need `encodable`.
Even better, there were eight crates with a dependency on
`rustc_serialize` just from unused default `Encodable`/`Decodable`
impls. This commit removes that dependency from those eight crates.
Add arm64e-apple-ios & arm64e-apple-darwin targets
This introduces
* `arm64e-apple-ios`
* `arm64e-apple-darwin`
Rust targets to support the `arm64e` architecture on iOS and Darwin.
This is a first approach for integrating them into the Rust compiler.
## Tier 3 Target Policy
> * A tier 3 target must have a designated developer or developers (the "target
maintainers") on record to be CCed when issues arise regarding the target.
(The mechanism to track and CC such developers may evolve over time.)
I will be the target maintainer.
> * Targets must use naming consistent with any existing targets; for instance, a
target for the same CPU or OS as an existing Rust target should use the same
name for that CPU or OS. Targets should normally use the same names and
naming conventions as used elsewhere in the broader ecosystem beyond Rust
(such as in other toolchains), unless they have a very good reason to
diverge. Changing the name of a target can be highly disruptive, especially
once the target reaches a higher tier, so getting the name right is important
even for a tier 3 target.
Target names should not introduce undue confusion or ambiguity unless
absolutely necessary to maintain ecosystem compatibility. For example, if
the name of the target makes people extremely likely to form incorrect
beliefs about what it targets, the name should be changed or augmented to
disambiguate it.
If possible, use only letters, numbers, dashes and underscores for the name.
Periods (.) are known to cause issues in Cargo.
The target names `arm64e-apple-ios`, `arm64e-apple-darwin` were derived from `aarch64-apple-ios`, `aarch64-apple-darwin`.
In this [ticket](#73628), people discussed the most suitable names for these targets.
> In some cases, the arm64e arch might be "different". For example:
> * `thread_set_state` might fail with (os/kern) protection failure if we try to call it from arm64 process to arm64e process.
> * The returning value of dlsym is PAC signed on arm64e, while left untouched on arm64
> * Some function like pthread_create_from_mach_thread requires a PAC signed function pointer on arm64e, which is not required on arm64.
So, I have chosen them because there are similar triplets in LLVM. I think there are no more suitable names for these targets.
> * Tier 3 targets may have unusual requirements to build or use, but must not
create legal issues or impose onerous legal terms for the Rust project or for
Rust developers or users.
The target must not introduce license incompatibilities.
Anything added to the Rust repository must be under the standard Rust
license (MIT OR Apache-2.0).
The target must not cause the Rust tools or libraries built for any other
host (even when supporting cross-compilation to the target) to depend
on any new dependency less permissive than the Rust licensing policy. This
applies whether the dependency is a Rust crate that would require adding
new license exceptions (as specified by the tidy tool in the
rust-lang/rust repository), or whether the dependency is a native library
or binary. In other words, the introduction of the target must not cause a
user installing or running a version of Rust or the Rust tools to be
subject to any new license requirements.
Compiling, linking, and emitting functional binaries, libraries, or other
code for the target (whether hosted on the target itself or cross-compiling
from another target) must not depend on proprietary (non-FOSS) libraries.
Host tools built for the target itself may depend on the ordinary runtime
libraries supplied by the platform and commonly used by other applications
built for the target, but those libraries must not be required for code
generation for the target; cross-compilation to the target must not require
such libraries at all. For instance, rustc built for the target may
depend on a common proprietary C runtime library or console output library,
but must not depend on a proprietary code generation library or code
optimization library. Rust's license permits such combinations, but the
Rust project has no interest in maintaining such combinations within the
scope of Rust itself, even at tier 3.
"onerous" here is an intentionally subjective term. At a minimum, "onerous"
legal/licensing terms include but are not limited to: non-disclosure
requirements, non-compete requirements, contributor license agreements
(CLAs) or equivalent, "non-commercial"/"research-only"/etc terms,
requirements conditional on the employer or employment of any particular
Rust developers, revocable terms, any requirements that create liability
for the Rust project or its developers or users, or any requirements that
adversely affect the livelihood or prospects of the Rust project or its
developers or users.
No dependencies were added to Rust.
> * Neither this policy nor any decisions made regarding targets shall create any
binding agreement or estoppel by any party. If any member of an approving
Rust team serves as one of the maintainers of a target, or has any legal or
employment requirement (explicit or implicit) that might affect their
decisions regarding a target, they must recuse themselves from any approval
decisions regarding the target's tier status, though they may otherwise
participate in discussions.
> * This requirement does not prevent part or all of this policy from being
cited in an explicit contract or work agreement (e.g. to implement or
maintain support for a target). This requirement exists to ensure that a
developer or team responsible for reviewing and approving a target does not
face any legal threats or obligations that would prevent them from freely
exercising their judgment in such approval, even if such judgment involves
subjective matters or goes beyond the letter of these requirements.
Understood.
I am not a member of a Rust team.
> * Tier 3 targets should attempt to implement as much of the standard libraries
as possible and appropriate (core for most targets, alloc for targets
that can support dynamic memory allocation, std for targets with an
operating system or equivalent layer of system-provided functionality), but
may leave some code unimplemented (either unavailable or stubbed out as
appropriate), whether because the target makes it impossible to implement or
challenging to implement. The authors of pull requests are not obligated to
avoid calling any portions of the standard library on the basis of a tier 3
target not implementing those portions.
Understood.
`std` is supported.
> * The target must provide documentation for the Rust community explaining how
to build for the target, using cross-compilation if possible. If the target
supports running binaries, or running tests (even if they do not pass), the
documentation must explain how to run such binaries or tests for the target,
using emulation if possible or dedicated hardware if necessary.
Building is described in the derived target doc.
> * Tier 3 targets must not impose burden on the authors of pull requests, or
other developers in the community, to maintain the target. In particular,
do not post comments (automated or manual) on a PR that derail or suggest a
block on the PR based on a tier 3 target. Do not send automated messages or
notifications (via any medium, including via `@)` to a PR author or others
involved with a PR regarding a tier 3 target, unless they have opted into
such messages.
> * Backlinks such as those generated by the issue/PR tracker when linking to
an issue or PR are not considered a violation of this policy, within
reason. However, such messages (even on a separate repository) must not
generate notifications to anyone involved with a PR who has not requested
such notifications.
Understood.
> * Patches adding or updating tier 3 targets must not break any existing tier 2
or tier 1 target, and must not knowingly break another tier 3 target without
approval of either the compiler team or the maintainers of the other tier 3
target.
> * In particular, this may come up when working on closely related targets,
such as variations of the same architecture with different features. Avoid
introducing unconditional uses of features that another variation of the
target may not have; use conditional compilation or runtime detection, as
appropriate, to let each target run code supported by that target.
These targets are not fully ABI compatible with arm64e code.
#73628
Begin to abstract `rustc_type_ir` for rust-analyzer
This adds the "nightly" feature which is used by the compiler, and falls back to more simple implementations when that is not active.
r? `@lcnr` or `@jackh726`
Update ICU4X
This updates all ICU4X crates and regenerates rustc_baked_icu_data.
Since the new Unicode license under which they are licensed does not have an SPDX identifier yet, we define some exceptions. The license has to be reviewed to make sure it is still fine to use here, but I assume that is the case.
I also added an exception for rustc_icu_data to the unexplained ignore doctest tidy lint. This is a bit hacky but the whole style.rs in tidy is a mess so I didn't want to touch it more than this small hack.
part of #112865
r? `@davidtwco` `@wesleywiser` `@Manishearth`
enable parallel rustc front end in nightly builds
Following the [MCP](https://github.com/rust-lang/compiler-team/issues/681), this PR does the following:
1. Enable the parallel front end in nightly builds, keeping the default number of threads at 1. Users can then opt into the parallel rustc front end via the `-Z threads=n` option.
2. Keep the serial front end for beta/stable builds, via bootstrap.
3. Switch the alt builders over from parallel rustc to serial, so we have artifacts without parallelism to test against the artifacts with parallelism.
r? `@oli-obk`
cc `@cjgillot` `@nnethercote` `@bjorn3` `@Kobzol`
Remove obsolete support for linking unwinder on Android
Linking libgcc is no longer supported (see #103673), so remove the related link attributes and the check in unwind's build.rs. The check was the last remaining significant piece of logic in build.rs, so remove build.rs as well.
They've been deprecated for four years.
This commit includes the following changes.
- It eliminates the `rustc_plugin_impl` crate.
- It changes the language used for lints in
`compiler/rustc_driver_impl/src/lib.rs` and
`compiler/rustc_lint/src/context.rs`. External lints are now called
"loaded" lints, rather than "plugins" to avoid confusion with the old
plugins. This only has a tiny effect on the output of `-W help`.
- E0457 and E0498 are no longer used.
- E0463 is narrowed, now only relating to unfound crates, not plugins.
- The `plugin` feature was moved from "active" to "removed".
- It removes the entire plugins chapter from the unstable book.
- It removes quite a few tests, mostly all of those in
`tests/ui-fulldeps/plugin/`.
Closes #29597.
The debug probably isn't useful, and assigning all the `$foo`
metavariables to `foo` variables is verbose and weird. Also, `$x:expr`
usually doesn't have a space after the `:`.
rustdoc: use JS to inline target type impl docs into alias
Preview docs:
- https://notriddle.com/rustdoc-html-demo-5/js-trait-alias/std/io/type.Result.html
- https://notriddle.com/rustdoc-html-demo-5/js-trait-alias-compiler/rustc_middle/ty/type.PolyTraitRef.html
This pull request also includes a bug fix for trait alias inlining across crates. This means more documentation is generated, and is why ripgrep runs slower (it's a thin wrapper on top of the `grep` crate, so 5% of its docs are now the Result type).
- Before, built with rustdoc 1.75.0-nightly (aa1a71e9e 2023-10-26), Result type alias method docs are missing: http://notriddle.com/rustdoc-html-demo-5/ripgrep-js-nightly/rg/type.Result.html
- After, built with this branch, all the methods on Result are shown: http://notriddle.com/rustdoc-html-demo-5/ripgrep-js-trait-alias/rg/type.Result.html
*Review note: This is mostly just reverting https://github.com/rust-lang/rust/pull/115201. The last commit has the new work in it.*
Fixes #115718
This is an attempt to balance three problems, each of which would
be violated by a simpler implementation:
- A type alias should show all the `impl` blocks for the target
type, and vice versa, if they're applicable. If nothing was
done, and rustdoc continues to match them up in HIR, this
would not work.
- Copying the target type's docs into its aliases' HTML pages
directly causes far too much redundant HTML text to be generated
when a crate has large numbers of methods and large numbers
of type aliases.
- Using JavaScript exclusively for type alias impl docs would
be a functional regression, and could make some docs very hard
to find for non-JS readers.
- Making sure that only applicable docs are shown in the
resulting page requires a type checker. Do not reimplement
the type checker in JavaScript.
So, to make it work, rustdoc stashes these type-alias-inlined docs
in a JSONP "database-lite". The file is generated in `write_shared.rs`,
included in a `<script>` tag added in `print_item.rs`, and `main.js`
takes care of patching the additional docs into the DOM.
The format of `trait.impl` and `type.impl` JS files are superficially
similar. Each line, except the JSONP wrapper itself, belongs to a crate,
and they are otherwise separate (rustdoc should be idempotent). The
"meat" of the file is HTML strings, so the frontend code is very simple.
Links are relative to the doc root, though, so the frontend needs to fix
that up, and inlined docs can reuse these files.
However, there are a few differences, caused by the sophisticated
features that type aliases have. Consider this crate graph:
```text
---------------------------------
| crate A: struct Foo<T> |
| type Bar = Foo<i32> |
| impl X for Foo<i8> |
| impl Y for Foo<i32> |
---------------------------------
|
----------------------------------
| crate B: type Baz = A::Foo<i8> |
| type Xyy = A::Foo<i8> |
| impl Z for Xyy |
----------------------------------
```
The type.impl/A/struct.Foo.js JS file has a structure kinda like this:
```js
JSONP({
"A": [["impl Y for Foo<i32>", "Y", "A::Bar"]],
"B": [["impl X for Foo<i8>", "X", "B::Baz", "B::Xyy"], ["impl Z for Xyy", "Z", "B::Baz"]],
});
```
When the type.impl file is loaded, only the current crate's docs are
actually used. The main reason to bundle them together is that there's
enough duplication in them for DEFLATE to remove the redundancy.
The contents of a crate are a list of impl blocks, themselves
represented as lists. The first item in the sublist is the HTML block,
the second item is the name of the trait (which goes in the sidebar),
and all others are the names of type aliases that successfully match.
This way:
- There's no need to generate these files for types that have no aliases
in the current crate. If a dependent crate makes a type alias, it'll
take care of generating its own docs.
- There's no need to reimplement parts of the type checker in
JavaScript. The Rust backend does the checking, and includes its
results in the file.
- Docs defined directly on the type alias are dropped directly in the
HTML by `render_assoc_items`, and are accessible without JavaScript.
The JSONP file will not list impl items that are known to be part
of the main HTML file already.
[JSONP]: https://en.wikipedia.org/wiki/JSONP
Remove `rustc_symbol_mangling/messages.ftl`.
It contains a single message that (a) doesn't contain any natural language, and (b) is only used in tests.
r? `@davidtwco`
Add method to convert internal to stable constructs
This is an alternative implementation to https://github.com/rust-lang/rust/pull/116999. I believe we can still improve the logic a bit here, but I wanted to see which direction we should go first.
In this implementation, the API is simpler and we keep Tables somewhat private. The definition is still public though, since we have to expose the Stable trait. However, there's a cost of keeping another thread-local and using `Rc`, but I'm hoping it will be a small cost.
r? ``@oli-obk``
r? ``@spastorino``
Implement jump threading MIR opt
This pass is an attempt to generalize `ConstGoto` and `SeparateConstSwitch` passes into a more complete jump threading pass.
This pass is rather heavy, as it performs a truncated backwards DFS on MIR starting from each `SwitchInt` terminator. This backwards DFS remains very limited, as it only walks through `Goto` terminators.
It is built to support constants and discriminants, and propagates them through a very limited set of operations.
The pass successfully manages to disentangle the `Some(x?)` use case and the DFA use case. It still needs a few tests before being ready.
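For a concrete picture of the `Some(x?)` case mentioned above, this is the kind of source shape (ordinary Rust, not compiler code) whose MIR the pass can simplify:

```rust
fn wrap(x: Option<u8>) -> Option<u8> {
    // Roughly: the `?` desugars into a switch on `x`'s discriminant that
    // builds an intermediate value, which is then switched on again; along
    // each `Goto` path the second discriminant is already known, so that
    // switch can be threaded away.
    Some(x?)
}
```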
coverage: Emit the filenames section before encoding per-function mappings
When embedding coverage information in LLVM IR (and ultimately in the resulting binary), there are two main things that each CGU needs to emit:
- A single `__llvm_covmap` record containing a coverage header, which mostly consists of a list of filenames used by the CGU's coverage mappings.
- Several `__llvm_covfun` records, one for each instrumented function, each of which contains the hash of the list of filenames in the header.
There is a kind of loose cyclic dependency between the two: we need the hash of the file table before we can emit the covfun records, but we need to traverse all of the instrumented functions in order to build the file table.
The existing code works by processing the individual functions first. It lazily adds filenames to the file table, and stores the mostly-complete function records in a temporary list. After this it hashes the file table, emits the header (containing the file table), and then uses the hash to emit all of the function records.
This PR reverses that order: first we traverse all of the functions (without trying to prepare their function records) to build a *complete* file table, and then emit it immediately. At this point we have the file table hash, so we can then proceed to build and emit all of the function records, without needing to store them in an intermediate list.
---
Along the way, this PR makes some necessary changes that are also worthwhile in their own right:
- We split `FunctionCoverage` into distinct collector/finished phases, which neatly avoids some borrow-checker hassles when extracting a function's final expression/mapping data.
- We avoid having to re-sort a function's mappings when preparing the list of filenames that it uses.
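An abstract sketch of the reordering (plain Rust with stand-in types, not the real LLVM/rustc interfaces):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Instrumented {
    name: String,
    files: Vec<String>,
}

fn emit_coverage(functions: &[Instrumented]) {
    // Pass 1: build the *complete* file table up front.
    let mut file_table: Vec<String> = Vec::new();
    for func in functions {
        for file in &func.files {
            if !file_table.contains(file) {
                file_table.push(file.clone());
            }
        }
    }
    let mut hasher = DefaultHasher::new();
    file_table.hash(&mut hasher);
    let filenames_hash = hasher.finish();
    // The `__llvm_covmap` header can be emitted immediately...
    println!("covmap header with {} filenames", file_table.len());

    // Pass 2: ...and the `__llvm_covfun` records are streamed out one by
    // one, with no intermediate list of pending records.
    for func in functions {
        println!("covfun record for {} (filenames hash {filenames_hash:016x})", func.name);
    }
}
```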
Uplift movability and mutability, the simple way
Just make type_ir a dependency of ast. This can be relaxed later if we want to make the dependency less heavy. Part of rust-lang/types-team#124.
r? `@lcnr` or `@jackh726`
Implement rustc part of RFC 3127 trim-paths
This PR implements (or at least tries to) [RFC 3127 trim-paths](https://github.com/rust-lang/rust/issues/111540), the rustc part. That is, `-Zremap-path-scope` with all of its components/scopes.
`@rustbot` label: +F-trim-paths
Normalize alloc-id in tests.
AllocIds are globally numbered in a rustc invocation. This makes them very sensitive to changes unrelated to what is being tested. This commit normalizes them by renumbering, in order of appearance in the output.
The renumbering preserves the identity of each allocation, which a simple `allocN` placeholder wouldn't. This is useful when we have memory dumps.
cc `@saethlin`
r? `@oli-obk`
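A sketch of the renumbering idea (illustrative only; the real normalization lives in the test tooling and handles more cases):

```rust
use std::collections::HashMap;

// Replace each distinct `allocNNN` with a fresh sequential id, assigned in
// order of first appearance, so unrelated changes elsewhere in the compiler
// don't shift every id in the expected output.
fn normalize_alloc_ids(output: &str) -> String {
    let re = regex::Regex::new(r"alloc[0-9]+").unwrap();
    let mut seen: HashMap<String, usize> = HashMap::new();
    re.replace_all(output, |caps: &regex::Captures<'_>| {
        let next = seen.len() + 1;
        let id = *seen.entry(caps[0].to_string()).or_insert(next);
        format!("alloc{id}")
    })
    .into_owned()
}
```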
Remove cgu_reuse_tracker from Session
This removes a bit of global mutable state.
It will now miss post-LTO CGU reuse when ThinLTO determines that a CGU doesn't get changed, but there weren't any tests for this anyway, and a test for it would be fragile with respect to the exact implementation of ThinLTO in LLVM.
Implement `-Clink-self-contained=-linker` opt out
This implements the `-Clink-self-contained` opt out necessary to switch to lld by changing rustc's defaults instead of cargo's.
Components that are enabled and disabled on the CLI are recorded, for the purpose of being merged with the ones which the target spec will declare (I'll open another PR for that tomorrow, for easier review).
For MCP510, we now check whether using the self-contained linker is disabled on the CLI. Right now it would only be sensible to do so with `-Zgcc-ld=lld` (and I'll add some checks that we don't both enable and disable a component on the CLI in a future PR), but the goal is to simplify adding the check of the target's enabled components here in the follow-up PRs.
r? `@petrochenkov`
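A rough sketch of recording the `+`/`-` component syntax described above (illustrative only; the real parsing and the component enum live in `rustc_session`):

```rust
#[derive(Debug, Default)]
struct LinkSelfContained {
    enabled: Vec<String>,
    disabled: Vec<String>,
}

// Parse e.g. `-Clink-self-contained=-linker` into the two lists that later
// get merged with whatever the target spec declares.
fn parse_link_self_contained(value: &str) -> LinkSelfContained {
    let mut out = LinkSelfContained::default();
    for component in value.split(',') {
        if let Some(name) = component.strip_prefix('+') {
            out.enabled.push(name.to_owned());
        } else if let Some(name) = component.strip_prefix('-') {
            out.disabled.push(name.to_owned());
        }
    }
    out
}
```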
Bring back generic parameters for indices in rustc_abi and make it compile on stable
This effectively reverses https://github.com/rust-lang/rust/pull/107163, allowing rust-analyzer to depend on this crate again.
It also moves some glob imports / expands them in the first commit because they made it more difficult for me to reason about things.
Update windows ffi bindings
Bump `windows-bindgen` to version 0.51.1. This brings with it some changes to the generated FFI bindings, but little that affects the code.
One change that does have more of an impact is `SOCKET` being `usize` instead of either `u64` or `u32` (as is used in std's public `SOCKET` type). However, it's now easy enough to abstract over that difference.
Finally I added a few new bindings that are likely to be used in pending PRs, mostly to make sure they're ok with the new metadata.
r? libs
Split out the stable part of smir into its own crate to prevent accidental usage of forever unstable things
Some groundwork for being able to work on https://github.com/rust-lang/project-stable-mir/issues/27 at all
r? `@spastorino`
- Some comment fixes.
- Make some functions unsafe.
- Make helpers module private.
- Rebase on master
- Update r-efi to v4.2.0
Signed-off-by: Ayush Singh <ayushsingh1325@gmail.com>
This reduces the number of dependencies pulling in `atty`.
```
Removing env_logger v0.9.3
```
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
This reduces the number of dependencies pulling in `atty`.
```
Updating colored v2.0.0 -> v2.0.4
Updating tracing-tree v0.2.3 -> v0.2.4
```
Signed-off-by: Martin Kröning <martin.kroening@eonerc.rwth-aachen.de>
Add initial libstd support for Xous
This patchset adds some minimal support for the tier 3 target `riscv32imac-unknown-xous-elf`. The following features are supported:
* alloc
* thread creation and joining
* thread sleeping
* thread_local
* panic_abort
* mutex
* condvar
* stdout
Additionally, internal support for the various Xous primitives surrounding IPC have been added as part of the Xous FFI. These may be exposed as part of `std::os::xous::ffi` in the future, however for now they are not public.
This represents the minimum viable product. A future patchset will add support for networking and filesystem support.
Avoid blessing cargo deps's source code in ui tests
Before this PR, the source code of dependencies was included in UI test error messages whenever possible. Unfortunately, "whenever possible" means in some cases the source code wouldn't be injected, resulting in a test failure.
One such case is when `$CARGO_HOME` is remapped to something that is not present on disk [^1]. As the remapped path doesn't exist on disk, the source code wouldn't be shown in `tests/ui/issues/issue-21763.rs`:
```diff
= note: required for `hashbrown::raw::RawTable<(Rc<()>, Rc<()>)>` to implement `Send`
note: required because it appears within the type `HashMap<Rc<()>, Rc<()>, RandomState>`
--> $HASHBROWN_SRC_LOCATION
- |
-LL | pub struct HashMap<K, V, S = DefaultHashBuilder, A: Allocator + Clone = Global> {
- | ^^^^^^^
note: required because it appears within the type `HashMap<Rc<()>, Rc<()>>`
--> $SRC_DIR/std/src/collections/hash/map.rs:LL:COL
note: required by a bound in `foo`
```
This PR fixes the problem by always hiding dependencies source code in the error messages generated during UI tests. This is implemented with a new internal flag, `-Z ignore-directory-in-diagnostics-source-blocks=$path`, which compiletest passes during UI tests. Once this is merged, remapping the Cargo home will be supported.
This PR is best reviewed commit-by-commit.
[^1]: After being puzzled for a bit, I discovered why this never impacted `rust-lang/rust`: we don't remap `$CARGO_HOME` 😅. Instead, we set `$CARGO_HOME` to `/cargo` in CI, which sort-of-but-not-really achieves the same effect.
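The effect of the new flag amounts to a check like the following when the emitter decides whether to render a source block (illustrative stand-alone sketch, not the actual `rustc_errors` code):

```rust
use std::path::{Path, PathBuf};

// Skip rendering source lines for files that live under any of the ignored
// directories (compiletest passes the dependency directories here).
fn should_show_source_block(file: &Path, ignored_dirs: &[PathBuf]) -> bool {
    !ignored_dirs.iter().any(|dir| file.starts_with(dir))
}
```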
Refactor `opt-dist` to simplify local building
This PR refactors the `opt-dist` tool to make it easier to invoke it locally, outside of CI, and thus simplify building PGO/BOLT optimized `rustc` builds e.g. for distro maintainers. It should also make it easier to run the PGO/BOLT workflow locally e.g. to profile performance or debug issues (looking at you, https://github.com/rust-lang/rust/pull/115554).
Replace `rustc_data_structures` dependency with `rustc_index` in `rustc_parse_format`
`rustc_data_structures` is only used for the `static_assert_size` macro, yet that is defined in `rustc_index` and merely re-exported. `rustc_index` is a lot more lightweight than `rustc_data_structures` which would make this a lot more reusable for rust-analyzer.
Rollup of 6 pull requests
Successful merges:
- #111580 (Don't ICE on layout computation failure)
- #114923 (doc: update lld-flavor ref)
- #115174 (tests: add test for #67992)
- #115187 (Add new interface to smir)
- #115300 (Tweaks and improvements on SMIR around generics_of and predicates_of)
- #115340 (some more is_zst that should be is_1zst)
r? `@ghost`
`@rustbot` modify labels: rollup
The Xous operating system has no libc, and relies on `compiler_builtins`
to supply basic functions such as `memcpy`. Bump the version to 0.1.101
to pull in a version of this crate with the flag enabled for Xous.
Signed-off-by: Sean Cross <sean@xobs.io>
Make `unconditional_recursion` warning detect recursive drops
Closes #55388
Also closes #50049 unless we want to keep it for the second example which this PR does not solve, but I think it is better to track that work in #57965.
r? `@oli-obk` since you are the mentor for #55388
Unresolved questions:
- [x] There are two false positives that must be fixed before merging (see diff). I suspect the best way to solve them is to perform analysis after drop elaboration instead of before, as now, but I have not explored that any further yet. Could that be an option? **Answer:** Yes, that solved the problem.
`@rustbot` label +T-compiler +C-enhancement +A-lint
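For reference, this is the kind of recursive drop the extended lint is meant to flag (a minimal example; whether it fires on every such shape depends on the final analysis):

```rust
struct Foo;

impl Drop for Foo {
    fn drop(&mut self) {
        // Creating and dropping another `Foo` re-enters this same `drop`
        // through the drop glue, so the recursion is unconditional.
        drop(Foo);
    }
}

fn main() {
    let _foo = Foo; // dropping `_foo` would recurse until the stack overflows
}
```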
Lots of tiny incremental simplifications of `EmitterWriter` internals
ignore the first commit, it's https://github.com/rust-lang/rust/pull/114088 squashed and rebased, but it's needed in order to use `derive_setters`, as it needs a newer `syn` version.
Then this PR starts out with removing many arguments that are almost always defaulted to `None` or `false` and replace them with builder methods that can set these fields in the few cases that want to set them.
After that, it's one commit after another that removes or merges things until everything becomes some very simple trait objects.
Update lexer emoji diagnostics to Unicode 15.0
This replaces the `unic-emoji-char` dep tree (which hasn't been updated for a while) with `unicode-properties` crate which contains Unicode 15.0 data.
Improves diagnostics for added emoji characters in recent years. (See tests).
cc #101840
cc ``@Manishearth``
Verify that all crate sources are in sync
This ensures that rustc will not attempt to link against a cdylib as if it is a rust dylib when an rlib for the same crate is available. Previously rustc didn't actually check if any further formats of a crate which has been loaded are of the same version and if they are actually valid. This caused a cdylib to be interpreted as rust dylib as soon as the corresponding rlib was loaded. As cdylibs don't export any rust symbols, linking would fail if rustc decides to link against the cdylib rather than the rlib.
Two crates depended on the previous behavior by separately compiling a test crate as both rlib and dylib. These have been changed to capture their original spirit to the best of my ability while still working when rustc verifies that all crates are in sync. It is unlikely that build systems depend on the current behavior, and in any case we are taking a lot of measures to ensure that any change to either the source or the compilation options (including crate type) results in rustc rejecting it as incompatible. We merely didn't do this check here for now-obsolete perf reasons.
Fixes https://github.com/rust-lang/rust/issues/10786
Fixes https://github.com/rust-lang/rust/issues/82151
Fixes https://github.com/rust-lang/rust/issues/82972
Closes https://github.com/bevy-cheatbook/bevy-cheatbook/issues/114
Implement rust-lang/compiler-team#578.
When an ICE is encountered on nightly releases, the new rustc panic
handler will also write the contents of the backtrace to disk. If any
`delay_span_bug`s are encountered, their backtrace is also added to the
file. The platform and rustc version will also be collected.
"no method" errors on standard library types
The standard library developer can annotate methods on e.g.
`BTreeSet::push` with `#[rustc_confusables("insert")]`. When the user
mistypes `btreeset.push()`, `BTreeSet::insert` will be suggested if
there are no other candidates to suggest.
Implement a few more rvalue translations to smir
Add the implementation for a few more RValue variants. For now, I simplified the stable version of `RValue::Ref` by removing the notion of Region.
r? `@oli-obk`