Suggest `x build library` for a custom toolchain that fails to load `core`
Fixes #113222
The nicer suggestion for the dev channel won't be emitted if `-Z ui-testing` is enabled. IMO, this is acceptable for now.
It's the same as `Delimiter`, minus the `Invisible` variant. I'm
generally in favour of using types to make impossible states
unrepresentable, but this one feels very low-value, and the conversions
between the two types are annoying and confusing.
Look at the change in `src/tools/rustfmt/src/expr.rs` for an example:
the old code converted from `MacDelimiter` to `Delimiter` and back
again, for no good reason. This suggests the author was confused about
the types.
This patch adds a feature gate, `async_fn_track_caller`, that is separate from `closure_track_caller`, so that `#[track_caller]` on async functions can be enabled on its own.
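As a rough sketch of what the new gate permits (nightly-only; the function below is illustrative, not taken from the tests):
```rust
// Sketch only: assumes a nightly toolchain where this gate is available.
#![feature(async_fn_track_caller)]

#[track_caller] // gated separately from closure_track_caller by this patch
async fn report() -> &'static std::panic::Location<'static> {
    // With the gate, this reports the caller of `report`, not this line.
    std::panic::Location::caller()
}
```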
Fixes #110009
Exporting `__rust_alloc_error_handler_should_panic` multiple times
causes ld.gold to balk with: `error: version script assignment of to
symbol __rust_alloc_error_handler_should_panic failed: symbol not
defined`
Specifically this breaks builds on DragonFly and YoctoProject with
ld.gold. Builds with ld.bfd should be unaffected.
Rollup of 5 pull requests
Successful merges:
- #114079 (Use `upvar_tys` in more places, make it return a list)
- #114166 (Add regression test for resolving `--extern libc=test.rlib`)
- #114321 (get auto traits for parallel rustc)
- #114335 (fix and extend ptr_comparison test)
- #114347 (x.py print more detailed format files and untracked files count)
r? `@ghost`
`@rustbot` modify labels: rollup
Improve `invalid_reference_casting` lint
This PR is a follow-up to https://github.com/rust-lang/rust/pull/111567 and https://github.com/rust-lang/rust/pull/113422.
This PR does multiple things:
- First, it adds support for deferred dereferences: the goal is to support code like this, where the cast and the dereference are not done in the same expression
```rust
let myself = self as *const Self as *mut Self;
*myself = Self::Ready(value);
```
- Second, it no longer lints on SB/TB UB code, by checking only assignments (`=`, `+=`, ...) and creations of mutable references (`&mut *`)
- Third, it greatly improves the diagnostics, in particular for casts from `&mut` to `&mut` and for assignments
- ~~And lastly it renames the lint from `cast_ref_to_mut` to `invalid_reference_casting` which is more consistent with the ["rules"](https://github.com/rust-lang/rust-clippy/issues/2845) and also more consistent with what the lint checks~~ *https://github.com/rust-lang/rust/pull/113422*
This PR is best reviewed commit by commit.
r? compiler
Miri: fix error on dangling pointer inbounds offset
We used to claim that the pointer was "dereferenced", but that is just not true.
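A hypothetical example (not from this PR's test suite) of the kind of code that triggers the error: an in-bounds offset on a dangling pointer, which involves no dereference at all.
```rust
fn main() {
    let ptr = {
        let boxed = Box::new(0u8);
        &*boxed as *const u8
        // `boxed` is freed at the end of this block, so `ptr` dangles.
    };
    // UB that Miri reports: an in-bounds offset on a dangling pointer.
    // The old message claimed the pointer was "dereferenced", even though
    // no dereference happens here.
    let _next = unsafe { ptr.add(1) };
}
```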
Can be reviewed commit-by-commit. The first commit is an unrelated rename that didn't seem worth splitting into its own PR.
r? `@oli-obk`
coverage: Consolidate FFI types into one module
Coverage FFI types were historically split across two modules, because some of them were needed by code in `rustc_codegen_ssa`.
Now that all of the coverage codegen code has been moved into `rustc_codegen_llvm` (#113355), it's possible to move all of the FFI types into a single module, making it easier to see all of them at once.
---
This PR only moves code and adjusts imports; there should be no functional changes.
Coverage FFI types were historically split across two modules, because some of
them were needed by code in `rustc_codegen_ssa`.
Now that all of the coverage codegen code has been moved into
`rustc_codegen_llvm` (#113355), it's possible to move all of the FFI types into
a single module, making it easier to see all of them at once.
Rollup of 6 pull requests
Successful merges:
- #114178 (Account for macros when suggesting a new let binding)
- #114199 (Don't unsize coerce infer vars in select in new solver)
- #114301 (Don't check unnecessarily that impl trait is RPIT)
- #114314 (Tweaks to `adt_sized_constraint`)
- #114322 (Fix invalid slice coercion suggestion reported in turbofish)
- #114340 ([rustc_attr][nit] Replace `filter` + `is_some` with `map_or`.)
r? `@ghost`
`@rustbot` modify labels: rollup
Fix invalid slice coercion suggestion reported in turbofish
This PR fixes the invalid slice coercion suggestion reported within turbofish and inferred generics, by no longer emitting it in those cases.
Fixes https://github.com/rust-lang/rust/issues/110063
Don't check unnecessarily that impl trait is RPIT
We have this random `return_type_impl_trait` function, used in outlives suggestions, to detect if a function returns an RPIT, but removing it doesn't actually change any diagnostics. Let's just remove it.
Also, suppress a spurious outlives error from a ReError.
Fixes #114274
Account for macros when suggesting a new let binding
Provide a structured suggestion when the expression comes from a macro expansion:
```
error[E0716]: temporary value dropped while borrowed
--> $DIR/borrowck-let-suggestion.rs:2:17
|
LL | let mut x = vec![1].iter();
| ^^^^^^^ - temporary value is freed at the end of this statement
| |
| creates a temporary value which is freed while still in use
LL |
LL | x.use_mut();
| - borrow later used here
|
= note: this error originates in the macro `vec` (in Nightly builds, run with -Z macro-backtrace for more info)
help: consider using a `let` binding to create a longer lived value
|
LL ~ let binding = vec![1];
LL ~ let mut x = binding.iter();
|
```
WASI threads, implementation of wasm32-wasi-preview1-threads target
This PR adds a target proposed in https://github.com/rust-lang/compiler-team/issues/574 by `@abrown`, and an implementation of `std::thread::spawn` for the target `wasm32-wasi-preview1-threads`.
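As a rough sketch (assuming a toolchain with this tier 3 target built as described in the linked platform-support doc), spawning a thread looks the same as on other platforms:
```rust
use std::thread;

fn main() {
    // On wasm32-wasi-preview1-threads, spawn is backed by wasi-threads;
    // on the plain wasm32-wasi target it is unsupported.
    let handle = thread::spawn(|| (0..10).sum::<u32>());
    assert_eq!(handle.join().unwrap(), 45);
}
```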
### Tier 3 Target Policy
As tier 3 targets, the new targets are required to adhere to [the tier 3 target policy](https://doc.rust-lang.org/nightly/rustc/target-tier-policy.html#tier-3-target-policy) requirements. This section quotes each requirement in entirety and describes how they are met.
> - A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)
See [src/doc/rustc/src/platform-support/wasm32-wasi-preview1-threads.md](https://github.com/rust-lang/rust/pull/112922/files#diff-a48ee9d94f13e12be24eadd08eb47b479c153c340eeea4ef22276d876dfd4f3e).
> - Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.
> - Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.
> If possible, use only letters, numbers, dashes and underscores for the name. Periods (.) are known to cause issues in Cargo.
The target uses the same names for $ARCH=wasm32 and $OS=wasi as existing Rust targets. The suffix `preview1` was introduced to accurately set expectations, because this target will eventually be deprecated; this follows [MCP 607](https://github.com/rust-lang/compiler-team/issues/607). The suffix `threads` indicates that this is an extension that adds threads to the existing target, following [MCP 574](https://github.com/rust-lang/compiler-team/issues/574), which describes the rationale behind introducing a separate target.
> - Tier 3 targets may have unusual requirements to build or use, but must not create legal issues or impose onerous legal terms for the Rust project or for Rust developers or users.
> - The target must not introduce license incompatibilities.
> - Anything added to the Rust repository must be under the standard Rust license (MIT OR Apache-2.0).
> - The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the tidy tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
> - Compiling, linking, and emitting functional binaries, libraries, or other code for the target (whether hosted on the target itself or cross-compiling from another target) must not depend on proprietary (non-FOSS) libraries. Host tools built for the target itself may depend on the ordinary runtime libraries supplied by the platform and commonly used by other applications built for the target, but those libraries must not be required for code generation for the target; cross-compilation to the target must not require such libraries at all. For instance, rustc built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
> - "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are not limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.
This PR does not introduce any new dependency.
The new target doesn’t support building host tools.
> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.
The full standard library is available for this target, as it's an extension of an existing target that already supports it.
> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running binaries, or running tests (even if they do not pass), the documentation must explain how to run such binaries or tests for the target, using emulation if possible or dedicated hardware if necessary.
Only manual test running is supported at the moment, with some tweaks in the test runner codebase. For building and running tests, see [src/doc/rustc/src/platform-support/wasm32-wasi-preview1-threads.md](https://github.com/rust-lang/rust/pull/112922/files#diff-a48ee9d94f13e12be24eadd08eb47b479c153c340eeea4ef22276d876dfd4f3e).
> - Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
> - This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.
> - Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via `@`) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
> - Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.
> - Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
> - In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.
I acknowledge these requirements and intend to ensure they are met.
Similar to the last commit, it's more of a `Parser`-level concern than a
`TokenCursor`-level concern. And the struct size reductions are nice.
After this change, `TokenCursor` is as minimal as possible (two fields
and two methods), which is nice.
It's more of a `Parser`-level concern than a `TokenCursor`-level
concern. Also, `num_bump_calls` is a more accurate name, because it's
incremented in `Parser::bump`.
Comparing vectors of string parts yields the same result but avoids
unnecessary `join` and potential allocation for resulting `String`.
This code is cold so it's unlikely to have any measurable impact, but
since it's also simpler, why not? :)
Filter out short-lived LLVM diagnostics before they reach the rustc handler
During profiling I saw remark passes being unconditionally enabled: for example `Machine Optimization Remark Emitter`.
The diagnostic remarks enabled by default are [from missed optimizations and opt analyses](https://github.com/rust-lang/rust/pull/113339#discussion_r1259480303). They are created by LLVM, passed to the diagnostic handler on the C++ side, emitted to Rust, where they are unpacked, their C++ strings converted to Rust strings, etc.
Then they are discarded in the vast majority of the time (i.e. unless some kind of `-Cremark` has enabled some of these passes' output to be printed).
These unneeded allocations are very short-lived, basically only lasting between the LLVM pass emitting them and the Rust handler where they are discarded. So it doesn't hugely impact max-rss, and is only a slight reduction in instruction count (cachegrind reports a reduction between 0.3% and 0.5%) _on Linux_. It's possible that targets without `jemalloc` or with a worse allocator may optimize these less.
It is however significant in the aggregate, looking at the total number of allocated bytes:
- it's the biggest source of allocations according to dhat, on the benchmarks I've tried e.g. `syn` or `cargo`
- allocations on `syn` are reduced by 440MB, 17% (from 2440722647 bytes total, to 2030461328 bytes)
- allocations on `cargo` are reduced by 6.6GB, 19% (from 35371886402 bytes total, to 28723987743 bytes)
Some of these diagnostics objects [are allocated in LLVM](https://github.com/rust-lang/rust/pull/113339#discussion_r1252387484) *before* they're emitted to our diagnostic handler, where they'll be filtered out. So we could remove those in the future, but that will require changing a few LLVM call-sites upstream, so I left a FIXME.
Move doc comment desugaring out of `TokenCursor`.
It's awkward that `TokenCursor` sometimes desugars doc comments on the fly, but usually doesn't.
r? `@petrochenkov`
Now that remarks are filtered before cg_llvm's diagnostic handler callback
is called, we don't need to do the filtering after the C++-to-Rust conversion
of the diagnostic.
This will eliminate many short-lived allocations (e.g. 20% of the memory used
building cargo) when unpacking the diagnostic and converting its various
C++ strings into Rust strings, just for them to be filtered out most of the time.
cleanup: remove pointee types
This can't be merged until the oldest LLVM version we support uses opaque pointers, which will be the case after #114148. (Also note `-Cllvm-args="-opaque-pointers=0"` can technically be used in LLVM 15, though I don't think we should support that configuration.)
I initially hoped this would provide some minor perf win, but in https://github.com/rust-lang/rust/pull/105412#issuecomment-1341224450 it had very little impact, so this is only valuable as a cleanup.
As a followup, this will enable #96242 to be resolved.
r? `@ghost`
`@rustbot` label S-blocked
Since all output characters taken from `BASE_64` are valid UTF-8 chars,
there is no need to waste cycles on validation.
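A minimal sketch of the idea, with an illustrative digit table and helper rather than the exact rustc code:
```rust
// Every digit byte is drawn from an ASCII-only table, so skipping the
// UTF-8 check with `from_utf8_unchecked` is sound.
const BASE_64: &[u8; 64] =
    b"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@$";

fn append_digits(digits: &[u8], output: &mut String) {
    debug_assert!(digits.iter().all(|d| BASE_64.contains(d)));
    // SAFETY: all bytes come from BASE_64, which is pure ASCII.
    output.push_str(unsafe { std::str::from_utf8_unchecked(digits) });
}

fn main() {
    let mut s = String::new();
    append_digits(b"ff", &mut s);
    assert_eq!(s, "ff");
}
```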
Even though it's obviously a perf win, I've also used a [benchmark](https://gist.github.com/ttsugriy/e1e63c07927d8f31e71695a9c617bbf3)
on an M1 MacBook Air, with the following results:
```
Running benches/base_n_benchmark.rs (target/release/deps/base_n_benchmark-825fe5895b5c2693)
push_str/old time: [14.670 µs 14.852 µs 15.074 µs]
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
4 (4.00%) high mild
7 (7.00%) high severe
push_str/new time: [12.573 µs 12.674 µs 12.801 µs]
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
7 (7.00%) high mild
4 (4.00%) high severe
```
[rustc_data_structures][perf] Simplify base_n::push_str.
This minor change removes the need to reverse the resulting digits. Since reverse is O(|digit_num|) but bounded by 128, it's unlikely to be noticeable in practice. At the same time, this code is also one line shorter, so combined with the tiny perf win, why not?
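As a sketch of the approach (fill a fixed-size buffer from the end so no `reverse` is needed; the digit table and bounds here are illustrative, not copied from `rustc_data_structures`):
```rust
const DIGITS: &[u8] = b"0123456789abcdefghijklmnopqrstuvwxyz";

fn push_str(mut n: u128, base: usize, output: &mut String) {
    assert!((2..=DIGITS.len()).contains(&base));
    // 128 bytes is enough for a u128 in any base >= 2.
    let mut buf = [0u8; 128];
    let mut index = buf.len();
    loop {
        index -= 1;
        buf[index] = DIGITS[(n % base as u128) as usize];
        n /= base as u128;
        if n == 0 {
            break;
        }
    }
    // The digits are already in the right order; no reverse step.
    output.push_str(std::str::from_utf8(&buf[index..]).unwrap());
}

fn main() {
    let mut s = String::new();
    push_str(255, 16, &mut s);
    assert_eq!(s, "ff");
}
```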
I ran https://gist.github.com/ttsugriy/ed14860ef597ab315d4129d5f8adb191 on an M1 MacBook Air and got a small improvement:
```
Running benches/base_n_benchmark.rs (target/release/deps/base_n_benchmark-825fe5895b5c2693)
push_str/old time: [14.180 µs 14.313 µs 14.462 µs]
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
4 (4.00%) high mild
1 (1.00%) high severe
push_str/new time: [13.741 µs 13.839 µs 13.973 µs]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
```
Improve diagnostic for wrong borrow on binary operations
This PR improves the diagnostic for wrong borrows on binary operations by suggesting a reborrow on the appropriate expressions.
```diff
+ = note: an implementation for `&Foo * &Foo` exist
+ help: consider reborrowing both sides
+ |
+ LL | let _ = &*ref_mut_foo * &*ref_mut_foo;
+ | ++ ++
```
Fixes https://github.com/rust-lang/rust/issues/109352
coverage: Replace `ExpressionOperandId` with enum `Operand`
*This is one step in my larger coverage refactoring ambitions described at <https://github.com/rust-lang/compiler-team/issues/645>.*
LLVM coverage has a concept of “mapping expressions” that allow a span's execution count to be computed as a simple arithmetic expression over other counters/expressions, instead of requiring a dedicated physical counter for every control-flow branch.
These expressions have an operator (`+` or `-`) and two operands. Operands are currently represented as `ExpressionOperandId`, which wraps a `u32` with the following semantics:
- 0 represents a special counter that always has a value of zero
- Values ascending from 1 represent counter IDs
- Values descending from `u32::MAX` represent the IDs of other expressions
---
This change replaces that whole `ExpressionOperandId` scheme with a simple enum that explicitly distinguishes between the three cases.
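As a rough sketch of the shape (the ID newtypes here are placeholders, not the real coverage types):
```rust
// Placeholder newtypes standing in for the real coverage ID types.
struct CounterId(u32);    // ascends from 0
struct ExpressionId(u32); // ascends from 0

// One enum instead of a u32 with three overlapping meanings.
enum Operand {
    Zero,                     // the special always-zero operand
    Counter(CounterId),       // refers to a physical counter
    Expression(ExpressionId), // refers to another expression
}
```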
This lets us remove a lot of fiddly code for dealing with the different operand kinds:
- Previously it was only possible to distinguish between counter-ID operands and expression-ID operands by comparing the operand ID with the total number of counters in a function. This is unnecessary now that the enum distinguishes them explicitly.
- There's no need for expression IDs to descend from `u32::MAX` and then get translated into zero-based indices in certain places. Now that they ascend from zero, they can be used as indices directly.
- There's no need to reserve ID number 0 for the special zero operand, since it can just have its own variant in the enum, so counter IDs can count up from 0.
(Making counter IDs ascend from 0 also lets us fix an off-by-one error in the query for counting the total number of counters, which would cause LLVM to emit an extra unused counter for every instrumented function.)
---
This PR may be easiest to review as individual patches, since that breaks it up into clearly distinct parts:
- Replace a `u32` wrapper with an explicit enum, without changing the semantics of the underlying IDs being stored.
- Change the numbering scheme used by `Operand::Expression` to make expression IDs ascend from 0 (instead of descending from `u32::MAX`).
- Change the numbering scheme used by `Operand::Counter` to make counter IDs ascend from 0 (instead of ascending from 1).
[rustc_data_structures] Simplify SortedMap::insert.
It looks like the current usage of `swap` is aimed at achieving what `std::mem::replace` does; using `replace` is more concise and idiomatic.
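A minimal sketch of the swap-to-replace simplification, with illustrative types rather than the actual `SortedMap` internals:
```rust
use std::mem;

// Overwrite the stored value and hand back the old one.
fn overwrite(slot: &mut String, new_value: String) -> Option<String> {
    // Before: a temporary plus `mem::swap` achieved the same effect:
    //     let mut old = new_value;
    //     mem::swap(slot, &mut old);
    //     Some(old)
    Some(mem::replace(slot, new_value))
}

fn main() {
    let mut stored = String::from("old");
    assert_eq!(overwrite(&mut stored, String::from("new")), Some(String::from("old")));
    assert_eq!(stored, "new");
}
```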
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.
As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.
This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
The actual motivation here is to prevent `rustfmt` from suddenly reformatting
these enum variants onto a single line, when they become slightly shorter in
the future.
But there's no harm in adding some helpful documentation at the same time.
Detect trait upcasting through struct tail unsizing in new solver select
Oops, we were able to hide trait upcasting behind a parent unsize goal that evaluated to `Certainty::Yes`. Let's do rematching for `Certainty::Yes` unsize goals with `BuiltinImplSource::Misc` sources (corresponding to all of the other unsize rules) to make sure we end up selecting any nested goals which may be satisfied via `BuiltinImplSource::TraitUpcasting` or `::TupleUnsizing`.
r? `@lcnr`
Update lexer emoji diagnostics to Unicode 15.0
This replaces the `unic-emoji-char` dep tree (which hasn't been updated for a while) with the `unicode-properties` crate, which contains Unicode 15.0 data.
Improves diagnostics for emoji characters added in recent years (see tests).
cc #101840
cc `@Manishearth`
This minor change removes the need to reverse resulting digits.
Since reverse is O(|digit_num|) but bounded by 128, it's unlikely
to be noticeable in practice. At the same time, this code is
also one line shorter, so combined with the tiny perf win, why not? :)
I ran https://gist.github.com/ttsugriy/ed14860ef597ab315d4129d5f8adb191
on an M1 MacBook Air and got a small improvement:
```
Running benches/base_n_benchmark.rs (target/release/deps/base_n_benchmark-825fe5895b5c2693)
push_str/old time: [14.180 µs 14.313 µs 14.462 µs]
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
4 (4.00%) high mild
1 (1.00%) high severe
push_str/new time: [13.741 µs 13.839 µs 13.973 µs]
Performance has improved.
Found 8 outliers among 100 measurements (8.00%)
3 (3.00%) high mild
5 (5.00%) high severe
```
Map RPITIT's opaque type bounds back from projections to opaques
An RPITIT in a program's AST is eventually translated into both a projection GAT and an opaque. The opaque is used for default trait methods, like:
```
trait Foo {
fn bar() -> impl Sized { 0i32 }
}
```
The item bounds for both the projection and opaque are identical, and both have a *projection* self ty. This is mostly okay, since we can normalize this projection within the default trait method body to the opaque, but it causes two problems:
1. it leads to bugs in places where we don't normalize item bounds, like `deduce_future_output_from_obligations`
2. it leads to extra match arms that are both suspicious looking and also easy to miss
This PR maps the opaque type bounds of the RPITIT's *opaque* back to the opaque's self type to avoid this quirk. Then we can fix the UI test for #108304 (1.) and also remove a bunch of match arms (2.).
Fixes #108304
r? `@spastorino`
Check lazy type aliases for well-formedness
Previously we didn't check if `T: Mul` holds given lazy `type Alias<T> = <T as Mul>::Output;`.
Now we do. It only makes sense.
`@rustbot` label F-lazy_type_alias
r? `@oli-obk`
This function has some shared code for the thin LTO and fat LTO cases,
but those cases have so little in common that it's actually clearer to
treat them fully separately.
PR #112946 tweaked the naming of LLVM threads, but messed things up
slightly, resulting in threads on Windows having names like `optimize
module {} regex.f10ba03eb5ec7975-cgu.0`.
This commit removes the extraneous `{} `.
The main loop has a *very* complex condition, which includes two
mentions of `codegen_state`. The body of the loop then immediately
switches on the `codegen_state`.
I find it easier to understand if it's a `loop` and we check for exit
conditions after switching on `codegen_state`. We end up with a tiny bit
of code duplication, but it's clear that (a) we never exit in the
`Ongoing` case, (b) we exit in the `Completed` state only if several
things are true (and there's interaction with LTO there), and (c) we
exit in the `Aborted` state if a couple of things are true. Also, the
exit conditions are all simple conjunctions.
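A simplified sketch of that restructuring, with toy state/work types rather than the real backend ones:
```rust
enum CodegenState { Ongoing, Completed, Aborted }

fn coordinator_loop(
    mut state: CodegenState,
    mut work_items: Vec<&'static str>,
    mut running_with_own_token: usize,
) {
    loop {
        match state {
            // (a) never exit while codegen is still ongoing
            CodegenState::Ongoing => {}
            // (b) exit only once all queued and running work is done
            CodegenState::Completed => {
                if work_items.is_empty() && running_with_own_token == 0 {
                    break;
                }
            }
            // (c) on abort, only wait for in-flight work to drain
            CodegenState::Aborted => {
                if running_with_own_token == 0 {
                    break;
                }
            }
        }
        // One iteration of dispatching/retiring work would go here; this toy
        // version just drains everything so the loop terminates.
        work_items.pop();
        running_with_own_token = running_with_own_token.saturating_sub(1);
        state = CodegenState::Completed;
    }
}

fn main() {
    coordinator_loop(CodegenState::Ongoing, vec!["cgu.0", "cgu.1"], 1);
}
```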
This loop condition involves `codegen_state`, `work_items`, and
`running_with_own_token`. But the body of the loop cannot modify
`codegen_state`, so repeatedly checking it is unnecessary.
`CodegenContext` is immutable except for the `worker` field - we clone
`CodegenContext` in multiple places, changing the `worker` field each
time. It's simpler to move the `worker` field out of `CodegenContext`.
It took me some time to understand how the main thread can lend a
jobserver token to an LLVM thread. This commit renames a couple of
things to make it clearer.
- Rename the `LLVMing` variant as `Lending`, because that is a clearer
description of what is happening.
- Rename `running` as `running_with_own_token`, which makes it clearer
that there might be one additional LLVM thread running (with a loaned
token). Also add a comment to its definition.
And rename the `Compiled` variant as `Finished`, because that name makes
it clearer there is nothing left to do, contrasting nicely with the
`Needs*` variants.
`TokenCursor` currently does doc comment desugaring on the fly, if the
`desugar_doc_comment` field is set. This requires also modifying the
token stream on the fly with `replace_prev_and_rewind`.
This commit moves the doc comment desugaring out of `TokenCursor`, by
introducing a new `TokenStream::desugar_doc_comment` method. This
separation of desugaring and iterating makes the code nicer.
Don't install default projection bound for return-position `impl Trait` in trait methods with no body
This ensures that we never try to project to an opaque type in a trait method that has no body to infer its hidden type, which means we never later call `type_of` on that opaque. This is because opaque types try to reveal their hidden type when proving auto traits.
I thought about this a lot, and I think this is a fix that's less likely to introduce other strange downstream ICEs than #113461.
Fixes #113434
r? `@spastorino`
Fix invalid suggestion for mismatched types in closure arguments
This PR fixes the invalid suggestion for mismatched types in closure arguments.
The invalid suggestion came from a wrongly created span in the parser for closure arguments that don't have a type specified. Specifically, the span in this case was the last token span, but in the case of tuples, the span represented the last parenthesis instead of the whole tuple, which is fixed by taking the more accurate span of the pattern.
There is one unfortunate downside to this fix: it makes the diagnostic for mismatched types in closure args without an explicit type even worse. This happens because there is no correct span for the implied inferred type. I tried fixing this too, but it's a rabbit hole.
Fixes https://github.com/rust-lang/rust/issues/114180
The invalid suggestion came from a wrongly created span in `rustc_parse`
for closure arguments that didn't have a type specified. Specifically,
the span in this case was the last token span, but in the case of
tuples, the span represented the last parenthesis instead of the whole
tuple, which is fixed by taking the more accurate span of the pattern.
[rustc][data_structures] Simplify binary_search_slice.
Instead of using `binary_search_by_key`, it's possible to use `partition_point` to find the lower bound. This avoids the need to locate the leftmost matching entry separately.
It's also possible to use `partition_point` to find the upper bound, so I plan to send a separate PR for your consideration.
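A minimal sketch of the lower-bound idea, with illustrative data rather than the `binary_search_slice` signature itself:
```rust
// `partition_point` returns the first index at which the predicate becomes
// false, i.e. the lower bound, so there is no need to walk left afterwards
// to find the leftmost matching entry.
fn lower_bound(sorted: &[(u32, &str)], key: u32) -> usize {
    sorted.partition_point(|&(k, _)| k < key)
}

fn main() {
    let data = [(1, "a"), (3, "b"), (3, "c"), (5, "d")];
    assert_eq!(lower_bound(&data, 3), 1); // first entry with key == 3
    assert_eq!(lower_bound(&data, 4), 3); // insertion point when absent
}
```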
Now that we use opaque pointers, ADTs can no longer be recursive, so we
do not need to name them. Previously, this would be necessary if you had
a struct like
```rs
struct Foo(Box<Foo>, u64, u64);
```
which would be represented with something like
```ll
%Foo = type { %Foo*, i64, i64 }
```
which is now just
```ll
{ ptr, i64, i64 }
```
Gracefully handle ternary operator
Fixes #112578
~~May not be the best way to do this as it doesn't check for a single `:`, so it could perhaps appear even when the actual issue is just a missing semicolon. May not be the biggest deal, though?~~
Nevermind, got it working properly now ^^
Update the minimum external LLVM to 15
With this change, we'll have stable support for LLVM 15 through 17 (pending release).
For reference, the previous increase to LLVM 14 was #107573.
Refactor + improve diagnostics for `&mut T`/`T` mismatch inside Option/Result
Follow up to #114052. This also makes the diagnostics structured + translatable.
r? `@WaffleLapkin`
Rename and allow `cast_ref_to_mut` lint
This PR is a small subset of https://github.com/rust-lang/rust/pull/112431, namely the renaming of the lint (`cast_ref_to_mut` -> `invalid_reference_casting`).
BUT it also temporarily changes the default level of the lint from deny-by-default to allow-by-default until https://github.com/rust-lang/rust/pull/112431 is merged.
r? `@Nilstrieb`
fix(resolve): update the ambiguity glob binding as warning recursively
Fixes #47525, Fixes #56593, but `issue-56593-2.rs` is not fixed to ensure backward compatibility.
Fixes #98467, Fixes #105235, Fixes #112713
This PR adds a field called `warn_ambiguous` to `NameBinding`, which exists only for backward-compatibility reasons and is used for the lint.
More details: https://github.com/rust-lang/rust/pull/112743
r? `@petrochenkov`
Rollup of 7 pull requests
Successful merges:
- #113773 (Don't attempt to compute layout of type referencing error)
- #114107 (Prevent people from assigning me as a PR reviewer)
- #114124 (tests/ui/proc-macro/*: Migrate FIXMEs to check-pass)
- #114171 (Fix switch-stdout test for none unix/windows platforms)
- #114172 (Fix issue_15149 test for the SGX target)
- #114173 (btree/map.rs: remove "Basic usage" text)
- #114174 (doc: replace wrong punctuation mark)
r? `@ghost`
`@rustbot` modify labels: rollup
Rollup of 7 pull requests
Successful merges:
- #114099 (privacy: no nominal visibility for assoc fns )
- #114128 (When flushing delayed span bugs, write to the ICE dump file even if it doesn't exist)
- #114138 (Adjust spans correctly for fn -> method suggestion)
- #114146 (Skip reporting item name when checking RPITIT GAT's associated type bounds hold)
- #114147 (Insert RPITITs that were shadowed by missing ADTs that resolve to [type error])
- #114155 (Replace a lazy `RefCell<Option<T>>` with `OnceCell<T>`)
- #114164 (Add regression test for `--cap-lints allow` and trait bounds warning)
r? `@ghost`
`@rustbot` modify labels: rollup
Replace a lazy `RefCell<Option<T>>` with `OnceCell<T>`
This code was using `RefCell<Option<T>>` to manually implement lazy initialization. Now that we have `OnceCell` in the standard library, we can just use that instead.
In particular, this avoids a confusing doubly-nested option, because the value being lazily computed is itself an `Option<Symbol>`.
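A minimal sketch of the pattern, with illustrative names (the real field caches an `Option<Symbol>`):
```rust
use std::cell::OnceCell;

struct Resolver {
    // Previously a RefCell<Option<Option<..>>>-style doubly-nested option.
    crate_name: OnceCell<Option<String>>,
}

impl Resolver {
    fn crate_name(&self) -> &Option<String> {
        // Computed at most once; later calls reuse the cached value with no
        // manual "is it already filled?" bookkeeping.
        self.crate_name.get_or_init(|| lookup_crate_name())
    }
}

fn lookup_crate_name() -> Option<String> {
    Some("example".to_string())
}

fn main() {
    let r = Resolver { crate_name: OnceCell::new() };
    assert_eq!(r.crate_name().as_deref(), Some("example"));
}
```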
Skip reporting item name when checking RPITIT GAT's associated type bounds hold
Doesn't really make sense to label an item that has a name that users can't really mention. Fixes #114145. Also fixes #113794.
r? `@spastorino`
privacy: no nominal visibility for assoc fns
Fixes #113860.
When `staged_api` is enabled, effective visibilities are computed earlier and this can trigger an ICE in some cases.
In particular, if an impl of a trait method has a visibility then an error will be reported for that, but when privacy invariants are being checked, the effective visibility will still be greater than the nominal visibility and that will trigger a `span_bug!`.
However, this invariant - that effective visibilities are limited to nominal visibility - doesn't make sense for associated functions.
Diagnostic namespace
This PR implements the basic infrastructure for accepting the `#[diagnostic]` attribute tool namespace as specified in https://github.com/rust-lang/rfcs/pull/3368. Note: this RFC is not merged yet, but it seems like it will be accepted soon. I'm opening this PR early to get feedback on the actual implementation as soon as possible. This hopefully enables getting at least the diagnostic namespace to stable Rust "soon", so that crates do not need to bump their MSRV if we stabilize actual attributes in this namespace.
This PR only adds the infrastructure to accept attributes from this namespace; it does not add any specific attribute. Therefore the compiler will emit a lint warning for any attribute in this namespace that's actually used. The namespace is added behind a feature flag, so for now it is only available on a nightly compiler.
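A hedged sketch of the accepted-but-warned behaviour (the attribute below is made up, since this PR adds no concrete attributes):
```rust
// Assumes a nightly compiler with the gate added by this PR.
#![feature(diagnostic_namespace)]

// Not a real attribute: any unknown attribute in the `diagnostic` namespace
// is accepted syntactically and only triggers a lint warning.
#[diagnostic::made_up_attribute]
struct Example;

fn main() {}
```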
cc `@estebank` as they've supported me in planning, specifying and implementing this feature.
When `staged_api` is enabled, effective visibilities are computed earlier
and this can trigger an ICE in some cases.
In particular, if an impl of a trait method has a visibility then an error
will be reported for that, but when privacy invariants are being checked,
the effective visibility will still be greater than the nominal visibility
and that will trigger a `span_bug!`.
However, this invariant - that effective visibilities are limited to
nominal visibility - doesn't make sense for associated functions.
Signed-off-by: David Wood <david@davidtw.co>
Optimize `TokenKind::clone`.
`TokenKind` would impl `Copy` if it weren't for
`TokenKind::Interpolated`. This commit makes `clone` reflect that.
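As an illustrative sketch of the idea (a toy enum, not the real `TokenKind`): only the variant holding a refcounted payload needs a non-trivial clone.
```rust
use std::rc::Rc;

enum Kind {
    Ident(u32),
    Literal(char),
    Interpolated(Rc<String>), // the one variant that prevents `Copy`
}

impl Clone for Kind {
    fn clone(&self) -> Self {
        match self {
            // These arms are effectively memcpys.
            Kind::Ident(n) => Kind::Ident(*n),
            Kind::Literal(c) => Kind::Literal(*c),
            // Only this arm does real work (a refcount bump).
            Kind::Interpolated(nt) => Kind::Interpolated(Rc::clone(nt)),
        }
    }
}

fn main() {
    let toks = [
        Kind::Ident(1),
        Kind::Literal('x'),
        Kind::Interpolated(Rc::new("expr".to_string())),
    ];
    let _copies: Vec<Kind> = toks.iter().cloned().collect();
}
```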
r? `@ghost`
Less `TokenTree` cloning
`TokenTreeCursor` has this comment on it:
```
// FIXME: Many uses of this can be replaced with by-reference iterator to avoid clones.
```
This PR completes that FIXME. It doesn't have much perf effect, but at least we now know that.
r? `@petrochenkov`
Turns out opaque types can have hidden types registered during mir validation
See the newly added test's documentation for an explanation.
fixes #114121
Restore region uniquification in the new solver 🎉
All of the bugs that were "due" to uniquification have been settled via other means (e.g. bidirectional alias-relate, param-env incompleteness, etc).
Firstly, revert the functional changes in #110180. 😸
Secondly, we need to ignore regions when considering if a goal has changed (the "has_changed" boolean returned from `evaluate_goal`) -- otherwise, because we're doing region uniquification, we may perpetually consider a goal to be changed. See the UI test I committed for an explanation.
Implement diagnostic translation for rustc-errors
This is my first PR to rustc yeah~
I'm going to implement diagnostic translation in the rustc-errors crate.
This PR is a WIP; the reason for opening it as a draft is that I want to show my code early to prevent issues caused by misunderstanding, and I also have a few questions.
Some error messages are processed by the `pluralize!` macro, which determines whether to use the plural form of a word. For now, I make two kinds of keys and combine them with an enum, but I'm not sure this is the best way to do it.
Is there any preferred method to do this? => This was resolved in conversation on the PR.
I'll keep force-pushing until my first implementation looks good to me.
Don't treat negative trait predicates as always knowable
We don't need this. It was added in #90104 but I don't really know why. It's not sound afaict -- negative trait predicates need the same coherence-ambiguity/orphan check rules as positive ones.
r? `@lcnr`
cc `@spastorino`, do you remember why?