Trim extra whitespace when suggesting the removal of invalid qualifiers while
parsing a function pointer type.
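As a rough, hypothetical illustration (not a test from the PR), this is the kind of code that hits the suggestion; the qualifier on the function pointer type is invalid, and the removal suggestion no longer leaves a stray space behind:
```rust
// Hypothetical example: `const` is not a valid qualifier for a function
// pointer type, so the parser suggests removing it. The suggestion span now
// also covers the whitespace following `const`.
type F = const fn(i32) -> i32;
```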
Fixes: #133083
Signed-off-by: Tyrone Wu <wudevelops@gmail.com>
Reword resolve errors caused by likely missing crate in dep tree
Reword label and add `help`:
```
error[E0432]: unresolved import `some_novel_crate`
--> f704.rs:1:5
|
1 | use some_novel_crate::Type;
| ^^^^^^^^^^^^^^^^ use of unresolved module or unlinked crate `some_novel_crate`
|
= help: if you wanted to use a crate named `some_novel_crate`, use `cargo add some_novel_crate` to add it to your `Cargo.toml`
```
Fix #133137.
```
error[E0432]: unresolved import `some_novel_crate`
--> file.rs:1:5
|
1 | use some_novel_crate::Type;
| ^^^^^^^^^^^^^^^^ use of unresolved module or unlinked crate `some_novel_crate`
```
On resolve errors where there might be a missing crate, mention `cargo add foo`:
```
error[E0433]: failed to resolve: use of unresolved module or unlinked crate `nope`
--> $DIR/conflicting-impl-with-err.rs:4:11
|
LL | impl From<nope::Thing> for Error {
| ^^^^ use of unresolved module or unlinked crate `nope`
|
= help: if you wanted to use a crate named `nope`, use `cargo add nope` to add it to your `Cargo.toml`
```
Point at invalid utf-8 span on user's source code
```
error: couldn't read `$DIR/not-utf8-bin-file.rs`: stream did not contain valid UTF-8
--> $DIR/not-utf8-2.rs:6:5
|
LL | include!("not-utf8-bin-file.rs");
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: byte `193` is not valid utf-8
--> $DIR/not-utf8-bin-file.rs:2:14
|
LL | let _ = "�|�␂!5�cc␕␂��";
| ^
= note: this error originates in the macro `include` (in Nightly builds, run with -Z macro-backtrace for more info)
```
When we attempt to load a Rust source code file, if there is an OS file failure we try reading the file as bytes. If that succeeds, we try to turn it into UTF-8. If *that* fails, we provide additional context about *where* the file has the first invalid UTF-8 character.
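A minimal sketch of the underlying check, using only `std` (this is illustrative, not the compiler's actual implementation):
```rust
// Locate the first invalid UTF-8 byte in raw file contents.
// `Utf8Error::valid_up_to()` returns the length of the valid prefix, i.e. the
// offset of the first offending byte, which is what the new note points at.
fn first_invalid_utf8_byte(bytes: &[u8]) -> Option<(usize, u8)> {
    match std::str::from_utf8(bytes) {
        Ok(_) => None,
        Err(err) => {
            let pos = err.valid_up_to();
            Some((pos, bytes[pos]))
        }
    }
}
```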
Fix #76869.
Properly record metavar spans for expansions other than TT
This properly records metavar spans for nonterminals other than token trees. This means that operations like `span.to(other_span)` work correctly for macros. As you can see, other diagnostics involving metavars have improved as a result.
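As a hypothetical illustration (not one of the affected test cases), here is the shape of code where this matters; the expression bound to `$e` keeps a proper span, so anything the compiler derives by joining it with neighbouring spans (internally via `span.to(other_span)`) covers the full `1 + 2 * 3` range:
```rust
// Hypothetical example: an expression passed through an `expr` metavariable.
// Diagnostics that point at `$e`, or at `$e` joined with surrounding tokens,
// now highlight the whole expression as written at the call site.
macro_rules! trace_expr {
    ($e:expr) => {
        println!("{} = {:?}", stringify!($e), $e)
    };
}

fn main() {
    trace_expr!(1 + 2 * 3);
}
```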
Fixes #132908
Alternative to #133270
cc `@ehuss`
cc `@petrochenkov`
rustc_intrinsic: support functions without body
We synthesize a HIR body `loop {}` for such bodyless intrinsics.
Most of the diff is due to turning `ItemKind::Fn` into a brace (named-field) enum variant, because it carries a `bool`-typed field now. This is to remember whether the function has a body. MIR building panics to avoid ever translating the fake `loop {}` body, and the intrinsic logic uses the lack of a body to implicitly mark that intrinsic as must-be-overridden.
I first tried actually having no body rather than generating the fake body, but there's a *lot* of code that assumes that all function items have HIR and MIR, so this didn't work very well. Then I noticed that even `rustc_intrinsic_must_be_overridden` intrinsics have MIR generated (they are filled with an `Unreachable` terminator) so I guess I am not the first to discover this. ;)
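A minimal sketch of what such a declaration looks like (the intrinsic name and signature are made up; the attribute is the existing `#[rustc_intrinsic]`):
```rust
// Hypothetical bodyless intrinsic declaration inside the standard library:
// no body is written, rustc synthesizes a fake `loop {}` HIR body, and the
// missing body implicitly marks the intrinsic as must-be-overridden.
#[rustc_intrinsic]
pub unsafe fn my_intrinsic(x: u32) -> u32;
```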
r? `@oli-obk`
On parse errors where an ident is found where one wasn't expected, see if the next elements might have been meant as a method call or field access.
```
error: expected one of `.`, `;`, `?`, `else`, or an operator, found `map`
--> $DIR/missing-dot-on-statement-expression.rs:7:29
|
LL | let _ = [1, 2, 3].iter()map(|x| x);
| ^^^ expected one of `.`, `;`, `?`, `else`, or an operator
|
help: you might have meant to write a method call
|
LL | let _ = [1, 2, 3].iter().map(|x| x);
| +
```
Rollup of 6 pull requests
Successful merges:
- #132150 (Fix powerpc64 big-endian FreeBSD ABI)
- #133942 (Clarify how to use `black_box()`)
- #134081 (Try to evaluate constants in legacy mangling)
- #134192 (Remove `Lexer`'s dependency on `Parser`.)
- #134208 (coverage: Tidy up creation of covmap and covfun records)
- #134211 (On Neutrino QNX, reduce the need to set archiver via environment variables)
r? `@ghost`
`@rustbot` modify labels: rollup
Remove `Lexer`'s dependency on `Parser`.
Lexing precedes parsing, as you'd expect: `Lexer` creates a `TokenStream` and `Parser` then parses that `TokenStream`.
But, in a horrendous violation of layering abstractions and common sense, `Lexer` depends on `Parser`! The `Lexer::unclosed_delim_err` method does some error recovery that relies on creating a `Parser` to do some post-processing of the `TokenStream` that the `Lexer` just created.
This commit just removes `unclosed_delim_err`. This change removes `Lexer`'s dependency on `Parser`, and also means that `lex_token_tree`'s return value can have a more typical form.
The cost is slightly worse error messages in two obscure cases, as shown in these tests:
- tests/ui/parser/brace-in-let-chain.rs: there is slightly less explanation in this case involving an extra `{`.
- tests/ui/parser/diff-markers/unclosed-delims{,-in-macro}.rs: the diff marker detection is no longer supported (because that detection is implemented in the parser).
In my opinion this cost is outweighed by the magnitude of the code cleanup.
r? `@chenyukang`
Tweak multispan rendering to reduce output length
Consider comments and bare delimiters the same as an "empty line" for purposes of hiding rendered code output of long multispans. This results in more aggressive shortening of rendered output without losing too much context, especially in `*.stderr` tests that have "hidden" comments. We now perform that check not only on the first 4 lines of the multispan, but also on the second-to-last line.
Keep track of parse errors in `mod`s and don't emit resolve errors for paths involving them
When we expand a `mod foo;` and parse `foo.rs`, we now track whether that file had an unrecovered parse error that reached the end of the file. If so, we keep that information around in the HIR and mark its `DefId` in the `Resolver`. When resolving a path like `foo::bar`, we do not emit any errors for "`bar` not found in `foo`", as we know that the parse error might have caused `bar` to not be parsed and accounted for.
When this happens in an existing project, every path referencing `foo` would produce an irrelevant compile error. Instead, we now skip emitting anything until `foo.rs` is fixed. Tellingly enough, we didn't have any test for errors caused by expansion of `mod`s with parse errors.
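A hedged two-file sketch of the user-visible effect (file and item names are made up):
```rust
// main.rs
mod foo;        // foo.rs contains an unrecovered parse error reaching EOF
use foo::bar;   // previously also flagged as "`bar` not found in `foo`";
                // now only the parse error in foo.rs is reported

fn main() {
    bar();      // likewise no longer reported as unresolved
}
```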
Fix https://github.com/rust-lang/rust/issues/97734.
Advent of `tests/ui` (misc cleanups and improvements) [1/N]
Part of #133895.
Misc improvements to some ui tests immediately under `tests/ui/`.
Best reviewed commit-by-commit.
Thanks `@clubby789` for PR title suggestion 😸.
r? compiler
Initial implementation of `#[feature(default_field_values)]`, proposed in https://github.com/rust-lang/rfcs/pull/3681.
Support default fields in enum struct variant
Allow default values in an enum struct variant definition:
```rust
pub enum Bar {
Foo {
bar: S = S,
baz: i32 = 42 + 3,
}
}
```
Allow using `..` without a base on an enum struct variant
```rust
Bar::Foo { .. }
```
`#[derive(Default)]` doesn't account for these, as it still restricts `#[default]` to unit variants only.
Support `#[derive(Default)]` on enum struct variants with all defaulted fields
```rust
pub enum Bar {
#[default]
Foo {
bar: S = S,
baz: i32 = 42 + 3,
}
}
```
Check for missing fields in typeck instead of mir_build.
Expand test with `const` param case (needs `generic_const_exprs` enabled).
Properly instantiate MIR const
The following works:
```rust
struct S<A> {
a: Vec<A> = Vec::new(),
}
S::<i32> { .. }
```
Add lint for default fields that will always fail const-eval
We *allow* this to happen for API writers who might want to rely on users
getting a compile error when using the default field, which is different from
the error they would get when the field has no default. We could change this
to *always* error instead of being a lint, if we wanted.
This will *not* catch errors for partially evaluated consts, like when the
expression relies on a const parameter.
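A hedged illustration of the kind of default the lint is aimed at (hypothetical struct; the overflow is just one way to make const-eval always fail):
```rust
#![feature(default_field_values)]

pub struct Config {
    // This default can never be const-evaluated successfully (u8 overflow),
    // so it is linted rather than rejected outright; a hard error only
    // surfaces for users who actually rely on the default.
    pub retries: u8 = 255 + 1,
}
```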
Suggestions when encountering `Foo { .. }` without `#[feature(default_field_values)]` (see the sketch after this list):
- Suggest adding a base expression if there are missing fields.
- Suggest enabling the feature if all the missing fields have optional values.
- Suggest removing `..` if there are no missing fields.
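A hedged sketch of code that hits the first and third suggestions when the feature is not enabled (the second case additionally requires the missing fields to have defaults, e.g. defined in a crate that does enable the feature):
```rust
// Hypothetical example, without #[feature(default_field_values)] enabled:
struct Foo { a: i32, b: i32 }

fn build(base: Foo) {
    let _ = Foo { a: 1, .. };        // `b` is missing: suggest a base expression such as `..base`
    let _ = Foo { a: 1, b: 2, .. };  // nothing is missing: suggest removing the `..`
}
```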
Advent of `tests/ui` (misc cleanups and improvements) [2/N]
Part of #133895.
Misc improvements to some ui tests immediately under `tests/ui/`.
Best reviewed commit-by-commit. Please see individual commit messages for some further rationale and change summaries.
r? compiler
I was surprised to find that running with `-Zparse-only` only parses the
crate root file. Other files aren't parsed because that happens later
during expansion.
This commit renames the option and updates the help message to make this
clearer.
Compiletest: add proc-macro header
This adds a `proc-macro` header to simplify using proc-macros and to reduce boilerplate. This header works similarly to the `aux-build` header: you pass a path to a proc-macro that should be built.
This allows the `force-host` and `no-prefer-dynamic` headers, and the `crate_type` attribute, to be removed. Additionally, it uses `--extern` like `aux_crate` does (allowing implicit `extern crate` in 2018) and passes `--extern proc_macro` (to place it in the prelude in 2018).
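A hedged sketch of what a test using the new header might look like (the `//@` directive prefix is standard compiletest syntax; the auxiliary crate name and derive are made up):
```rust
//@ proc-macro: my_derive.rs
// The auxiliary file is built as a proc-macro and passed via `--extern
// my_derive`, so neither this test nor `my_derive.rs` needs `force-host`,
// `no-prefer-dynamic`, or a `crate_type` attribute.

use my_derive::MyTrait;

#[derive(MyTrait)]
struct S;

fn main() {}
```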
~~This also includes a secondary change which defaults the edition of proc-macros to 2024. This further reduces boilerplate (removing `extern crate proc_macro;`), and allows using modern Rust syntax. I was a little on the fence including this. I personally prefer it, but I can imagine it might be confusing to others.~~ EDIT: Removed
Some tests were changed so that, when there is a chain of dependencies A→B→C, the `@ proc-macro` header is placed in `B` instead of `A`, so that the `--extern` flag works correctly (previously it depended on `-L` to find `C`). I think this is better because it makes the dependencies more explicit. None of these tests looked like they were actually testing this behavior.
There is one test that had an unexplained output change: `tests/ui/macros/same-sequence-span.rs`. I do not know why it changed, but it didn't look like it was particularly important. Perhaps there was a normalization issue?
This is currently not compatible with the rustdoc `build-aux-docs` header. It can probably be fixed, I'm just not feeling motivated to do that right now.
### Implementation steps
- [x] Document this new behavior in rustc-dev-guide once we figure out the specifics. https://github.com/rust-lang/rustc-dev-guide/pull/2149
Trim whitespace in RemoveLet primary span
Separate the `RemoveLet` span into a primary span for `let` and a removal suggestion span for `let `, so that the primary span does not include whitespace.
Fixes: #133031
If a macro statement has been parsed after `else`, suggest a missing `if`:
```
error: expected `{`, found `falsy`
--> $DIR/else-no-if.rs:47:12
|
LL | } else falsy! {} {
| ---- ^^^^^
| |
| expected an `if` or a block after this `else`
|
help: add an `if` if this is the condition of a chained `else if` statement
|
LL | } else if falsy! {} {
| ++
```
Look at the expression that was parsed when trying to recover from a bad `if` condition, to determine what the user likely intended beyond "maybe this was meant to be an `else` body".
```
error: expected `{`, found `map`
--> $DIR/missing-dot-on-if-condition-expression-fixable.rs:4:30
|
LL | for _ in [1, 2, 3].iter()map(|x| x) {}
| ^^^ expected `{`
|
help: you might have meant to write a method call
|
LL | for _ in [1, 2, 3].iter().map(|x| x) {}
| +
```