Note: there was an existing code path involving `Interpolated` in
`MetaItem::from_tokens` that was dead. This commit transfers that to the
new form, but puts an `unreachable!` call inside it.
The one notable test change is `tests/ui/macros/trace_faulty_macros.rs`.
This commit removes the complicated `Interpolated` handling in
`expected_expression_found` that results in a longer error message. But
I think the new, shorter message is actually an improvement.
The original complaint was in #71039, when the error message started
with "error: expected expression, found `1 + 1`". That was confusing
because `1 + 1` is an expression. Other than that, the reporter said
"the whole error message is not too bad if you ignore the first line".
Subsequently, extra complexity and wording was added to the error
message. But I don't think the extra wording actually helps all that
much. In particular, it still says of the `1+1` that "this is expected
to be expression". This repeats the problem from the original complaint!
This commit removes the extra complexity, reverting to a simpler error
message. This is primarily because the traversal is a pain without
`Interpolated` tokens. Nonetheless, I think the error message is
*improved*. It now starts with "expected expression, found `pat`
metavariable", which is much clearer and the real problem. It also
doesn't say anything specific about `1+1`, which is good, because the
`1+1` isn't really relevant to the error -- it's the `$e:pat` that's
important.
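For reference, a reduced sketch of the kind of macro involved (roughly what the test exercises; intentionally erroneous): `1 + 1` is captured as an `expr`, re-matched as a `pat`, and then substituted where an expression is expected, which now produces the shorter "expected expression, found `pat` metavariable" message.

```rust
macro_rules! test {
    (let $p:pat = $e:expr) => { test!(($p, $e)) };
    (($p:pat, $e:pat)) => { let $p = $e; };
}

fn main() {
    // Errors: the `$e:pat` metavariable ends up in expression position.
    test!(let x = 1 + 1);
}
```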
Use `edition = "2024"` in the compiler (redux)
Most of this is binding mode changes, which I fixed by running `x.py fix`.
It also adds some miscellaneous `unsafe` blocks for newly unsafe standard library functions (the setenv ones), adds a missing `unsafe extern` block in some Enzyme codegen code, and fixes some precise-capturing lifetime issues (but only where they led to errors).
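For example (illustrative, not a specific hunk from this PR), `std::env::set_var` is unsafe to call in edition 2024, so call sites now need an `unsafe` block:

```rust
fn enable_backtraces() {
    // `set_var` is unsafe in Rust 2024 because mutating the environment
    // is not thread-safe.
    unsafe {
        std::env::set_var("RUST_BACKTRACE", "1");
    }
}
```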
cc ``@ehuss`` ``@traviscross``
Notes about tests:
- tests/ui/parser/macro/trait-object-macro-matcher.rs: the syntax error
is duplicated, because it now occurs when parsing the decl macro
input, and also when parsing the expanded decl macro. But this won't
show up for normal users due to error de-duplication.
- tests/ui/associated-consts/issue-93835.rs: similar, plus there are
some additional errors about this very broken code.
- The changes to metavariable descriptions in #132629 are now visible in
error messages for several tests.
Introduce CoercePointeeWellformed for coherence checks at typeck stage
Fixes #135206
This is the first PR to introduce the "wellformedness" check for `derive(CoercePointee)`.
This patch introduces a new error code to cover all the prerequisites of said macro. The checks enforced by this patch are whether the data is indeed a `struct` and whether the layout is set to `repr(transparent)`.
A follow-up series of patches will arrive later to address the following concerns.
1. #135217, so that we only admit a single coercion on one type parameter, and leave the rest for future consideration in tandem with the development of other coercion rules.
1. Enforcement of data field requirements.
**An open question** is whether there is a good scheme to encode `#[pointee]` as well, so that we could also check whether the `#[pointee]` type parameter is indeed `?Sized`.
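For context, a sketch (illustrative names, nightly-only) of the shape the derive expects; the new error code fires when the `struct` or `repr(transparent)` prerequisite is violated:

```rust
#![feature(derive_coerce_pointee)]
#![allow(dead_code)]

use std::marker::CoercePointee;

// Must be a struct and must be `repr(transparent)`, which is what this
// patch checks; `#[pointee]` marks the `?Sized` type parameter.
#[derive(CoercePointee)]
#[repr(transparent)]
struct MyBox<#[pointee] T: ?Sized> {
    ptr: *const T,
}
```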
``@rustbot`` label F-derive_coerce_pointee
Properly record metavar spans for expansions other than TT
This properly records metavar spans for nonterminals other than tokentree. This means that operations like `span.to(other_span)` now work correctly for macros. As you can see, other diagnostics involving metavars have improved as a result.
Fixes #132908
Alternative to #133270
cc `@ehuss`
cc `@petrochenkov`
Add a list of symbols for stable standard library crates
There are a few locations where the crate name is checked against an enumerated list of `std`, `core`, `alloc`, and `proc_macro`, or some subset thereof. In most cases when we are looking for any "standard library" crate, all four crates should be treated the same. Change this so the crates are listed in one place, and that list is used wherever a list of `std` crates is needed.
`test` could be considered relevant in some of these cases, but generally treating it separate from the others seems preferable while it is unstable.
There are also a few places that Clippy will be able to use this.
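A hypothetical before/after shape of such a call site (the real item lives in `rustc_span` and uses `Symbol`s; plain strings here keep the sketch standalone, and the names are illustrative):

```rust
// Before: each call site enumerated the crates itself, e.g.
//     matches!(name, "std" | "core" | "alloc" | "proc_macro")
//
// After: one shared list, reused wherever a "standard library crate" check is needed.
pub const STDLIB_STABLE_CRATES: &[&str] = &["std", "core", "alloc", "proc_macro"];

pub fn is_stdlib_crate(name: &str) -> bool {
    STDLIB_STABLE_CRATES.contains(&name)
}

fn main() {
    assert!(is_stdlib_crate("core"));
    assert!(!is_stdlib_crate("test")); // `test` stays separate while unstable
}
```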
[macro_metavar_expr_concat] Fix #128346, fix #131393
The syntax is invalid in both issues so I guess that theoretically the compiler should have aborted early.
This PR tries to fix a local problem but let me know if there are better options.
cc `@petrochenkov` if you are interested
The parser pushes a `TokenType` to `Parser::expected_token_types` on
every call to the various `check`/`eat` methods, and clears it on every
call to `bump`. Some of those `TokenType` values are full tokens that
require cloning and dropping. This is a *lot* of work for something
that is only used in error messages and it accounts for a significant
fraction of parsing execution time.
This commit overhauls `TokenType` so that `Parser::expected_token_types`
can be implemented as a bitset. This requires changing `TokenType` to a
C-style parameterless enum, and adding `TokenTypeSet` which uses a
`u128` for the bits. (The new `TokenType` has 105 variants.)
The new types `ExpTokenPair` and `ExpKeywordPair` are now arguments to
the `check`/`eat` methods. This is for maximum speed. The elements in
the pairs are always statically known; e.g. a
`token::BinOp(token::Star)` is always paired with a `TokenType::Star`.
So we now compute `TokenType`s in advance and pass them in to
`check`/`eat` rather than the current approach of constructing them on
insertion into `expected_token_types`.
Values of these pair types can be produced by the new `exp!` macro,
which is used at every `check`/`eat` call site. The macro is for
convenience, allowing any pair to be generated from a single identifier.
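A self-contained sketch of the two pieces together (stand-in types and names, not rustc's exact definitions): a C-style `TokenType` indexes into a `u128` bitset, and an `exp!`-style macro builds the statically known pair that `check`/`eat` take.

```rust
// Parameterless, C-style enum: each variant is just a bit index.
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum TokenType {
    Eq,
    Star,
    Comma,
    // ... the real enum has ~105 variants, so a `u128` is wide enough.
}

// The set of token types expected so far; cleared on every `bump`.
#[derive(Default)]
struct TokenTypeSet(u128);

impl TokenTypeSet {
    fn insert(&mut self, tt: TokenType) {
        self.0 |= 1u128 << (tt as u32);
    }
}

// Stand-in for the real token kind.
#[allow(dead_code)]
#[derive(PartialEq)]
enum TokenKind {
    Eq,
    Star,
    Comma,
}

// The statically known pair that `check`/`eat` take.
struct ExpTokenPair {
    tok: TokenKind,
    token_type: TokenType,
}

// In the spirit of `exp!`: a single identifier yields the whole pair.
macro_rules! exp {
    ($name:ident) => {
        ExpTokenPair { tok: TokenKind::$name, token_type: TokenType::$name }
    };
}

struct Parser {
    token: TokenKind,
    expected_token_types: TokenTypeSet,
}

impl Parser {
    fn check(&mut self, exp: ExpTokenPair) -> bool {
        // Recording the expectation is a cheap bitset insert; no token is
        // constructed, cloned, or dropped here.
        self.expected_token_types.insert(exp.token_type);
        self.token == exp.tok
    }
}

fn main() {
    let mut parser = Parser {
        token: TokenKind::Star,
        expected_token_types: TokenTypeSet::default(),
    };
    assert!(parser.check(exp!(Star)));
}
```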
The ident/keyword filtering in `expected_one_of_not_found` is no longer
necessary. It was there to account for some sloppiness in
`TokenKind`/`TokenType` comparisons.
The existing `TokenType` is moved to a new file `token_type.rs`, and all
its new infrastructure is added to that file. There is more boilerplate
code than I would like, but I can't see how to make it shorter.
`rustc_span::symbol` defines some things that are re-exported from
`rustc_span`, such as `Symbol` and `sym`. But it doesn't re-export some
closely related things such as `Ident` and `kw`. So you can do `use
rustc_span::{Symbol, sym}` but you have to do `use
rustc_span::symbol::{Ident, kw}`, which is inconsistent for no good
reason.
This commit re-exports `Ident`, `kw`, and `MacroRulesNormalizedIdent`,
and changes many `rustc_span::symbol::` qualifiers in `compiler/` to
`rustc_span::`. This is a 200+ net line of code reduction, mostly
because many files with two `use rustc_span` items can be reduced to
one.
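Concretely, imports like the following now work from the crate root (previously the second group required the `symbol::` path):

```rust
// Before this change:
//     use rustc_span::{Symbol, sym};
//     use rustc_span::symbol::{Ident, kw};
//
// After: everything comes from the crate root.
use rustc_span::{Ident, Symbol, kw, sym};
```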
Currently there are two ways to peek at a `TokenStreamIter`.
- Wrap it in a `Peekable` and use its `peek` method.
- Use `TokenStreamIter`'s inherent `peek` method.
Some code uses one, some uses the other. This commit converts all uses to the inherent method. This eliminates the mixing of `TokenStreamIter` and `Peekable<TokenStreamIter>`, and some use of `impl Iterator` and `dyn Iterator`.
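A minimal sketch (a generic stand-in, not the real `TokenStreamIter`) of an iterator with an inherent `peek`, which is what removes the need for a `Peekable` wrapper:

```rust
struct SliceIter<'a, T> {
    items: &'a [T],
    index: usize,
}

impl<'a, T> SliceIter<'a, T> {
    // Look at the next item without advancing; no `Peekable` wrapper needed.
    fn peek(&self) -> Option<&'a T> {
        self.items.get(self.index)
    }
}

impl<'a, T> Iterator for SliceIter<'a, T> {
    type Item = &'a T;

    fn next(&mut self) -> Option<&'a T> {
        let item = self.items.get(self.index);
        self.index += 1;
        item
    }
}

fn main() {
    let data = [1, 2, 3];
    let mut iter = SliceIter { items: &data, index: 0 };
    assert_eq!(iter.peek(), Some(&1)); // look ahead without consuming
    assert_eq!(iter.next(), Some(&1));
    assert_eq!(iter.peek(), Some(&2));
}
```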
The rename is because `TokenStreamIter` is a much better name for a `TokenStream` iterator. Also rename the `TokenStream::trees` method to `TokenStream::iter`, along with some local variables.
When we expand a `mod foo;` and parse `foo.rs`, we now track whether that file had an unrecovered parse error that reached the end of the file. If so, we keep that information around. When resolving a path like `foo::bar`, we do not emit any errors for "`bar` not found in `foo`", as we know that the parse error might have caused `bar` to not be parsed and accounted for.
When this happens in an existing project, every path referencing `foo` would produce an irrelevant compile error. Instead, we now skip emitting any of those until `foo.rs` is fixed. Tellingly enough, we didn't have any test for errors caused by `mod` expansion.
Fix #97734.
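Schematically (hypothetical files), the situation now handled looks like this:

```rust
// foo.rs -- contains an unrecovered parse error that reaches end of file, e.g.:
//
//     fn bar() {
//         let x = 1;
//     // missing closing brace: the parser hits EOF without recovering
//
// main.rs:
mod foo;

fn main() {
    // Previously: an additional "cannot find function `bar` in module `foo`"
    // error here. Now: suppressed until foo.rs parses successfully again.
    foo::bar();
}
```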
Initial implementation of `#![feature(default_field_values)]`, proposed in https://github.com/rust-lang/rfcs/pull/3681.
Support default fields in enum struct variant
Allow default values in an enum struct variant definition:
```rust
pub enum Bar {
    Foo {
        bar: S = S,
        baz: i32 = 42 + 3,
    }
}
```
Allow using `..` without a base on an enum struct variant
```rust
Bar::Foo { .. }
```
`#[derive(Default)]` doesn't account for these, as it still gates `#[default]` to being allowed only on unit variants.
Support `#[derive(Default)]` on enum struct variants with all defaulted fields
```rust
pub enum Bar {
    #[default]
    Foo {
        bar: S = S,
        baz: i32 = 42 + 3,
    }
}
```
Check for missing fields in typeck instead of mir_build.
Expand test with `const` param case (needs `generic_const_exprs` enabled).
Properly instantiate MIR const
The following works:
```rust
struct S<A> {
    a: Vec<A> = Vec::new(),
}
S::<i32> { .. }
```
Add lint for default fields that will always fail const-eval
We *allow* this to happen for API writers who might want to rely on users getting a compile error when using the default field, different from the error they would get when the field has no default. We could change this to *always* error instead of being a lint, if we wanted.
This will *not* catch errors for partially evaluated consts, like when the
expression relies on a const parameter.
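For instance (a hypothetical, nightly-only example), a default like this can never pass const-eval, so the issue is reported at the definition rather than deferred to each `Foo { .. }` use site:

```rust
#![feature(default_field_values)]

pub struct Foo {
    // Evaluating this default always fails const-eval; an API author might
    // do this deliberately to force users to set the field explicitly.
    pub x: i32 = panic!("`x` must be set explicitly"),
}
```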
Suggestions when encountering `Foo { .. }` without `#![feature(default_field_values)]`:
- Suggest adding a base expression if there are missing fields.
- Suggest enabling the feature if all the missing fields have optional values.
- Suggest removing `..` if there are no missing fields.
Current places where `Interpolated` is used are going to change to
instead use invisible delimiters. This prepares for that.
- It adds invisible delimiter cases, equivalent to the existing `Interpolated` cases, to the `can_begin_*`/`may_be_*` methods and to `failed_to_match_macro`.
- It adds panics/asserts in some places where invisible delimiters
should never occur.
- In `Parser::parse_struct_fields` it excludes an ident + invisible
delimiter from special consideration in an error message, because
that's quite different to an ident + paren/brace/bracket.
Unify FnKind between AST visitors and make WalkItemKind more straightforward
Unifying `FnKind` requires a bunch of changes to the `WalkItemKind::walk` signature, so I'll change them all in one go
related to #128974
r? `@petrochenkov`