It currently doesn't work at all. This commit changes it to a simpler
imperative style that produces a valid `tokens` vec.
(An aside: I find `Iterator::scan` to be a pretty wretched function
that produces code which is very hard to understand, which is probably
why this is just one of two uses of it in the entire compiler.)
This new special case is simpler than the old special case because it
is only used when `dist == 1`. But that's still enough to cover ~98% of
cases. This results in equivalent performance to the old special case,
and identical behaviour as the general case.
The general case at the bottom of `look_ahead` is slow, because it
clones the token cursor. Above it there is a special case for
performance that is hit most of the time and avoids the cloning.
Unfortunately, its behaviour differs from the general case in two ways.
- When within a pair of delimiters, if you look any distance past the
closing delimiter you get the closing delimiter instead of what comes
after the closing delimiter.
- It uses `tree_cursor.look_ahead(dist - 1)` which totally confuses
tokens with token trees. This means that only the first token in a
token tree will be seen. E.g. in a sequence like `{ a }` the `a` and
`}` will be skipped over. Bad!
It's likely that these differences weren't noticed before now because
the use of `look_ahead` in the parser is limited to small distances and
relatively few contexts.
Removing the special case causes slowdowns of up to 2% on a range of
benchmarks. The next commit will add a new, correct special case to
regain that lost performance.
Currently the second element is a `Vec<(FlatToken, Spacing)>`. But the
vector always has zero or one elements, and the `FlatToken` is always
`FlatToken::AttrTarget` (which contains an `AttributesData`), and the
spacing is always `Alone`. So we can simplify it to
`Option<AttributesData>`.
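For concreteness, here's a sketch of the shape of the change using
stand-in types (the real `FlatToken`, `Spacing`, and `AttributesData`
live in rustc; the `Before`/`After` aliases are just names for this
sketch):
```rust
use std::ops::Range;

// Stand-ins for the real rustc types.
struct AttributesData;
enum Spacing { Alone }
enum FlatToken {
    AttrTarget(AttributesData),
    // ... other variants elided ...
}

// Before: the second element could in principle hold many tokens...
type ReplaceRangeBefore = (Range<u32>, Vec<(FlatToken, Spacing)>);

// ...but in practice it held zero or one `FlatToken::AttrTarget` entries,
// always with `Spacing::Alone`, so the invariant can be encoded directly:
type ReplaceRangeAfter = (Range<u32>, Option<AttributesData>);
```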
An assertion in `to_attr_token_stream` can also be removed, because
`new_tokens.len()` was always 0 or 1, which means that `range.len()`
is always greater than or equal to it, because `range.is_empty()` is
always false (as per the earlier assertion).
The number of source code bytes can't exceed a `u32`'s range, so a token
position also can't. This reduces the size of `Parser` and
`LazyAttrTokenStreamImpl` by eight bytes each.
Fix duplicated attributes on nonterminal expressions
This PR fixes a long-standing bug (#86055) whereby expression attributes can be duplicated when expanded through declarative macros.
First, consider how items are parsed in declarative macros:
```
Items:
- parse_nonterminal
- parse_item(ForceCollect::Yes)
- parse_item_
- attrs = parse_outer_attributes
- parse_item_common(attrs)
- maybe_whole!
- collect_tokens_trailing_token
```
The important thing is that the parsing of outer attributes is outside token collection, so the item's tokens don't include the attributes. This is how it's supposed to be.
Now consider how expressions are parsed in declarative macros:
```
Exprs:
- parse_nonterminal
- parse_expr_force_collect
- collect_tokens_no_attrs
- collect_tokens_trailing_token
- parse_expr
- parse_expr_res(None)
- parse_expr_assoc_with
- parse_expr_prefix
- parse_or_use_outer_attributes
- parse_expr_dot_or_call
```
The important thing is that the parsing of outer attributes is inside token collection, so the expr's tokens do include the attributes, i.e. in `AttributesData::tokens`.
This PR fixes the bug by rearranging expression parsing so that outer attribute parsing happens outside of token collection. This requires a number of small refactorings because expression parsing is somewhat complicated. While doing so the PR makes the code a bit cleaner and simpler, by eliminating `parse_or_use_outer_attributes` and `Option<AttrWrapper>` arguments (in favour of the simpler `parse_outer_attributes` and `AttrWrapper` arguments), and simplifying `LhsExpr`.
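Here's a hypothetical illustration of the affected pattern (not the exact reproducer from #86055, and attributes on arbitrary expressions need the unstable `stmt_expr_attributes` feature, so this sketch only builds on nightly):
```rust
#![feature(stmt_expr_attributes)]

// An expression carrying an outer attribute is captured as an `$e:expr`
// fragment and re-emitted. Before this fix the attribute could end up
// recorded twice in the expression's collected tokens (i.e. in
// `AttributesData::tokens`).
macro_rules! pass_through {
    ($e:expr) => { $e };
}

fn main() {
    let _x = pass_through!(#[allow(unused_parens)] (1 + 1));
}
```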
r? `@petrochenkov`
The extra span is now recorded in the new `TokenKind::NtIdent` and
`TokenKind::NtLifetime`. These both consist of a single token, and so
there's no operator precedence problems with inserting them directly
into the token stream.
The other way to do this would be to wrap the ident/lifetime in invisible
delimiters, but there's a lot of code that assumes an interpolated
ident/lifetime fits in a single token, and changing all that code to work with
invisible delimiters would have been a pain. (Maybe it could be done in a
follow-up.)
This change might not seem like much of a win, but it's a first step toward the
much bigger and long-desired removal of `Nonterminal` and
`TokenKind::Interpolated`. That change is big and complex enough that it's
worth doing this piece separately. (Indeed, this commit is based on part of a
late commit in #114647, a prior attempt at that big and complex change.)
This span records the declaration of the metavariable in the LHS of the macro.
It's used in a couple of error messages. Unfortunately, it gets in the way of
the long-term goal of removing `TokenKind::Interpolated`. So this commit
removes it, which degrades a couple of (obscure) error messages but makes
things simpler and enables the next commit.
The starting point for this was identical comments on two different
fields, in `ast::VariantData::Struct` and `hir::VariantData::Struct`:
```
// FIXME: investigate making this a `Option<ErrorGuaranteed>`
recovered: bool
```
I tried that, and then found that I needed to add an `ErrorGuaranteed`
to `Recovered::Yes`. Then I ended up using `Recovered` instead of
`Option<ErrorGuaranteed>` for these two places and elsewhere, which
required moving `Recovered` from `rustc_parse` to `rustc_ast`.
This makes things more consistent, because `Recovered` is used in more
places, and there are fewer uses of `bool` and
`Option<ErrorGuaranteed>`. And safer, because it's difficult/impossible
to set `recovered` to `Recovered::Yes` without having emitted an error.
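A sketch of the resulting type as described above, with a stand-in for `ErrorGuaranteed` (the real one comes from `rustc_span`):
```rust
// Stand-in: the real `ErrorGuaranteed` is a proof token that can only be
// obtained by actually emitting an error.
struct ErrorGuaranteed;

// Carrying that proof in `Yes` is what makes it difficult/impossible to
// claim recovery happened without an error having been emitted.
enum Recovered {
    No,
    Yes(ErrorGuaranteed),
}
```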
Improve `rustc_parse::Parser`'s debuggability
The main event is the final commit where I add `Parser::debug_lookahead`. Everything else was basically cleaning up things that bugged me (debugging, as it were) until I felt comfortable enough to actually work on it.
The motivation is that it's annoying as hell to try to figure out how the debug infra works in rustc without having basic queries like `debug!(?parser);` come up "empty". However, Parser has a lot of fields that are mostly irrelevant for most debugging, like the entire ParseSess. I think `Parser::debug_lookahead` with a capped lookahead might be fine as a general-purpose Debug impl, but this adapter version was suggested to allow more choice, and admittedly, it's a refined version of what I was already handrolling just to get some insight going.
I tried debugging a parser-related issue but found it annoying to not be
able to easily peek into the Parser's token stream.
Add a convenience fn that offers an opinionated view into the parser,
but one that is useful for answering basic questions about parser state.
There are some test cases involving `parse`, `tokenstream`, and
`mut_visit` that are located in `rustc_expand`, because constructing a
`ParseSess` used to require the involvement of `rustc_expand`. However,
since #64197 merged (a long time ago), `rustc_expand` no longer needs
to be involved.
This commit moves the tests into `rustc_parse`. This is the optimal
place for the `parse` tests. It's not ideal for the `tokenstream` and
`mut_visit` tests -- they would be better in `rustc_ast` -- but they
still rely on parsing, which is not available in `rustc_ast`. But
`rustc_parse` is lower down in the crate graph and closer to `rustc_ast`
than `rustc_expand`, so it's still an improvement for them.
The exact renaming is as follows:
- rustc_expand/src/mut_visit/tests.rs -> rustc_parse/src/parser/mut_visit/tests.rs
- rustc_expand/src/tokenstream/tests.rs -> rustc_parse/src/parser/tokenstream/tests.rs
- rustc_expand/src/tests.rs + rustc_expand/src/parse/tests.rs ->
compiler/rustc_parse/src/parser/tests.rs
The latter two test files are combined because there's no need for them
to be separate, and having a `rustc_parse::parser::parse` module would
be weird. This also means some `pub(crate)`s can be removed.
Don't fatal when calling `expect_one_of` when recovering arg in `parse_seq`
In `parse_seq`, when parsing a sequence of token-separated items, if we don't see a separator, we try to parse another item eagerly in order to give a good diagnostic and recover from a missing separator:
d1a0fa5ed3/compiler/rustc_parse/src/parser/mod.rs (L900-L901)
If parsing the item itself calls `expect_one_of`, then we will fatal because of #58903:
d1a0fa5ed3/compiler/rustc_parse/src/parser/mod.rs (L513-L516)
For the `precise_capturing` feature I implemented, we do end up calling `expect_one_of`:
d1a0fa5ed3/compiler/rustc_parse/src/parser/ty.rs (L712-L714)
This leads the compiler to fatal *before* having emitted the first error, so the user gets absolutely no useful information about what happened in the parser.
This PR makes it so that we stop doing that.
Fixes #124195
Cleanup: Rename `ModSep` to `PathSep`
`::` is usually referred to as the *path separator* (citation needed).
The existing name `ModSep` for *module separator* is a bit misleading since it in fact separates the segments of arbitrary paths, not only ones resolving to modules. Let me just give a shout-out to associated items (`T::Assoc`, `<Ty as Trait>::function`) and enum variants (`Option::None`).
Motivation: Reduce friction for new contributors, prevent potential confusion.
cc `@petrochenkov`
r? nnethercote or compiler
Interpolated cleanups
Various cleanups I made while working on attempts to remove `Interpolated`, that are worth merging now. Best reviewed one commit at a time.
r? `@petrochenkov`
Inline a bunch of trivial conditions in parser
It is often the case that these small, conditional functions, when inlined, reveal notable optimization opportunities to LLVM. While saethlin has done a lot of good work on making these kinds of small functions not need `#[inline]` tags as much, being clearer about what we want inlined will get both the MIR opts and LLVM to pursue it more aggressively.
On local perf runs, this seems fruitful. Let's see what rust-timer says.
r? `@ghost`
This commit combines `MatchedTokenTree` and `MatchedNonterminal`, which
are often considered together, into a single `MatchedSingle`. It shares
a representation with the newly-parameterized `ParseNtResult`.
This will also make things much simpler if/when variants from
`Interpolated` start being moved to `ParseNtResult`.
There are a bunch of small helper conditionals we use.
Inline them to get slightly better perf in a few cases,
especially when rustc is compiled without PGO.
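An illustrative sketch of the pattern (these are not the actual functions touched by this commit):
```rust
#[derive(PartialEq)]
enum TokenKind { Comma, Semi, Eof }

struct Token { kind: TokenKind }

impl Token {
    // Tiny, hot predicates like this get called all over the parser; an
    // explicit `#[inline]` hint lets MIR opts and LLVM see through the
    // call even when rustc itself is built without PGO.
    #[inline]
    fn is_comma(&self) -> bool {
        self.kind == TokenKind::Comma
    }
}
```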
Existing names for values of this type are `sess`, `parse_sess`,
`parse_session`, and `ps`. `sess` is particularly annoying because
that's also used for `Session` values, which are often co-located, and
it can be difficult to know which type a value named `sess` refers to.
(That annoyance is the main motivation for this change.) `psess` is nice
and short, which is good for a name used this much.
The commit also renames some `parse_sess_created` values as
`psess_created`.
This mostly works well, and eliminates a couple of delayed bugs.
One annoying thing is that we should really also add an
`ErrorGuaranteed` to `proc_macro::bridge::LitKind::Err`. But that's
difficult because `proc_macro` doesn't have access to `ErrorGuaranteed`,
so we have to fake it.
In #119606 I added consuming chaining methods to `DiagnosticBuilder` and used a `_mv` suffix, but that wasn't great.
A `with_` prefix has three different existing uses.
- Constructors, e.g. `Vec::with_capacity`.
- Wrappers that provide an environment to execute some code, e.g.
`with_session_globals`.
- Consuming chaining methods, e.g. `Span::with_{lo,hi,ctxt}`.
The third case is exactly what we want, so this commit changes
`DiagnosticBuilder::foo_mv` to `DiagnosticBuilder::with_foo`.
Thanks to @compiler-errors for the suggestion.
This works for most of its call sites. This is nice, because `emit` very
much makes sense as a consuming operation -- indeed,
`DiagnosticBuilderState` exists to ensure no diagnostic is emitted
twice, but it uses runtime checks.
For the small number of call sites where a consuming emit doesn't work,
the commit adds `DiagnosticBuilder::emit_without_consuming`. (This will
be removed in subsequent commits.)
Likewise, `emit_unless` becomes consuming. And `delay_as_bug` becomes
consuming, while `delay_as_bug_without_consuming` is added (which will
also be removed in subsequent commits).
All this requires significant changes to `DiagnosticBuilder`'s chaining
methods. Currently `DiagnosticBuilder` method chaining uses a
non-consuming `&mut self -> &mut Self` style, which allows chaining to
be used when the chain ends in `emit()`, like so:
```
struct_err(msg).span(span).emit();
```
But it doesn't work when producing a `DiagnosticBuilder` value,
requiring this:
```
let mut err = self.struct_err(msg);
err.span(span);
err
```
This style of chaining won't work with consuming `emit` though. For
that, we need to use a `self -> Self` style. That also would allow
`DiagnosticBuilder` production to be chained, e.g.:
```
self.struct_err(msg).span(span)
```
However, removing the `&mut self -> &mut Self` style would require that
individual modifications of a `DiagnosticBuilder` go from this:
```
err.span(span);
```
to this:
```
err = err.span(span);
```
There are *many* such places. I have a high tolerance for tedious
refactorings, but even I gave up after a long time trying to convert
them all.
Instead, this commit has it both ways: the existing `&mut self -> &mut Self`
chaining methods are kept, and new `self -> Self` chaining methods are
added, all of which have a `_mv` suffix (short for "move"). Changes to
the existing `forward!` macro let this happen with very little
additional boilerplate code. I chose to add the suffix to the new
chaining methods rather than the existing ones, because the number of
changes required is much smaller that way.
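For concreteness, here's a sketch of the two parallel styles, in the same
fragment style as the snippets above, using `code`/`code_mv` as the
example modifier (any of the chaining methods works the same way):
```
// Existing non-consuming style, unchanged:
let mut err = struct_err(msg);
err.code(code);
err.emit();

// New consuming `_mv` style, usable when the whole chain is built at once:
struct_err(msg).code_mv(code).emit();
```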
This doubled chaining is a bit clumsy, but I think it is worthwhile
because it allows a *lot* of good things to subsequently happen. In this
commit, there are many `mut` qualifiers removed in places where
diagnostics are emitted without being modified. In subsequent commits:
- chaining can be used more, making the code more concise;
- more use of chaining also permits the removal of redundant diagnostic
APIs like `struct_err_with_code`, which can be replaced easily with
`struct_err` + `code_mv`;
- `emit_without_consuming` can be removed, which simplifies a lot of
machinery, removing the need for `DiagnosticBuilderState`.
`Diagnostic` has 40 methods that return `&mut Self` and could be
considered setters. Four of them have a `set_` prefix. This doesn't seem
necessary for a type that implements the builder pattern. This commit
removes the `set_` prefixes on those four methods.
This involves lots of breaking changes. Two big changes force most of
them. The first is that the bitflag types no longer automatically
implement the normal derive traits, so we need to derive them manually.
Additionally, bitflags now have a hidden inner type by default, which
breaks our custom derives. The bitflags docs recommend using the impl
form in these cases, which I did.
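As a sketch, the impl form looks like this with the bitflags 2.x crate as a dependency (`ExampleFlags` is illustrative, not one of the actual rustc types):
```rust
use bitflags::bitflags;

// The struct is declared separately, so we control its derives...
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct ExampleFlags(u8);

// ...and the macro only generates the flag constants and operations.
bitflags! {
    impl ExampleFlags: u8 {
        const A = 1 << 0;
        const B = 1 << 1;
    }
}
```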
`IntoDiagnostic` defaults to `ErrorGuaranteed`, because errors are the
most common diagnostic level. It makes sense to do likewise for the
closely-related (and much more widely used) `DiagnosticBuilder` type,
letting us write `DiagnosticBuilder<'a, ErrorGuaranteed>` as just
`DiagnosticBuilder<'a>`. This cuts over 200 lines of code due to many
multi-line things becoming single line things.
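A minimal, stand-alone illustration of the mechanism (stand-in types, not the real `rustc_errors` definitions):
```rust
use std::marker::PhantomData;

struct ErrorGuaranteed;

// A defaulted type parameter lets the common case omit the parameter.
struct DiagnosticBuilder<'a, G = ErrorGuaranteed> {
    _marker: PhantomData<(&'a (), G)>,
}

// `DiagnosticBuilder<'_>` now means `DiagnosticBuilder<'_, ErrorGuaranteed>`.
fn demo(d: DiagnosticBuilder<'_>) -> DiagnosticBuilder<'_, ErrorGuaranteed> {
    d
}
```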
This commit replaces this pattern:
```
err.into_diagnostic(dcx)
```
with this pattern:
```
dcx.create_err(err)
```
in a lot of places.
It's a little shorter, makes the error level explicit, avoids some
`IntoDiagnostic` imports, and is a necessary prerequisite for the next
commit which will add a `level` arg to `into_diagnostic`.
This requires adding `track_caller` on `create_err` to avoid mucking up
the output of `tests/ui/track-diagnostics/track4.rs`. It probably should
have been there already.
This is an extension of the previous commit. It means the output of
something like this:
```
stringify!(let a: Vec<u32> = vec![];)
```
goes from this:
```
let a: Vec<u32> = vec![] ;
```
With this PR, it now produces this string:
```
let a: Vec<u32> = vec![];
```
`tokenstream::Spacing` appears on all `TokenTree::Token` instances,
both punct and non-punct. Its current usage:
- `Joint` means "can join with the next token *and* that token is a
punct".
- `Alone` means "cannot join with the next token *or* can join with the
next token but that token is not a punct".
The fact that `Alone` is used for two different cases is awkward.
This commit augments `tokenstream::Spacing` with a new variant
`JointHidden`, resulting in:
- `Joint` means "can join with the next token *and* that token is a
punct".
- `JointHidden` means "can join with the next token *and* that token is
not a punct".
- `Alone` means "cannot join with the next token".
This *drastically* improves the output of `print_tts`. For example,
this:
```
stringify!(let a: Vec<u32> = vec![];)
```
currently produces this string:
```
let a : Vec < u32 > = vec! [] ;
```
With this PR, it now produces this string:
```
let a: Vec<u32> = vec![] ;
```
(The space after the `]` is because `TokenTree::Delimited` currently
doesn't have spacing information. The subsequent commit fixes this.)
The new `print_tts` doesn't replicate original code perfectly. E.g.
multiple space characters will be condensed into a single space
character. But it's much improved.
`print_tts` still produces the old, uglier output for code produced by
proc macros, because we have to translate the generated code from
`proc_macro::Spacing` to the more expressive `token::Spacing`, which
results in too much `Alone` usage and no `JointHidden` usage. So
`space_between` still exists and is used by `print_tts` in conjunction
with the `Spacing` field.
This change will also help with the removal of `Token::Interpolated`.
Currently interpolated tokens are pretty-printed nicely via AST pretty
printing. `Token::Interpolated` removal will mean they get printed with
`print_tts`. Without this change, that would result in much uglier
output for code produced by decl macro expansions. With this change, AST
pretty printing and `print_tts` produce similar results.
The commit also tweaks the comments on `proc_macro::Spacing`. In
particular, it refers to "compound tokens" rather than "multi-char
operators" because lifetimes aren't operators.
More detail when expecting expression but encountering bad macro argument
On nested macro invocations where the same macro fragment changes fragment type from one to the next, point at the chain of invocations and at the macro fragment definition place, explaining that the change has occurred.
Fix #71039.
```
error: expected expression, found pattern `1 + 1`
--> $DIR/trace_faulty_macros.rs:49:37
|
LL | (let $p:pat = $e:expr) => {test!(($p,$e))};
| ------- -- this is interpreted as expression, but it is expected to be pattern
| |
| this macro fragment matcher is expression
...
LL | (($p:pat, $e:pat)) => {let $p = $e;};
| ------ ^^ expected expression
| |
| this macro fragment matcher is pattern
...
LL | test!(let x = 1+1);
| ------------------
| | |
| | this is expected to be expression
| in this macro invocation
|
= note: when forwarding a matched fragment to another macro-by-example, matchers in the second macro will see an opaque AST of the fragment type, not the underlying tokens
= note: this error originates in the macro `test` (in Nightly builds, run with -Z macro-backtrace for more info)
```
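For reference, the diagnostic above can be reproduced from a stand-alone sketch assembled from its `LL |` lines (the real `$DIR/trace_faulty_macros.rs` test contains more than this, per the `...` elisions; the sketch is of course expected to fail to compile):
```rust
macro_rules! test {
    (let $p:pat = $e:expr) => { test!(($p, $e)) };
    (($p:pat, $e:pat)) => { let $p = $e; };
}

fn main() {
    test!(let x = 1 + 1);
}
```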
Most notably, this commit changes the `pub use crate::*;` in that file
to `use crate::*;`. This requires a lot of `use` items in other crates
to be adjusted, because everything defined within `rustc_span::*` was
also available via `rustc_span::source_map::*`, which is bizarre.
The commit also removes `SourceMap::span_to_relative_line_string`, which
is unused.
Implement `gen` blocks in the 2024 edition
Coroutines tracking issue https://github.com/rust-lang/rust/issues/43122
`gen` block tracking issue https://github.com/rust-lang/rust/issues/117078
This PR implements `gen` blocks that implement `Iterator`. Most of the logic is shared with `async` blocks, and thus I renamed various types that were referring to `async` specifically.
An example usage of `gen` blocks is
```rust
fn foo() -> impl Iterator<Item = i32> {
    gen {
        yield 42;
        for i in 5..18 {
            if i % 2 == 0 { continue }
            yield i * 2;
        }
    }
}
```
The limitations (to be resolved) of the implementation are listed in the tracking issue.
When encountering code like `f::<f::<f::<f::<f::<f::<f::<f::<...` with
unmatched closing angle brackets, add a linear check that avoids the
exponential behavior of the parse recovery mechanism.
Fix #117080.
```
error: expected one of `,`, `:`, or `}`, found `.`
--> $DIR/missing-fat-arrow.rs:25:14
|
LL | Some(a) if a.value == b {
| - while parsing this struct
LL | a.value = 1;
| -^ expected one of `,`, `:`, or `}`
| |
| while parsing this struct field
|
help: try naming a field
|
LL | a: a.value = 1;
| ++
help: you might have meant to start a match arm after the match guard
|
LL | Some(a) if a.value == b => {
| ++
```
Fix #78585.
There was an incomplete version of the check in parsing and a second
version in AST validation. This meant that some, but not all, invalid
uses were allowed inside macros/disabled cfgs. It also meant that later
passes had a hard time knowing when the let expression was in a valid
location, sometimes causing ICEs.
- Add a field to ExprKind::Let in AST/HIR to mark whether it's in a
valid location.
- Suppress later errors and MIR construction for invalid let
expressions.
It's the same as `Delimiter`, minus the `Invisible` variant. I'm
generally in favour of using types to make impossible states
unrepresentable, but this one feels very low-value, and the conversions
between the two types are annoying and confusing.
Look at the change in `src/tools/rustfmt/src/expr.rs` for an example:
the old code converted from `MacDelimiter` to `Delimiter` and back
again, for no good reason. This suggests the author was confused about
the types.
Similar to the last commit, it's more of a `Parser`-level concern than a
`TokenCursor`-level concern. And the struct size reductions are nice.
After this change, `TokenCursor` is as minimal as possible (two fields
and two methods) which is nice.
It's more of a `Parser`-level concern than a `TokenCursor`-level
concern. Also, `num_bump_calls` is a more accurate name, because it's
incremented in `Parser::bump`.
Move doc comment desugaring out of `TokenCursor`.
It's awkward that `TokenCursor` sometimes desugars doc comments on the fly, but usually doesn't.
r? `@petrochenkov`
`TokenCursor` currently does doc comment desugaring on the fly, if the
`desugar_doc_comments` field is set. This requires also modifying the
token stream on the fly with `replace_prev_and_rewind`.
This commit moves the doc comment desugaring out of `TokenCursor`, by
introducing a new `TokenStream::desugar_doc_comment` method. This
separation of desugaring and iterating makes the code nicer.
It doesn't really matter what the `desugar_doc_comments` argument is
here, because in practice we never look ahead through doc comments.
Changing it to `cursor.desugar_doc_comments` will allow some follow-up
simplifications.