Commit Graph

329 Commits

Author SHA1 Message Date
bohan
272dc5a6d5 fix(parse): return unexpected when current token is EOF 2023-05-13 00:33:27 +08:00
Nicholas Nethercote
6b62f37402 Restrict From<S> for {D,Subd}iagnosticMessage.
Currently a `{D,Subd}iagnosticMessage` can be created from any type that
impls `Into<String>`. That includes `&str`, `String`, and `Cow<'static,
str>`, which are reasonable. It also includes `&String`, which is pretty
weird, and results in many places making unnecessary allocations for
patterns like this:
```
self.fatal(&format!(...))
```
This creates a string with `format!`, takes a reference, passes the
reference to `fatal`, which does an `into()`, which clones the
reference, doing a second allocation. Two allocations for a single
string, bleh.

This commit changes the `From` impls so that you can only create a
`{D,Subd}iagnosticMessage` from `&str`, `String`, or `Cow<'static,
str>`. This requires changing all the places that currently create one
from a `&String`. Most of these are of the `&format!(...)` form
described above; each one removes an unnecessary static `&`, plus an
allocation when executed. There are also a few places where the existing
use of `&String` was more reasonable; these now just use `clone()` at
the call site.

As well as making the code nicer and more efficient, this is a step
towards possibly using `Cow<'static, str>` in
`{D,Subd}iagnosticMessage::{Str,Eager}`. That would require changing
the `From<&'a str>` impls to `From<&'static str>`, which is doable, but
I'm not yet sure if it's worthwhile.
2023-05-03 08:44:39 +10:00
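To make the allocation problem concrete, here is a minimal self-contained sketch; `DiagMessage` and `fatal` are hypothetical stand-ins, not the actual `rustc_errors` types.

```rust
// Hypothetical stand-in for a diagnostic message type built from
// string-like values, mirroring the pattern described above.
#[allow(dead_code)]
struct DiagMessage(String);

impl From<&str> for DiagMessage {
    fn from(s: &str) -> Self {
        DiagMessage(s.to_string())
    }
}

impl From<String> for DiagMessage {
    fn from(s: String) -> Self {
        DiagMessage(s) // takes ownership: no extra allocation
    }
}

fn fatal(msg: impl Into<DiagMessage>) -> DiagMessage {
    msg.into()
}

fn main() {
    let n = 3;
    // Old pattern: `format!` allocates a String, and passing `&format!(...)`
    // through an `Into<String>`-based impl cloned it, allocating again.
    // With `From<String>` only, the String is moved instead:
    let _msg = fatal(format!("expected {n} arguments"));
}
```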
yukang
f54489978d fix tests 2023-05-01 16:15:17 +08:00
yukang
a4453c20ca fix parser size 2023-05-01 16:15:17 +08:00
Nilstrieb
c63b6a437e Rip it out
My type ascription
Oh rip it out
Ah
If you think we live too much then
You can sacrifice diagnostics
Don't mix your garbage
Into my syntax
So many weird hacks keep diagnostics alive
Yet I don't even step outside
So many bad diagnostics keep tyasc alive
Yet tyasc doesn't even bother to survive!
2023-05-01 16:15:13 +08:00
clubby789
1ce9d7254e Migrate trivially translatable rustc_parse diagnostics 2023-04-27 01:53:06 +01:00
Oli Scherer
54214c8d8d Use a simpler atomic operation than the compare_exchange hammer 2023-04-04 09:01:44 +00:00
Oli Scherer
c7a3a943f2 Replace a lock with an atomic 2023-04-04 09:01:44 +00:00
Ezra Shaw
05b5046633
feat: implement error recovery in expected_ident_found 2023-03-20 20:54:41 +13:00
Ezra Shaw
c9ddb73184
refactor: refactor identifier parsing somewhat 2023-03-19 20:20:20 +13:00
Michael Goulet
c3159b851a Gate const closures even when they appear in macros 2023-03-11 21:29:28 +00:00
Matthias Krüger
dd6f03de9a
Rollup merge of #108715 - chenyukang:yukang/cleanup-parser-delims, r=compiler-errors
Remove unclosed_delims from parser

After landing https://github.com/rust-lang/rust/pull/108297
we could remove `unclosed_delims` from the parser now.
2023-03-04 20:48:17 +01:00
yukang
d1073fab35 Remove unclosed_delims from parser 2023-03-03 23:09:36 +00:00
est31
6df5ae4fb0 Match unmatched backticks in comments in compiler/ 2023-03-03 08:39:00 +01:00
yukang
9ce7472db4 rename unmatched_braces to unmatched_delims 2023-02-28 07:57:17 +00:00
Yutaro Ohno
0e42298674 parser: provide better errors on closures with braces missing
We currently provide wrong suggestions and unhelpful errors on closure
bodies with braces missing. For example, given the following code:

```
fn main() {
    let _x = Box::new(|x|x+1;);
}
```

the current output is like this:

```
error: expected expression, found `)`
 --> ./main.rs:2:30
  |
2 |     let _x = Box::new(|x|x+1;);
  |                              ^ expected expression

error: closure bodies that contain statements must be surrounded by braces
 --> ./main.rs:2:25
  |
2 |     let _x = Box::new(|x|x+1;);
  |                         ^
3 | }
  | ^
  |

...

help: try adding braces
  |
2 ~     let _x = Box::new(|x| {x+1;);
3 ~ }}

...

error: expected `;`, found `}`
 --> ./main.rs:2:32
  |
2 |     let _x = Box::new(|x|x+1;);
  |                                ^ help: add `;` here
3 | }
  | - unexpected token

error: aborting due to 3 previous errors
```

This commit allows outputting correct suggestions and errors. The above
code would output like this:

```
error: closure bodies that contain statements must be surrounded by braces
 --> ./main.rs:2:25
  |
2 |     let _x = Box::new(|x|x+1;);
  |                         ^    ^
  |
note: statement found outside of a block
 --> ./main.rs:2:29
  |
2 |     let _x = Box::new(|x|x+1;);
  |                          ---^ this `;` turns the preceding closure into a statement
  |                          |
  |                          this expression is a statement because of the trailing semicolon
note: the closure body may be incorrectly delimited
 --> ./main.rs:2:23
  |
2 |     let _x = Box::new(|x|x+1;);
  |                       ^^^^^^ - ...but likely you meant the closure to end here
  |                       |
  |                       this is the parsed closure...
help: try adding braces
  |
2 |     let _x = Box::new(|x| {x+1;});
  |                           +    +

error: aborting due to previous error
```
2023-02-23 19:05:13 +09:00
Nicholas Nethercote
4143b101f9 Use ThinVec in various AST types.
This commit changes the sequence parsers to produce `ThinVec`, which
triggers numerous conversions.
2023-02-21 11:51:56 +11:00
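For context, a small sketch of why `ThinVec` shrinks AST nodes, assuming the `thin-vec` crate is available as a dependency:

```rust
use std::mem::size_of;
use thin_vec::{thin_vec, ThinVec};

fn main() {
    // An empty ThinVec is a single pointer (length/capacity live behind it),
    // unlike Vec's three words, so swapping it into many AST nodes shrinks them.
    assert_eq!(size_of::<ThinVec<u32>>(), size_of::<usize>());
    assert!(size_of::<ThinVec<u32>>() < size_of::<Vec<u32>>());

    let mut args: ThinVec<u32> = thin_vec![1, 2, 3];
    args.push(4);
    assert_eq!(args.len(), 4);
}
```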
Maybe Waffle
8751fa1a9a if $c:expr { Some($r:expr) } else { None } =>> $c.then(|| $r) 2023-02-16 15:26:00 +00:00
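The rewrite above relies on the stable `bool::then` method; a minimal example of the equivalence:

```rust
fn main() {
    let c = true;
    let r = 42;

    // Before: explicit if/else producing an Option.
    let before = if c { Some(r) } else { None };

    // After: bool::then runs the closure only when `c` is true.
    let after = c.then(|| r);

    assert_eq!(before, after);
}
```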
Michael Goulet
e99e05d135
Rollup merge of #107551 - fee1-dead-contrib:rm_const_fnmut_helper, r=oli-obk
Replace `ConstFnMutClosure` with const closures

Also fixes a parser bug. cc `@oli-obk` for compiler changes
2023-02-03 14:15:22 -08:00
Nicholas Nethercote
a86fc727fa Rename Cursor/CursorRef as TokenTreeCursor/RefTokenTreeCursor.
This makes it clear they return token trees, and makes for a nice
comparison against `TokenCursor` which returns tokens.
2023-02-03 10:06:52 +11:00
Nicholas Nethercote
b5ecbbb998 Remove TokenCursorFrame.
The motivation here is to eliminate the `Option<(Delimiter,
DelimSpan)>`, which is `None` for the outermost token stream and `Some`
for all other token streams.

We are already treating the innermost frame specially -- this is the
`frame` vs `stack` distinction in `TokenCursor`. We can push that
further so that `frame` only contains the cursor, and `stack` elements
contain the delimiters for their children. When we are in the outermost
token stream `stack` is empty, so there are no stored delimiters, which
is what we want because the outermost token stream *has* no delimiters.

This change also shrinks `TokenCursor`, which shrinks `Parser` and
`LazyAttrTokenStreamImpl`, which is nice.
2023-02-03 10:06:52 +11:00
Nicholas Nethercote
b23f272db0 Make clear that TokenTree::Token shouldn't contain a delimiter. 2023-02-03 10:06:52 +11:00
Nicholas Nethercote
af1d16e82d Improve doc comment desugaring.
Sometimes the parser needs to desugar a doc comment into `#[doc =
r"foo"]`. Currently it does this in a hacky way: by pushing a "fake" new
frame (one without a delimiter) onto the `TokenCursor` stack.

This commit changes things so that the token stream itself is modified
in place. The nice thing about this is that it means
`TokenCursorFrame::delim_sp` is now only `None` for the outermost frame.
2023-02-03 10:06:52 +11:00
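For background, a doc comment is surface syntax for a `#[doc = ...]` attribute, so the two items below are equivalent from the parser's point of view (a plain example, not compiler code):

```rust
/// Adds one to `x`.
fn with_doc_comment(x: u32) -> u32 {
    x + 1
}

// The same item written with the attribute the doc comment desugars to.
#[doc = r" Adds one to `x`."]
fn with_doc_attribute(x: u32) -> u32 {
    x + 1
}

fn main() {
    assert_eq!(with_doc_comment(1), with_doc_attribute(1));
}
```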
Deadbeef
679dde7338 fix parser mistaking const closures for const item 2023-02-01 06:44:30 +00:00
Maybe Waffle
6a28fb42a8 Remove double spaces after dots in comments 2023-01-17 08:09:33 +00:00
Deadbeef
4fb10c0ce4 parse const closures 2023-01-12 02:28:37 +00:00
kraktus
d08134f1d2
fix comment for TokenCursor::desugar
the hashes of the text were forgotten.
2022-12-29 19:45:31 +01:00
Matthias Krüger
c52d58f346
Rollup merge of #105570 - Nilstrieb:actual-best-failure, r=compiler-errors
Properly calculate best failure in macro matching

Previously, we used spans. This was not good. Sometimes, the span of the token that failed to match may come from a position later in the file which has been transcribed into a token stream way earlier in the file. If precisely this token fails to match, we think that it was the best match because its span is so high, even though other arms might have gotten further in the token stream.

We now try to properly use the location in the token stream.

This needs a little cleanup as the `best_failure` field is getting out of hand but it should be mostly good to go. I hope I didn't violate too many abstraction boundaries..
2022-12-28 22:22:19 +01:00
Matthias Krüger
0aa4cde747 avoid .into() conversion to identical types 2022-12-18 16:20:32 +01:00
Nilstrieb
d72a0c437b Properly calculate best failure in macro matching
Previously, we used spans. This was not good. Sometimes, the span of the
token that failed to match may come from a position later in the file
which has been transcribed into a token stream way earlier in the file.
If precisely this token fails to match, we think that it was the best
match because its span is so high, even though other arms might have
gotten further in the token stream.

We now try to properly use the location in the token stream.
2022-12-12 17:05:27 +01:00
Yiming Lei
0e19fb92e1 While parsing an enum variant, the error message always disappears
because the error that gets emitted is the parser's main error, so the
information about the enum variant is lost when parsing a variant fails.
We only check the syntax of the expected token; e.g., in case #103869
the parser reports the error without mentioning that it came from parsing
an enum variant.
Propagate the sub-error from parsing the enum variant to the parser's main
error by chaining it with map_err, and check for the sub-error before
emitting the main error so it can be attached.
Fix #103869
2022-12-01 22:48:52 -08:00
Maybe Waffle
616df0f03b rustc_parse: remove ref patterns 2022-11-22 18:49:16 +00:00
Nicholas Nethercote
3e3a4192d8 Split MacArgs in two.
`MacArgs` is an enum with three variants: `Empty`, `Delimited`, and `Eq`. It's
used in two ways:
- For representing attribute macro arguments (e.g. in `AttrItem`), where all
  three variants are used.
- For representing function-like macros (e.g. in `MacCall` and `MacroDef`),
  where only the `Delimited` variant is used.

In other words, `MacArgs` is used in two quite different places due to them
having partial overlap. I find this makes the code hard to read. It also leads
to various unreachable code paths, and allows invalid values (such as
accidentally using `MacArgs::Empty` in a `MacCall`).

This commit splits `MacArgs` in two:
- `DelimArgs` is a new struct just for the "delimited arguments" case. It is
  now used in `MacCall` and `MacroDef`.
- `AttrArgs` is a renaming of the old `MacArgs` enum for the attribute macro
  case. Its `Delimited` variant now contains a `DelimArgs`.

Various other related things are renamed as well.

These changes make the code clearer, avoid several unreachable paths, and
disallow the invalid values.
2022-11-22 09:04:15 +11:00
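A rough sketch of the split described above; the names follow the commit text, but the definitions are illustrative, not the real rustc_ast ones:

```rust
#![allow(dead_code)]

// Illustrative stand-ins; the real rustc_ast definitions differ in detail.
struct TokenStream;
struct Span;

enum Delimiter {
    Parenthesis,
    Bracket,
    Brace,
}

// Delimited arguments only: the shape used by function-like macro calls
// (`MacCall`, `MacroDef`) after the split.
struct DelimArgs {
    delim: Delimiter,
    tokens: TokenStream,
}

// Attribute arguments keep all three shapes; the delimited case reuses
// `DelimArgs` instead of duplicating its fields.
enum AttrArgs {
    Empty,                 // e.g. `#[inline]`
    Delimited(DelimArgs),  // e.g. `#[derive(Debug)]`
    Eq(Span, TokenStream), // e.g. `#[doc = "..."]` (payload simplified)
}

fn main() {
    let call_args = DelimArgs { delim: Delimiter::Parenthesis, tokens: TokenStream };
    let attr_args = AttrArgs::Delimited(call_args);
    let _ = (attr_args, AttrArgs::Empty, AttrArgs::Eq(Span, TokenStream));
}
```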
Nilstrieb
b7b67228f9
Only do parser recovery on retried macro matching
This prevents issues with eager parser recovery during macro matching.
2022-11-15 19:34:35 +01:00
bors
5b82ea74b7 Auto merge of #99918 - WaffleLapkin:fnFnfun, r=estebank
Recover wrong-cased keywords that start items

(_this pr was inspired by [this tweet](https://twitter.com/Azumanga/status/1552982326409367561)_)

r? `@estebank`

We've talked a bit about this recovery, but I just wanted to make sure that this is the right approach :)

For now I've only added the case insensitive recovery to `use`s, since most other items like `impl` blocks, modules, functions can start with multiple keywords which complicates the matter.
2022-11-11 02:07:52 +00:00
Nilstrieb
29e50e8d35
Gate some recovery behind a flag
Mainly in `expr.rs`
2022-10-28 22:07:36 +02:00
nils
da407ed38f
Fix typo
Co-authored-by: Esteban Kuber <estebank@users.noreply.github.com>
2022-10-26 22:06:35 +02:00
Nilstrieb
796114a5b0
Add documentation 2022-10-26 21:09:28 +02:00
Nilstrieb
ed14202864
Add flag to forbid recovery in the parser 2022-10-25 22:06:53 +02:00
yukang
2414357374 fix assertion failed for break_last_token and trailing token 2022-10-20 20:16:27 +08:00
Nicholas Nethercote
9de9cf19d7 Add comments to TokenCursor::desugar.
It took me some time to work out what this code was doing.
2022-10-03 11:42:29 +11:00
Maybe Waffle
d86f9cd464 Replace some bool params with an enum 2022-10-01 10:13:02 +00:00
Maybe Waffle
38b0865248 Recover wrong cased keywords starting functions 2022-10-01 10:08:53 +00:00
Maybe Waffle
3694429d09 recover wrong-cased uses (Use, USE, etc) 2022-10-01 10:07:47 +00:00
Xiretza
d7c64574e0 Implement IntoDiagnosticArg for rustc_ast::token::Token(Kind) 2022-09-27 20:29:19 +02:00
Xiretza
7507ee29fc Migrate "expected identifier" diagnostics to diagnostic structs 2022-09-27 20:29:19 +02:00
Xiretza
e1b1d7b029 Migrate more rustc_parse diagnostics to diagnostic structs 2022-09-27 20:29:18 +02:00
Xiretza
e56d6a68db Move rustc_parse diagnostic structs to separate module 2022-09-27 20:29:18 +02:00
Xiretza
6ae7a30927 Migrate "invalid literal suffix" diagnostic to diagnostic structs 2022-09-27 20:29:18 +02:00
Xiretza
ab7c7dc7ce Migrate more diagnostics in rustc_parse to diagnostic structs 2022-09-27 20:29:18 +02:00
Nicholas Nethercote
d2df07c425 Rename {Create,Lazy}TokenStream as {To,Lazy}AttrTokenStream.
`To` is better than `Create` for indicating that this is a non-consuming
conversion, rather than creating something out of nothing.

And the addition of `Attr` is because the current names make them sound
like they relate to `TokenStream`, but really they relate to
`AttrTokenStream`.
2022-09-09 17:25:38 +10:00
Nicholas Nethercote
a56d345490 Rename AttrAnnotatedToken{Stream,Tree}.
These two type names are long and have long matching prefixes. I find
them hard to read, especially in combinations like
`AttrAnnotatedTokenStream::new(vec![AttrAnnotatedTokenTree::Token(..)])`.

This commit renames them as `AttrToken{Stream,Tree}`.
2022-09-09 12:45:26 +10:00
Oli Scherer
ee3c835018 Always import all tracing macros for the entire crate instead of piecemeal by module 2022-09-01 14:54:27 +00:00
Dezhi Wu
b1430fb7ca Fix a bunch of typos
This PR will fix some typos detected by [typos].

I only picked the ones I was sure were spelling errors to fix, mostly in
the comments.

[typos]: https://github.com/crate-ci/typos
2022-08-31 18:24:55 +08:00
Nicholas Nethercote
6087dc2054 Remove the symbol from ast::LitKind::Err.
Because it's never used meaningfully.
2022-08-23 16:56:24 +10:00
Nicholas Nethercote
2ef0479568 Simplify attribute handling in parse_bottom_expr.
`Parser::parse_bottom_expr` currently constructs an empty `attrs` and
then passes it to a large number of other functions. This makes the code
harder to read than it should be, because it's not clear that many
`attrs` arguments are always empty.

This commit removes `attrs` and the passing, simplifying a lot of
functions. The commit also renames `Parser::mk_expr` (which takes an
`attrs` argument) as `mk_expr_with_attrs`, and introduces a new
`mk_expr` which creates an expression with no attributes, which is the
more common case.
2022-08-15 13:29:28 +10:00
Jacob Pratt
be5672ecb2
Stringify non-shorthand visibility correctly 2022-08-09 23:31:45 -04:00
Matthias Krüger
beb4cdddde
Rollup merge of #100011 - compiler-errors:let-chain-restriction, r=fee1-dead
Use Parser's `restrictions` instead of `let_expr_allowed`

This also means that the `ALLOW_LET` flag is reset properly for subexpressions, so we can properly deny things like `a && (b && let c = d)`. Also the parser is a tiny bit smaller now.

It doesn't reject _all_ bad `let` expr usages, just a bit more.

cc `@c410-f3r`
2022-08-02 07:30:44 +02:00
Michael Goulet
6be7a87f9c Use expr parse restrictions for let expr parsing 2022-08-01 01:13:16 +00:00
Nicholas Nethercote
332dffb1f9 Remove TreeAndSpacing.
A `TokenStream` contains a `Lrc<Vec<(TokenTree, Spacing)>>`. But this is
not quite right. `Spacing` makes sense for `TokenTree::Token`, but does
not make sense for `TokenTree::Delimited`, because a
`TokenTree::Delimited` cannot be joined with another `TokenTree`.

This commit fixes this problem, by adding `Spacing` to `TokenTree::Token`,
changing `TokenStream` to contain a `Lrc<Vec<TokenTree>>`, and removing the
`TreeAndSpacing` typedef.

The commit removes these two impls:
- `impl From<TokenTree> for TokenStream`
- `impl From<TokenTree> for TreeAndSpacing`

These were useful, but also resulted in code with many `.into()` calls
that was hard to read, particularly for anyone not highly familiar with
the relevant types. This commit makes some other changes to compensate:
- `TokenTree::token()` becomes `TokenTree::token_{alone,joint}()`.
- `TokenStream::token_{alone,joint}()` are added.
- `TokenStream::delimited` is added.

This results in things like this:
```rust
TokenTree::token(token::Semi, stmt.span).into()
```
changing to this:
```rust
TokenStream::token_alone(token::Semi, stmt.span)
```
This makes the type of the result, and its spacing, clearer.

These changes also simplify `Cursor` and `CursorRef`, because they no longer
need to distinguish between `next` and `next_with_spacing`.
2022-07-29 15:52:15 +10:00
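A simplified model of the resulting shape, with `Spacing` attached only to the `Token` variant; the definitions are illustrative, not the real rustc_ast types:

```rust
#![allow(dead_code)]

#[derive(Clone, Copy)]
enum Spacing {
    Alone,
    Joint,
}

struct Token; // stand-in for the real token type

// After the change: `Spacing` lives only where it is meaningful, on the
// `Token` variant; `Delimited` trees carry no spacing of their own.
enum TokenTree {
    Token(Token, Spacing),
    Delimited(Vec<TokenTree>),
}

struct TokenStream(Vec<TokenTree>);

impl TokenStream {
    // Mirrors the `token_alone`/`token_joint` constructors mentioned above.
    fn token_alone(tok: Token) -> Self {
        TokenStream(vec![TokenTree::Token(tok, Spacing::Alone)])
    }
    fn token_joint(tok: Token) -> Self {
        TokenStream(vec![TokenTree::Token(tok, Spacing::Joint)])
    }
}

fn main() {
    let _semi = TokenStream::token_alone(Token);
    let _shr_half = TokenStream::token_joint(Token);
}
```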
Nixon Enraght-Moony
18ca2946e0 ast: Add span to Extern 2022-07-02 23:30:03 +01:00
Matthias Krüger
d34c4ca9be
Rollup merge of #98668 - TaKO8Ki:avoid-many-&str-to-string-conversions, r=Dylan-DPC
Avoid some `&str` to `String` conversions with `MultiSpan::push_span_label`

This patch removes some`&str` to `String` conversions with `MultiSpan::push_span_label`.
2022-06-29 20:35:07 +02:00
Takayuki Maeda
6212e6b339 avoid many &str to String conversions with MultiSpan::push_span_label 2022-06-29 21:16:43 +09:00
Caio
747586732b [rustc_parse] Forbid lets in certain places 2022-06-25 08:08:38 -03:00
Yuki Okushi
e3a3c00be8
Rollup merge of #95211 - terrarier2111:improve-parser, r=compiler-errors
Improve parser diagnostics

This PR fixes https://github.com/rust-lang/rust/issues/93867 and contains a couple of diagnostics-related changes to the parser.
Here is a short list with some of the changes:
- don't suggest the same thing that is the current token
- suggest removing the current token if the following token is one of the suggestions (maybe incorrect)
- tell the user to put a type or lifetime after where if there is none (as a warning)
- reduce the amount of tokens suggested (via the new eat_noexpect and check_noexpect methods)

If any of these changes are undesirable, I can remove them, thanks!
2022-06-14 07:47:22 +09:00
Takayuki Maeda
77d6176e69 remove unnecessary to_string and String::new 2022-06-13 15:48:40 +09:00
threadexception
21fdd549f6 Improves parser diagnostics, fixes #93867 2022-06-12 17:48:52 +02:00
Jacob Pratt
7b987e34c0
Merge crate and restricted visibilities 2022-05-21 17:02:55 -04:00
Jacob Pratt
8cece636b2
Remove feature: crate visibility modifier 2022-05-21 14:22:06 -04:00
Jacob Pratt
49c82f31a8
Remove crate visibility usage in compiler 2022-05-20 20:04:54 -04:00
Vadim Petrochenkov
f2b7fa4847 ast: Introduce some traits to get AST node properties generically
And use them to avoid constructing some artificial `Nonterminal` tokens during expansion
2022-05-11 12:43:27 +03:00
bors
4c60a0ea5b Auto merge of #96546 - nnethercote:overhaul-MacArgs, r=petrochenkov
Overhaul `MacArgs`

Motivation:
- Clarify some code that I found hard to understand.
- Eliminate one use of three places where `TokenKind::Interpolated` values are created.

r? `@petrochenkov`
2022-05-04 21:16:28 +00:00
Nicholas Nethercote
99f5945f85 Overhaul MacArgs::Eq.
The value in `MacArgs::Eq` is currently represented as a `Token`.
Because of `TokenKind::Interpolated`, `Token` can be either a token or
an arbitrary AST fragment. In practice, a `MacArgs::Eq` starts out as a
literal or macro call AST fragment, and then is later lowered to a
literal token. But this is very non-obvious. `Token` is a much more
general type than what is needed.

This commit restricts things, by introducing a new type `MacArgsEqKind`
that is either an AST expression (pre-lowering) or an AST literal
(post-lowering). The downside is that the code is a bit more verbose in
a few places. The benefit is that it makes it much clearer what the
possibilities are (though also shorter in some other places). Also, it
removes one use of `TokenKind::Interpolated`, taking us a step closer to
removing that variant, which will let us make `Token` impl `Copy` and
remove many "handle Interpolated" code paths in the parser.

Things to note:
- Error messages have improved. Messages like this:
  ```
  unexpected token: `"bug" + "found"`
  ```
  now say "unexpected expression", which makes more sense. Although
  arbitrary expressions can exist within tokens thanks to
  `TokenKind::Interpolated`, that's not obvious to anyone who doesn't
  know compiler internals.
- In `parse_mac_args_common`, we no longer need to collect tokens for
  the value expression.
2022-05-05 07:06:12 +10:00
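A sketch of the narrowing described above; the types are illustrative stand-ins, not the actual rustc_ast definitions:

```rust
#![allow(dead_code)]

// Illustrative stand-ins for AST types.
struct Expr; // an arbitrary expression
struct Lit;  // a literal

// The `MacArgsEqKind` idea in miniature: the value of `key = value` is an
// expression before lowering and a literal afterwards, instead of an
// arbitrary (possibly interpolated) token.
enum MacArgsEqKind {
    Ast(Box<Expr>), // pre-lowering, e.g. `#[doc = include_str!("x.md")]`
    Lit(Lit),       // post-lowering, the expression reduced to a literal
}

fn main() {
    let _before = MacArgsEqKind::Ast(Box::new(Expr));
    let _after = MacArgsEqKind::Lit(Lit);
}
```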
David Wood
73fa217bc1 errors: span_suggestion takes impl ToString
Change `span_suggestion` (and variants) to take `impl ToString` rather
than `String` for the suggested code, as this simplifies the
requirements on the diagnostic derive.

Signed-off-by: David Wood <david.wood@huawei.com>
2022-04-29 02:05:20 +01:00
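In miniature, and not the real rustc_errors signature, the change looks like this: accepting `impl ToString` lets callers pass `&str`, `String`, or anything else printable without converting at the call site.

```rust
// Standalone sketch of a suggestion API that accepts any printable value.
fn span_suggestion(suggested_code: impl ToString) -> String {
    suggested_code.to_string()
}

fn main() {
    // Both &str and String are accepted without explicit conversion.
    let a = span_suggestion("use `foo()` here");
    let b = span_suggestion(format!("use `{}()` here", "foo"));
    assert_eq!(a, b);
}
```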
Vadim Petrochenkov
2733ec1be3 rustc_ast: Harmonize delimiter naming with proc_macro::Delimiter 2022-04-28 10:04:29 +03:00
Nicholas Nethercote
f0bbc782ac Avoid producing NoDelim values in TokenCursorFrame. 2022-04-27 08:15:05 +10:00
Nicholas Nethercote
643e9f707e Introduced Cursor::next_with_spacing_ref.
This lets us clone just the parts within a `TokenTree` that need
cloning, rather than the entire thing. This is a surprisingly large
performance win, up to 4% on `async-std-1.10.0`.
2022-04-21 13:49:40 +10:00
Nicholas Nethercote
cc4e3443ec Produce CloseDelim and pop the stack at the same time.
This makes `CloseDelim` handling more like `OpenDelim` handling, which
produces `OpenDelim` and pushes the stack at the same time. It requires
some adjustment to `parse_token_tree` now that we don't remain within
the frame after getting the `CloseDelim`.
2022-04-21 12:34:38 +10:00
Nicholas Nethercote
7a89255b20 Avoid some tuple destructuring.
Surprisingly, this is a non-trivial performance win.
2022-04-21 09:21:45 +10:00
Nicholas Nethercote
880318c70a Remove Eof sanity check in Parser::inlined_bump_with.
A Google search of the error message fails to return any relevant
results, suggesting this has never occurred in practice. And removing it
reduces instruction counts by up to 2% on some benchmarks.
2022-04-20 14:52:54 +10:00
Nicholas Nethercote
9e6879fdba Only record fallback_span when necessary. 2022-04-20 14:04:22 +10:00
Nicholas Nethercote
b09522a634 Remove the loop from Parser::bump().
The loop is there to handle a `NoDelim` open/close token. This commit
changes `TokenCursor::inlined_next` so it never returns such a token.
This is a performance win because the conditional test in `bump()` is
removed.

If the parser needs changing in the future to handle `NoDelim` tokens,
then `inlined_next()` can easily be changed to return them.
2022-04-20 12:28:26 +10:00
Nicholas Nethercote
3cd5e34617 Remove TokenCursorFrame::open_delim.
Because it's now always true.
2022-04-20 12:28:26 +10:00
Nicholas Nethercote
86723d3d46 Use true for open_delim/close_delim in one spot.
The `DelimToken` here is `NoDelim`, which means the returned delim
tokens will just be ignored by `Parser::bump()`. This commit changes
things so the delim tokens won't be returned.
2022-04-20 12:26:49 +10:00
Nicholas Nethercote
804103b0ae Add a size assertion for Parser. 2022-04-20 11:48:07 +10:00
Nicholas Nethercote
f1c32c10c4 Move desugaring code into its own function.
It's not hot, so shouldn't be within the always inlined part.
2022-04-20 08:33:25 +10:00
Nicholas Nethercote
d235ac7801 Handle Delimited opening immediately.
Instead of letting the next iteration of the loop handle it.
2022-04-19 17:02:49 +10:00
Nicholas Nethercote
29c78cc086 Add {open,close}_delim arguments to TokenCursorFrame::new().
This will facilitate the change in the next commit.

`boolean` arguments aren't great, but the function is only used in three
places within this one file.
2022-04-19 17:02:48 +10:00
Nicholas Nethercote
02317542eb Rearrange TokenCursor::inlined_next().
In particular, avoid wrapping a token within `TokenTree::Token` and then
immediately matching it and returning the token within. Just return the
token immediately.
2022-04-19 17:02:48 +10:00
Nicholas Nethercote
b1e6dee596 Merge TokenCursor::{next,next_desugared}.
And likewise for the inlined variants.

I did this for simplicity, but interestingly it was a performance win as
well.
2022-04-19 17:02:48 +10:00
Nicholas Nethercote
89ec75b0e9 Inline and remove Parser::next_tok().
It has a single call site.
2022-04-19 17:02:48 +10:00
Nicholas Nethercote
aefbbeec34 Inline and remove TokenTree::{open_tt,close_tt}.
They both have a single call site.
2022-04-19 17:02:48 +10:00
Dylan DPC
22d554657d
Rollup merge of #94985 - dtolnay:constattr, r=pnkfelix
Parse inner attributes on inline const block

According to https://github.com/rust-lang/rust/pull/84414#issuecomment-826150936, inner attributes are intended to be supported *"in all containers for statements (or some subset of statements)"*.

This PR adds inner attribute parsing and pretty-printing for inline const blocks (https://github.com/rust-lang/rust/issues/76001), which contain statements just like an unsafe block or a loop body.

```rust
let _ = const {
    #![allow(...)]

    let x = ();
    x
};
```
2022-04-16 19:42:00 +02:00
Nicholas Nethercote
d9592c2d9f Shrink Nonterminal.
By heap allocating the argument within `NtPath`, `NtVis`, and `NtStmt`.
This slightly reduces cumulative and peak allocation amounts, most
notably on `deep-vector`.
2022-04-07 12:51:50 +10:00
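The underlying size effect in isolation, shown with made-up enums rather than the real `Nonterminal` type: boxing a large variant's payload shrinks the whole enum.

```rust
use std::mem::size_of;

#[allow(dead_code)]
enum Unboxed {
    Big([u8; 128]), // large inline payload inflates every value of the enum
    Small(u8),
}

#[allow(dead_code)]
enum Boxed {
    Big(Box<[u8; 128]>), // payload moved to the heap; the enum is pointer-sized
    Small(u8),
}

fn main() {
    assert!(size_of::<Boxed>() < size_of::<Unboxed>());
    println!(
        "unboxed: {} bytes, boxed: {} bytes",
        size_of::<Unboxed>(),
        size_of::<Boxed>()
    );
}
```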
David Wood
c45f29595d span: move MultiSpan
`MultiSpan` contains labels, which are more complicated with the
introduction of diagnostic translation and will use types from
`rustc_errors` - however, `rustc_errors` depends on `rustc_span` so
`rustc_span` cannot use types like `DiagnosticMessage` without
dependency cycles. Introduce a new `rustc_error_messages` crate that can
contain `DiagnosticMessage` and `MultiSpan`.

Signed-off-by: David Wood <david.wood@huawei.com>
2022-04-05 07:01:00 +01:00
Yuri Astrakhan
5160f8f843 Spellchecking compiler comments
This PR cleans up the rest of the spelling mistakes in the compiler comments. This PR does not change any literal or code spelling issues.
2022-03-30 15:14:15 -04:00
Nicholas Nethercote
364b908d57 Remove Nonterminal::NtTT.
It's only needed for macro expansion, not as a general element in the
AST. This commit removes it, adds `NtOrTt` for the parser and macro
expansion cases, and renames the variants in `NamedMatch` to better
match the new type.
2022-03-28 10:03:02 +11:00
Nicholas Nethercote
f8f1d3f00b Split TokenCursor::{next,next_desugared} into inlined and non-inlined halves. 2022-03-22 11:05:54 +11:00
Nicholas Nethercote
4e700a023c Split Parser::bump_with into inlined and non-inlined halves.
The call site within `Parser::bump` is hot.

Also add an inline annotation to `Parser::next_tok`. It was already
being inlined by the compiler; this just makes sure that continues.
2022-03-22 11:01:53 +11:00
David Tolnay
f427698c03
Parse inner attributes on inline const block 2022-03-15 17:56:59 -07:00
mark
e489a94dee rename ErrorReported -> ErrorGuaranteed 2022-03-02 09:45:25 -06:00
Matthias Krüger
5be38d2bb3
Rollup merge of #94445 - c410-f3r:more-let-chains, r=cjgillot
4 - Make more use of `let_chains`

Continuation of #94376.

cc #53667
2022-02-28 20:05:17 +01:00
Esteban Kuber
f42b4f595e Tweak diagnostics
* Recover from invalid `'label: ` before block.
* Make suggestion to enclose statements in a block multipart.
* Point at `match`, `while`, `loop` and `unsafe` keywords when failing
  to parse their expression.
* Do not suggest `{ ; }`.
* Do not suggest `|` when very unlikely to be what was wanted (in `let`
  statements).
2022-02-28 18:22:45 +00:00
Caio
e3e902bb06 4 - Make more use of let_chains
Continuation of #94376.

cc #53667
2022-02-28 07:49:56 -03:00
Eduard-Mihai Burtescu
b7e95dee65 rustc_errors: let DiagnosticBuilder::emit return a "guarantee of emission". 2022-02-23 06:38:52 +00:00
Eduard-Mihai Burtescu
0b9d70cf6d rustc_errors: take self by value in DiagnosticBuilder::cancel. 2022-02-23 06:08:06 +00:00
Michael Howell
74437e477e Do not add ; to expected tokens list when it's wrong
There are a few spots where semicolons are checked for to do error recovery,
and should not be suggested (or checked for other stuff).

Fixes #87647
2021-12-04 11:05:30 -07:00
Gary Guo
6d61d87b22 Split inline const to two feature gates 2021-11-22 22:17:03 +00:00
r00ster91
3c1d55422a Some "parenthesis" and "parentheses" fixes 2021-10-17 12:04:01 +02:00
Sasha Pourcelot
b21425de3c Emit proper errors on missing closure braces
This commit focuses on emitting clean errors for the following syntax
error:

```
Some(42).map(|a|
    dbg!(a);
    a
);
```

Previous implementation tried to recover after parsing the closure body
(the `dbg` expression) by replacing the next `;` with a `,`, which made
the next expression belong to the next function argument. As such, the
following errors were emitted (among others):
  - the semicolon token was not expected,
  - a is not in scope,
  - Option::map is supposed to take one argument, not two.

This commit allows us to gracefully handle this situation by giving
the parser the ability to remember when it has just parsed a
closure body inside a function call. When this happens, we can treat the
unexpected `;` specifically and try to parse as many statements as
possible in order to eat the whole block. When we can't parse statements
anymore, we generate a clean error indicating that the braces are
missing, and return an ExprKind::Err.
2021-09-09 17:44:40 +02:00
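For reference, the braced form that this recovery guides users toward (a plain example, not compiler code):

```rust
fn main() {
    // A closure body with multiple statements must be wrapped in a block.
    let result = Some(42).map(|a| {
        dbg!(a);
        a
    });
    assert_eq!(result, Some(42));
}
```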
bors
29d8fb746d Auto merge of #88386 - estebank:unmatched-delims, r=jackh726
Point at unclosed delimiters as part of the primary MultiSpan

Both the place where the parser encounters a needed closed delimiter and
the unclosed opening delimiter are important, so they should get the
same level of highlighting in the output.

_Context: https://twitter.com/mwk4/status/1430631546432675840_
2021-09-03 03:13:18 +00:00
bors
ae0b03bc6b Auto merge of #88262 - klensy:pprust-cow, r=nagisa
Cow'ify some pprust methods

Reduce number of potential needless de/allocations by using `Cow<'static, str>` instead of explicit `String` type.
2021-08-29 17:46:29 +00:00
Esteban Kuber
c6d800d854 Point at unclosed delimiters as part of the primary MultiSpan
Both the place where the parser encounters a needed closed delimiter and
the unclosed opening delimiter are important, so they should get the
same level of highlighting in the output.
2021-08-27 14:24:47 +00:00
klensy
c565339c37 Convert some functions to return Cow<'static,str> instead of String to reduce potential reallocations 2021-08-25 00:24:44 +03:00
Frank Steffahn
bf88b113ea Fix typos “a”→“an” 2021-08-22 15:35:11 +02:00
Fabian Wolff
2362450425 Suggest a path separator if a stray colon is found in a match arm
Co-authored-by: Esteban Kuber <estebank@users.noreply.github.com>
2021-07-14 01:15:59 +02:00
Vadim Petrochenkov
cbdfa1edca parser: Ensure that all nonterminals have tokens after parsing 2021-06-06 14:21:12 +03:00
Joshua Nelson
e48b6b4599 Stabilize extended_key_value_attributes
# Stabilization report

 ## Summary

This stabilizes using macro expansion in key-value attributes, like so:

 ```rust
 #[doc = include_str!("my_doc.md")]
 struct S;

 #[path = concat!(env!("OUT_DIR"), "/generated.rs")]
 mod m;
 ```

See the changes to the reference for details on what macros are allowed;
see Petrochenkov's excellent blog post [on internals](https://internals.rust-lang.org/t/macro-expansion-points-in-attributes/11455)
for alternatives that were considered and rejected ("why accept no more
and no less?")

This has been available on nightly since 1.50 with no major issues.

 ## Notes

 ### Accepted syntax

The parser accepts arbitrary Rust expressions in this position, but any expression other than a macro invocation will ultimately lead to an error because it is not expected by the built-in expression forms (e.g., `#[doc]`).  Note that decorators and the like may be able to observe other expression forms.

 ### Expansion ordering

Expansion of macro expressions in "inert" attributes occurs after decorators have executed, analogously to macro expressions appearing in the function body or other parts of decorator input.

There is currently no way for decorators to accept macros in key-value position if macro expansion must be performed before the decorator executes (if the macro can simply be copied into the output for later expansion, that can work).

 ## Test cases

 - https://github.com/rust-lang/rust/blob/master/src/test/ui/attributes/key-value-expansion-on-mac.rs
 - https://github.com/rust-lang/rust/blob/master/src/test/rustdoc/external-doc.rs

The feature has also been dogfooded extensively in the compiler and
standard library:

- https://github.com/rust-lang/rust/pull/83329
- https://github.com/rust-lang/rust/pull/83230
- https://github.com/rust-lang/rust/pull/82641
- https://github.com/rust-lang/rust/pull/80534

 ## Implementation history

- Initial proposal: https://github.com/rust-lang/rust/issues/55414#issuecomment-554005412
- Experiment to see how much code it would break: https://github.com/rust-lang/rust/pull/67121
- Preliminary work to restrict expansion that would conflict with this
feature: https://github.com/rust-lang/rust/pull/77271
- Initial implementation: https://github.com/rust-lang/rust/pull/78837
- Fix for an ICE: https://github.com/rust-lang/rust/pull/80563

 ## Unresolved Questions

~~https://github.com/rust-lang/rust/pull/83366#issuecomment-805180738 listed some concerns, but they have been resolved as of this final report.~~

 ## Additional Information

 There are two workarounds that have a similar effect for `#[doc]`
attributes on nightly. One is to emulate this behavior by using a limited version of this feature that was stabilized for historical reasons:

```rust
macro_rules! forward_inner_docs {
    ($e:expr => $i:item) => {
        #[doc = $e]
        $i
    };
}

forward_inner_docs!(include_str!("lib.rs") => struct S {});
```

This also works for other attributes (like `#[path = concat!(...)]`).
The other is to use `doc(include)`:

```rust
 #![feature(external_doc)]
 #[doc(include = "lib.rs")]
 struct S {}
```

The first works, but is non-trivial for people to discover, and
difficult to read and maintain. The second is a strange special-case for
a particular use of the macro. This generalizes it to work for any use
case, not just including files.

I plan to remove `doc(include)` when this is stabilized. The
`forward_inner_docs` workaround will still compile without warnings, but
I expect it to be used less once it's no longer necessary.
2021-05-18 01:01:36 -04:00
bors
2fb1dee14b Auto merge of #85104 - hi-rustin:rustin-patch-typo, r=jonas-schievink
Fix typo
2021-05-10 07:15:23 +00:00
hi-rustin
fc544abe03 Fix typo 2021-05-09 12:24:58 +08:00
Joshua Nelson
955fdaea4a Rename Parser::span_fatal_err -> Parser::span_err
The name was misleading, it wasn't actually a fatal error.
2021-05-08 23:11:59 -04:00
LeSeulArtichaut
cecb3be49a Improve diagnostics for functions in struct definitions 2021-05-07 21:44:10 +02:00
Aaron Hill
c6d67f8317
Add fast path when None delimiters are not involved 2021-04-12 17:26:26 -04:00
Aaron Hill
eb7b1a150f
Fix lookahead with None-delimited group 2021-04-12 11:50:16 -04:00
Aaron Hill
a93c4f05de
Implement token-based handling of attributes during expansion
This PR modifies the macro expansion infrastructure to handle attributes
in a fully token-based manner. As a result:

* Derive macros no longer lose spans when their input is modified
  by eager cfg-expansion. This is accomplished by performing eager
  cfg-expansion on the token stream that we pass to the derive
  proc-macro
* Inner attributes now preserve spans in all cases, including when we
  have multiple inner attributes in a row.

This is accomplished through the following changes:

* New structs `AttrAnnotatedTokenStream` and `AttrAnnotatedTokenTree` are introduced.
  These are very similar to a normal `TokenTree`, but they also track
  the position of attributes and attribute targets within the stream.
  They are built when we collect tokens during parsing.
  An `AttrAnnotatedTokenStream` is converted to a regular `TokenStream` when
  we invoke a macro.
* Token capturing and `LazyTokenStream` are modified to work with
  `AttrAnnotatedTokenStream`. A new `ReplaceRange` type is introduced, which
  is created during the parsing of a nested AST node to make the 'outer'
  AST node aware of the attributes and attribute target stored deeper in the token stream.
* When we need to perform eager cfg-expansion (either due to `#[derive]` or `#[cfg_eval]`),
we tokenize and reparse our target, capturing additional information about the locations of
`#[cfg]` and `#[cfg_attr]` attributes at any depth within the target.
This is a performance optimization, allowing us to perform less work
in the typical case where captured tokens never have eager cfg-expansion run.
2021-04-11 01:31:36 -04:00
Esteban Küber
0d7167698f Avoid ; -> , recovery and unclosed } recovery from being too verbose
Those two recovery attempts have a very bad interaction that causes too
much unnecessary output. Add a simple gate to avoid interpreting a `;` as a
`,` when there are unclosed braces.
2021-04-09 10:22:41 -07:00
Aaron Hill
f94360fd83
Always preserve None-delimited groups in a captured TokenStream
Previously, we would silently remove any `None`-delimiters when
capturing a `TokenStream`, 'flattening' them to their inner tokens.
This was not normally visible, since we usually have
`TokenKind::Interpolated` (which gets converted to a `None`-delimited
group during macro invocation) instead of an actual `None`-delimited
group.

However, there are a couple of cases where this becomes visible to
proc-macros:
1. A cross-crate `macro_rules!` macro has a `None`-delimited group
   stored in its body (as a result of being produced by another
   `macro_rules!` macro). The cross-crate `macro_rules!` invocation
   can then expand to an attribute macro invocation, which needs
   to be able to see the `None`-delimited group.
2. A proc-macro can invoke an attribute proc-macro with its re-collected
   input. If there are any nonterminals present in the input, they will
   get re-collected to `None`-delimited groups, which will then get
   captured as part of the attribute macro invocation.

Both of these cases are incredibly obscure, so there hopefully won't be
any breakage. This change will allow more aggressive 'flattening' of
nonterminals in #82608 without losing `None`-delimited groups.
2021-03-26 23:32:18 -04:00
Aaron Hill
7504b9bb96
Avoid double-collection for expression nonterminals 2021-03-25 18:05:49 -04:00
mark
db5629adcb stabilize or_patterns 2021-03-19 19:45:32 -05:00
Aaron Hill
fb5fec017b
Combine HasAttrs and HasTokens into AstLike
When token-based attribute handling is implemented in #80689,
we will need to access tokens from `HasAttrs` (to perform
cfg-stripping), and we will to access attributes from `HasTokens` (to
construct a `PreexpTokenStream`).

This PR merges the `HasAttrs` and `HasTokens` traits into a new
`AstLike` trait. The previous `HasAttrs` impls from `Vec<Attribute>` and `AttrVec`
are removed - they aren't attribute targets, so the impls never really
made sense.
2021-02-27 00:14:13 -05:00
Dylan DPC
8e51bd4315
Rollup merge of #81235 - reese:rw-tuple-diagnostics, r=estebank
Improve suggestion for tuple struct pattern matching errors.

Closes #80174

This change allows numbers to be parsed as field names when pattern matching on structs, which allows us to provide better error messages when tuple structs are matched using a struct pattern.

r? ``@estebank``
2021-02-23 02:51:44 +01:00
Matthias Krüger
85bd00fd85 parser: remove unnecessary wrapping of return value in parse_extern() 2021-02-21 13:25:12 +01:00
mark
aee1e59e6f Simplify pattern grammar by allowing nested leading vert
Along the way, we also implement a handful of diagnostics improvements
and fixes, particularly with respect to the special handling of `||` in
place of `|` and when there are leading verts in function params, which
don't allow top-level or-patterns anyway.
2021-02-15 12:07:54 -06:00
Aaron Hill
3321d70161
Address review comments 2021-02-13 13:04:54 -05:00
Aaron Hill
0b411f56e1
Require passing an AttrWrapper to collect_tokens_trailing_token
This is a pure refactoring split out from #80689.
It represents the most invasive part of that PR, requiring changes in
every caller of `parse_outer_attributes`

In order to eagerly expand `#[cfg]` attributes while preserving the
original `TokenStream`, we need to know the range of tokens that
corresponds to every attribute target. This is accomplished by making
`parse_outer_attributes` return an opaque `AttrWrapper` struct. An
`AttrWrapper` must be converted to a plain `AttrVec` by passing it to
`collect_tokens_trailing_token`. This makes it difficult to accidentally
construct an AST node with attributes without calling `collect_tokens_trailing_token`,
since AST nodes store an `AttrVec`, not an `AttrWrapper`.

As a result, we now call `collect_tokens_trailing_token` for attribute
targets which only support inert attributes, such as generic arguments
and struct fields. Currently, the constructed `LazyTokenStream` is
simply discarded. Future PRs will record the token range corresponding
to the attribute target, allowing those tokens to be removed from an
enclosing `collect_tokens_trailing_token` call if necessary.
2021-02-13 12:07:15 -05:00
Aaron Hill
5d739180cd
Clone entire TokenCursor when collecting tokens
Reverts PR #80830
Fixes taiki-e/pin-project#312

We can have an arbitrary number of `None`-delimited group frames pushed
on the stack due to proc-macro invocations, which can legally be exited.
Attempting to account for this would add a lot of complexity for a tiny
performance gain, so let's just use the original strategy.
2021-01-28 09:47:59 -05:00
Vadim Petrochenkov
bd07165690 parser: Collect tokens for values in key-value attributes 2021-01-24 17:11:56 +03:00
bors
1986b58c64 Auto merge of #80065 - b-naber:parse-angle-arg-diagnostics, r=petrochenkov
Improve diagnostics when parsing angle args

https://github.com/rust-lang/rust/pull/79266 introduced parsing of generic arguments in associated type constraints, this however resulted in possibly very confusing error messages in cases in which closing angle brackets were missing such as in `Vec<(u32, _, _) = vec![]`, which outputs an incorrectly parsed equality constraint error, as noted by `@cynecx.`

This PR tries to provide better error messages in such cases.

r? `@petrochenkov`
2021-01-23 06:27:21 +00:00
b-naber
728d257839 improve diagnostics for angle args 2021-01-22 17:07:27 +01:00
Aaron Hill
ccfc292999
Refactor token collection to capture trailing token immediately 2021-01-22 00:33:03 -05:00
Reese Williams
8a83c8f64f Improve suggestion for tuple struct pattern matching errors.
Currently, when a user uses a struct pattern to pattern match on
a tuple struct, the errors we emit generally suggest adding fields
using their field names, which are numbers. However, numbers are
not valid identifiers, so the suggestions, which use the shorthand
notation, are not valid syntax. This commit changes those errors
to suggest using the actual tuple struct pattern syntax instead,
which is a more actionable suggestion.
2021-01-20 23:06:19 -05:00
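A plain example of the situation the improved suggestion targets; `Point` is a made-up type:

```rust
struct Point(i32, i32);

fn main() {
    // Tuple struct pattern: the form the improved suggestion points to.
    let Point(x, y) = Point(1, 2);
    assert_eq!((x, y), (1, 2));

    // A struct pattern also works, but only with explicit numeric field
    // names; shorthand like `Point { 0, 1 }` is not valid syntax.
    let Point { 0: a, 1: b } = Point(3, 4);
    assert_eq!((a, b), (3, 4));
}
```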
Aaron Hill
11b1e37016
Force token collection to run when parsing nonterminals
Fixes #81007

Previously, we would fail to collect tokens in the proper place when
only builtin attributes were present. As a result, we would end up with
attribute tokens in the collected `TokenStream`, leading to duplication
when we attempted to prepend the attributes from the AST node.

We now explicitly track when token collection must be performed due to
nonterminal parsing.
2021-01-20 18:09:32 -05:00
Aaron Hill
a961e6785c
Set tokens on AST node in collect_tokens
A new `HasTokens` trait is introduced, which is used to move logic from
the callers of `collect_tokens` into the body of `collect_tokens`.

In addition to reducing duplication, this paves the way for PR #80689,
which needs to perform additional logic during token collection.
2021-01-13 22:10:36 -05:00
bors
f30733adb9 Auto merge of #80441 - petrochenkov:kwtok, r=Aaron1011
ast: Remove some indirection layers from values in key-value attributes

Trying to address some perf regressions from https://github.com/rust-lang/rust/pull/78837#issuecomment-745380762.
2021-01-09 22:19:46 +00:00
Vadim Petrochenkov
71cd6f42a6 ast: Remove some indirection layers from values in key-value attributes 2021-01-09 21:50:39 +03:00
Aaron Hill
7b36408b5f
Use an empty TokenCursorFrame stack when capturing tokens
We will never need to pop past our starting frame during token
capturing. Using an empty stack allows us to avoid pointless heap
allocations/deallocations.
2021-01-08 18:16:20 -05:00
bors
44e3daf5ee Auto merge of #80459 - mark-i-m:or-pat-reg, r=petrochenkov
Implement edition-based macro :pat feature

This PR does two things:
1. Fixes the perf regression from https://github.com/rust-lang/rust/pull/80100#issuecomment-750893149
2. Implements `:pat2018` and `:pat2021` matchers, as described by `@joshtriplett`  in https://github.com/rust-lang/rust/issues/54883#issuecomment-745509090 behind the feature gate `edition_macro_pat`.

r? `@petrochenkov`

cc `@Mark-Simulacrum`
2020-12-31 14:52:26 +00:00
mark
40bf3c0f09 Implement edition-based macro pat feature 2020-12-30 09:57:49 -06:00
Yuki Okushi
4ae99cc843 Fix ICE when pointing at multi bytes character 2020-12-30 22:33:13 +09:00
mark
1a7d00a529 implement edition-specific :pat behavior for 2015/18 2020-12-19 07:13:36 -06:00
Aaron Hill
e6fa6334dd
Properly capture trailing 'unglued' token
If we try to capture the `Vec<u8>` in `Option<Vec<u8>>`, we'll
need to capture a `>` token which was 'unglued' from a `>>` token.
The processing of unglueing a token for parsing purposes bypasses the
usual capturing infrastructure, so we currently lose the trailing `>`.
As a result, we fall back to the reparsed `TokenStream`, causing us to
lose spans.

This commit makes token capturing keep track of a trailing 'unglued'
token. Note that we don't need to care about unglueing except at the end
of the captured tokens - if we capture both the first and second unglued
tokens, then we'll end up capturing the full 'glued' token, which
already works correctly.
2020-12-12 16:28:13 -05:00
Vadim Petrochenkov
31d72c2658 Accept arbitrary expressions in key-value attributes at parse time 2020-12-09 21:37:32 +03:00
Ryan Levick
823f64532c A slightly clearer diagnostic when misusing 2020-12-04 11:33:30 +01:00
Jonas Schievink
a732c3a369
Rollup merge of #78853 - calebcartwright:fix-const-block-expr-span, r=spastorino
rustc_parse: fix ConstBlock expr span

The span for a ConstBlock expression should presumably run through the end of the block it contains and not stop at the keyword, just like is done with similar block-containing expression kinds, such as a TryBlock
2020-11-28 15:58:15 +01:00
Aaron Hill
de88bf148b
Properly handle attributes on statements
We now collect tokens for the underlying node wrapped by `StmtKind`
instead of storing tokens directly in `Stmt`.

`LazyTokenStream` now supports capturing a trailing semicolon after it
is initially constructed. This allows us to avoid refactoring statement
parsing to wrap the parsing of the semicolon in `parse_tokens`.

Attributes on item statements
(e.g. `fn foo() { #[bar] struct MyStruct; }`) are now treated as
item attributes, not statement attributes, which is consistent with how
we handle attributes on other kinds of statements. The feature-gating
code is adjusted so that proc-macro attributes are still allowed on item
statements on stable.

Two built-in macros (`#[global_allocator]` and `#[test]`) needed to be
adjusted to support being passed `Annotatable::Stmt`.
2020-11-26 17:08:35 -05:00
Vadim Petrochenkov
2879ab793e rustc_parse: Remove optimization for 0-length streams in collect_tokens
The optimization conflates empty token streams with an unknown token stream, which is at least suspicious, and doesn't affect performance because 0-length token streams are very rare.
2020-11-12 22:00:48 +03:00
Caleb Cartwright
e1d5c3c054 fix(rustc_parse): ConstBlock expr span 2020-11-07 14:33:34 -06:00
Vadim Petrochenkov
d0c63bccc5 parser: Cleanup LazyTokenStream and avoid some clones
by using a named struct instead of a closure.
2020-10-31 01:56:34 +03:00
bors
20b1e05a8d Auto merge of #77502 - varkor:const-generics-suggest-enclosing-braces, r=petrochenkov
Suggest that expressions that look like const generic arguments should be enclosed in brackets

I pulled out the changes for const expressions from https://github.com/rust-lang/rust/pull/71592 (without the trait object diagnostic changes) and made some small changes; the implementation is `@estebank's.`

We're also going to want to make some changes separately to account for trait objects (they result in poor diagnostics, as is evident from one of the test cases here), such as an adaption of https://github.com/rust-lang/rust/pull/72273.

Fixes https://github.com/rust-lang/rust/issues/70753.

r? `@petrochenkov`
2020-10-27 09:25:54 +00:00
varkor
ac1454001c Suggest expressions that look like const generic arguments should be enclosed in brackets
Co-Authored-By: Esteban Kuber <github@kuber.com.ar>
2020-10-26 21:54:45 +00:00
bors
ffa2e7ae8f Auto merge of #77255 - Aaron1011:feature/collect-attr-tokens, r=petrochenkov
Unconditionally capture tokens for attributes.

This allows us to avoid synthesizing tokens in `prepend_attr`, since we
have the original tokens available.

We still need to synthesize tokens when expanding `cfg_attr`,
but this is an unavoidable consequence of the syntax of `cfg_attr` -
the user does not supply the `#` and `[]` tokens that a `cfg_attr`
expands to.

This is based on PR https://github.com/rust-lang/rust/pull/77250 - this PR exposes a bug in the current `collect_tokens` implementation, which is fixed by the rewrite.
2020-10-24 19:23:32 +00:00
Santiago Pastorino
83abed9df6
Make inline const work for half open ranges 2020-10-22 13:22:12 -03:00
Santiago Pastorino
954b5a81b4
Rename parse_const_expr to parse_const_block 2020-10-22 13:21:18 -03:00
Aaron Hill
920bed1213
Don't create an empty LazyTokenStream 2020-10-22 10:09:08 -04:00
bors
22e6b9c689 Auto merge of #77250 - Aaron1011:feature/flat-token-collection, r=petrochenkov
Rewrite `collect_tokens` implementations to use a flattened buffer

Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-21 15:03:14 +00:00
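A toy illustration of the flattening-and-rebuilding idea described above; it is unrelated to the actual rustc data structures and uses plain chars as tokens.

```rust
// Record open/close delimiters as ordinary entries while walking tokens,
// then rebuild the nested structure on demand.
#[derive(Debug, PartialEq)]
enum Flat {
    Token(char),
    Open,
    Close,
}

#[derive(Debug, PartialEq)]
enum Tree {
    Token(char),
    Delimited(Vec<Tree>),
}

// Reconstruct nested trees from the flat record of what was seen.
fn rebuild(flat: &[Flat]) -> Vec<Tree> {
    let mut stack = vec![Vec::new()];
    for entry in flat {
        match entry {
            Flat::Token(c) => stack.last_mut().unwrap().push(Tree::Token(*c)),
            Flat::Open => stack.push(Vec::new()),
            Flat::Close => {
                let inner = stack.pop().unwrap();
                stack.last_mut().unwrap().push(Tree::Delimited(inner));
            }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    // `a ( b c )` recorded flat, then rebuilt as a nested tree.
    let flat = vec![
        Flat::Token('a'),
        Flat::Open,
        Flat::Token('b'),
        Flat::Token('c'),
        Flat::Close,
    ];
    assert_eq!(
        rebuild(&flat),
        vec![
            Tree::Token('a'),
            Tree::Delimited(vec![Tree::Token('b'), Tree::Token('c')]),
        ]
    );
}
```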
Yuki Okushi
de24210ebf
Rollup merge of #78118 - spastorino:inline-const-followups, r=petrochenkov
Inline const followups

r? @petrochenkov

Follow ups of #77124
2020-10-21 13:59:44 +09:00
Santiago Pastorino
d641cb82c1
Allow NtBlock to parse on check inline const next token 2020-10-19 18:50:58 -03:00
Aaron Hill
593fdd3d45
Rewrite collect_tokens implementations to use a flattened buffer
Instead of trying to collect tokens at each depth, we 'flatten' the
stream as we go along, pushing open/close delimiters to our buffer
just like regular tokens. Once capturing is complete, we reconstruct a
nested `TokenTree::Delimited` structure, producing a normal
`TokenStream`.

The reconstructed `TokenStream` is not created immediately - instead, it is
produced on-demand by a closure (wrapped in a new `LazyTokenStream` type). This
closure stores a clone of the original `TokenCursor`, plus a record of the
number of calls to `next()/next_desugared()`. This is sufficient to reconstruct
the tokenstream seen by the callback without storing any additional state. If
the tokenstream is never used (e.g. when a captured `macro_rules!` argument is
never passed to a proc macro), we never actually create a `TokenStream`.

This implementation has a number of advantages over the previous one:

* It is significantly simpler, with no edge cases around capturing the
  start/end of a delimited group.

* It can be easily extended to allow replacing tokens at an arbitrary
  'depth' by just using `Vec::splice` at the proper position. This is
  important for PR #76130, which requires us to track information about
  attributes along with tokens.

* The lazy approach to `TokenStream` construction allows us to easily
  parse an AST struct, and then decide after the fact whether we need a
  `TokenStream`. This will be useful when we start collecting tokens for
  `Attribute` - we can discard the `LazyTokenStream` if the parsed
  attribute doesn't need tokens (e.g. is a builtin attribute).

The performance impact seems to be negligible (see
https://github.com/rust-lang/rust/pull/77250#issuecomment-703960604). There is a
small slowdown on a few benchmarks, but it only rises above 1% for incremental
builds, where it represents a larger fraction of the much smaller instruction
count. There is a ~1% speedup on a few other incremental benchmarks - my guess is
that the speedups and slowdowns will usually cancel out in practice.
2020-10-19 13:59:18 -04:00
Aaron Hill
f6aec82d4d
Avoid cloning the contents of a TokenStream in a few places 2020-10-19 12:30:41 -04:00
Santiago Pastorino
c3e8d7965c
Parse inline const expressions 2020-10-16 15:15:30 -03:00
Esteban Küber
e5f83bcd04 Detect blocks that could be struct expr bodies
This approach lives exclusively in the parser, so struct expr bodies
that are syntactically correct on their own but are otherwise incorrect
will still emit confusing errors, like in the following case:

```rust
fn foo() -> Foo {
    bar: Vec::new()
}
```

```
error[E0425]: cannot find value `bar` in this scope
 --> src/file.rs:5:5
  |
5 |     bar: Vec::new()
  |     ^^^ expecting a type here because of type ascription

error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
 --> src/file.rs:5:15
  |
5 |     bar: Vec::new()
  |               ^^^^^ only `Fn` traits may use parentheses

error[E0107]: wrong number of type arguments: expected 1, found 0
 --> src/file.rs:5:10
  |
5 |     bar: Vec::new()
  |          ^^^^^^^^^^ expected 1 type argument
  ```

If that field had a trailing comma, that would be a parse error and it
would trigger the new, more targeted, error:

```
error: struct literal body without path
 --> file.rs:4:17
  |
4 |   fn foo() -> Foo {
  |  _________________^
5 | |     bar: Vec::new(),
6 | | }
  | |_^
  |
help: you might have forgotten to add the struct literal inside the block
  |
4 | fn foo() -> Foo { Path {
5 |     bar: Vec::new(),
6 | } }
  |
```

Partially address last part of #34255.
2020-10-07 13:40:52 -07:00
Vadim Petrochenkov
219c66c55c rustc_parse: Make Parser::unexpected public and use it in built-in macros 2020-10-06 00:23:36 +03:00
Aurélien Deharbe
62068a59ee repairing broken error message and rustfix application for the new test
case
2020-09-11 17:31:52 +02:00
Aaron Hill
c1011165e6
Attach TokenStream to ast::Visibility
A `Visibility` does not have outer attributes, so we only capture tokens
when parsing a `macro_rules!` matcher
2020-09-10 17:33:06 -04:00
Aleksey Kladov
ccf41dd5eb Rename IsJoint -> Spacing
To match better naming from proc-macro
2020-09-03 17:32:45 +02:00
bors
80fc9b0ecb Auto merge of #76160 - scileo:format-recovery, r=petrochenkov
Improve recovery on malformed format call

The token following a format expression should be a comma. However, when it is replaced with a similar token (such as a dot), then the corresponding error is emitted, but the token is treated as a comma, and the parsing step continues.

r? @petrochenkov
2020-09-02 19:29:27 +00:00
Sasha
3524c3ef43 Improve recovery on malformed format call
If a comma in a format call is replaced with a similar token, then we
emit an error and continue parsing, instead of stopping at this point.
2020-09-02 13:18:19 +02:00
Caleb Cartwright
883b1e7592 parser: restore some fn visibility for rustfmt 2020-08-30 13:04:36 -05:00
mark
9e5f7d5631 mv compiler to compiler/ 2020-08-30 18:45:07 +03:00