Do not parenthesize exterior struct lit inside match guards
Before this PR, the AST pretty-printer injects parentheses around expressions any time parens _could_ be needed, depending on what else surrounds that expression in the code. But the pretty-printer does not pass around enough context to know whether parentheses really _are_ needed for any particular expression. As a consequence, there are false positives where unneeded parentheses get inserted.
Example:
```rust
#![feature(if_let_guard)]

struct Struct {}

macro_rules! pp {
    ($e:expr) => {
        stringify!($e)
    };
}

fn main() {
    println!("{}", pp!(match () { () if let _ = Struct {} => {} }));
}
```
**Before:**
```console
match () { () if let _ = (Struct {}) => {} }
```
**After:**
```console
match () { () if let _ = Struct {} => {} }
```
This PR introduces a bit of state that is passed between the various expression-printing methods so the pretty printer can accurately determine whether a particular position requires parentheses, and it fixes one such false positive involving match guards, as shown above.
There are other parenthesization false positives not fixed by this PR; I intend to address those in follow-up PRs. For example, the expression `{ let _ = match x {} + 1; }` is pretty-printed as `{ let _ = (match x {}) + 1; }` despite there being no reason for parentheses to appear there.
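For contrast, here is an illustrative case (not from the PR) where parentheses around an exterior struct literal really are required:
```rust
struct S {
    x: u32,
}

fn scrutinee(s: S) -> u32 {
    // A struct literal directly in a match scrutinee would be ambiguous with
    // the match body's braces, so parentheses are genuinely needed here.
    match (S { x: s.x }) {
        S { x } => x,
    }
}

fn main() {
    assert_eq!(scrutinee(S { x: 7 }), 7);
}
```
In the match-guard `let` scrutinee shown above, by contrast, no parentheses are needed, which is what this PR fixes.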
This is an extension of the previous commit. It means the output of
something like this:
```
stringify!(let a: Vec<u32> = vec![];)
```
goes from this:
```
let a: Vec<u32> = vec![] ;
```
to this:
```
let a: Vec<u32> = vec![];
```
`tokenstream::Spacing` appears on all `TokenTree::Token` instances,
both punct and non-punct. Its current usage:
- `Joint` means "can join with the next token *and* that token is a
punct".
- `Alone` means "cannot join with the next token *or* can join with the
next token but that token is not a punct".
The fact that `Alone` is used for two different cases is awkward.
This commit augments `tokenstream::Spacing` with a new variant
`JointHidden`, resulting in:
- `Joint` means "can join with the next token *and* that token is a
punct".
- `JointHidden` means "can join with the next token *and* that token is
not a punct".
- `Alone` means "cannot join with the next token".
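For reference, a minimal sketch of the augmented enum described by the list above (the real definition in `rustc_ast` carries more doc comments and derives):
```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Spacing {
    /// Cannot be joined with the next token.
    Alone,
    /// Can be joined with the next token, and that token is a punct.
    Joint,
    /// Can be joined with the next token, and that token is not a punct.
    JointHidden,
}
```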
This *drastically* improves the output of `print_tts`. For example,
this:
```
stringify!(let a: Vec<u32> = vec![];)
```
currently produces this string:
```
let a : Vec < u32 > = vec! [] ;
```
With this PR, it now produces this string:
```
let a: Vec<u32> = vec![] ;
```
(The space after the `]` is because `TokenTree::Delimited` currently
doesn't have spacing information. The subsequent commit fixes this.)
The new `print_tts` doesn't replicate original code perfectly. E.g.
multiple space characters will be condensed into a single space
character. But it's much improved.
`print_tts` still produces the old, uglier output for code produced by
proc macros, because we have to translate the generated code from
`proc_macro::Spacing` to the more expressive `token::Spacing`, and that
translation results in too much `Alone` usage and no `JointHidden`
usage. So `space_between` still exists and is used by `print_tts` in
conjunction with the `Spacing` field.
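To illustrate the limitation, here is a sketch with stand-in enums (not rustc's real types): the proc-macro side only distinguishes `Alone` and `Joint`, so the translation can never produce `JointHidden`.
```rust
// Stand-ins for the two spacing types described above.
enum ProcMacroSpacing { Alone, Joint }
enum TokenSpacing { Alone, Joint, JointHidden }

fn from_proc_macro(s: ProcMacroSpacing) -> TokenSpacing {
    match s {
        ProcMacroSpacing::Joint => TokenSpacing::Joint,
        // "Joinable with a non-punct" is not representable on the proc-macro
        // side, so everything else collapses to Alone.
        ProcMacroSpacing::Alone => TokenSpacing::Alone,
    }
}

fn main() {
    // Note: TokenSpacing::JointHidden is never produced here, which is
    // exactly the problem described above.
    for s in [ProcMacroSpacing::Alone, ProcMacroSpacing::Joint] {
        let _ = from_proc_macro(s);
    }
}
```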
This change will also help with the removal of `Token::Interpolated`.
Currently interpolated tokens are pretty-printed nicely via AST pretty
printing. `Token::Interpolated` removal will mean they get printed with
`print_tts`. Without this change, that would result in much uglier
output for code produced by decl macro expansions. With this change, AST
pretty printing and `print_tts` produce similar results.
The commit also tweaks the comments on `proc_macro::Spacing`. In
particular, it refers to "compound tokens" rather than "multi-char
operators" because lifetimes aren't operators.
tidy error: /git/rust/compiler/rustc_ast_pretty/src/pprust/state.rs:1165: unexplained "```ignore" doctest; try one:
* make the test actually pass, by adding necessary imports and declarations, or
* use "```text", if the code is not Rust code, or
* use "```compile_fail,Ennnn", if the code is expected to fail at compile time, or
* use "```should_panic", if the code is expected to fail at run time, or
* use "```no_run", if the code should type-check but not necessary linkable/runnable, or
* explain it like "```ignore (cannot-test-this-because-xxxx)", if the annotation cannot be avoided.
tidy error: /git/rust/compiler/rustc_ast_pretty/src/pprust/state.rs:1176: unexplained "```ignore" doctest; try one:
* make the test actually pass, by adding necessary imports and declarations, or
* use "```text", if the code is not Rust code, or
* use "```compile_fail,Ennnn", if the code is expected to fail at compile time, or
* use "```should_panic", if the code is expected to fail at run time, or
* use "```no_run", if the code should type-check but not necessary linkable/runnable, or
* explain it like "```ignore (cannot-test-this-because-xxxx)", if the annotation cannot be avoided.
In all four of Break, Closure, Ret, and Yeet, needs_par_as_let_scrutinee is
guaranteed to return true because the .precedence().order() of those
expr kinds is <= AssocOp::LAnd.precedence().
The relevant functions in rustc_ast::util::parser are:

fn needs_par_as_let_scrutinee(order: i8) -> bool {
    order <= prec_let_scrutinee_needs_par() as i8
}

fn prec_let_scrutinee_needs_par() -> usize {
    AssocOp::LAnd.precedence()
}
The .precedence().order() of Closure is PREC_CLOSURE (-40) and of Break,
Ret, Yeet is PREC_JUMP (-30).
The value of AssocOp::LAnd.precedence() is 6.
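A minimal stand-alone check of that claim, using the values quoted above (these constants are copied from this commit message, not the real definitions in rustc_ast::util::parser):
```rust
// Constants as quoted above; illustrative only.
const PREC_CLOSURE: i8 = -40; // ExprKind::Closure
const PREC_JUMP: i8 = -30;    // ExprKind::{Break, Ret, Yeet}
const LAND_PREC: i8 = 6;      // AssocOp::LAnd.precedence()

fn needs_par_as_let_scrutinee(order: i8) -> bool {
    order <= LAND_PREC
}

fn main() {
    // Both precedences are <= 6, so the check is always true for these kinds.
    assert!(needs_par_as_let_scrutinee(PREC_CLOSURE));
    assert!(needs_par_as_let_scrutinee(PREC_JUMP));
}
```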
So this commit causes no change in behavior, only a potential performance
cost from a redundant call to contains_exterior_struct_lit in those four
cases. This is fine because Break, Closure, Ret, and Yeet should be
exceedingly rare in the position of a let scrutinee.
Introduce support for `async gen` blocks
I'm delighted to demonstrate that `async gen` blocks are not very difficult to support. They're simply coroutines that yield `Poll<Option<T>>` and return `()`.
**This PR is WIP and in draft mode for now** -- I'm mostly putting it up to show folks that it's possible. This PR needs a lang-team experiment associated with it or possibly an RFC, since I don't think it falls under the jurisdiction of the `gen` RFC that was recently authored by oli (https://github.com/rust-lang/rfcs/pull/3513, https://github.com/rust-lang/rust/issues/117078).
### Technical note on the pre-generator-transform yield type:
The reason that the underlying coroutines yield `Poll<Option<T>>` and not `Poll<T>` (which would make more sense, IMO, for the pre-transformed coroutine), is because the `TransformVisitor` that is used to turn coroutines into built-in state machine functions would have to destructure and reconstruct the latter into the former, which requires at least inserting a new basic block (for a `switchInt` terminator, to match on the `Poll` discriminant).
This does mean that the desugaring (at the `rustc_ast_lowering` level) of `async gen` blocks is a bit more involved. However, since we already need to intercept both `.await` and `yield` operators, I don't consider it much of a technical burden.
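As a rough illustration of why that shape is convenient (a stand-in sketch, not rustc's real coroutine machinery): a value yielded as `Poll<Option<T>>` can be forwarded directly as an async iterator's `poll_next` result, with no destructuring or re-wrapping in between.
```rust
use std::task::Poll;

// Stand-in for what one resumption of the pre-transform coroutine produces:
// either a yielded poll result, or completion (the coroutine returns `()`).
enum Step<T> {
    Yielded(Poll<Option<T>>),
    Done,
}

// Forwarding the yielded value needs no extra basic block or `switchInt`:
// Pending, Ready(Some(item)), and Ready(None) all pass through unchanged.
fn poll_next_from_step<T>(step: Step<T>) -> Poll<Option<T>> {
    match step {
        Step::Yielded(p) => p,
        Step::Done => Poll::Ready(None),
    }
}

fn main() {
    let step: Step<u32> = Step::Yielded(Poll::Ready(Some(1)));
    assert!(matches!(poll_next_from_step(step), Poll::Ready(Some(1))));
}
```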
r? `@ghost`
never_patterns: Parse match arms with no body
Never patterns are meant to signal unreachable cases, and thus don't take bodies:
```rust
let ptr: *const Option<!> = ...;
match *ptr {
    None => { foo(); }
    Some(!),
}
```
This PR makes rustc accept the above, and enforces that an arm has a body xor is a never pattern. This affects parsing of match arms even with the feature off, so this is delicate. (Plus this is my first non-trivial change to the parser).
~~The last commit is optional; it introduces a bit of churn to allow the new suggestions to be machine-applicable. There may be a better solution? I'm not sure.~~ EDIT: I removed that commit
r? `@compiler-errors`
Add `never_patterns` feature gate
This PR adds the feature gate and most basic parsing for the experimental `never_patterns` feature. See the tracking issue (https://github.com/rust-lang/rust/issues/118155) for details on the experiment.
`@scottmcm` has agreed to be my lang-team liaison for this experiment.
Remove HIR opkinds
`hir::BinOp`, `hir::BinOpKind`, and `hir::UnOp` are identical to `ast::BinOp`, `ast::BinOpKind`, and `ast::UnOp`, respectively. This seems silly, so this PR removes the HIR ones. (A re-export lets the AST ones be referred to using a `hir::` qualifier, which avoids renaming churn.)
r? `@cjgillot`
- Rename them both to `as_str`, which is the typical name for a function
that returns a `&str`. (`to_string` is appropriate for functions
returning `String` or maybe `Cow<'a, str>`.)
- Change `UnOp::as_str` from an associated function (weird!) to a
method.
- Avoid needless `self` dereferences.
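A minimal sketch of the shape these bullets describe, shown for `UnOp` only (the strings follow `ast::UnOp`'s variants; the rest of the renamed code is elided):
```rust
#[derive(Clone, Copy)]
enum UnOp {
    Deref,
    Not,
    Neg,
}

impl UnOp {
    // Now a method rather than an associated function, and named `as_str`
    // because it returns a `&str`.
    fn as_str(self) -> &'static str {
        match self {
            UnOp::Deref => "*",
            UnOp::Not => "!",
            UnOp::Neg => "-",
        }
    }
}

fn main() {
    assert_eq!(UnOp::Neg.as_str(), "-");
}
```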
The AST and HIR versions of `State::print_ident` are textually
identical, but the types differ slightly. This commit factors out the
common code by replacing `print_ident` with `ann_post`, a smaller
function that still captures the type difference.
`PrintState` is a trait containing code that can be used by both AST and
HIR pretty-printing. But several of its methods are only used by AST
printing.
This commit moves those methods out of the trait and into the AST
`State` impl, so they are not exposed unnecessarily. This commit also
removes four unused methods: `param_to_string`,
`foreign_item_to_string`, `assoc_item_to_string`, and
`print_inner_attributes_inline`.
This avoids `!matches!(...)`, which is hard to think about; instead, every
case now uses direct pattern matching and returns true or false.
Also add a couple of cases to the `stringify.rs` test that currently
print badly.
- Sort dependencies and features sections.
- Add `tidy` markers to the sorted sections so they stay sorted.
- Remove empty `[lib]` sections.
- Remove "See more keys..." comments.
Excluded files:
- rustc_codegen_{cranelift,gcc}, because they're external.
- rustc_lexer, because it has external use.
- stable_mir, because it has external use.
docs: add Rust logo to more compiler crates
c6e6ecb1af added it to some of the compiler's crates, but avoided adding it to all of them to reduce bit-rot. This commit adds it to more of them.
r? `@GuillaumeGomez`
There was an incomplete version of the check in parsing and a second
version in AST validation. This meant that some, but not all, invalid
uses were allowed inside macros/disabled cfgs. It also meant that later
passes had a hard time knowing whether a let expression was in a valid
location, sometimes causing ICEs.
- Add a field to ExprKind::Let in AST/HIR to mark whether it's in a
valid location.
- Suppress later errors and MIR construction for invalid let
expressions.
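For illustration (hypothetical code, not from the PR), the distinction the new validity flag tracks:
```rust
// Valid: a `let` expression as the condition of an `if`.
fn describe(opt: Option<i32>) -> &'static str {
    if let Some(x) = opt {
        if x > 0 { "positive" } else { "non-positive" }
    } else {
        "none"
    }
}

fn main() {
    assert_eq!(describe(Some(3)), "positive");
}

// By contrast, something like `let _ = (let x = 1);` puts the inner `let` in
// an invalid position; before this change, such uses could slip through when
// hidden behind macros or disabled cfgs.
```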
Parse unnamed fields and anonymous structs or unions (no-recovery)
It is part of #114782 which implements #49804. Only parse anonymous structs or unions in struct field definition positions.
r? `@petrochenkov`
Anonymous structs or unions are only allowed in struct field
definitions.
Co-authored-by: carbotaniuman <41451839+carbotaniuman@users.noreply.github.com>
Indexing is similar to method calls in having an arbitrary
left-hand-side and then something on the right, which is the main part
of the expression. Method calls already have a span for that right part,
but indexing does not. This means that long method chains that use
indexing have really bad spans, especially when the indexing panics and
that span is converted into a panic location.
This does the same thing as method calls for the AST and HIR, storing an
extra span which is then put into the `fn_span` field in THIR.
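A hypothetical example (not from the commit) of the kind of chain this helps with:
```rust
// With a dedicated span for the bracketed part, a panic in the final `[0]`
// can be attributed to that indexing step rather than to the whole chain.
fn first_byte(words: &[String], i: usize) -> u8 {
    words[i].as_bytes()[0]
}

fn main() {
    let words = vec![String::from("hi")];
    println!("{}", first_byte(&words, 0));
}
```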
It's the same as `Delimiter`, minus the `Invisible` variant. I'm
generally in favour of using types to make impossible states
unrepresentable, but this one feels very low-value, and the conversions
between the two types are annoying and confusing.
Look at the change in `src/tools/rustfmt/src/expr.rs` for an example:
the old code converted from `MacDelimiter` to `Delimiter` and back
again, for no good reason. This suggests the author was confused about
the types.
My type ascription
Oh rip it out
Ah
If you think we live too much then
You can sacrifice diagnostics
Don't mix your garbage
Into my syntax
So many weird hacks keep diagnostics alive
Yet I don't even step outside
So many bad diagnostics keep tyasc alive
Yet tyasc doesn't even bother to survive!
Close parentheses for `offset_of` in AST pretty printing
HIR pretty printing already handles it correctly.
This will conflict with #110694 but it seems like that PR is gonna take bit more time.
Tweak await span to not contain dot
Fixes a discrepancy between method calls and await expressions where the latter are desugared to have a span that *contains* the dot (i.e. `.await`) but method call identifiers don't contain the dot. This leads to weird suggestions in borrowck -- see the linked issue.
Fixes #110761
This mostly touches a bunch of tests to tighten their `await` span.
Move format_args!() into AST (and expand it during AST lowering)
Implements https://github.com/rust-lang/compiler-team/issues/541
This moves FormatArgs from rustc_builtin_macros to rustc_ast_lowering. For now, the end result is the same. But this allows for future changes to do smarter things with format_args!(). It also allows Clippy to directly access the ast::FormatArgs, making things a lot easier.
This change turns the format args types into lang items. The builtin macro used to refer to them by their path. After this change, the path is no longer relevant, making it easier to make changes in `core`.
This updates clippy to use the new language items, but this doesn't yet make clippy use the ast::FormatArgs structure that's now available. That should be done after this is merged.
Convert all the crates that have had their diagnostic migration
completed (except save_analysis because that will be deleted soon and
apfloat because of the licensing problem).
Remove `token::Lit` from `ast::MetaItemLit`.
Currently `ast::MetaItemLit` represents the literal kind twice. This PR removes that redundancy. Best reviewed one commit at a time.
r? `@petrochenkov`
Remove useless borrows and derefs
They are nothing more than noise.
<sub>These are not all of them, but my clippy started crashing (stack overflow), so rip :(</sub>
`token::Lit` contains a `kind` field that indicates what kind of literal
it is. `ast::MetaItemLit` currently wraps a `token::Lit` but also has
its own `kind` field. This means that `ast::MetaItemLit` encodes the
literal kind in two different ways.
This commit changes `ast::MetaItemLit` so it no longer wraps
`token::Lit`. It now contains the `symbol` and `suffix` fields from
`token::Lit`, but not the `kind` field, eliminating the redundancy.
This is required to distinguish between cooked and raw byte string
literals in an `ast::LitKind`, without referring to an adjacent
`token::Lit`. It's a prerequisite for the next commit.
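A rough sketch of the resulting shape, with stand-in types so it is self-contained (rustc's real `Symbol`, `LitKind`, and `Span` are more involved, and field details may differ):
```rust
// Stand-ins for rustc types, just to keep the sketch compilable.
type Symbol = String;
type Span = std::ops::Range<usize>;
enum LitKind {
    Bool(bool),
    Str(Symbol),
    Int(u128),
    // ...
}

// After this change there is no embedded token::Lit, so the literal kind is
// stored exactly once, in `kind`.
struct MetaItemLit {
    symbol: Symbol,         // the original token text
    suffix: Option<Symbol>, // e.g. `u8` in `1u8`
    kind: LitKind,
    span: Span,
}
```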
Lower them into a single item with multiple resolutions instead.
This also allows removing the additional `NodeId`s and `DefId`s related to those additional items.
There is code for converting `Attribute` (syntactic) to `MetaItem`
(semantic). There is also code for the reverse direction. The reverse
direction isn't really necessary; it's currently only used when
generating attributes, e.g. in `derive` code.
This commit adds some new functions for creating `Attribute`s directly,
without involving `MetaItem`s: `mk_attr_word`, `mk_attr_name_value_str`,
`mk_attr_nested_word`, and
`ExtCtxt::attr_{word,name_value_str,nested_word}`.
These new methods replace the old functions for creating `Attribute`s:
`mk_attr_inner`, `mk_attr_outer`, and `ExtCtxt::attribute`. Those
functions took `MetaItem`s as input, and relied on many other functions
that created `MetaItem`s, which are also removed: `mk_name_value_item`,
`mk_list_item`, `mk_word_item`, `mk_nested_word_item`,
`{MetaItem,MetaItemKind,NestedMetaItem}::token_trees`,
`MetaItemKind::attr_args`, `MetaItemLit::{from_lit_kind,to_token}`,
`ExtCtxt::meta_word`.
Overall this cuts more than 100 lines of code and makes things simpler.
`Lit::from_included_bytes` calls `Lit::from_lit_kind`, but the two call
sites only need the resulting `token::Lit`, not the full `ast::Lit`.
This commit changes those call sites to use `LitKind::to_token_lit`,
which means `from_included_bytes` can be removed.
`MacArgs` is an enum with three variants: `Empty`, `Delimited`, and `Eq`. It's
used in two ways:
- For representing attribute macro arguments (e.g. in `AttrItem`), where all
three variants are used.
- For representing function-like macros (e.g. in `MacCall` and `MacroDef`),
where only the `Delimited` variant is used.
In other words, `MacArgs` is used for two quite different purposes that only
partially overlap. I find this makes the code hard to read. It also leads
to various unreachable code paths, and allows invalid values (such as
accidentally using `MacArgs::Empty` in a `MacCall`).
This commit splits `MacArgs` in two:
- `DelimArgs` is a new struct just for the "delimited arguments" case. It is
now used in `MacCall` and `MacroDef`.
- `AttrArgs` is a renaming of the old `MacArgs` enum for the attribute macro
case. Its `Delimited` variant now contains a `DelimArgs`.
Various other related things are renamed as well.
These changes make the code clearer, avoid several unreachable paths, and
disallow the invalid values.
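A rough sketch of the split, with stand-in field types so it is self-contained (the field names and the payload of the `Eq` case are assumptions here):
```rust
// Stand-ins for the real rustc types.
struct DelimSpan;
enum Delimiter { Parenthesis, Bracket, Brace }
struct TokenStream;
struct AttrArgsEq; // the `= <value>` payload, elided here

// The "delimited arguments" case, now used by `MacCall` and `MacroDef`.
struct DelimArgs {
    dspan: DelimSpan,
    delim: Delimiter,
    tokens: TokenStream,
}

// The attribute-macro case, where all three forms are possible.
enum AttrArgs {
    Empty,
    Delimited(DelimArgs),
    Eq(AttrArgsEq),
}
```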
Instead of `ast::Lit`.
Literal lowering now happens at two different times. Expression literals
are lowered when HIR is created. Attribute literals are lowered during
parsing.
This commit changes the language very slightly. Some programs that used
to not compile now will compile. This is because some invalid literals
that are removed by `cfg` or attribute macros will no longer trigger
errors. See this comment for more details:
https://github.com/rust-lang/rust/pull/102944#issuecomment-1277476773
`BindingAnnotation` refactor
* `ast::BindingMode` is deleted and replaced with `hir::BindingAnnotation` (which is moved to `ast`)
* `BindingAnnotation` is changed from an enum to a tuple struct e.g. `BindingAnnotation(ByRef::No, Mutability::Mut)`
* Associated constants added for convenience `BindingAnnotation::{NONE, REF, MUT, REF_MUT}`
One goal is to make it more clear that `BindingAnnotation` merely represents syntax `ref mut` and not the actual binding mode. This was especially confusing since we had `ast::BindingMode`->`hir::BindingAnnotation`->`thir::BindingMode`.
I wish there were more symmetry between `ByRef` and `Mutability` (variant) naming (maybe `Mutable::Yes`?), and I also don't love how long the name `BindingAnnotation` is, but this seems like the best compromise. Ideas welcome.
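A compact sketch assembled from the bullet points above (the `ByRef` and `Mutability` stand-ins are simplified):
```rust
#[derive(Clone, Copy, PartialEq, Eq)]
enum ByRef { Yes, No }

#[derive(Clone, Copy, PartialEq, Eq)]
enum Mutability { Not, Mut }

// Represents only the written syntax (`ref`, `mut`, `ref mut`, or nothing),
// not the binding mode that is computed later.
#[derive(Clone, Copy, PartialEq, Eq)]
struct BindingAnnotation(ByRef, Mutability);

impl BindingAnnotation {
    const NONE: Self = Self(ByRef::No, Mutability::Not);
    const REF: Self = Self(ByRef::Yes, Mutability::Not);
    const MUT: Self = Self(ByRef::No, Mutability::Mut);
    const REF_MUT: Self = Self(ByRef::Yes, Mutability::Mut);
}

fn main() {
    // `ref mut x` corresponds to BindingAnnotation(ByRef::Yes, Mutability::Mut).
    assert!(BindingAnnotation::REF_MUT == BindingAnnotation(ByRef::Yes, Mutability::Mut));
}
```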
- Rename `ast::Lit::token` as `ast::Lit::token_lit`, because its type is
`token::Lit`, which is not a token. (This has been confusing me for a
long time.)
The new name is reasonable because we have an `ast::token::Lit` inside an `ast::Lit`.
- Rename `LitKind::{from,to}_lit_token` as
`LitKind::{from,to}_token_lit`, to match the above change and
`token::Lit`.
Stringify non-shorthand visibility correctly
This makes `stringify!(pub(in crate))` evaluate to `pub(in crate)` rather than `pub(crate)`, matching the behavior before the `crate` shorthand was removed. Further, this changes `stringify!(pub(in super))` to evaluate to `pub(in super)` rather than the current `pub(super)`. If the latter is not desired (it is _technically_ breaking), it can be undone.
Fixes #99981
`@rustbot` label +C-bug +regression-from-stable-to-beta +T-compiler
A `TokenStream` contains a `Lrc<Vec<(TokenTree, Spacing)>>`. But this is
not quite right. `Spacing` makes sense for `TokenTree::Token`, but does
not make sense for `TokenTree::Delimited`, because a
`TokenTree::Delimited` cannot be joined with another `TokenTree`.
This commit fixes this problem, by adding `Spacing` to `TokenTree::Token`,
changing `TokenStream` to contain a `Lrc<Vec<TokenTree>>`, and removing the
`TreeAndSpacing` typedef.
The commit removes these two impls:
- `impl From<TokenTree> for TokenStream`
- `impl From<TokenTree> for TreeAndSpacing`
These were useful, but also resulted in code with many `.into()` calls
that was hard to read, particularly for anyone not highly familiar with
the relevant types. This commit makes some other changes to compensate:
- `TokenTree::token()` becomes `TokenTree::token_{alone,joint}()`.
- `TokenStream::token_{alone,joint}()` are added.
- `TokenStream::delimited` is added.
This results in things like this:
```rust
TokenTree::token(token::Semi, stmt.span).into()
```
changing to this:
```rust
TokenStream::token_alone(token::Semi, stmt.span)
```
This makes the type of the result, and its spacing, clearer.
These changes also simplify `Cursor` and `CursorRef`, because they no longer
need to distinguish between `next` and `next_with_spacing`.
The change was "Show invisible delimiters (within comments) when pretty
printing". It's useful to show these delimiters, but is a breaking
change for some proc macros.
Fixes #97608.
Begin fixing all the broken doctests in `compiler/`
Begins to fix #95994.
All of them pass now, but I've marked 24 of them with `ignore HELP (<explanation>)` (asking for help), as I'm unsure how to get them to work / whether we should leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring after it in parentheses, with
- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.
Those reason notes are a bit inconsistently applied and messy though. If that's important I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions other people have used for this sort of thing. I could try unifying them all if that would be helpful.
I'm not sure if there was a better existing way to do this but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
Overhaul `MacArgs`
Motivation:
- Clarify some code that I found hard to understand.
- Eliminate one of the three places where `TokenKind::Interpolated` values are created.
r? `@petrochenkov`
The value in `MacArgs::Eq` is currently represented as a `Token`.
Because of `TokenKind::Interpolated`, `Token` can be either a token or
an arbitrary AST fragment. In practice, a `MacArgs::Eq` starts out as a
literal or macro call AST fragment, and then is later lowered to a
literal token. But this is very non-obvious. `Token` is a much more
general type than what is needed.
This commit restricts things, by introducing a new type `MacArgsEqKind`
that is either an AST expression (pre-lowering) or an AST literal
(post-lowering). The downside is that the code is a bit more verbose in
a few places. The benefit is that it makes it much clearer what the
possibilities are (and the code is also shorter in some other places). Also, it
removes one use of `TokenKind::Interpolated`, taking us a step closer to
removing that variant, which will let us make `Token` impl `Copy` and
remove many "handle Interpolated" code paths in the parser.
Things to note:
- Error messages have improved. Messages like this:
```
unexpected token: `"bug" + "found"`
```
now say "unexpected expression", which makes more sense. Although
arbitrary expressions can exist within tokens thanks to
`TokenKind::Interpolated`, that's not obvious to anyone who doesn't
know compiler internals.
- In `parse_mac_args_common`, we no longer need to collect tokens for
the value expression.