Fix `DebugParser`.
I tried using this and it didn't work at all. `prev_token` is never `Eof`, so the accumulator is always false, which means the `then_some` call always returns `None`, which means the `scan` iterator always returns `None`, and `tokens` always ends up as an empty vec. I'm not sure how this code was supposed to work.
(An aside: I find `Iterator::scan` to be a pretty wretched function that produces code which is very hard to understand. That's probably why this is just one of two uses of it in the entire compiler.)
This commit changes it to a simpler imperative style that produces a valid `tokens` vec.
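For readers unfamiliar with the failure mode, here is a minimal, self-contained sketch (toy token type, not the compiler's actual code) of why a `scan` accumulator that never flips produces an empty vec, next to the imperative shape this commit switches to:

```
#[derive(Clone, Copy, PartialEq, Debug)]
enum Tok {
    Ident,
    Comma,
    Eof,
}

fn main() {
    let stream = [Tok::Ident, Tok::Comma, Tok::Ident, Tok::Eof];

    // The accumulator never becomes true (standing in for "`prev_token` is
    // never `Eof`"), so `then_some` returns `None` on the very first call.
    // A `scan` closure returning `None` ends the iteration, so the
    // collected vec is empty -- the symptom described above.
    let broken: Vec<Tok> = stream
        .iter()
        .copied()
        .scan(false, |flag, tok| flag.then_some(tok))
        .collect();
    assert!(broken.is_empty());

    // The imperative replacement: push every token, stop after `Eof`.
    // The stopping condition is explicit and easy to check by eye.
    let mut tokens = Vec::new();
    for tok in stream {
        tokens.push(tok);
        if tok == Tok::Eof {
            break;
        }
    }
    assert_eq!(tokens, [Tok::Ident, Tok::Comma, Tok::Ident, Tok::Eof]);
}
```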
r? `@workingjubilee`
Clear `inner_attr_ranges` regularly.
There's a comment saying we don't do it for performance reasons, but it doesn't actually affect performance.
The commit also tweaks the control flow, to make clearer that two code paths are mutually exclusive.
r? `@petrochenkov`
It currently doesn't work at all. This commit changes it to a simpler
imperative style that produces a valid `tokens` vec.
That method is currently badly broken, and the test output reflects
this. The obtained tokens list is always empty, except in the case where
we go two `bump`s past the final token, whereupon it will produce as
many `Eof` tokens as asked for.
Fix `Parser::look_ahead`
`Parser::look_ahead` has a slow but simple general case, and a fast special case that is hit most of the time. But the special case is buggy and behaves differently to the general case. There are also no unit tests. This PR fixes all of this, resulting in a `Parser::look_ahead` that is equally fast, slightly simpler, more correct, and better tested.
r? `@davidtwco`
This new special case is simpler than the old special case because it is
only used when `dist == 1`. But that's still enough to cover ~98% of
cases. This results in equivalent performance to the old special case,
and identical behaviour to the general case.
The general case at the bottom of `look_ahead` is slow, because it
clones the token cursor. Above it there is a special case for
performance that is hit most of the time and avoids the cloning.
Unfortunately, its behaviour differs from the general case in two ways.
- When within a pair of delimiters, if you look any distance past the
closing delimiter you get the closing delimiter instead of what comes
after the closing delimiter.
- It uses `tree_cursor.look_ahead(dist - 1)` which totally confuses
tokens with token trees. This means that only the first token in a
token tree will be seen. E.g. in a sequence like `{ a }` the `a` and
`}` will be skipped over. Bad!
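Here is a toy model of that second difference (simplified types, nothing from the real `TokenCursor`): a lookahead measured in token trees can never return the tokens inside or closing a delimited group, while a lookahead measured in tokens can.

```
// Toy types only; rustc's real `TokenTree`/`TokenCursor` are richer.
enum TokenTree {
    Token(&'static str),
    Delimited(Vec<TokenTree>), // a `{ ... }` group
}

// Lookahead measured in token *trees*: only the first token of a delimited
// group is ever visible.
fn tree_look_ahead(trees: &[TokenTree], dist: usize) -> Option<&'static str> {
    match trees.get(dist)? {
        TokenTree::Token(t) => Some(*t),
        TokenTree::Delimited(_) => Some("{"),
    }
}

// Lookahead measured in *tokens*: flatten the group so every token,
// including the closing `}`, is reachable. (Nested groups omitted.)
fn token_look_ahead(trees: &[TokenTree], dist: usize) -> Option<&'static str> {
    let mut flat = Vec::new();
    for tree in trees {
        match tree {
            TokenTree::Token(t) => flat.push(*t),
            TokenTree::Delimited(inner) => {
                flat.push("{");
                for t in inner {
                    if let TokenTree::Token(t) = t {
                        flat.push(*t);
                    }
                }
                flat.push("}");
            }
        }
    }
    flat.get(dist).copied()
}

fn main() {
    // The token sequence `x { a } y`.
    let trees = vec![
        TokenTree::Token("x"),
        TokenTree::Delimited(vec![TokenTree::Token("a")]),
        TokenTree::Token("y"),
    ];

    // By trees, index 2 is already `y`: `a` and `}` are skipped entirely.
    assert_eq!(tree_look_ahead(&trees, 2), Some("y"));
    // By tokens, `a` and `}` are seen at the positions callers expect.
    assert_eq!(token_look_ahead(&trees, 2), Some("a"));
    assert_eq!(token_look_ahead(&trees, 3), Some("}"));
}
```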
It's likely that these differences weren't noticed before now because
the use of `look_ahead` in the parser is limited to small distances and
relatively few contexts.
Removing the special case causes slowdowns of up to 2% on a range of
benchmarks. The next commit will add a new, correct special case to
regain that lost performance.
The new condition is equivalent in practice, but it's much more obvious
that it would result in an empty range, because the condition lines up
with the contents of the iterator.
Currently the second element is a `Vec<(FlatToken, Spacing)>`. But the
vector always has zero or one elements, and the `FlatToken` is always
`FlatToken::AttrTarget` (which contains an `AttributesData`), and the
spacing is always `Alone`. So we can simplify it to
`Option<AttributesData>`.
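As a rough sketch (stand-in types only; the real rustc definitions carry more data), the change narrows the payload type so it can only represent the states that actually occur:

```
// Stand-in types only; the real rustc definitions carry more data.
#![allow(dead_code)]
use std::ops::Range;

struct AttributesData; // placeholder for the attribute-target data
enum FlatToken {
    AttrTarget(AttributesData),
    // other variants elided
}
enum Spacing {
    Alone,
    Joint,
}

// Before: the type permits states that never occur (long vecs, other
// `FlatToken`s, `Joint` spacing).
type ReplaceRangeBefore = (Range<u32>, Vec<(FlatToken, Spacing)>);

// After: zero or one attribute target, and nothing else to record.
type ReplaceRangeAfter = (Range<u32>, Option<AttributesData>);

fn main() {}
```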
An assertion in `to_attr_token_stream` can also be removed, because
`new_tokens.len()` was always 0 or 1 and `range.is_empty()` is always
false (as per the earlier assertion), which means that `range.len()` is
always greater than or equal to `new_tokens.len()`.
And update the comment. Clearly the return type of this function was
changed at some point in the past, but its name and comment weren't
updated to match.
The number of source code bytes can't exceed a `u32`'s range, so a token
position also can't. This reduces the size of `Parser` and
`LazyAttrTokenStreamImpl` by eight bytes each.
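A minimal illustration of the size argument, using made-up structs rather than the real `Parser`: on a typical 64-bit target, shrinking a `usize` field to `u32` can save a full eight bytes once padding and neighbouring small fields are accounted for.

```
// Made-up structs, not the real `Parser`; just the size arithmetic.
#![allow(dead_code)]
use std::mem::size_of;

struct PosUsize {
    pos: usize,
    flag_a: bool,
    flag_b: bool,
}

struct PosU32 {
    pos: u32,
    flag_a: bool,
    flag_b: bool,
}

fn main() {
    // On a typical 64-bit target:
    assert_eq!(size_of::<PosUsize>(), 16); // 8 + 1 + 1, rounded up to align 8
    assert_eq!(size_of::<PosU32>(), 8); // 4 + 1 + 1, rounded up to align 4
}
```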
Move binder and polarity parsing into `parse_generic_ty_bound`
Let's pull out the parts of #127054 which just:
1. Make the parsing code less confusing
2. Fix `?use<>` (to correctly be denied)
3. Improve `T: for<'a> 'a` diagnostics
This should have no user-facing effects on stable parsing.
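For context, here is what a valid binder looks like on stable Rust (ordinary syntax, not something introduced by this PR); the `?use<>` and `T: for<'a> 'a` forms above are the invalid ones being denied or given better diagnostics.

```
// Ordinary stable Rust, shown only for contrast with the rejected forms.
fn call_on_local<F>(f: F) -> usize
where
    F: for<'a> Fn(&'a str) -> usize, // binder applied to a trait bound: valid
{
    let s = String::from("local");
    f(&s)
}

fn main() {
    assert_eq!(call_on_local(|s| s.len()), 5);
}
```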
r? fmease
It currently goes one token too far.
Example: line 259 of `tests/ui/abi/compatibility.rs`:
```
test_abi_compatible!(fn_fn, fn(), fn(i32) -> i32);
```
This commit changes the span for the second element from `fn(),` to
`fn()`, i.e. removes the extraneous comma.
coverage: Overhaul validation of the `#[coverage(..)]` attribute
This PR makes sweeping changes to how the (currently-unstable) coverage attribute is validated:
- Multiple coverage attributes on the same item/expression are now treated as an error.
- The attribute must always be `#[coverage(off)]` or `#[coverage(on)]`, and the error messages for this are more consistent.
  - A trailing comma is still allowed after off/on, since that's part of the normal attribute syntax.
- Some places that silently ignored a coverage attribute now produce an error instead.
  - These cases were all clearly bugs.
- Some places that ignored a coverage attribute (with a warning) now produce an error instead.
  - These were originally added as lints, but I don't think it makes much sense to knowingly allow new attributes to be used in meaningless places.
  - Some of these errors might soon disappear, if it's easy to extend recursive coverage attributes to things like modules and impl blocks.
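A hedged sketch of the accepted and rejected shapes (the attribute is still unstable, so this needs a nightly compiler with the `coverage_attribute` feature gate; exact diagnostics not shown):

```
#![feature(coverage_attribute)]

#[coverage(off)] // accepted
fn skipped_by_instrumentation() {}

#[coverage(on,)] // accepted: a trailing comma is ordinary attribute syntax
fn instrumented() {}

// Rejected under the new rules:
//
// #[coverage(off)]
// #[coverage(on)] // error: multiple coverage attributes on one item
// fn conflicting() {}
//
// #[coverage] // error: must be `#[coverage(off)]` or `#[coverage(on)]`
// fn missing_argument() {}

fn main() {
    skipped_by_instrumentation();
    instrumented();
}
```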
---
One of the goals of this PR is to lay a more solid foundation for making the coverage attribute recursive, so that it applies to all nested functions/closures instead of just the one it is directly attached to.
Fixes #126658.
This PR incorporates #126659, which adds more tests for validation of the coverage attribute.
`@rustbot` label +A-code-coverage
Special case when a code line only has multiline span starts
Minimize multiline span overlap when there are multiple of them starting on the same line:
```
3 | X0 Y0 Z0
| _____^ - -
| | _______| |
| || _________|
4 | ||| X1 Y1 Z1
5 | ||| X2 Y2 Z2
| |||____^__-__- `Z` label
| ||_____|__|
| |______| `Y` is a good letter too
| `X` is a good letter
```
Add hard error and migration lint for unsafe attrs
More implementation work for https://github.com/rust-lang/rust/issues/123757
This adds the migration lint for unsafe attributes, as well as making it a hard error in Rust 2024.
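As a rough illustration (not code from the PR), using `no_mangle` as the example attribute:

```
// Wrapped form: required in Rust 2024.
#[unsafe(no_mangle)]
pub extern "C" fn ffi_entry_point() {}

// Bare form: hard error in Rust 2024, migration lint on earlier editions.
// #[no_mangle]
// pub extern "C" fn ffi_entry_point() {}

fn main() {}
```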
Merge `PatParam`/`PatWithOr`, and `Expr`/`Expr2021`, for a few reasons.
- It's conceptually nice, because the two pattern kinds and the two
expression kinds are very similar.
- With expressions in particular, there are several places where both
expression kinds get the same treatment.
- It removes one unreachable match arm.
- Most importantly, for #124141 I will need to introduce a new type
`MetaVarKind` that is very similar to `NonterminalKind`, but records a
couple of extra fields for expression metavars. It's nicer to have a
single `MetaVarKind::Expr` expression variant to hold those extra
fields instead of duplicating them across two variants
`MetaVarKind::{Expr,Expr2021}`. And then it makes sense for patterns
to be treated the same way, and for `NonterminalKind` to also be
treated the same way.
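A purely hypothetical sketch of that last point (invented field names, not rustc's real `NonterminalKind`/`MetaVarKind` definitions): with a single merged variant, the extra fields live in one place and the 2021 distinction becomes data rather than a second variant.

```
// Invented names; not rustc's real definitions.
#![allow(dead_code)]

// Before: any extra expression-metavar data would have to be duplicated.
enum MetaVarKindBefore {
    Expr { needs_extra_data: bool },
    Expr2021 { needs_extra_data: bool },
}

// After: one `Expr` variant carries the data once, and the edition
// distinction becomes a field instead of a second variant.
enum MetaVarKindAfter {
    Expr {
        kind: ExprKind,
        needs_extra_data: bool,
    },
}

enum ExprKind {
    Expr,
    Expr2021,
}

fn main() {}
```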
I also clarified the comments, because I have long found them a little
hard to understand.
`StaticForeignItem` and `StaticItem` are the same
The structs `StaticItem` and `StaticForeignItem` are the same, so remove `StaticForeignItem`. Having them be separate is unique to `static` items -- unlike `ForeignItemKind::{Fn,TyAlias}`, which use the normal AST item.
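An abridged, hypothetical sketch of the resulting shape (field list invented for illustration, not the exact `rustc_ast` definitions):

```
// Abridged and approximate; see `rustc_ast` for the real definitions.
#![allow(dead_code)]

struct Ty;
struct Expr;
enum Mutability {
    Not,
    Mut,
}

// A single struct shared by ordinary `static` items and foreign statics.
struct StaticItem {
    ty: Box<Ty>,
    mutability: Mutability,
    expr: Option<Box<Expr>>,
}

enum ForeignItemKind {
    // Before: `Static(Box<StaticForeignItem>)`, an identical parallel
    // struct. After: the normal `StaticItem` is reused.
    Static(Box<StaticItem>),
    // `Fn` and `TyAlias` already reuse the normal AST item types.
}

fn main() {}
```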
r? `@spastorino` or `@oli-obk`