Split `MacArgs` in two.
`MacArgs` is an enum with three variants: `Empty`, `Delimited`, and `Eq`. It's used in two ways:
- For representing attribute macro arguments (e.g. in `AttrItem`), where all three variants are used.
- For representing function-like macros (e.g. in `MacCall` and `MacroDef`), where only the `Delimited` variant is used.
In other words, `MacArgs` is used in two quite different places whose needs only partially overlap. I find this makes the code hard to read. It also leads to various unreachable code paths, and allows invalid values (such as accidentally using `MacArgs::Empty` in a `MacCall`).
This commit splits `MacArgs` in two:
- `DelimArgs` is a new struct just for the "delimited arguments" case. It is now used in `MacCall` and `MacroDef`.
- `AttrArgs` is a renaming of the old `MacArgs` enum for the attribute macro case. Its `Delimited` variant now contains a `DelimArgs`.
Various other related things are renamed as well.
These changes make the code clearer, avoid several unreachable paths, and disallow the invalid values.
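For orientation, here is a small illustration (hedged; the variant-to-syntax mapping is my reading of the description above) of which surface syntax each case corresponds to:
```rust
// Attribute arguments, covered by the `AttrArgs` enum:
#[deprecated]       // no arguments        -> `Empty`
#[doc = "docs"]     // `= expr` form       -> `Eq`
#[cfg(not(test))]   // delimited arguments -> `Delimited(DelimArgs)`
fn attr_examples() {}

// Function-like macros only ever have delimited arguments (`DelimArgs`):
macro_rules! m { () => {} }

fn main() {
    m!();
    attr_examples();
}
```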
r? `@petrochenkov`
Use `token::Lit` in `ast::ExprKind::Lit`.
Instead of `ast::Lit`.
Literal lowering now happens at two different times. Expression literals are lowered when HIR is created. Attribute literals are lowered during parsing.
r? `@petrochenkov`
This commit changes the language very slightly. Some programs that used
to not compile now will compile. This is because some invalid literals
that are removed by `cfg` or attribute macros will no longer trigger
errors. See this comment for more details:
https://github.com/rust-lang/rust/pull/102944#issuecomment-1277476773
Show note where the macro failed to match
When feeding the wrong tokens, it used to fail with a very generic error that wasn't very helpful. This change tries to help by noting where specifically the matching went wrong.
```rust
macro_rules! uwu {
    (a a a b) => {};
}
uwu! { a a a c }
```
```diff
error: no rules expected the token `c`
 --> macros.rs:5:14
  |
1 | macro_rules! uwu {
  | ---------------- when calling this macro
...
4 | uwu! { a a a c }
  |              ^ no rules expected this token in macro call
  |
+note: while trying to match `b`
+ --> macros.rs:2:12
+  |
+2 |     (a a a b) => {};
+  |            ^
```
Delay `include_bytes` to AST lowering
Hopefully addresses #65818.
This PR introduces a new `ExprKind::IncludedBytes` which stores the path and bytes of a file included with `include_bytes!()`. We can then create a literal from the bytes during AST lowering, which means we don't need to escape the bytes into valid UTF8 which is the cause of most of the overhead of embedding large binary blobs.
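For reference, a typical use of the macro that benefits from this change (file path illustrative):
```rust
// `include_bytes!` embeds the raw bytes of a file; with this change the
// bytes are turned into a literal during AST lowering instead of being
// re-escaped during expansion.
static BLOB: &[u8] = include_bytes!("blob.bin");

fn main() {
    println!("embedded {} bytes", BLOB.len());
}
```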
Add the `#[derive_const]` attribute
Closes #102371. This is a minimal patchset for the attribute to work. There are no restrictions on what traits this attribute applies to.
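A minimal usage sketch (hedged: the exact set of extra feature gates a given trait needs is not spelled out here, so treat this as an illustration rather than a guaranteed-to-compile program):
```rust
#![feature(derive_const)]

// Like `#[derive(...)]`, but the generated impl is a `const` trait impl.
#[derive_const(Default)]
pub struct Marker;

fn main() {
    let _m = Marker::default();
}
```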
r? `@oli-obk`
For now, we only collect the small info for the `best_failure`, but
using this tracker, we can easily extend it in the future to track
things with more performance overhead.
We cannot retry cases where the macro failed with a parser error that
was emitted already, as that would cause us to emit the same error to
the user twice.
This moves out the matching part of expansion into a new function. This
function will try to match the macro and return an error if it failed to
match. A tracker can be used to get more information about the matching.
Track where diagnostics were created.
This implements the `-Ztrack-diagnostics` flag, which uses `#[track_caller]` to track where diagnostics are created. It is meant as a debugging tool much like `-Ztreat-err-as-bug`.
For example, the following code...
```rust
struct A;
struct B;
fn main() {
    let _: A = B;
}
```
...now emits the following error message:
```
error[E0308]: mismatched types
 --> src\main.rs:5:16
  |
5 |     let _: A = B;
  |            -   ^ expected struct `A`, found struct `B`
  |            |
  |            expected due to this
-Ztrack-diagnostics: created at compiler\rustc_infer\src\infer\error_reporting\mod.rs:2275:31
```
Add flag to forbid recovery in the parser
To start the effort of fixing #103534, this adds a new flag to the parser that forbids it from doing recovery, which it shouldn't do inside macros.
This doesn't add any new checks for recoveries yet; it is just here to bikeshed the names for these functions before doing more.
r? `@compiler-errors`
Only apply `ProceduralMasquerade` hack to older versions of `rental`
The latest version of `rental` (v0.5.6) contains a fix that allows it to
compile without relying on the pretty-print back-compat hack.
Hopefully, there are no longer any crates relying on the affected
versions of the (much less popular) `procedural-masquerade` crate. This
should allow us to target the pretty-print back-compat hack specifically
to older versions of `rental`, and specifically mention upgrading to
`rental` v0.5.6 in the lint message.
Remove `TokenStreamBuilder`
`TokenStreamBuilder` is used to combine multiple token streams. It can be removed, leaving the code a little simpler and a little faster.
r? `@Aaron1011`
`TokenStreamBuilder` exists to concatenate multiple `TokenStream`s
together. This commit removes it, and moves the concatenation
functionality directly into `TokenStream`, via two new methods
`push_tree` and `push_stream`. This makes things both simpler and
faster.
`push_tree` is particularly important. `TokenStreamBuilder` only had a
single `push` method, which pushed a stream. But in practice most of the
time we push a single token tree rather than a stream, and `push_tree`
avoids the need to build a token stream with a single entry (which
requires two allocations, one for the `Lrc` and one for the `Vec`).
The main `push_tree` use arises from a change to one of the `ToInternal`
impls in `proc_macro_server.rs`. It now returns a `SmallVec` instead of
a `TokenStream`. This return value is then iterated over by
`concat_trees`, which does `push_tree` on each element. Furthermore, the
use of `SmallVec` avoids more allocations, because there is always only
one or two token trees.
Note: the removed `TokenStreamBuilder::push` method had some code to
deal with a quadratic blowup case from #57735. This commit removes the
code. I tried and failed to reproduce the blowup from that PR, before
and after this change. Various other changes have happened to
`TokenStreamBuilder` in the meantime, so I suspect the original problem
is no longer relevant, though I don't have proof of this. Generally
speaking, repeatedly extending a `Vec` without pre-determining its
capacity is *not* quadratic. It's also incredibly common, within rustc
and many other Rust programs, so if there were performance problems
there you'd think it would show up in other places, too.
`TokenTree::Punct` is handled outside the `match`. This commit moves it
inside the `match`, avoiding the need for the `return`s and making it
easier to read.
FIX - ambiguous Diagnostic link in docs
UPDATE - rename diagnostic_items to IntoDiagnostic and AddToDiagnostic
[Gardening] FIX - formatting via `x fmt`
FIX - rebase conflicts. NOTE: Confirm whether or not we want to handle TargetDataLayoutErrorsWrapper this way
DELETE - unneeded allow attributes in Handler method
FIX - broken test
FIX - Rebase conflict
UPDATE - rename residual _SessionDiagnostic and fix LintDiag link
In later bootstrap stages, the feature is already stable.
Result of running:
rg -l "feature.let_else" compiler/ src/librustdoc/ library/ | xargs sed -s -i "s#\\[feature.let_else#\\[cfg_attr\\(bootstrap, feature\\(let_else\\)#"
`To` is better than `Create` for indicating that this is a non-consuming
conversion, rather than creating something out of nothing.
And the addition of `Attr` is because the current names makes them sound
like they relate to `TokenStream`, but really they relate to
`AttrTokenStream`.
These two type names are long and have long matching prefixes. I find
them hard to read, especially in combinations like
`AttrAnnotatedTokenStream::new(vec![AttrAnnotatedTokenTree::Token(..)])`.
This commit renames them as `AttrToken{Stream,Tree}`.
Debuginfo line information for macro invocations is collapsed by
default - line information is replaced by the line of the outermost
expansion site. Using `-Zdebug-macros` disables this behaviour.
When the `collapse_debuginfo` feature is enabled, the default behaviour
is reversed so that debuginfo is not collapsed by default. In addition,
the `#[collapse_debuginfo]` attribute is available and can be applied to
macro definitions which will then have their line information collapsed.
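A hedged sketch of the attribute form described above:
```rust
#![feature(collapse_debuginfo)]

// With the feature enabled, debuginfo for this macro's expansions is
// collapsed to the invocation site because of the attribute.
#[collapse_debuginfo]
macro_rules! trace {
    () => {
        println!("trace");
    };
}

fn main() {
    trace!();
}
```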
Signed-off-by: David Wood <david.wood@huawei.com>
Fix a bunch of typos
This PR will fix some typos detected by [typos].
I only picked the ones I was sure were spelling errors to fix, mostly in
the comments.
[typos]: https://github.com/crate-ci/typos
proc_macro/bridge: send diagnostics over the bridge as a struct
This removes some RPC when creating and emitting diagnostics, and
simplifies the bridge slightly.
After this change, there are no remaining methods which take advantage
of the support for `&mut` references to objects in the store as
arguments, meaning that support for them could technically be removed if
we wanted. The only remaining uses of immutable references into the
store are `TokenStream` and `SourceFile`.
r? `@eddyb`
In some places we use `Vec<Attribute>` and some places we use
`ThinVec<Attribute>` (a.k.a. `AttrVec`). This results in various points
where we have to convert between `Vec` and `ThinVec`.
This commit changes the places that use `Vec<Attribute>` to use
`AttrVec`. A lot of this is mechanical and boring, but there are
some interesting parts:
- It adds a few new methods to `ThinVec`.
- It implements `MapInPlace` for `ThinVec`, and introduces a macro to
avoid the repetition of this trait for `Vec`, `SmallVec`, and
`ThinVec`.
Overall, it makes the code a little nicer, and has little effect on
performance. But it is a precursor to removing
`rustc_data_structures::thin_vec::ThinVec` and replacing it with
`thin_vec::ThinVec`, which is implemented more efficiently.
Migrations for rustc_expand transcribe.rs
This PR includes some migrations to the new diagnostics API for the `rustc_expand` module.
r? `@davidtwco`
- Rename `ast::Lit::token` as `ast::Lit::token_lit`, because its type is
`token::Lit`, which is not a token. (This has been confusing me for a
long time.)
This is reasonable because we have an `ast::token::Lit` inside an `ast::Lit`.
- Rename `LitKind::{from,to}_lit_token` as
`LitKind::{from,to}_token_lit`, to match the above change and
`token::Lit`.
Implement `#[rustc_default_body_unstable]`
This PR implements a new stability attribute — `#[rustc_default_body_unstable]`.
`#[rustc_default_body_unstable]` controls the stability of default bodies in traits.
For example:
```rust
pub trait Trait {
    #[rustc_default_body_unstable(feature = "feat", issue = "none")]
    fn item() {}
}
```
In order to implement `Trait`, the user needs to either
- implement `item` (even though it has a default implementation)
- enable `#![feature(feat)]`
This is useful in conjunction with [`#[rustc_must_implement_one_of]`](https://github.com/rust-lang/rust/pull/92164): we may want to relax requirements for a trait, for example allowing implementing either of `PartialEq::{eq, ne}`, but do so in a safe way — making implementation of only `PartialEq::ne` unstable.
r? `@Aaron1011`
cc `@nrc` (iirc you were interested in this wrt `read_buf`), `@danielhenrymantilla` (you were interested in the related `#[rustc_must_implement_one_of]`)
P.S. This is my first time working with stability attributes, so I'm not sure if I did everything right 😅
Remove `TreeAndSpacing`.
A `TokenStream` contains a `Lrc<Vec<(TokenTree, Spacing)>>`. But this is
not quite right. `Spacing` makes sense for `TokenTree::Token`, but does
not make sense for `TokenTree::Delimited`, because a
`TokenTree::Delimited` cannot be joined with another `TokenTree`.
This commit fixes this problem, by adding `Spacing` to `TokenTree::Token`,
changing `TokenStream` to contain a `Lrc<Vec<TokenTree>>`, and removing the
`TreeAndSpacing` typedef.
The commit removes these two impls:
- `impl From<TokenTree> for TokenStream`
- `impl From<TokenTree> for TreeAndSpacing`
These were useful, but also resulted in code with many `.into()` calls
that was hard to read, particularly for anyone not highly familiar with
the relevant types. This commit makes some other changes to compensate:
- `TokenTree::token()` becomes `TokenTree::token_{alone,joint}()`.
- `TokenStream::token_{alone,joint}()` are added.
- `TokenStream::delimited` is added.
This results in things like this:
```rust
TokenTree::token(token::Semi, stmt.span).into()
```
changing to this:
```rust
TokenStream::token_alone(token::Semi, stmt.span)
```
This makes the type of the result, and its spacing, clearer.
These changes also simplify `Cursor` and `CursorRef`, because they no longer
need to distinguish between `next` and `next_with_spacing`.
r? `@petrochenkov`
This is done by having the crossbeam dependency inserted into the
proc_macro server code from the server side, to avoid adding a
dependency to proc_macro.
In addition, this introduces a -Z command-line option which will switch
rustc to run proc-macros using this cross-thread executor. With the
changes to the bridge in #98186, #98187, #98188 and #98189, the
performance of the executor should be much closer to same-thread
execution.
In local testing, the crossbeam executor was substantially more
performant than either of the two existing CrossThread strategies, so
they have been removed to keep things simple.
This attribute allows marking the default body of a trait function as
unstable. This means that implementing the trait without implementing
the function will require enabling an unstable feature.
This is useful in conjunction with `#[rustc_must_implement_one_of]`,
we may want to relax requirements for a trait, for example allowing
implementing either of `PartialEq::{eq, ne}`, but do so in a safe way
-- making implementation of only `PartialEq::ne` unstable.
Revert "Stabilize $$ in Rust 1.63.0"
This mechanically reverts commit 9edaa76adc, the one commit from #95860.
See https://github.com/rust-lang/rust/issues/99035: the behavior of `$$crate` is potentially unexpected and not ready to be stabilized. https://github.com/rust-lang/rust/pull/99193 attempts to forbid `$$crate` without also destabilizing `$$` more generally.
`@rustbot` modify labels +T-compiler +T-lang +P-medium +beta-nominated +relnotes
(applying the labels I think are accurate from the issue and alternative partial revert)
cc `@Mark-Simulacrum`
This method is still only used for Literal::subspan, however the
implementation only depends on the Span component, so it is simpler and
more efficient for now to pass down only the information that is needed.
In the future, if more information about the Literal is required in the
implementation (e.g. to validate that spans line up as expected with
source text), that extra information can be added back with extra
arguments.
This builds on the symbol infrastructure built for `Ident` to replicate
the `LitKind` and `Lit` structures in rustc within the `proc_macro`
client, allowing literals to be fully created and interacted with from
the client thread. Only parsing and subspan operations still require
sync RPC.
Doing this for all unicode identifiers would require a dependency on
`unicode-normalization` and `rustc_lexer`, which is currently not
possible for `proc_macro` due to it being built concurrently with `std`
and `core`. Instead, ASCII identifiers are validated locally, and an RPC
message is used to validate unicode identifiers when needed.
String values are interned on both the server and client when
deserializing, to avoid unnecessary copies and keep Ident cheap to copy and
move. This appears to be important for performance.
The client-side interner is based roughly on the one from rustc_span, and uses
an arena inspired by rustc_arena.
RPC messages passing symbols always include the full value. This could
potentially be optimized in the future if it is revealed to be a
performance bottleneck.
Despite now having a relevant implementation of Display for Ident, ToString is
still specialized, as it is a hot-path for this object.
The symbol infrastructure will also be used for literals in the next
part.
Emit warning when named arguments are used positionally in format
Addresses Issue 98466 by emitting a warning if a named argument
is used like a position argument (i.e. the name is not used in
the string to be formatted).
Fixes rust-lang#98466
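An illustrative program that now gets flagged (the exact diagnostic wording may differ):
```rust
fn main() {
    // `x` is passed as a named argument, but the format string only uses a
    // positional `{}`, so the name goes unused: this is what the new
    // diagnostic points out.
    println!("{}", x = 5);
}
```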
Implement `for<>` lifetime binder for closures
This PR implements RFC 3216 ([TI](https://github.com/rust-lang/rust/issues/97362)) and allows code like the following:
```rust
let _f = for<'a, 'b> |a: &'a A, b: &'b B| -> &'b C { b.c(a) };
//       ^^^^^^^^^^^--- new!
```
cc ``@Aaron1011`` ``@cjgillot``
proc_macro: Fix expand_expr expansion of bool literals
Previously, the expand_expr method would expand bool literals as a
`Literal` token containing a `LitKind::Bool`, rather than as an `Ident`.
This is not a valid token, and the `LitKind::Bool` case needs to be
handled separately.
Tests were added to more deeply compare the streams in the expand-expr
test suite to catch mistakes like this in the future.
All derive ops currently use match-destructuring to access fields. This
is reasonable for enums, but sub-optimal for structs. E.g.:
```
fn eq(&self, other: &Point) -> bool {
    match *other {
        Self { x: ref __self_1_0, y: ref __self_1_1 } =>
            match *self {
                Self { x: ref __self_0_0, y: ref __self_0_1 } =>
                    (*__self_0_0) == (*__self_1_0) &&
                        (*__self_0_1) == (*__self_1_1),
            },
    }
}
```
This commit changes derive ops on structs to use field access instead, e.g.:
```
fn eq(&self, other: &Point) -> bool {
    self.x == other.x && self.y == other.y
}
```
This is faster to compile, results in smaller binaries, and is simpler to
generate. Unfortunately, we have to keep the old pattern generating code around
for `repr(packed)` structs because something like `&self.x` (which doesn't show
up in `PartialEq` ops, but does show up in `Debug` and `Hash` ops) isn't
allowed. But this commit at least changes those cases to use let-destructuring
instead of match-destructuring, e.g.:
```
fn hash<__H: ::core::hash::Hasher>(&self, state: &mut __H) -> () {
    {
        let Self(ref __self_0_0) = *self;
        { ::core::hash::Hash::hash(&(*__self_0_0), state) }
    }
}
```
There are some unnecessary blocks remaining in the generated code, but I
will fix them in a follow-up PR.
Avoid some `&str` to `String` conversions with `MultiSpan::push_span_label`
This patch removes some `&str` to `String` conversions with `MultiSpan::push_span_label`.
The `rustc_lint_diagnostics` attribute is used by the diagnostic
translation/struct migration lints to identify calls where
non-translatable diagnostics or diagnostics outwith impls are being
created. Any function used in creating a diagnostic should be annotated
with this attribute so this commit adds the attribute to many more
functions.
Signed-off-by: David Wood <david.wood@huawei.com>
This greatly reduces round-trips to fetch relevant extra information about the
token in proc macro code, and avoids RPC messages to create Group tokens.
This greatly reduces round-trips to fetch relevant extra information about the
token in proc macro code, and avoids RPC messages to create Punct tokens.
proc_macro/bridge: cache static spans in proc_macro's client thread-local state
This is the second part of https://github.com/rust-lang/rust/pull/86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. This patch removes the RPC calls required for the very common operations of `Span::call_site()`, `Span::def_site()` and `Span::mixed_site()`.
Some notes:
This part is one of the ones I don't love as a final solution from a design standpoint, because I don't like how the spans are serialized immediately at macro invocation. I think a more elegant solution might've been to reserve special IDs for `call_site`, `def_site`, and `mixed_site` at compile time (either starting at 1 or from `u32::MAX`) and making reading a Span handle automatically map these IDs to the relevant values, rather than doing extra serialization.
This would also have an advantage for potential future work to allow `proc_macro` to operate more independently from the compiler (e.g. to reduce the necessity of `proc-macro2`), as methods like `Span::call_site()` could be made to function without access to the compiler backend.
That was unfortunately tricky to do at the time, as this was the first part I wrote of the patches. After the later part (#98188, #98189), the other uses of `InternedStore` are removed meaning that a custom serialization strategy for `Span` is easier to implement.
If we want to go that path, we'll still need the majority of the work to split the bridge object and introduce the `Context` trait for free methods, and it will be easier to do after `Span` is the only user of `InternedStore` (after #98189).
This commit adds new methods that combine sequences of existing
formatting methods.
- `Formatter::debug_{tuple,struct}_field[12345]_finish`, equivalent to a
`Formatter::debug_{tuple,struct}` + N x `Debug{Tuple,Struct}::field` +
`Debug{Tuple,Struct}::finish` call sequence.
- `Formatter::debug_{tuple,struct}_fields_finish` is similar, but can
handle any number of fields by using arrays.
These new methods are all marked as `doc(hidden)` and unstable. They are
intended for the compiler's own use.
Special-casing up to 5 fields gives significantly better performance
results than always using arrays (as was tried in #95637).
The commit also changes the `Debug` deriving code to use these new methods. For
example, where the old `Debug` code for a struct with two fields would be like
this:
```
fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
    match *self {
        Self {
            f1: ref __self_0_0,
            f2: ref __self_0_1,
        } => {
            let debug_trait_builder = &mut ::core::fmt::Formatter::debug_struct(f, "S2");
            let _ = ::core::fmt::DebugStruct::field(debug_trait_builder, "f1", &&(*__self_0_0));
            let _ = ::core::fmt::DebugStruct::field(debug_trait_builder, "f2", &&(*__self_0_1));
            ::core::fmt::DebugStruct::finish(debug_trait_builder)
        }
    }
}
```
the new code is like this:
```
fn fmt(&self, f: &mut ::core::fmt::Formatter) -> ::core::fmt::Result {
    match *self {
        Self {
            f1: ref __self_0_0,
            f2: ref __self_0_1,
        } => ::core::fmt::Formatter::debug_struct_field2_finish(
            f,
            "S2",
            "f1",
            &&(*__self_0_0),
            "f2",
            &&(*__self_0_1),
        ),
    }
}
```
This shrinks the code produced for `Debug` instances
considerably, reducing compile times and binary sizes.
Co-authored-by: Scott McMurray <scottmcm@users.noreply.github.com>
It's a weird function: it lets you modify the token stream in the middle
of iteration. There is only one call site, and it is only used for the
rare `ProceduralMasquerade` legacy case.
Batch proc_macro RPC for TokenStream iteration and combination operations
This is the first part of #86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. It reduces the number of RPC calls required for common operations such as iterating over and concatenating TokenStreams.
This is an experimental patch to try to reduce the codegen complexity of
TokenStream's FromIterator and Extend implementations for downstream
crates, by moving the core logic into a helper type. This might help
improve build performance of crates which depend on proc_macro as
iterators are used less, and the compiler may take less time to do
things like attempt specializations or other iterator optimizations.
The change intentionally sacrifices some optimization opportunities,
such as using the specializations for collecting iterators derived from
Vec::into_iter() into Vec.
This is one of the simpler potential approaches to reducing the amount
of code generated in crates depending on proc_macro, so it seems worth
trying before other more-involved changes.
This significantly reduces the cost of common interactions with TokenStream
when running with the CrossThread execution strategy, by reducing the number of
RPC calls required.
Support lint expectations for `--force-warn` lints (RFC 2383)
Rustc has a `--force-warn` flag, which overrides lint level attributes and forces the diagnostics to always be warnings. This means that for lint expectations, the diagnostic can't be suppressed as usual. This also means that the expectation would not be fulfilled, even if a lint had been triggered in the expected scope.
This PR now also tracks the expectation ID in the `ForceWarn` level. I've also made some minor adjustments, to possibly catch more bugs and make the whole implementation more robust.
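For context, a hedged sketch of the interaction being fixed (assuming `#[expect]` from the unstable `lint_reasons` feature):
```rust
#![feature(lint_reasons)]

// With `--force-warn dead_code`, the warning is emitted despite the
// expectation, but the expectation should still be treated as fulfilled
// rather than reported as unfulfilled.
#[expect(dead_code)]
fn never_called() {}

fn main() {}
```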
This will probably conflict with https://github.com/rust-lang/rust/pull/97718. That PR should ideally be reviewed and merged first. The conflict itself will be trivial to fix.
---
r? `@wesleywiser`
cc: `@flip1995` since you've helped with the initial review and also discussed this topic with me. 🙃
Follow-up of: https://github.com/rust-lang/rust/pull/87835
Issue: https://github.com/rust-lang/rust/issues/85549
Yeah, and that's it.
Never regard macro rules with compile_error! invocations as unused
The very point of compile_error! is to never be reached, and one of
the use cases of the macro, currently also listed as examples in the
documentation of compile_error, is to create nicer errors for wrong
macro invocations. Thus, we should never warn about unused macro arms
that contain invocations of compile_error.
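An illustrative macro of the kind described (hedged example, not taken from the PR):
```rust
macro_rules! count_apples {
    ($n:literal apples) => { $n };
    // This arm exists only to produce a nicer error on misuse; it should
    // not be reported by `unused_macro_rules` even though it never matches
    // in a correct program.
    ($($other:tt)*) => { compile_error!("expected `<count> apples`") };
}

fn main() {
    let n = count_apples!(3 apples);
    assert_eq!(n, 3);
}
```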
See also https://github.com/rust-lang/rust/pull/96150#issuecomment-1126599107 and the discussion after that.
Furthermore, the PR also contains two commits to silence `unused_macro_rules` when a macro has an invalid rule, and to add a test that `unused_macros` does not behave badly in the same situation.
r? `@petrochenkov` as I've talked to them about this
Prior to this commit, if a macro had any malformed rules, all rules would
be reported as unused, regardless of whether they were used or not.
So we just turn off unused rule checking completely for macros with
malformed rules.
Remove FIXME on `ExtCtxt::fn_decl()`
`ExtCtxt::fn_decl()` is used like `self.fn_decl(..)` or `self.cx.fn_decl(..)`; converting it to an assoc fn, for example, would make it inconvenient (e.g. `self.cx.fn_decl(..)` would be longer to write). Given that, it doesn't seem like a "FIXME" thing, and the unused `self` is okay, I think.
Implement a lint to warn about unused macro rules
This implements a new lint to warn about unused macro rules (arms/matchers), similar to the `unused_macros` lint added by #41907 that warns about entire macros.
```rust
macro_rules! unused_empty {
(hello) => { println!("Hello, world!") };
() => { println!("empty") }; //~ ERROR: 1st rule of macro `unused_empty` is never used
}
fn main() {
unused_empty!(hello);
}
```
Builds upon #96149 and #96156.
Fixes #73576
Begin fixing all the broken doctests in `compiler/`
Begins to fix #95994.
All of them pass now but 24 of them I've marked with `ignore HELP (<explanation>)` (asking for help) as I'm unsure how to get them to work / if we should leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring after it in parentheses, with
- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.
Those reason notes are a bit inconsistently applied and messy though. If that's important I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions other people have used for this sort of thing. I could try unifying them all if that would be helpful.
I'm not sure if there was a better existing way to do this but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
Add a new Rust attribute to support embedding debugger visualizers
Implemented [this RFC](https://github.com/rust-lang/rfcs/pull/3191) to add support for embedding debugger visualizers into a PDB.
Added a new attribute `#[debugger_visualizer]` and updated the `CrateMetadata` to store debugger visualizers for crate dependencies.
RFC: https://github.com/rust-lang/rfcs/pull/3191
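A hedged sketch of how the crate-level attribute is applied (file name illustrative):
```rust
// Embeds the referenced Natvis file into the PDB produced for this crate.
#![feature(debugger_visualizer)]
#![debugger_visualizer(natvis_file = "my_types.natvis")]

fn main() {}
```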
Cleanup `DebuggerVisualizerFile` type and other minor cleanup of queries.
Merge the queries for debugger visualizers into a single query.
Revert move of `resolve_path` to `rustc_builtin_macros`. Update dependencies in Cargo.toml for `rustc_passes`.
Respond to PR comments. Load visualizer files into opaque bytes `Vec<u8>`. Debugger visualizers for dynamically linked crates should not be embedded in the current crate.
Update the unstable book with the new feature. Add the tracking issue for the debugger_visualizer feature.
Respond to PR comments and minor cleanups.
Change `span_suggestion` (and variants) to take `impl ToString` rather
than `String` for the suggested code, as this simplifies the
requirements on the diagnostic derive.
Signed-off-by: David Wood <david.wood@huawei.com>
The code currently ignores the actual delimiter on the RHS and fakes up
a `NoDelim`/`DelimSpan::dummy()` one. This commit changes it to use the
actual delimiter.
The commit also reorders the fields for the `Delimited` variant to match
the `Sequence` variant.
Create (unstable) 2024 edition
[On Zulip](https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Deprecating.20macro.20scoping.20shenanigans/near/272860652), there was a small aside regarding creating the 2024 edition now as opposed to later. There was a reasonable amount of support and no stated opposition.
This change creates the 2024 edition in the compiler and creates a prelude for the 2024 edition. There is no current difference between the 2021 and 2024 editions. Cargo and other tools will need to be updated separately, as it's not in the same repository. This change permits the vast majority of work towards the next edition to proceed _now_ instead of waiting until 2024.
For sanity purposes, I've merged the "hello" UI tests into a single file with multiple revisions. Otherwise we'd end up with a file per edition, despite them being essentially identical.
`@rustbot` label +T-lang +S-waiting-on-review
Not sure on the relevant team, to be honest.
Remove `<mbe::TokenTree as Clone>`
`mbe::TokenTree` doesn't really need to implement `Clone`, and getting rid of that impl leads to some speed-ups.
r? `@petrochenkov`
Loading the fallback bundle in compilation sessions that won't go on to
emit any errors unnecessarily degrades compile time performance, so
lazily create the Fluent bundle when it is first required.
Signed-off-by: David Wood <david.wood@huawei.com>
When a `macro_rules! foo { ... }` invocation is compiled the name used
is `foo`, not `macro_rules!`. This is different to all other macro
invocations, and confused me when I was inserted debugging println
statements for macro evaluation.
This commit changes it to `macro_rules` (or just `macro`), which is what
I expected. There are no externally visible changes.
expand: Remove `ParseSess::missing_fragment_specifiers`
It was used for deduplicating some errors for legacy code which are mostly deduplicated even without that, but at cost of global mutable state, which is not a good tradeoff.
cc https://github.com/rust-lang/rust/pull/95747#issuecomment-1091619403
r? ``@nnethercote``
Left overs of #95761
These are just nits. Feel free to close this PR if all modifications are not worth merging.
* `#![feature(decl_macro)]` is not needed anymore in `rustc_expand`
* `tuple_impls` does not require `$Tuple:ident`. I guess it is there to enhance readability?
r? `@petrochenkov`
refactor: simplify few string related interactions
Few small optimizations:
check_doc_keyword: don't alloc string for emptiness check
check_doc_alias_value: get argument as Symbol to prevent needless string conversions
check_doc_attrs: don't alloc vec, iterate over slice.
replace as_str() check with symbol check
get_single_str_from_tts: don't prealloc string
trivial string to str replace
LifetimeScopeForPath::NonElided use Vec<Symbol> instead of Vec<String>
AssertModuleSource use FxHashSet<Symbol> instead of BTreeSet<String>
CrateInfo.crate_name replace FxHashMap<CrateNum, String> with FxHashMap<CrateNum, Symbol>
Remove explicit delimiter token trees from `Delimited`.
They were introduced by the final commit in #95159 and gave a
performance win. But since the introduction of `MatcherLoc` they are no
longer needed. This commit reverts that change, making the code a bit
simpler.
r? `@petrochenkov`
check_doc_alias_value: get argument as Symbol to prevent needless string conversions
check_doc_attrs: don't alloc vec, iterate over slice. Vec introduced in #83149, but no perf run posted on merge
replace as_str() check with symbol check
get_single_str_from_tts: don't prealloc string
trivial string to str replace
LifetimeScopeForPath::NonElided use Vec<Symbol> instead of Vec<String>
AssertModuleSource use BTreeSet<Symbol> instead of BTreeSet<String>
CrateInfo.crate_name replace FxHashMap<CrateNum, String> with FxHashMap<CrateNum, Symbol>
By heap allocating the argument within `NtPath`, `NtVis`, and `NtStmt`.
This slightly reduces cumulative and peak allocation amounts, most
notably on `deep-vector`.
Currently it's called in `parse_tt` every time a match rule is invoked.
This commit moves it so it's called instead once per match rule, in
`compile_declarative_macro`. This is a performance win.
The commit also moves `compute_locs` out of `TtParser`, because there's
no longer any reason for it to be in there.
Use the proc-macro descr to track their individual expansions with
self-profiling events. This will help diagnose performance issues
with slow proc-macros.
In #95555 this was moved out of `parse_tt_inner` and `nameize` into
`compute_locs`. But the next commit will be moving `compute_locs`
outwards to a place that isn't suitable for the missing fragment
identifier checking. So this reinstates the old checking.
Add an option for enabling and disabling Fluent's directionality
isolation markers in output. Disabled by default as these can render in
some terminals and applications.
Signed-off-by: David Wood <david.wood@huawei.com>
Extend loading of Fluent bundles so that bundles can be loaded from the
sysroot based on the language requested by the user, or using a nightly
flag.
Sysroot bundles are loaded from `$sysroot/share/locale/$locale/*.ftl`.
Signed-off-by: David Wood <david.wood@huawei.com>
This commit updates the signatures of all diagnostic functions to accept
types that can be converted into a `DiagnosticMessage`. This enables
existing diagnostic calls to continue to work as before and Fluent
identifiers to be provided. The `SessionDiagnostic` derive just
generates normal diagnostic calls, so these APIs had to be modified to
accept Fluent identifiers.
In addition, loading of the "fallback" Fluent bundle, which contains the
built-in English messages, has been implemented.
Each diagnostic now has "arguments" which correspond to variables in the
Fluent messages (necessary to render a Fluent message) but no API for
adding arguments has been added yet. Therefore, diagnostics (that do not
require interpolation) can be converted to use Fluent identifiers and
will be output as before.
`MultiSpan` contains labels, which are more complicated with the
introduction of diagnostic translation and will use types from
`rustc_errors` - however, `rustc_errors` depends on `rustc_span` so
`rustc_span` cannot use types like `DiagnosticMessage` without
dependency cycles. Introduce a new `rustc_error_messages` crate that can
contain `DiagnosticMessage` and `MultiSpan`.
Signed-off-by: David Wood <david.wood@huawei.com>
Introduce a `DiagnosticMessage` type that will enable diagnostic
messages to be simple strings or Fluent identifiers.
`DiagnosticMessage` is now used in the implementation of the standard
`DiagnosticBuilder` APIs.
Signed-off-by: David Wood <david.wood@huawei.com>
Reduce unnecessary escaping in proc_macro::Literal::character/string
I noticed that https://doc.rust-lang.org/proc_macro/struct.Literal.html#method.character is producing unreadable literals that make macro-expanded code unnecessarily hard to read. Since the proc macro server was using `escape_unicode()`, every char is escaped using `\u{…}` regardless of whether there is any need to do so. For example `Literal::character('=')` would previously produce `'\u{3d}'` which unnecessarily obscures the meaning when reading the macro-expanded code.
I've changed Literal::string also in this PR because `str`'s `Debug` impl is also smarter than just calling `escape_debug` on every char. For example `Literal::string("ferris's")` would previously produce `"ferris\'s"` but will now produce `"ferris's"`.
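A small before/after illustration (only meaningful inside a proc-macro crate; the rendered strings are taken from the description above):
```rust
// In a proc-macro crate:
use proc_macro::Literal;

fn examples() {
    // Previously rendered as '\u{3d}'; now renders as '='.
    let _eq = Literal::character('=');
    // Previously rendered as "ferris\'s"; now renders as "ferris's".
    let _s = Literal::string("ferris's");
}
```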
`parse_tt` currently traverses a `&[TokenTree]` to do matching. But this
is a bad representation for the traversal.
- `TokenTree` is nested, and there's a bunch of expensive and fiddly
state required to handle entering and exiting nested submatchers.
- There are three positions (sequence separators, sequence Kleene ops,
and end of the matcher) that are represented by an index that exceeds
the end of the `&[TokenTree]`, which is clumsy and error-prone.
This commit introduces a new representation called `MatcherLoc` that is
designed specifically for matching. It fixes all the above problems,
making the code much easier to read. A `&[TokenTree]` is converted to a
`&[MatcherLoc]` before matching begins. Despite the cost of the
conversion, it's still a net performance win, because various pieces of
traversal state are computed once up-front, rather than having to be
recomputed repeatedly during the macro matching.
Some improvements worth noting.
- `parse_tt_inner` is *much* easier to read. No more having to compare
`idx` against `len` and read comments to understand what the result
means.
- The handling of `Delimited` in `parse_tt_inner` is now trivial.
- The three end-of-sequence cases in `parse_tt_inner` are now handled in
three separate match arms, and the control flow is much simpler.
- `nameize` is no longer recursive.
- There were two places that issued "missing fragment specifier" errors:
one in `parse_tt_inner()`, and one in `nameize()`. Presumably the
latter was never executed. There's now a single place issuing these
errors, in `compute_locs()`.
- The number of heap allocations done for a `check full` build of
`async-std-1.10.0` (an extreme example of heavy macro use) drops from
11.8M to 2.6M, and most of these occur outside of macro matching.
- The size of `MatcherPos` drops from 64 bytes to 16 bytes. Small enough
that it no longer needs boxing, which partly accounts for the
reduction in allocations.
- The rest of the drop in allocations is due to the removal of
`MatcherKind`, because we no longer need to record anything for the
parent matcher when entering a submatcher.
- Overall it reduces code size by 45 lines.
It's only used in one place, and there we clone and then make a bunch of
modifications. It's clearer if we duplicate more explicitly, and there's
a symmetry now between `sequence()` and `empty_sequence()`.
`parse_tt` needs a way to get from within submatchers back to the
enclosing submatchers. Currently it has two distinct mechanisms for
this:
- `Delimited` submatchers use `MatcherPos::stack` to record stuff about
the parent (and further back ancestors).
- `Sequence` submatchers use `MatcherPosSequence::parent` to point to
the parent matcher position.
Having two mechanisms is really confusing, and it took me a long time to
understand all this.
This commit eliminates `MatcherPos::stack`, and changes `Delimited`
submatchers to use the same mechanism as sequence submatchers. That
mechanism is also changed a bit: instead of storing the entire parent
`MatcherPos`, we now only store the necessary parts from the parent
`MatcherPos`.
Overall this is a small performance win, with the positives outweighing
the negatives, but it's mostly for clarity.
Spellchecking compiler comments
This PR cleans up the rest of the spelling mistakes in the compiler comments. This PR does not change any literal or code spelling issues.
Currently, we detect an exit from a `Delimited` submatcher when `idx`
exceeds the bounds of the current submatcher *and* there is a `stack`
entry.
This commit changes it to something simpler: just look for a
`CloseDelim` token.
Yet more `parse_tt` improvements
Including lots of comment improvements, and an overhaul of how `matches` work that gives big speedups.
r? `@petrochenkov`
Currently, matches within a sequence are recorded in a new empty
`matches` vector. Then when the sequence finishes the matches are merged
into the `matches` vector of the parent.
This commit changes things so that a sequence mp inherits the matches
made so far. This means that additional matches from the sequence don't
need to be merged into the parent. `push_match` becomes more
complicated, and the current sequence depth needs to be tracked. But
it's a sizeable performance win because it avoids one or more
`push_match` calls on every iteration of a sequence.
The commit also removes `match_hi`, which is no longer necessary.
Ignore doc comments in a declarative macro matcher.
Fixes #95267. Reverts to the old behaviour before #95159 introduced a
regression.
r? `@petrochenkov`
Remove `Nonterminal::NtTT`.
It's only needed for macro expansion, not as a general element in the
AST. This commit removes it, adds `NtOrTt` for the parser and macro
expansion cases, and renames the variants in `NamedMatch` to better
match the new type.
r? `@petrochenkov`
It's only needed for macro expansion, not as a general element in the
AST. This commit removes it, adds `NtOrTt` for the parser and macro
expansion cases, and renames the variants in `NamedMatch` to better
match the new type.
Remove `Session::one_time_diagnostic`
This is untracked mutable state, which modified the behaviour of queries.
It was used for 2 things: some full-blown errors, but mostly for lint declaration notes ("the lint level is defined here" notes).
It is replaced by the diagnostic deduplication infra which already exists in the diagnostic emitter.
A new diagnostic level `OnceNote` is introduced specifically for lint notes, to deduplicate subdiagnostics.
As a drive-by, diagnostic emission takes a `&mut` to allow dropping the `SubDiagnostic`s.
Currently it copies a `KleeneOp` and a `Token` out of a
`SequenceRepetition`. It's better to store a reference to the
`SequenceRepetition`, which is now possible due to #95159 having changed
the lifetimes.
The `Lrc` is only relevant within `transcribe()`. There, the `Lrc` is
helpful for the non-`NtTT` cases, because the entire nonterminal is
cloned. But for the `NtTT` cases the inner token tree is cloned (a full
clone) and so the `Lrc` is of no help.
This commit splits the `NtTT` and non-`NtTT` cases, avoiding the useless
`Lrc` in the former case, for the following effect on macro-heavy
crates.
- It reduces the total number of allocations a lot.
- It increases the size of some of the remaining allocations.
- It doesn't affect *peak* memory usage, because the larger allocations
are short-lived.
This overall gives a speed win.
Introduce `TtParser`
These commits make a number of changes to declarative macro expansion, resulting in code that is shorter, simpler, and faster.
Best reviewed one commit at a time.
r? `@petrochenkov`
As its name suggests, `TokenTreeOrTokenTreeSlice` is either a single
`TokenTree` or a slice of them. It has methods `len` and `get_tt` that
let it be treated much like an ordinary slice. The reason it isn't an
ordinary slice is that for `TokenTree::Delimited` the open and close
delimiters are represented implicitly, and when they are needed they are
constructed on the fly with `Delimited::{open,close}_tt`, rather than
being present in memory.
This commit changes `Delimited` so the open and close delimiters are
represented explicitly. As a result, `TokenTreeOrTokenTreeSlice` is no
longer needed and `MatcherPos` and `MatcherTtFrame` can just use an
ordinary slice. `TokenTree::{len,get_tt}` are also removed, because they
were only needed to support `TokenTreeOrTokenTreeSlice`.
The change makes the code shorter and a little bit faster on benchmarks
that use macro expansion heavily, partly because `MatcherPos` is a lot
smaller (less data to `memcpy`) and partly because ordinary slice
operations are faster than `TokenTreeOrTokenTreeSlice::{len,get_tt}`.
By putting them in `TtParser`, we can reuse them for every rule in a
macro. With that done, they can be `Vec` instead of `SmallVec`, and this
is a performance win because these vectors are hot and `SmallVec`
operations are a bit slower due to always needing an "inline or heap?"
check.
This type was a small performance win for `html5ever`, which uses a
macro with hundreds of very simple rules that don't contain any
metavariables. But this type is complicated (extra lifetimes) and
perf-neutral for macros that do have metavariables.
This commit removes `MatcherPosHandle`, simplifying things a lot. This
increases the allocation rate for `html5ever` and similar cases a bit,
but makes things easier for follow-up changes that will improve
performance more than what we lost here.
It currently has no state, just the three methods `parse_tt`,
`parse_tt_inner`, and `bb_items_ambiguity_error`.
This commit is large but trivial, and mostly consists of changes to the
indentation of those methods. Subsequent commits will do more.
There are a few places where we have to construct it, though, and a few
places that are more invasive to change. To do this, we create a
constructor with a long obvious name.
More robust fallback for `use` suggestion
Our old way to suggest where to add `use`s would first look for pre-existing `use`s in the relevant crate/module, and if there are *no* uses, it would fallback on trying to use another item as the basis for the suggestion.
But this was fragile, as illustrated in issue #87613
This PR instead identifies span of the first token after any inner attributes, and uses *that* as the fallback for the `use` suggestion.
Fix #87613
The current structure makes it hard to tell that there are just four
distinct code paths, depending on how many items there are in `bb_items`
and `next_items`. This commit introduces a `match` that clarifies
things.
Ensure stability directives are checked in all cases
Split off #93017
Stability and deprecation were not checked in all cases, for instance if a type error happened.
This PR moves the check earlier in the pipeline to ensure the errors are emitted in all cases.
r? `@lcnr`
Fix invalid lint_node_id being put on a removed stmt
This pull-request removes an invalid `assign_id!` being put on a stmt node.
The problem is that this node is being removed by a cfg, making it unreachable when triggering a buffered lint.
The comment in the other match arm already says not to assign an id because it could have a `#[cfg()]`, so this is just respecting the comment.
Fixes https://github.com/rust-lang/rust/issues/94523
r? `@petrochenkov`
There are three `Option` fields in `MatcherPos` that are only used in
tandem. This commit combines them, making the code slightly easier to
read. (It also makes clear that the `sep` field arguably should have
been `Option<Option<Token>>`!)
To avoid the strange style where comments force `else` onto its own
line.
The commit also removes several else-after-return constructs, which can
be hard to read.
Improve allowness of the unexpected_cfgs lint
This pull-request improves the allowness (`#[allow(...)]`) of the `unexpected_cfgs` lint.
Before this PR only a crate-level `#![allow(unexpected_cfgs)]` worked; now with this PR it also works when put around `cfg!` or at an upper level. Making it work ~for the attributes `cfg`, `cfg_attr`, ...~ for the same level is awkward, as the current code is designed to give "Some parent node that is close to this macro call" (cf. https://doc.rust-lang.org/nightly/nightly-rustc/rustc_expand/base/struct.ExpansionData.html), meaning that an allow on the same line as an attribute won't work. I'm not even sure if this would be possible.
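A hedged illustration (it assumes `--check-cfg` is enabled so the lint can fire at all):
```rust
// Previously only a crate-level `#![allow(unexpected_cfgs)]` suppressed the
// lint; with this change an allow on an enclosing item works too.
#[allow(unexpected_cfgs)]
fn probe() -> bool {
    cfg!(feature = "some-feature-not-declared-anywhere")
}

fn main() {
    let _ = probe();
}
```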
Found while working on https://github.com/rust-lang/rust/pull/94298.
r? `@petrochenkov`
* Recover from invalid `'label: ` before block.
* Make suggestion to enclose statements in a block multipart.
* Point at `match`, `while`, `loop` and `unsafe` keywords when failing
to parse their expression.
* Do not suggest `{ ; }`.
* Do not suggest `|` when very unlikely to be what was wanted (in `let`
statements).
As an example:
#[test]
#[ignore = "not yet implemented"]
fn test_ignored() {
...
}
Will now render as:
running 2 tests
test tests::test_ignored ... ignored, not yet implemented
test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
Adopt let else in more places
Continuation of #89933, #91018, #91481, #93046, #93590, #94011.
I have extended my clippy lint to also recognize tuple passing and match statements. The diff caused by fixing it is way above 1 thousand lines. Thus, I split it up into multiple pull requests to make reviewing easier. This is the biggest of these PRs and handles the changes outside of rustdoc, rustc_typeck, rustc_const_eval, rustc_trait_selection, which were handled in PRs #94139, #94142, #94143, #94144.
expand: Pick `cfg`s and `cfg_attrs` one by one, like other attributes
This is a rebase of https://github.com/rust-lang/rust/pull/83354, but without any language-changing parts ~(except for https://github.com/rust-lang/rust/pull/84110)~, i.e. the attribute expansion order is the same.
This is a pre-requisite for any other changes making cfg attributes closer to regular macro attributes
- Possibly changing their expansion order (https://github.com/rust-lang/rust/issues/83331)
- Keeping macro backtraces for cfg attributes, or otherwise making them visible after expansion without keeping them in place literally (https://github.com/rust-lang/rust/pull/84110).
Two exceptions to the "one by one" behavior are:
- cfgs eagerly expanded by `derive` and `cfg_eval`, they are still expanded in a batch, that's by design.
- cfgs at the crate root, they are currently expanded not during the main expansion pass, but before that, during `#![feature]` collection. I'll try to disentangle that logic later in a separate PR.
r? `@Aaron1011`
Replace usages of vec![].into_iter with [].into_iter
`[].into_iter` is idiomatic over `vec![].into_iter` because it's simpler and faster (unless the vec is optimized away, in which case it would be the same)
So we should change all the implementation, documentation and tests to use it.
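The substitution in question, illustratively:
```rust
fn main() {
    // Before: allocates a Vec just to iterate it.
    let a: Vec<i32> = vec![1, 2, 3].into_iter().map(|x| x * 2).collect();
    // After: iterate the array by value directly.
    let b: Vec<i32> = [1, 2, 3].into_iter().map(|x| x * 2).collect();
    assert_eq!(a, b);
}
```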
I skipped:
* `src/tools` - Those are copied in from upstream
* `src/test/ui` - Hard to tell if `vec![].into_iter` was used intentionally or not here and not much benefit to changing it.
* any case where `vec![].into_iter` was used because we specifically needed a `Vec::IntoIter<T>`
* any case where it looked like we were intentionally using `vec![].into_iter` to test it.
ast: Avoid aborts on fatal errors thrown from mutable AST visitor
Set the node to some dummy value and rethrow the error instead.
When using the old aborting `visit_clobber` in `InvocationCollector::visit_crate` the next tests abort due to fatal errors:
```
ui\modules\path-invalid-form.rs
ui\modules\path-macro.rs
ui\modules\path-no-file-name.rs
ui\parser\issues\issue-5806.rs
ui\parser\mod_file_with_path_attr.rs
```
Follow up to https://github.com/rust-lang/rust/pull/91313.
Remove `SymbolStr`
This was originally proposed in https://github.com/rust-lang/rust/pull/74554#discussion_r466203544. As well as removing the icky `SymbolStr` type, it allows the removal of a lot of `&` and `*` occurrences.
Best reviewed one commit at a time.
r? `@oli-obk`
By changing `as_str()` to take `&self` instead of `self`, we can just
return `&str`. We're still lying about lifetimes, but it's a smaller lie
than before, where `SymbolStr` contained a (fake) `&'static str`!
This feature is aimed at giving proc macros access to powers similar to
those used by builtin macros such as `format_args!` or `concat!`. These
macros are able to accept macros in place of string literal parameters,
such as the format string, as they perform recursive macro expansion
while being expanded.
This can be especially useful in many cases thanks to helper macros like
`concat!`, `stringify!` and `include_str!` which are often used to
construct string literals at compile-time in user code.
For now, this method only allows expanding macros which produce
literals, although more expressions will be supported before the method
is stabilized.
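For comparison, the existing builtin behavior this mirrors (runnable on stable): the format string itself can be produced by another macro.
```rust
fn main() {
    // `format!` accepts a macro call in place of its format-string literal,
    // because built-in macros eagerly expand such arguments.
    let s = format!(concat!("x = ", "{}"), 42);
    assert_eq!(s, "x = 42");
}
```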
The only reason to use `abort_if_errors` is when the program is so broken that either:
1. later passes get confused and ICE
2. any diagnostics from later passes would be noise
This is never the case for lints, because the compiler has to be able to deal with `allow`-ed lints.
So it can continue to lint and compile even if there are lint errors.
rustc_ast: Turn `MutVisitor::token_visiting_enabled` into a constant
It's a visitor property rather than something that needs to be determined at runtime
Adopt let_else across the compiler
This performs a substitution of code following the pattern:
```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```
To simplify it to:
```
let <pat> = ... { identity } else { ... : ! };
```
By adopting the `let_else` feature (cc #87335).
The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.
Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.
This performs a substitution of code following the pattern:
let <id> = if let <pat> = ... { identity } else { ... : ! };
To simplify it to:
let <pat> = ... { identity } else { ... : ! };
By adopting the let_else feature.
Fix linting when trailing macro expands to a trailing semi
When a macro is used in the trailing expression position of a block
(e.g. `fn foo() { my_macro!() }`), we currently parse it as an
expression, rather than a statement. As a result, we ended up
using the `NodeId` of the containing statement as our `lint_node_id`,
even though we don't normally do this for macro calls.
If such a macro expands to an expression with a `#[cfg]` attribute,
then the trailing statement can get removed entirely. This led to
an ICE, since we were using the `NodeId` of the expression to emit
a lint.
This commit makes us skip updating `lint_node_id` when handling
a macro in trailing expression position. This will cause us to
lint at the closest parent of the macro call.
When a macro is used in the trailing expression position of a block
(e.g. `fn foo() { my_macro!() }`), we currently parse it as an
expression, rather than a statement. As a result, we ended up
using the `NodeId` of the containing statement as our `lint_node_id`,
even though we don't normally do this for macro calls.
If such a macro expands to an expression with a `#[cfg]` attribute,
then the trailing statement can get removed entirely. This led to
an ICE, since we were using the `NodeId` of the expression to emit
a lint.
This commit makes us skip updating `lint_node_id` when handling
a macro in trailing expression position. This will cause us to
lint at the closest parent of the macro call.
Encode spans relative to the enclosing item
The aim of this PR is to avoid recomputing queries when code is moved without modification.
MCP at https://github.com/rust-lang/compiler-team/issues/443
This is achieved by :
1. storing the HIR owner LocalDefId information inside the span;
2. encoding and decoding spans relative to the enclosing item in the incremental on-disk cache;
3. marking a dependency on the `source_span(LocalDefId)` query when we translate a span from the short (`Span`) representation to its explicit (`SpanData`) representation.
Since all client code uses `Span`, step 3 ensures that all manipulations
of span byte positions actually create the dependency edge between
the caller and the `source_span(LocalDefId)`.
This query returns the actual absolute span of the parent item.
As a consequence, any source code motion that changes the absolute byte position of a node will either:
- modify the distance to the parent's beginning, so change the relative span's hash;
- dirty `source_span`, and trigger the incremental recomputation of all code that
depends on the span's absolute byte position.
With this scheme, I believe the dependency tracking to be accurate.
For the moment, the spans are marked during lowering.
I'd rather do this during def-collection,
but the AST MutVisitor is not practical enough just yet.
The only difference is that we attach macro-expanded spans
to their expansion point instead of the macro itself.
Add proc_macro::Span::{before, after}.
This adds `proc_macro::Span::before()` and `proc_macro::Span::after()` to get a zero width span at the start or end of the span.
These are equivalent to rustc's `Span::shrink_to_lo()` and `Span::shrink_to_hi()` but with a less cryptic name. They are useful when generating diagnostics like "missing \<thing\> after \<thing\>".
E.g.
```rust
syn::Error::new(ident.span().after(), "missing `:` after field name").into_compile_error()
```
Detect bare blocks with type ascription that were meant to be a `struct` literal
Address part of #34255.
Potential improvement: silence the other knock down errors in `issue-34255-1.rs`.
expand: Treat more macro calls as statement macro calls
This PR implements the suggestion from https://github.com/rust-lang/rust/pull/87981#issuecomment-906641052 and treats fn-like macro calls inside `StmtKind::Item` and `StmtKind::Semi` as statement macro calls, which is consistent with treatment of attribute invocations in the same positions and with token-based macro expansion model in general.
This also allows us to remove a special case in `NodeId` assignment (previously tried in #87779), and to use statement `NodeId`s for linting (`assign_id!`).
r? `@Aaron1011`
Path remapping: Make behavior of diagnostics output dependent on presence of --remap-path-prefix.
This PR fixes a regression (#87745) with `--remap-path-prefix` where the flag stopped causing diagnostic messages to be remapped as well. The regression was introduced in https://github.com/rust-lang/rust/pull/83813 where we erroneously assumed that remapping of diagnostic messages was not desired anymore (because #70642 partially undid that functionality with nobody objecting).
The issue is fixed by making `--remap-path-prefix` remap diagnostic messages again, including for paths that have been remapped in upstream crates (e.g. the standard library). This means that "sysroot-localization" (implemented in #70642) is also disabled if `rustc` is invoked with `--remap-path-prefix`. The assumption is that once someone starts explicitly remapping paths they also don't want paths to their local Rust installation in their build output.
In the future we might want to give more fine-grained control over this behavior via compiler flags (see https://github.com/rust-lang/rfcs/pull/3127 for a related RFC). For now this PR is intended as a regression fix.
This PR is an alternative to https://github.com/rust-lang/rust/pull/88191, which makes diagnostic messages be remapped unconditionally. That approach, however, would effectively revert #70642.
Fixes https://github.com/rust-lang/rust/issues/87745.
cc `@cbeuw`
r? `@ghost`
Instead of updating global state to mark attributes as used,
we now explicitly emit a warning when an attribute is used in
an unsupported position. As a side effect, we are able to emit more
detailed warning messages (instead of just a generic "unused" message).
`Session.check_name` is removed, since its only purpose was to mark
the attribute as used. All of the callers are modified to use
`Attribute.has_name`.
Additionally, `AttributeType::AssumedUsed` is removed - an 'assumed
used' attribute is implemented by simply not performing any checks
in `CheckAttrVisitor` for a particular attribute.
We no longer emit unused attribute warnings for the `#[rustc_dummy]`
attribute - it's an internal attribute used for tests, so it doesn't
make sense to treat it as 'unused'.
With this commit, a large source of global untracked state is removed.
Fixes #87877
This change interacts badly with `noop_flat_map_stmt`,
which synthesizes multiple statements with the same `NodeId`.
I'm working on a better fix that will still allow us to
remove this special case. For now, let's revert the change
to fix the ICE.
This reverts commit a4262cc984, reversing
changes made to 8ee962f88e.
Support negative numbers in Literal::from_str
proc_macro::Literal has allowed negative numbers in a single literal token ever since Rust 1.29, using https://doc.rust-lang.org/stable/proc_macro/struct.Literal.html#method.isize_unsuffixed and similar constructors.
```rust
let lit = proc_macro::Literal::isize_unsuffixed(-10);
```
However, the suite of constructors on Literal is not sufficient for all use cases, for example arbitrary precision floats, or custom suffixes in FFI macros.
```rust
let lit = proc_macro::Literal::f64_unsuffixed(0.101001000100001000001000000100000001); // :(
let lit = proc_macro::Literal::i???_suffixed(10ulong); // :(
```
For those, macros construct the literal using from_str instead, which preserves arbitrary precision, custom suffixes, base, and digit grouping.
```rust
let lit = "0.101001000100001000001000000100000001".parse::<Literal>().unwrap();
let lit = "10ulong".parse::<Literal>().unwrap();
let lit = "0b1000_0100_0010_0001".parse::<Literal>().unwrap();
```
However, until this PR it was not possible to construct a literal token that is **both** negative **and** preserving of arbitrary precision etc.
This PR fixes `Literal::from_str` to recognize negative integer and float literals.
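For example, a sketch of what now parses (like the examples above, this is only usable inside a proc-macro crate):
```rust
let lit = "-10ulong".parse::<Literal>().unwrap();
let lit = "-0.101001000100001000001000000100000001".parse::<Literal>().unwrap();
```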
Display an extra note for trailing semicolon lint with trailing macro
Currently, we parse macros at the end of a block
(e.g. `fn foo() { my_macro!() }`) as expressions, rather than
statements. This means that a macro invoked in this position
cannot expand to items or semicolon-terminated expressions.
In the future, we might want to start parsing these kinds of macros
as statements. This would make expansion more 'token-based'
(i.e. macro expansion behaves (almost) as if you just textually
replaced the macro invocation with its output). However,
this is a breaking change (see PR #78991), so it will require
further discussion.
Since the current behavior will not be changing any time soon,
we need to address the interaction with the
`SEMICOLON_IN_EXPRESSIONS_FROM_MACROS` lint. Since we are parsing
the result of macro expansion as an expression, we will emit a lint
if there's a trailing semicolon in the macro output. However, this
results in a somewhat confusing message for users, since it visually
looks like there should be no problem with having a semicolon
at the end of a block
(e.g. `fn foo() { my_macro!() }` => `fn foo() { produced_expr; }`)
To help reduce confusion, this commit adds a note explaining
that the macro is being interpreted as an expression. Additionally,
we suggest adding a semicolon after the macro *invocation* - this
will cause us to parse the macro call as a statement. We do *not*
use a structured suggestion for this, since the user may actually
want to remove the semicolon from the macro definition (allowing
the block to evaluate to the expression produced by the macro).
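A minimal sketch of the shape in question (the macro is made up for illustration):
```rust
macro_rules! log_it {
    () => {
        println!("hello"); // note the trailing semicolon inside the macro body
    };
}

fn foo() {
    // Parsed as the block's trailing *expression*, so the semicolon produced
    // by the expansion triggers the lint; writing `log_it!();` here instead
    // parses the call as a statement.
    log_it!()
}
```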
Currently, we parse macros at the end of a block
(e.g. `fn foo() { my_macro!() }`) as expressions, rather than
statements. This means that a macro invoked in this position
cannot expand to items or semicolon-terminated expressions.
In the future, we might want to start parsing these kinds of macros
as statements. This would make expansion more 'token-based'
(i.e. macro expansion behaves (almost) as if you just textually
replaced the macro invocation with its output). However,
this is a breaking change (see PR #78991), so it will require
further discussion.
Since the current behavior will not be changing any time soon,
we need to address the interaction with the
`SEMICOLON_IN_EXPRESSIONS_FROM_MACROS` lint. Since we are parsing
the result of macro expansion as an expression, we will emit a lint
if there's a trailing semicolon in the macro output. However, this
results in a somewhat confusing message for users, since it visually
looks like there should be no problem with having a semicolon
at the end of a block
(e.g. `fn foo() { my_macro!() }` => `fn foo() { produced_expr; }`)
To help reduce confusion, this commit adds a note explaining
that the macro is being interpreted as an expression. Additionally,
we suggest adding a semicolon after the macro *invocation* - this
will cause us to parse the macro call as a statement. We do *not*
use a structured suggestion for this, since the user may actually
want to remove the semicolon from the macro definition (allowing
the block to evaluate to the expression produced by the macro).
These attributes are currently discarded.
This may change in the future (see #63221), but for now,
placing inert attributes on a macro invocation does nothing,
so we should warn users about it.
Technically, it's possible for there to be an attribute macro
on the same macro invocation (or at a higher scope), which
inspects the inert attribute. For example:
```rust
#[look_for_inline_attr]
#[inline]
my_macro!()
#[look_for_nested_inline]
mod foo { #[inline] my_macro!() }
```
However, this would be a very strange thing to do.
Anyone running into this can manually suppress the warning.
When we need to emit a lint at a macro invocation, we currently use the
`NodeId` of its parent definition (e.g. the enclosing function). This
means that any `#[allow]` / `#[deny]` attributes placed 'closer' to the
macro (e.g. on an enclosing block or statement) will have no effect.
This commit computes a better `lint_node_id` in `InvocationCollector`.
When we visit/flat_map an AST node, we assign it a `NodeId` (earlier
than we normally would), and store that `NodeId` in the current
`ExpansionData`. When we collect a macro invocation, the current
`lint_node_id` gets cloned along with our `ExpansionData`, allowing it
to be used if we need to emit a lint later on.
This improves the handling of `#[allow]` / `#[deny]` for
`SEMICOLON_IN_EXPRESSIONS_FROM_MACROS` and some `asm!`-related lints.
The 'legacy derive helpers' lint retains its current behavior
(I've inlined the now-removed `lint_node_id` function), since
there isn't an `ExpansionData` readily available.
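As an illustration of the improved `#[allow]` handling (the macro is made up; previously only a more distant allow, e.g. on the enclosing function or crate, took effect):
```rust
macro_rules! trailing_semi {
    () => { true; };
}

fn f() -> bool {
    // The allow placed on this statement is now honored for the lint emitted
    // at the macro call site.
    #[allow(semicolon_in_expressions_from_macros)]
    let value = trailing_semi!();
    value
}
```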
expand: Support helper attributes for built-in derive macros
This is needed for https://github.com/rust-lang/rust/pull/86735 (derive macro `Default` should have a helper attribute `default`).
With this PR we can specify helper attributes for built-in derives using syntax `#[rustc_builtin_macro(MacroName, attributes(attr1, attr2, ...))]` which mirrors equivalent syntax for proc macros `#[proc_macro_derive(MacroName, attributes(attr1, attr2, ...))]`.
Otherwise expansion infra was already ready for this.
The attribute parsing code is shared between proc macro derives and built-in macros (`fn parse_macro_name_and_helper_attrs`).
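The motivating case from the linked PR, shown with the now-stable syntax for illustration:
```rust
// The built-in `Default` derive gains a `default` helper attribute for
// choosing the default enum variant.
#[derive(Default)]
enum Fallback {
    #[default]
    Unset,
    Set(u32),
}
```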
Rollup of 6 pull requests
Successful merges:
- #87085 (Search result colors)
- #87090 (Make BTreeSet::split_off name elements like other set methods do)
- #87098 (Unignore some pretty printing tests)
- #87099 (Upgrade `cc` crate to 1.0.69)
- #87101 (Suggest a path separator if a stray colon is found in a match arm)
- #87102 (Add GUI test for "go to first" feature)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
- The `Rustc::expn_id` field kept redundant information
- `SyntaxContext` is no longer thrown away before `save_proc_macro_span` because it's thrown away during metadata encoding anyway
Currently, we only point at the span of the macro argument. When the
macro call is itself generated by another macro, this can make it
difficult or impossible to determine which macro is responsible for
producing the error.
Remove unused feature gates
The first commit removes a usage of a feature gate, but I don't expect it to be controversial as the feature gate was only used to workaround a limitation of rust in the past. (closures never being `Clone`)
The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.
The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
As described in issue #85708, we currently do not properly decode
`SyntaxContext::root()` and `ExpnId::root()` from foreign crates. As a
result, when we decode a span from a foreign crate with
`SyntaxContext::root()`, we end up considering it to have the edition
of the *current* crate, instead of the foreign crate where it was
originally created.
A full fix for this issue will be a fairly significant undertaking.
Fortunately, it's possible to implement a partial fix, which gives us
the correct edition-dependent behavior for `:pat` matchers when the
macro is loaded from another crate. Since we have the edition of the
macro's defining crate available, we can 'recover' from seeing a
`SyntaxContext::root()` and use the edition of the macro's defining
crate.
Any solution to issue #85708 must reproduce the behavior of this
targeted fix - properly preserving a foreign `SyntaxContext::root()`
means (among other things) preserving its edition, which by definition
is the edition of the foreign crate itself. Therefore, this fix moves us
closer to the correct overall solution, and does not expose any new
incorrect behavior to macros.
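For illustration, the edition-dependent `:pat` behavior whose cross-crate handling this fixes (on edition 2021 `$p:pat` matches top-level or-patterns; on 2015/2018 it does not):
```rust
macro_rules! matches_one_or_two {
    ($p:pat) => {
        match 2 { $p => true, _ => false }
    };
}

fn main() {
    // Accepted when the macro's defining crate uses edition 2021.
    assert!(matches_one_or_two!(1 | 2));
}
```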
Fix `--remap-path-prefix` not correctly remapping `rust-src` component paths and unify handling of path mapping with virtualized paths
This PR fixes #73167 ("Binaries end up containing path to the rust-src component despite `--remap-path-prefix`") by preventing real local filesystem paths from reaching compilation output if the path is supposed to be remapped.
`RealFileName::Named` introduced in #72767 is now renamed as `LocalPath`, because this variant wraps a (most likely) valid local filesystem path.
`RealFileName::Devirtualized` is renamed as `Remapped` to be used for a path remapped from a real path via the `--remap-path-prefix` argument, as well as for a real path inferred from a virtualized (during compiler bootstrapping) `/rustc/...` path. The `local_path` field is now an `Option<PathBuf>`, as it will be set to `None` before serialisation, so it never reaches any build output. Attempting to serialise a non-`None` `local_path` will cause an assertion failure.
When a path is remapped, a `RealFileName::Remapped` variant is created. The original path is preserved in the `local_path` field and the remapped path is saved in the `virtual_name` field. Previously, `local_path` was directly modified, which goes against its purpose of being "suitable for reading from the file system on the local host".
`rustc_span::SourceFile`'s fields `unmapped_path` (introduced by #44940) and `name_was_remapped` (introduced by #41508 when `--remap-path-prefix` feature originally added) are removed, as these two pieces of information can be inferred from the `name` field: if it's anything other than a `FileName::Real(_)`, or if it is a `FileName::Real(RealFileName::LocalPath(_))`, then clearly `name_was_remapped` would've been false and `unmapped_path` would've been `None`. If it is a `FileName::Real(RealFileName::Remapped{local_path, virtual_name})`, then `name_was_remapped` would've been true and `unmapped_path` would've been `Some(local_path)`.
cc `@eddyb` who implemented `/rustc/...` path devirtualisation
This PR implements span quoting, allowing proc-macros to produce spans
pointing *into their own crate*. This is used by the unstable
`proc_macro::quote!` macro, allowing us to get error messages like this:
```
error[E0412]: cannot find type `MissingType` in this scope
--> $DIR/auxiliary/span-from-proc-macro.rs:37:20
|
LL | pub fn error_from_attribute(_args: TokenStream, _input: TokenStream) -> TokenStream {
| ----------------------------------------------------------------------------------- in this expansion of procedural macro `#[error_from_attribute]`
...
LL | field: MissingType
| ^^^^^^^^^^^ not found in this scope
|
::: $DIR/span-from-proc-macro.rs:8:1
|
LL | #[error_from_attribute]
| ----------------------- in this macro invocation
```
Here, `MissingType` occurs inside the implementation of the proc-macro
`#[error_from_attribute]`. Previously, this would always result in a
span pointing at `#[error_from_attribute]`.
This will make many proc-macro-related error messages much more useful -
when a proc-macro generates code containing an error, users will get an
error message pointing directly at that code (within the macro
definition), instead of always getting a span pointing at the macro
invocation site.
This is implemented as follows:
* When a proc-macro crate is being *compiled*, it causes the `quote!`
macro to get run. This saves all of the spans in the input to `quote!`
into the metadata of *the proc-macro-crate* (which we are currently
compiling). The `quote!` macro then expands to a call to
`proc_macro::Span::recover_proc_macro_span(id)`, where `id` is an
opaque identifier for the span in the crate metadata.
* When the same proc-macro crate is *run* (e.g. it is loaded from disk
and invoked by some consumer crate), the call to
`proc_macro::Span::recover_proc_macro_span` causes us to load the span
from the proc-macro crate's metadata. The proc-macro then produces a
`TokenStream` containing a `Span` pointing into the proc-macro crate
itself.
The recursive nature of 'quote!' can be difficult to understand at
first. The file `src/test/ui/proc-macro/quote-debug.stdout` shows
the output of the `quote!` macro, which should make this easier to
understand.
This PR also supports custom quoting spans in custom quote macros (e.g.
the `quote` crate). All span quoting goes through the
`proc_macro::quote_span` method, which can be called by a custom quote
macro to perform span quoting. An example of this usage is provided in
`src/test/ui/proc-macro/auxiliary/custom-quote.rs`
Custom quoting currently has a few limitations:
In order to quote a span, we need to generate a call to
`proc_macro::Span::recover_proc_macro_span`. However, proc-macros
support renaming the `proc_macro` crate, so we can't simply hardcode
this path. Previously, the `quote_span` method used the path
`crate::Span` - however, this only works when it is called by the
builtin `quote!` macro in the same crate. To support being called from
arbitrary crates, we need access to the name of the `proc_macro` crate
to generate a path. This PR adds an additional argument to `quote_span`
to specify the name of the `proc_macro` crate. However, this feels kind
of hacky, and we may want to change this before stabilizing anything
quote-related.
Additionally, using `quote_span` currently requires enabling the
`proc_macro_internals` feature. The builtin `quote!` macro
has an `#[allow_internal_unstable]` attribute, but this won't work for
custom quote implementations. This will likely require some additional
tricks to apply `allow_internal_unstable` to the span of
`proc_macro::Span::recover_proc_macro_span`.
Unify rustc and rustdoc parsing of `cfg()`
This extracts a new `parse_cfg` function that's used between both.
- Treat `#[doc(cfg(x), cfg(y))]` the same as `#[doc(cfg(x))]
#[doc(cfg(y))]`. Previously it would be completely ignored.
- Treat `#[doc(inline, cfg(x))]` the same as `#[doc(inline)]
#[doc(cfg(x))]`. Previously, the cfg would be ignored.
- Pass the cfg predicate through to rustc_expand to be validated
Technically this is a breaking change, but doc_cfg is still nightly so I don't think it matters.
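For illustration, the first bullet means these two forms are now treated the same (nightly-only, requires the `doc_cfg` feature; the cfg predicates are arbitrary examples):
```rust
#![feature(doc_cfg)]

#[doc(cfg(unix), cfg(feature = "extra"))]
pub struct Combined;

#[doc(cfg(unix))]
#[doc(cfg(feature = "extra"))]
pub struct Separate;
```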
Fixes https://github.com/rust-lang/rust/issues/84437.
r? `@petrochenkov`
This extracts a new `parse_cfg` function that's used between both.
- Treat `#[doc(cfg(x), cfg(y))]` the same as `#[doc(cfg(x))]
#[doc(cfg(y))]`. Previously it would be completely ignored.
- Treat `#[doc(inline, cfg(x))]` the same as `#[doc(inline)]
#[doc(cfg(x))]`. Previously, the cfg would be ignored.
- Pass the cfg predicate through to rustc_expand to be validated
Co-authored-by: Vadim Petrochenkov <vadim.petrochenkov@gmail.com>
Implement RFC 1260 with feature_name `imported_main`.
This is the second extraction part of #84062 plus additional adjustments.
This (mostly) implements RFC 1260.
However there's still one test case failure in the extern crate case. Maybe `LocalDefId` doesn't work here? I'm not sure.
cc https://github.com/rust-lang/rust/issues/28937
r? `@petrochenkov`
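A minimal sketch of what the feature enables (nightly, behind `imported_main`): the crate entry point may be brought in with a `use` instead of being defined locally.
```rust
#![feature(imported_main)]

mod runtime {
    pub fn main() {
        println!("entry point imported from another module");
    }
}

use runtime::main;
```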
This PR modifies the macro expansion infrastructure to handle attributes
in a fully token-based manner. As a result:
* Derive macros no longer lose spans when their input is modified
by eager cfg-expansion. This is accomplished by performing eager
cfg-expansion on the token stream that we pass to the derive
proc-macro
* Inner attributes now preserve spans in all cases, including when we
have multiple inner attributes in a row.
This is accomplished through the following changes:
* New structs `AttrAnnotatedTokenStream` and `AttrAnnotatedTokenTree` are introduced.
These are very similar to a normal `TokenTree`, but they also track
the position of attributes and attribute targets within the stream.
They are built when we collect tokens during parsing.
An `AttrAnnotatedTokenStream` is converted to a regular `TokenStream` when
we invoke a macro.
* Token capturing and `LazyTokenStream` are modified to work with
`AttrAnnotatedTokenStream`. A new `ReplaceRange` type is introduced, which
is created during the parsing of a nested AST node to make the 'outer'
AST node aware of the attributes and attribute target stored deeper in the token stream.
* When we need to perform eager cfg-expansion (either due to `#[derive]` or `#[cfg_eval]`),
we tokenize and reparse our target, capturing additional information about the locations of
`#[cfg]` and `#[cfg_attr]` attributes at any depth within the target.
This is a performance optimization, allowing us to perform less work
in the typical case where captured tokens never have eager cfg-expansion run.
expand: Do not ICE when a legacy AST-based macro attribute produces an empty expression
Fixes https://github.com/rust-lang/rust/issues/80251
The reported error is the same as for `let _ = #[cfg(FALSE)] EXPR;`
Found with https://github.com/est31/warnalyzer.
Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
wants to use it in the future?
- Don't change rustc_serialize
I plan to scrap most of the json module in the near future (see
https://github.com/rust-lang/compiler-team/issues/418) and fixing the
tests needed more work than I expected.
TODO: check if any of the comments on the deleted code should be kept.
Add function core::iter::zip
This makes it a little easier to `zip` iterators:
```rust
for (x, y) in zip(xs, ys) {}
// vs.
for (x, y) in xs.into_iter().zip(ys) {}
```
You can `zip(&mut xs, &ys)` for the conventional `iter_mut()` and
`iter()`, respectively. This can also support arbitrary nesting, where
it's easier to see the item layout than with arbitrary `zip` chains:
```rust
for ((x, y), z) in zip(zip(xs, ys), zs) {}
for (x, (y, z)) in zip(xs, zip(ys, zs)) {}
// vs.
for ((x, y), z) in xs.into_iter().zip(ys).zip(zs) {}
for (x, (y, z)) in xs.into_iter().zip(ys.into_iter().zip(zs)) {}
```
It may also format more nicely, especially when the first iterator is a
longer chain of methods -- for example:
```rust
iter::zip(
trait_ref.substs.types().skip(1),
impl_trait_ref.substs.types().skip(1),
)
// vs.
trait_ref
.substs
.types()
.skip(1)
.zip(impl_trait_ref.substs.types().skip(1))
```
This replaces the tuple-pair `IntoIterator` in #78204.
There is prior art for the utility of this in [`itertools::zip`].
[`itertools::zip`]: https://docs.rs/itertools/0.10.0/itertools/fn.zip.html
With this PR, we now lint for all cases where we perform some kind of
proc-macro back-compat hack.
The `js-sys` crate had an internal fix made to properly handle
`None`-delimited groups, so we need to manually check the version in the
filename. As a result, we no longer apply the back-compat hack to cases
where the version number is missing from the file path. This should not
affect any users of the crate from `crates.io`.
Unlike the other cases of this lint, there's no simple way to detect if
an old version of the relevant crate (`syn`) is in use. The `actix-web`
crate only depends on `pin-project` v1.0.0, so checking the version of
`actix-web` does not guarantee that a new enough version of
`pin-project` (and therefore `syn`) is in use.
Instead, we rely on the fact that virtually all of the regressed crates
are pinned to a pre-1.0 version of `pin-project`. When this is the case,
bumping the `actix-web` dependency will pull in the *latest* version of
`pin-project`, which has an explicit dependency on a newer version of
`syn`.
The lint message tells users to update `actix-web`, since that's what
they're most likely to have control over. We could potentially tell them
to run `cargo update -p syn`, but I think it's more straightforward to
suggest an explicit change to the `Cargo.toml`
The `actori-web` fork had its last commit over a year ago, and appears
to just be a renamed fork of `actix-web`. Therefore, I've removed the
`actori-web` check entirely - any crates that actually get broken can
simply update `syn` themselves.
Extend `proc_macro_back_compat` lint to `procedural-masquerade`
We now lint on *any* use of `procedural-masquerade` crate. While this
crate still exists, its main reverse dependency (`cssparser`) no longer
depends on it. Any crates still depending on it should stop doing so, as
it only exists to support very old Rust versions.
If a crate actually needs to support old versions of rustc via
`procedural-masquerade`, then they'll just need to accept the warning
until we remove it entirely (at the same time as the back-compat hack).
The latest version of `procedural-masquerade` does work with the
latest rustc, but trying to check for the version seems like more
trouble than it's worth.
While working on this, I realized that the `proc-macro-hack` check was
never actually doing anything. The corresponding enum variant in
`proc-macro-hack` is named `Value` or `Nested` - it has never been
called `Input`. Due to a strange Crater issue, the Crater run that
tested adding this did *not* end up testing it - some of the crates that
would have failed did not actually have their tests checked, making it
seem as though the `proc-macro-hack` check was working.
The Crater issue is being discussed at
https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/Nearly.20identical.20Crater.20runs.20processed.20a.20crate.20differently/near/230406661
Despite the `proc-macro-hack` check not actually doing anything, we
haven't gotten any reports from users about their build being broken.
I went ahead and removed it entirely, since it's clear that no one is
being affected by the `proc-macro-hack` regression in practice.
StructField -> FieldDef ("field definition")
Field -> ExprField ("expression field", not "field expression")
FieldPat -> PatField ("pattern field", not "field pattern")
Also rename visiting and other methods working on them.
We now lint on *any* use of `procedural-masquerade` crate. While this
crate still exists, its main reverse dependency (`cssparser`) no longer
depends on it. Any crates still depending on it should stop doing so, as
it only exists to support very old Rust versions.
If a crate actually needs to support old versions of rustc via
`procedural-masquerade`, then they'll just need to accept the warning
until we remove it entirely (at the same time as the back-compat hack).
The latest version of `procedural-masquerade` does not work with the
latest rustc, but trying to check for the version seems like more
trouble than it's worth.
While working on this, I realized that the `proc-macro-hack` check was
never actually doing anything. The corresponding enum variant in
`proc-macro-hack` is named `Value` or `Nested` - it has never been
called `Input`. Due to a strange Crater issue, the Crater run that
tested adding this did *not* end up testing it - some of the crates that
would have failed did not actually have their tests checked, making it
seem as though the `proc-macro-hack` check was working.
The Crater issue is being discussed at
https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/Nearly.20identical.20Crater.20runs.20processed.20a.20crate.20differently/near/230406661
Despite the `proc-macro-hack` check not actually doing anything, we
haven't gotten any reports from users about their build being broken.
I went ahead and removed it entirely, since it's clear that no one is
being affected by the `proc-macro-hack` regression in practice.
Now that future-incompat-report support has landed in nightly Cargo, we
can start to make progress towards removing the various proc-macro
back-compat hacks that have accumulated in the compiler.
This PR introduces a new lint `proc_macro_back_compat`, which results in
a future-incompat-report entry being generated. All proc-macro
back-compat warnings will be grouped under this lint. Note that this
lint will never actually become a hard error - instead, we will remove
the special cases for various macros, which will cause older versions of
those crates to emit some other error.
I've added code to fire this lint for the `time-macros-impl` case. This
is the easiest case out of all of our current back-compat hacks - the
crate was renamed to `time-macros`, so seeing a filename with
`time-macros-impl` guarantees that an older version of the parent `time`
crate is in use.
When Cargo's future-incompat-report feature gets stabilized, affected
users will start to see future-incompat warnings when they build their
crates.
expand: Do not allocate `Lrc` for `allow_internal_unstable` list unless necessary
This allocation is done for any macro defined in the current crate, or used from a different crate.
EDIT: This also removes an `Lrc` increment from each *use* of such macro, which may be more significant.
Noticed when reviewing https://github.com/rust-lang/rust/pull/82367.
This probably doesn't matter, but let's do a perf run.
Implement built-in attribute macro `#[cfg_eval]` + some refactoring
This PR implements a built-in attribute macro `#[cfg_eval]` as it was suggested in https://github.com/rust-lang/rust/pull/79078 to avoid `#[derive()]` without arguments being abused as a way to configure input for other attributes.
The macro is used for eagerly expanding all `#[cfg]` and `#[cfg_attr]` attributes in its input ("fully configuring" the input).
The effect is identical to the effect of `#[derive(Foo, Bar)]`, which also fully configures its input before passing it to macros `Foo` and `Bar`, but unlike `#[derive]`, `#[cfg_eval]` can be applied to any syntax nodes supporting macro attributes, not only certain items.
`cfg_eval` was the first name suggested in https://github.com/rust-lang/rust/pull/79078, but other alternatives are also possible, e.g. `cfg_expand`.
```rust
#[cfg_eval]
#[my_attr] // Receives `struct S {}` as input, the field is configured away by `#[cfg_eval]`
struct S {
#[cfg(FALSE)]
field: u8,
}
```
Tracking issue: https://github.com/rust-lang/rust/issues/82679
expand: Refactor module loading
This is an accompanying PR to https://github.com/rust-lang/rust/pull/82399, but they can be landed independently.
See individual commits for more details.
Anyone should be able to review this equally well because all people actually familiar with this code left the project.
Inherit `#[stable(..)]` annotations in enum variants and fields from its item
Lint changes for #65515. The stdlib will have to be updated once this lands in beta and that version is promoted in master.
This ownership kind is only constructed in the case of path attributes like `#[path = ".."]` without a file name segment, which always represent some kind of directory and will produce an error on an attempt to parse them as a module file.
When token-based attribute handling is implemented in #80689,
we will need to access tokens from `HasAttrs` (to perform
cfg-stripping), and we will need to access attributes from `HasTokens` (to
construct a `PreexpTokenStream`).
This PR merges the `HasAttrs` and `HasTokens` traits into a new
`AstLike` trait. The previous `HasAttrs` impls for `Vec<Attribute>` and `AttrVec`
are removed - they aren't attribute targets, so the impls never really
made sense.
Crate root is sufficiently different from `mod` items, at least at syntactic level.
Also remove customization point for "`mod` item or crate root" from AST visitors.
Along the way, we also implement a handful of diagnostics improvements
and fixes, particularly with respect to the special handling of `||` in
place of `|` and when there are leading verts in function params, which
don't allow top-level or-patterns anyway.
Fixes #81543
After we expand a macro, we try to parse the resulting tokens as an AST
node. This commit makes several improvements to how we handle spans when
an error occurs:
* Only overwrite the original `Span` if it's a dummy span. This preserves
a more-specific span if one is available.
* Use `self.prev_token` instead of `self.token` when emitting an error
message after encountering EOF, since an EOF token always has a dummy
span
* Make `SourceMap::next_point` leave dummy spans unchanged. A dummy span
does not have a logical 'next point', since it's a zero-length span.
Re-using the span preserves its 'dummy-ness' for other checks
cc #79813
This PR adds an allow-by-default future-compatibility lint
`SEMICOLON_IN_EXPRESSIONS_FROM_MACROS`. It fires when a trailing semicolon in a
macro body is ignored due to the macro being used in expression
position:
```rust
macro_rules! foo {
() => {
true; // WARN
}
}
fn main() {
let val = match true {
true => false,
_ => foo!()
};
}
```
The lint takes its level from the macro call site, and
can be allowed for a particular macro by adding
`#[allow(semicolon_in_expressions_from_macros)]`.
The lint is set to warn for all internal rustc crates (when being built
by a stage1 compiler). After the next beta bump, we can enable
the lint for the bootstrap compiler as well.
Make `-Z time-passes` less noisy
- Add the module name to `pre_AST_expansion_passes` and don't make it a
verbose event (since it normally doesn't take very long, and it's
emitted many times)
- Don't make the following rustdoc events verbose; they're emitted many times.
+ build_extern_trait_impl
+ build_local_trait_impl
+ build_primitive_trait_impl
+ get_auto_trait_impls
+ get_blanket_trait_impls
- Remove the `get_auto_trait_and_blanket_synthetic_impls` rustdoc event; it's wholly
covered by get_{auto,blanket}_trait_impls and not very useful.
I found this while working on https://github.com/rust-lang/rust/pull/81275 but it's independent of those changes.
- Add the module name to `pre_AST_expansion_passes` and don't make it a
verbose event (since it normally doesn't take very long, and it's
emitted many times)
- Don't make the following rustdoc events verbose; they're emitted many times.
+ build_extern_trait_impl
+ build_local_trait_impl
+ build_primitive_trait_impl
+ get_auto_trait_impls
+ get_blanket_trait_impls
- Remove `get_auto_trait_and_blanket_synthetic_impls`; it's wholly
covered by get_{auto,blanket}_trait_impls and not very useful.
Fixes #81007
Previously, we would fail to collect tokens in the proper place when
only builtin attributes were present. As a result, we would end up with
attribute tokens in the collected `TokenStream`, leading to duplication
when we attempted to prepend the attributes from the AST node.
We now explicitly track when token collection must be performed due to
nonterminal parsing.
Allow #[rustc_builtin_macro = "name"]
This adds the option of specifying the name of a builtin macro in the `#[rustc_builtin_macro]` attribute: `#[rustc_builtin_macro = "name"]`.
This makes it possible to have both `std::panic!` and `core::panic!` as a builtin macro, by using different builtin macro names for each. This is needed to implement the edition-specific behaviour of the panic macros of RFC 3007.
Also removes `SyntaxExtension::is_derive_copy`, as the macro name (e.g. `sym::Copy`) is now tracked and provides that information directly.
r? ``@petrochenkov``
This makes it possible to have both std::panic and core::panic as a
builtin macro, by using different builtin macro names for each.
Also removes SyntaxExtension::is_derive_copy, as the macro name (e.g.
sym::Copy) is now tracked and provides that information directly.
Fix some clippy lints
Happy to revert these if you think they're less readable, but personally I like them better now (especially the `else { if { ... } }` to `else if { ... }` change).
We now collect tokens for the underlying node wrapped by `StmtKind`
instead of storing tokens directly in `Stmt`.
`LazyTokenStream` now supports capturing a trailing semicolon after it
is initially constructed. This allows us to avoid refactoring statement
parsing to wrap the parsing of the semicolon in `parse_tokens`.
Attributes on item statements
(e.g. `fn foo() { #[bar] struct MyStruct; }`) are now treated as
item attributes, not statement attributes, which is consistent with how
we handle attributes on other kinds of statements. The feature-gating
code is adjusted so that proc-macro attributes are still allowed on item
statements on stable.
Two built-in macros (`#[global_allocator]` and `#[test]`) needed to be
adjusted to support being passed `Annotatable::Stmt`.
Add lint for panic!("{}")
This adds a lint that warns about `panic!("{}")`.
`panic!(msg)` invocations with a single argument use their argument as panic payload literally, without using it as a format string. The same holds for `assert!(expr, msg)`.
This lint checks whether `msg` is a string literal (after expansion) and warns if it contains braces. It suggests inserting `"{}", ` to use the message literally, or adding arguments to use it as a format string.
This lint is also a good starting point for adding warnings about `panic!(not_a_string)` later, once [`panic_any()`](https://github.com/rust-lang/rust/pull/74622) becomes a stable alternative.
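For illustration, what the lint flags and the two suggested fixes:
```rust
fn main() {
    // panic!("error: {}");          // WARN: braces, but the message is used literally
    // panic!("{}", "error: {}");    // fix 1: keep the braces literally
    panic!("error: {}", 42);         // fix 2: treat it as a format string
}
```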
Implement destructuring assignment for structs and slices
This is the second step towards implementing destructuring assignment (RFC: rust-lang/rfcs#2909, tracking issue: #71126). This PR is the second part of #71156, which was split up to allow for easier review.
Note that the first PR (#78748) is not merged yet, so it is included as the first commit in this one. I thought this would allow the review to start earlier because I have some time this weekend to respond to reviews. If ``@petrochenkov`` prefers to wait until the first PR is merged, I totally understand, of course.
This PR implements destructuring assignment for (tuple) structs and slices. In order to do this, the following *parser change* was necessary: struct expressions are not required to have a base expression, i.e. `Struct { a: 1, .. }` becomes legal (in order to act like a struct pattern).
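A short sketch of what this enables (at the time gated behind the `destructuring_assignment` feature; shown here without the gate for brevity):
```rust
struct Point { x: u32, y: u32 }

fn main() {
    let (mut x, mut y) = (0, 0);
    // Struct destructuring on the left-hand side of `=`.
    Point { x, y } = Point { x: 1, y: 2 };
    assert_eq!((x, y), (1, 2));

    let (mut first, mut last) = (0, 0);
    // Slice destructuring with a rest pattern.
    [first, .., last] = [10, 20, 30, 40];
    assert_eq!((first, last), (10, 40));
}
```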
Unfortunately, this PR slightly regresses the diagnostics implemented in #77283. However, it is only a missing help message in `src/test/ui/issues/issue-77218.rs`. Other instances of this diagnostic are not affected. Since I don't exactly understand how this help message works and how to fix it yet, I was hoping it's OK to regress this temporarily and fix it in a follow-up PR.
Thanks to ``@varkor`` who helped with the implementation, particularly around the struct rest changes.
r? ``@petrochenkov``
Do not collect tokens for doc comments
Doc comment is a single token and AST has all the information to re-create it precisely.
Doc comments are also responsible for the majority of calls to `collect_tokens` (with `num_calls == 1` and `num_calls == 0`, cc https://github.com/rust-lang/rust/pull/78736).
(I also moved token collection into `fn parse_attribute` to deduplicate code a bit.)
r? `@Aaron1011`
rustc_ast: Do not panic by default when visiting macro calls
Panicking by default made sense when we didn't have HIR or MIR and everything worked on AST, but now all AST visitors run early and the majority of them have to deal with macro calls, often by ignoring them.
The second commit renames `visit_mac` to `visit_mac_call`, the corresponding structures were renamed earlier in https://github.com/rust-lang/rust/pull/69589.
Improve errors about #[deprecated] attribute
This change:
1. Turns `#[deprecated]` on a trait impl block into an error, which fixes #78625;
2. Changes these and other errors about `#[deprecated]` to use the span of the attribute instead of the item; and
3. Turns this error into a lint, to make sure it can be capped with `--cap-lints` and doesn't break any existing dependencies.
Can be reviewed per commit.
---
Example:
```rust
struct X;
#[deprecated = "a"]
impl Default for X {
#[deprecated = "b"]
fn default() -> Self {
X
}
}
```
Before:
```
error: This deprecation annotation is useless
--> src/main.rs:6:5
|
6 | / fn default() -> Self {
7 | | X
8 | | }
| |_____^
```
After:
```
error: this `#[deprecated]` annotation has no effect
--> src/main.rs:3:1
|
3 | #[deprecated = "a"]
| ^^^^^^^^^^^^^^^^^^^ help: try removing the deprecation attribute
|
= note: `#[deny(useless_deprecated)]` on by default
error: this `#[deprecated]` annotation has no effect
--> src/main.rs:5:5
|
5 | #[deprecated = "b"]
| ^^^^^^^^^^^^^^^^^^^ help: try removing the deprecation attribute
```
Treat trailing semicolon as a statement in macro call
See #61733 (comment)
We now preserve the trailing semicolon in a macro invocation, even if
the macro expands to nothing. As a result, the following code no longer
compiles:
```rust
macro_rules! empty {
() => { }
}
fn foo() -> bool { //~ ERROR mismatched
{ true } //~ ERROR mismatched
empty!();
}
```
Previously, `{ true }` would be considered the trailing expression, even
though there's a semicolon in `empty!();`
This makes macro expansion more token-based.
See https://github.com/rust-lang/rust/issues/61733#issuecomment-716188981
We now preserve the trailing semicolon in a macro invocation, even if
the macro expands to nothing. As a result, the following code no longer
compiles:
```rust
macro_rules! empty {
() => { }
}
fn foo() -> bool { //~ ERROR mismatched
{ true } //~ ERROR mismatched
empty!();
}
```
Previously, `{ true }` would be considered the trailing expression, even
though there's a semicolon in `empty!();`
This makes macro expansion more token-based.
expand: Tweak a comment in implementation of `macro_rules`
The answer to the removed FIXME is that we don't apply the mark to the span `sp` simply because that span is no longer used. We could apply it, but that would just be unnecessary extra work.
The comments in code tell why the span is unused, it's a span of `$var` literally, which is lost for `tt` variables because their tokens are outputted directly, but kept for other variables which are outputted as [groups](https://doc.rust-lang.org/nightly/proc_macro/struct.Group.html) and `sp` is kept as the group's span.
Closes https://github.com/rust-lang/rust/issues/2887
Split out statement attributes changes from #78306
This is the same as PR https://github.com/rust-lang/rust/pull/78306, but `unused_doc_comments` is modified to explicitly ignore statement items (which preserves the current behavior).
This shouldn't have any user-visible effects, so it can be landed without lang team discussion.
---------
When the 'early' and 'late' visitors visit an attribute target, they
activate any lint attributes (e.g. `#[allow]`) that apply to it.
This can affect warnings emitted on sibling attributes. For example,
the following code does not produce an `unused_attributes` warning for
`#[inline]`, since the sibling `#[allow(unused_attributes)]` suppresses
the warning.
```rust
trait Foo {
#[allow(unused_attributes)] #[inline] fn first();
#[inline] #[allow(unused_attributes)] fn second();
}
```
However, we do not do this for statements - instead, the lint attributes
only become active when we visit the struct nested inside `StmtKind`
(e.g. `Item`).
Currently, this is difficult to observe due to another issue - the
`HasAttrs` impl for `StmtKind` ignores attributes for `StmtKind::Item`.
As a result, the `unused_doc_comments` lint will never see attributes on
item statements.
This commit makes two interrelated fixes to the handling of inert
(non-proc-macro) attributes on statements:
* The `HasAttr` impl for `StmtKind` now returns attributes for
`StmtKind::Item`, treating it just like every other `StmtKind`
variant. The only place relying on the old behavior was macro
which has been updated to explicitly ignore attributes on item
statements. This allows the `unused_doc_comments` lint to fire for
item statements.
* The `early` and `late` lint visitors now activate lint attributes when
invoking the callback for `Stmt`. This ensures that a lint
attribute (e.g. `#[allow(unused_doc_comments)]`) can be applied to
sibiling attributes on an item statement.
For now, the `unused_doc_comments` lint is explicitly disabled on item
statements, which preserves the current behavior. The exact locations
where this lint should fire are being discussed in PR #78306
When the 'early' and 'late' visitors visit an attribute target, they
activate any lint attributes (e.g. `#[allow]`) that apply to it.
This can affect warnings emitted on sibling attributes. For example,
the following code does not produce an `unused_attributes` warning for
`#[inline]`, since the sibling `#[allow(unused_attributes)]` suppresses
the warning.
```rust
trait Foo {
#[allow(unused_attributes)] #[inline] fn first();
#[inline] #[allow(unused_attributes)] fn second();
}
```
However, we do not do this for statements - instead, the lint attributes
only become active when we visit the struct nested inside `StmtKind`
(e.g. `Item`).
Currently, this is difficult to observe due to another issue - the
`HasAttrs` impl for `StmtKind` ignores attributes for `StmtKind::Item`.
As a result, the `unused_doc_comments` lint will never see attributes on
item statements.
This commit makes two interrelated fixes to the handling of inert
(non-proc-macro) attributes on statements:
* The `HasAttr` impl for `StmtKind` now returns attributes for
`StmtKind::Item`, treating it just like every other `StmtKind`
variant. The only place relying on the old behavior was macro
which has been updated to explicitly ignore attributes on item
statements. This allows the `unused_doc_comments` lint to fire for
item statements.
* The `early` and `late` lint visitors now activate lint attributes when
invoking the callback for `Stmt`. This ensures that a lint
attribute (e.g. `#[allow(unused_doc_comments)]`) can be applied to
sibiling attributes on an item statement.
For now, the `unused_doc_comments` lint is explicitly disabled on item
statements, which preserves the current behavior. The exact locations
where this lint should fire are being discussed in PR #78306
This allows us to avoid synthesizing tokens in `prepend_attr`, since we
have the original tokens available.
We still need to synthesize tokens when expanding `cfg_attr`,
but this is an unavoidable consequence of the syntax of `cfg_attr` -
the user does not supply the `#` and `[]` tokens that a `cfg_attr`
expands to.
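For illustration (the attribute target is arbitrary):
```rust
// The `#`, `[`, and `]` of the expanded attribute never appear in the source
// the user wrote, so their tokens have to be synthesized.
#[cfg_attr(test, derive(Debug))]
struct S;
// When the `test` cfg is active, this expands to tokens equivalent to:
//     #[derive(Debug)]
//     struct S;
```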
This approach lives exclusively in the parser, so struct expr bodies
that are syntactically correct on their own but are otherwise incorrect
will still emit confusing errors, like in the following case:
```rust
fn foo() -> Foo {
bar: Vec::new()
}
```
```
error[E0425]: cannot find value `bar` in this scope
--> src/file.rs:5:5
|
5 | bar: Vec::new()
| ^^^ expecting a type here because of type ascription
error[E0214]: parenthesized type parameters may only be used with a `Fn` trait
--> src/file.rs:5:15
|
5 | bar: Vec::new()
| ^^^^^ only `Fn` traits may use parentheses
error[E0107]: wrong number of type arguments: expected 1, found 0
--> src/file.rs:5:10
|
5 | bar: Vec::new()
| ^^^^^^^^^^ expected 1 type argument
```
If that field had a trailing comma, that would be a parse error and it
would trigger the new, more targeted error:
```
error: struct literal body without path
--> file.rs:4:17
|
4 | fn foo() -> Foo {
| _________________^
5 | | bar: Vec::new(),
6 | | }
| |_^
|
help: you might have forgotten to add the struct literal inside the block
|
4 | fn foo() -> Foo { Path {
5 | bar: Vec::new(),
6 | } }
|
```
Partially address the last part of #34255.
Detect overflow in proc_macro_server subspan
* Detect overflow in proc_macro_server subspan
* Add tests for overflow in Vec::drain
* Add tests for overflow in String / VecDeque operations using ranges
We currently only attach tokens when parsing a `:stmt` matcher for a
`macro_rules!` macro. Proc-macro attributes on statements are still
unstable, and need additional work.
remove public visibility previously needed for rustfmt
`submod_path_from_attr` in rustc_expand::module was previously public because it was also consumed by rustfmt. However, we've done a bit of refactoring in rustfmt and no longer need to use this function.
This changes the visibility to the parent mod as was originally going to be done before the rustfmt dependency was realized (c189565edc (diff-cd1b379893bae95f7991d5a3f3c6d337R201))
Proc-macro API currently exposes jointness in `Punct` tokens. That is,
`+` in `+one` is **non** joint.
Our lexer produces jointness info for all tokens, so we need to censor
it *somewhere*
Previously we did this in the lexer, but it makes more sense to do this
in the proc-macro server.
Run cfg-stripping on generic parameters before invoking derive macros
Fixes #75930
This changes the tokens seen by a proc-macro. However, using a `#[cfg]` attribute
on a generic parameter is unusual, and combining it with a proc-macro
derive is probably even more unusual. I don't expect this to cause any
breakage.
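For illustration, the kind of input affected (the names are made up; the original issue involved a proc-macro derive, a built-in derive is used here only to keep the sketch self-contained):
```rust
// The `#[cfg]`-stripped generic parameter is now removed before the derive
// macro sees the tokens.
#[derive(Clone)]
struct Wrapper<#[cfg(FALSE)] T> {
    value: u32,
}
```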