rustc_errors: let `DiagnosticBuilder::emit` return a "guarantee of emission".
That is, `DiagnosticBuilder` is now generic over the return type of `.emit()`, so we'll now have:
* `DiagnosticBuilder<ErrorReported>` for error (incl. fatal/bug) diagnostics
  * can only be created via a `const L: Level`-generic constructor that limits the allowed variants via a `where` clause, so not even `rustc_errors` can accidentally bypass this limitation
  * asserts `diagnostic.is_error()` on emission, just in case the construction restriction was bypassed (e.g. by replacing the whole `Diagnostic` inside `DiagnosticBuilder`)
  * `.emit()` returns `ErrorReported`, as a "proof" token that `.emit()` was called (though note that this isn't a real guarantee until after completing the work on #69426)
* `DiagnosticBuilder<()>` for everything else (warnings, notes, etc.)
  * can also be obtained from other `DiagnosticBuilder`s by calling `.forget_guarantee()`
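To make this split concrete, here is a rough caller-side sketch (not compilable outside the compiler; `struct_err`/`struct_warn` stand in for the error/warning constructors, and the exact signatures are assumptions):
```rust
// Sketch only: error-level diagnostics yield a proof token on emission,
// warning-level ones do not.
fn report(handler: &Handler) -> Result<(), ErrorReported> {
    // `struct_err` produces a `DiagnosticBuilder<ErrorReported>`.
    let guarantee: ErrorReported = handler.struct_err("something went wrong").emit();

    // `struct_warn` produces a `DiagnosticBuilder<()>`, so `.emit()` returns `()`.
    handler.struct_warn("this is only a warning").emit();

    Err(guarantee)
}
```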
This PR is a companion to other ongoing work, namely:
* #69426 and its ongoing implementation, #93222
  * the API changes in this PR are needed to get a statically-checked "only errors produce `ErrorReported` from `.emit()`", but this PR doesn't itself provide any really strong guarantees without those other `ErrorReported` changes
* #93244
  * it would make the choices of API changes (esp. naming) in this PR fit better overall
In order to be able to let `.emit()` return anything trustable, several changes had to be made:
* `Diagnostic`'s `level` field is now private to `rustc_errors`, to disallow arbitrary "downgrades" from "some kind of error" to "warning" (or anything else that doesn't cause compilation to fail)
  * it's still possible to replace the whole `Diagnostic` inside the `DiagnosticBuilder`; sadly, that's harder to fix, but it's unlikely enough that we can paper over it with asserts on `.emit()`
* `.cancel()` now consumes `DiagnosticBuilder`, preventing `.emit()` calls on a cancelled diagnostic
  * it's also now done internally, through `DiagnosticBuilder`-private state, instead of having a `Level::Cancelled` variant that can be read (or worse, written) by the user
  * this removes the hazard of calling `.cancel()` on an error, then continuing to attach details to it, and even expecting to be able to `.emit()` it
* warnings were switched to *only* checking `can_emit_warnings` on emission (instead of being pre-cancelled early)
* `struct_dummy` was removed (as it relied on a pre-`Cancelled` `Diagnostic`)
* since `.emit()` doesn't consume the `DiagnosticBuilder` <sub>(I tried and gave up, it's much more work than this PR)</sub>, we have to make `.emit()` idempotent wrt the guarantees it returns
  * thankfully, `err.emit(); err.emit();` can return `ErrorReported` both times, as the second `.emit()` call has no side-effects *only* because the first one did do the appropriate emission
* `&mut Diagnostic` is now used in a lot of function signatures that used to take `&mut DiagnosticBuilder` (in the interest of not having to make those functions generic)
  * the APIs were already mostly identical, allowing for low-effort porting to this new setup
  * only some of the suggestion methods needed some rework, to have the extra `DiagnosticBuilder` functionality on the `Diagnostic` methods themselves (that change is also present in #93259)
  * `.emit()`/`.cancel()` aren't available, but IMO calling them from an "error decorator/annotator" function isn't a good practice, and can lead to strange behavior (from the caller's perspective)
* `.downgrade_to_delayed_bug()` was added, letting you convert any `.is_error()` diagnostic into a `delay_span_bug` one (which works because in both cases the guarantees available are the same)
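To illustrate the last two points, a sketch (method names as mentioned above; not a complete, compilable example) of a decorator-style helper that takes `&mut Diagnostic` while the caller keeps control of emission:
```rust
// The helper can attach details, but has no access to `.emit()`/`.cancel()`.
fn add_context(diag: &mut Diagnostic) {
    diag.note("extra context attached by a helper");
    diag.help("only the caller decides when (and whether) to emit");
}

// The caller still gets an `ErrorReported` either way, since a delayed bug
// keeps the same guarantees as a regular error.
fn caller(handler: &Handler, delay: bool) -> ErrorReported {
    let mut err = handler.struct_err("an error with extra context");
    add_context(&mut err); // deref-coerces `&mut DiagnosticBuilder<_>` to `&mut Diagnostic`
    if delay {
        err.downgrade_to_delayed_bug();
    }
    err.emit()
}
```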
This PR should ideally be reviewed commit-by-commit, since there is a lot of fallout in each.
r? `@estebank` cc `@Manishearth` `@nikomatsakis` `@mark-i-m`
Require const stability attribute on all stable functions that are `const`
This PR requires all stable functions (of all kinds) that are `const fn` to have a `#[rustc_const_stable]` or `#[rustc_const_unstable]` attribute. Const-stability was previously implied if the attribute was omitted; a follow-up PR is planned to change the fallback to be unstable.
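For illustration, a sketch of what this looks like (these attributes are internal to the standard library and require `#![feature(staged_api)]`; the feature names and versions below are made up):
```rust
// A stable function whose const-ness is also stable:
#[stable(feature = "example_api", since = "1.0.0")]
#[rustc_const_stable(feature = "const_example_api", since = "1.60.0")]
pub const fn example() -> u32 {
    42
}

// A stable function whose const-ness is still unstable:
#[stable(feature = "example_api", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_example_api_two", issue = "none")]
pub const fn example_two() -> u32 {
    42
}
```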
Make dead code check a query.
The dead code check currently runs on every invocation of the compiler, even when nothing has changed.
This PR makes the dead code check a query keyed on the module, which allows the check to be skipped for modules that have not changed.
To do this, a query `live_symbols_and_ignored_derived_traits` is introduced to encapsulate the global analysis of finding live symbols. A second query, `check_mod_deathness`, emits diagnostics for each module based on the first query's results.
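The shape of the split, as a self-contained stand-in (this is not rustc code; it only mirrors the idea that the expensive analysis runs once per crate while the diagnostic pass is keyed per module, so it can be skipped when a module is unchanged):
```rust
use std::collections::{HashMap, HashSet};

// Stand-in for `live_symbols_and_ignored_derived_traits`: one crate-wide pass
// computing the items reachable from `main` through a reference graph.
fn live_symbols(refs: &HashMap<&'static str, Vec<&'static str>>) -> HashSet<&'static str> {
    let mut live = HashSet::new();
    let mut stack = vec!["main"];
    while let Some(item) = stack.pop() {
        if live.insert(item) {
            stack.extend(refs.get(item).into_iter().flatten().copied());
        }
    }
    live
}

// Stand-in for `check_mod_deathness`: depends only on one module's items plus
// the global result, so an incremental system can reuse a cached result when
// neither input changed.
fn check_mod_deathness(module_items: &[&'static str], live: &HashSet<&'static str>) {
    for &item in module_items {
        if !live.contains(item) {
            eprintln!("warning: `{item}` is never used");
        }
    }
}
```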
Transition the unsupported naked functions future incompatibility lint into an error:
* Naked functions must contain a single inline assembly block.
  Introduced as a future incompatibility lint in 1.50 (#79653).
  Turning it into an error fixes a soundness issue described in #32489.
* Naked functions must not use any form of the `inline` attribute.
  Introduced as a future incompatibility lint in 1.56 (#87652).
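For reference, a minimal nightly sketch (x86_64-specific; written against the APIs available around this change) of a naked function that satisfies both rules: the body is a single `asm!` block and there is no `#[inline]` attribute.
```rust
#![feature(naked_functions)]
use std::arch::asm;

#[naked]
pub unsafe extern "C" fn return_zero() -> u64 {
    // The entire body is one inline assembly block.
    asm!("xor rax, rax", "ret", options(noreturn));
}
```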
Introduce drop range tracking to generator interior analysis
This PR addresses cases such as this one from #57478:
```rust
#![feature(generators, negative_impls)]
struct Foo;
impl !Send for Foo {}

fn assert_send<T: Send>(_: T) {}

fn main() {
    // `guard` is dropped before the `yield`, so the generator should be `Send`.
    assert_send(|| {
        let guard = Foo;
        drop(guard);
        yield;
    });
}
```
Previously, the `generator_interior` pass would unnecessarily include the type `Foo` in the generator because it was not aware of the behavior of `drop`. We fix this issue by introducing a drop range analysis that finds portions of the code where a value is guaranteed to be dropped. If a value is dropped at all suspend points, then it is no longer included in the generator type. Note that we are using "dropped" in a generic sense to include any case in which a value has been moved. That is, we do not only look at calls to the `drop` function.
There are several phases to the drop tracking algorithm, and we'll go into more detail below.
1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
2. `DropRangeVisitor` uses consume and borrow information to gather drop and reinitialization events, as well as build a control flow graph.
3. We then propagate drop and reinitialization information through the CFG until we reach a fixed point (see `DropRanges::propagate_to_fixpoint`).
4. When recording a type (see `InteriorVisitor::record`), we check the computed drop ranges to see if that value is definitely dropped at the suspend point. If so, we skip including it in the type.
## 1. Use `ExprUseVisitor` to find values that are consumed and borrowed.
We use `ExprUseVisitor` to identify the places where values are consumed. We track both the `hir_id` of the value, and the `hir_id` of the expression that consumes it. For example, in the expression `[Foo]`, the `Foo` is consumed by the array expression, so after the array expression we can consider the `Foo` temporary to be dropped.
In this process, we also collect values that are borrowed. The reason is that the MIR transform for generators conservatively assumes anything borrowed is live across a suspend point (see `rustc_mir_transform::generator::locals_live_across_suspend_points`). We match this behavior here as well.
## 2. Gather drop events, reinitialization events, and control flow graph
After finding the values of interest, we perform a post-order traversal over the HIR tree to find the points where these values are dropped or reinitialized. We use the post-order index of each event because this is how the existing generator interior analysis refers to the position of suspend points and the scopes of variables.
During this traversal, we also record branching and merging information to handle control flow constructs such as `if`, `match`, and `loop`. This is necessary because values may be dropped along some control flow paths but not others.
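For example (illustrative, reusing `Foo` from the snippet above and assuming the same feature gates), a value that is dropped on only one branch must still be considered live across the `yield`:
```rust
fn example(cond: bool) {
    let g = || {
        let guard = Foo;
        if cond {
            drop(guard);
        }
        // `guard` is dropped only along the `if` path, so it is still part of
        // the generator type at this suspend point.
        yield;
    };
    drop(g);
}
```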
## 3. Iterate to fixed point
The previous pass found the interesting events and locations, but now we need to find the actual ranges where things are dropped. Upon entry, we have a list of nodes ordered by their position in the post-order traversal. Each node has a set of successors. For each node, we additionally keep a bitfield with one bit per potentially consumed value. The bit is set if the value is dropped along all paths entering this node.
To compute the drop information, we first reverse the successor edges to find each node's predecessors. Then we iterate through each node, and for each node we set its dropped value bitfield to the intersection of all incoming dropped value bitfields.
If any bitfield for any node changes, we re-run the propagation loop again.
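A minimal, self-contained sketch of this propagation (not the rustc implementation; reinitialization events, which would clear bits, are omitted):
```rust
struct Node {
    preds: Vec<usize>, // predecessor indices (successor edges, reversed)
    drops: u64,        // bits for values dropped by events at this node
    dropped: u64,      // bits for values dropped on all paths reaching this node
}

fn propagate_to_fixpoint(nodes: &mut [Node]) {
    let mut changed = true;
    while changed {
        changed = false;
        for i in 0..nodes.len() {
            // Meet: intersect the "dropped" sets of all predecessors
            // (entry nodes, with no predecessors, start from the empty set).
            let incoming = if nodes[i].preds.is_empty() {
                0
            } else {
                nodes[i].preds.iter().fold(!0u64, |acc, &p| acc & nodes[p].dropped)
            };
            let new = incoming | nodes[i].drops;
            if new != nodes[i].dropped {
                nodes[i].dropped = new;
                changed = true;
            }
        }
    }
}
```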
## 4. Ignore dropped values across suspend points
At this point we have a data structure where we can ask whether a value is guaranteed to be dropped at any post order index for the HIR tree. We use this information in `InteriorVisitor` to check whether a value in question is dropped at a particular suspend point. If it is, we do not include that value's type in the generator type.
Note that we had to augment the region scope tree to include all yields in scope, rather than just the last one as we did before.
r? `@nikomatsakis`
This change adds the basic infrastructure for tracking drop ranges in
generator interior analysis, which allows us to exclude dropped types
from the generator type.
Not yet complete, but many of the async/await and generator tests pass.
The main missing piece is tracking branching control flow (e.g. around
an `if` expression). The patch does include support, however, for
multiple yields in the same block.
Issue #57478
Avoid unnecessary monomorphization of inline asm related functions
This should reduce build time for codegen backends by avoiding duplicated monomorphization of certain inline-asm-related functions for each passed-in closure type.
Add some more attribute validation
This adds some more validation for the position of attributes:
* `link` is only valid on an `extern` block
* `windows_subsystem` and `no_builtins` are only valid at the crate level
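For illustration, a schematic placement example (not a complete crate; `windows_subsystem` additionally only makes sense for executables on Windows targets):
```rust
// Crate-level attributes: only valid at the crate root.
#![windows_subsystem = "console"]
#![no_builtins]

// `#[link]` is only valid on an `extern` block.
#[link(name = "m")]
extern "C" {
    fn sqrt(x: f64) -> f64;
}
```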
Replace `NestedVisitorMap` with generic `NestedFilter`
This is an attempt to make the `intravisit::Visitor` API simpler and more consistent with regard to nested visiting.
With this change, `intravisit::Visitor` does not visit nested things by default, unless you specify `type NestedFilter = nested_filter::OnlyBodies` (or `All`). `nested_visit_map` returns `Self::Map` instead of `NestedVisitorMap<Self::Map>`. It panics by default (unreachable if `type NestedFilter` is omitted).
One somewhat tricky thing here is that `nested_filter::{OnlyBodies, All}` live in `rustc_middle` so that they may have `type Map = map::Map` and so that `impl Visitor`s never need to specify `type Map` - it has a default of `Self::NestedFilter::Map`.
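As a sketch of the resulting usage (assuming a hypothetical `MyVisitor` holding a `TyCtxt`; omitting the `NestedFilter` associated type entirely opts out of nested visits):
```rust
impl<'tcx> intravisit::Visitor<'tcx> for MyVisitor<'tcx> {
    // Visit nested bodies (but not nested items); `All` would visit both.
    type NestedFilter = nested_filter::OnlyBodies;

    fn nested_visit_map(&mut self) -> Self::Map {
        self.tcx.hir()
    }
}
```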
Remove deprecated LLVM-style inline assembly
The `llvm_asm!` macro was deprecated back in #87590 (1.56.0), with the intention to remove it once `asm!` was stabilized, which already happened in #91728 (1.59.0). Now it is time to remove `llvm_asm!` to avoid continued maintenance cost.
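For reference, a minimal before/after migration (`nop` chosen purely for illustration; the old form is shown in comments since it no longer compiles):
```rust
// Before (nightly-only, now removed):
//     #![feature(llvm_asm)]
//     unsafe { llvm_asm!("nop") };

// After, with the stabilized `asm!` macro:
use std::arch::asm;

fn spin_once() {
    unsafe { asm!("nop") };
}
```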
Closes #70173.
Closes #92794.
Closes #87612.
Closes #82065.
cc `@rust-lang/wg-inline-asm`
r? `@Amanieu`
expand: Pick `cfg`s and `cfg_attrs` one by one, like other attributes
This is a rebase of https://github.com/rust-lang/rust/pull/83354, but without any language-changing parts ~(except for https://github.com/rust-lang/rust/pull/84110)~, i.e. the attribute expansion order is the same.
This is a prerequisite for any other changes making cfg attributes closer to regular macro attributes:
- Possibly changing their expansion order (https://github.com/rust-lang/rust/issues/83331)
- Keeping macro backtraces for cfg attributes, or otherwise making them visible after expansion without keeping them in place literally (https://github.com/rust-lang/rust/pull/84110).
Two exceptions to the "one by one" behavior are:
- cfgs eagerly expanded by `derive` and `cfg_eval`: they are still expanded in a batch, and that's by design.
- cfgs at the crate root: they are currently expanded not during the main expansion pass, but before it, during `#![feature]` collection. I'll try to disentangle that logic later in a separate PR.
r? `@Aaron1011`
Closure capture cleanup & refactor
Follow up of #89648
Each commit is self-contained and the rationale/changes are documented in the commit message, so it's advisable to review commit by commit.
The code is significantly cleaner (at least IMO), but that could have some perf implications, so I'd suggest a perf run.
r? `@wesleywiser`
cc `@arora-aman`