It's only implemented for analyses that implement `Copy`, which means
it's basically a complicated synonym for `Copy`. So this commit removes
it and uses `Copy` directly. (That direct use will be removed in a later
commit.)
Currently we always do this:
```
use rustc_fluent_macro::fluent_messages;
...
fluent_messages! { "./example.ftl" }
```
But there is no need; we can just do this everywhere:
```
rustc_fluent_macro::fluent_messages! { "./example.ftl" }
```
which is shorter.
The `fluent_messages!` macro produces uses of
`crate::{D,Subd}iagnosticMessage`, which means that every crate using
the macro must have this import:
```
use rustc_errors::{DiagnosticMessage, SubdiagnosticMessage};
```
This commit changes the macro to instead use
`rustc_errors::{D,Subd}iagnosticMessage`, which avoids the need for the
imports.
`on_all_children_bits` has two arguments that are unused: `tcx` and
`body`. This was not detected by the compiler because it's a recursive
function.
This commit removes them, and removes lots of other arguments and fields
that are no longer necessary.
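For illustration, a minimal standalone example (the names here are made up, not the real signatures) of why a recursive function hides unused parameters: the parameter is still "used", but only to pass itself along in the recursive call, so the `unused_variables` lint stays quiet.
```rust
// Hypothetical example: `tcx_like` is never really needed, but because it is
// forwarded in the recursive call, the compiler considers it used.
fn on_all_children_like(tcx_like: &str, n: u32) {
    if n > 0 {
        on_all_children_like(tcx_like, n - 1); // counts as a "use"
    }
}

fn main() {
    on_all_children_like("unused in practice", 3);
}
```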
Some types have a `body: &'mir Body<'tcx>` and some have `body: &'a
Body<'tcx>`. The former is more readable, so this commit converts some
of the latter to the former.
By default, `newtype_index!` types get a default `Encodable`/`Decodable`
impl. You can opt out of this with `custom_encodable`. Opting out is the
opposite of how Rust normally works with autogenerated (derived) impls.
This commit inverts the behaviour, replacing `custom_encodable` with
`encodable`, which opts into the default `Encodable`/`Decodable` impl.
Only 23 of the 59 `newtype_index!` occurrences need `encodable`.
Even better, there were eight crates with a dependency on
`rustc_serialize` just from unused default `Encodable`/`Decodable`
impls. This commit removes that dependency from those eight crates.
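As a rough sketch (the attribute spelling is assumed from this description and the attribute-style `newtype_index!` invocation, not verified against the macro), the opt-in might look like this after the change:
```rust
// Sketch only: `#[encodable]` opts in to the derived Encodable/Decodable
// impls; omitting it (the new default) generates no serialization impls.
rustc_index::newtype_index! {
    #[encodable]
    pub struct SerializedIdx {}
}

rustc_index::newtype_index! {
    pub struct LocalOnlyIdx {} // no Encodable/Decodable generated
}
```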
- Sort dependencies and features sections.
- Add `tidy` markers to the sorted sections so they stay sorted.
- Remove empty `[lib]` sections.
- Remove "See more keys..." comments.
Excluded files:
- rustc_codegen_{cranelift,gcc}, because they're external.
- rustc_lexer, because it has external use.
- stable_mir, because it has external use.
Separate move path tracking between borrowck and drop elaboration.
The primary goal of this PR is to skip creating a `MovePathIndex` for paths that do not need dropping in drop elaboration.
The first two commits are cleanups.
The next two commits shift `move` errors from the move-path builder to borrowck. The move-path builder keeps the same logic, but no longer carries error information.
The remaining commits allow filtering `MovePathIndex` creation by type. This is used in drop elaboration to avoid computing dataflow for paths that do not need dropping.
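As a small illustration of the distinction (the example type is made up), only move paths rooted in types with drop glue matter to drop elaboration:
```rust
// `pair.0` gets a move path, but `u32` has no destructor, so drop elaboration
// never needs dataflow for that path; `pair.1` (a String) does need dropping.
struct Pair(u32, String);

fn consume(pair: Pair) {
    let n = pair.0; // Copy: no drop ever needed for this path
    let s = pair.1; // moves the String; this path needs drop tracking
    println!("{n} {s}");
}

fn main() {
    consume(Pair(1, String::from("hello")));
}
```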
report `unused_import` for empty reexports even if they are pub
Fixes #116032
An easy fix. r? `@petrochenkov`
(Discovered this issue while reviewing #115993.)
Match on elem first while building move paths
While working on https://github.com/rust-lang/rust/pull/115025, `@lcnr` and I observed that the `move_paths_for` function matched on the `Ty` instead of the `Projection`, which seems flawed, as it's the `Projection`s that cause the problem, not the type.
r? `@lcnr`
Add documentation to has_deref
Documentation of `has_deref` needed some polish to be clearer about where it should be used and what its purpose is.
cc https://github.com/rust-lang/rust/issues/114401
r? `@RalfJung`
Rewrite `UnDerefer`, again
This PR is intended to improve the perf regression introduced by #112882.
`UnDerefer` has been separated out again for borrowck reasons. It was a bit overzealous to remove it in the previous PR.
r? `@oli-obk`
- moved work from `find_local` to `gather_statement`
- created custom iterator for `iter_projections`
- reverted change from `IndexVec` to `FxIndexMap`
make mir dataflow graphviz dumps opt-in
This should save some MIR traversals and allocations that are not really needed.
Small win but noticeable locally. Let's see what LTO/PGO say.
r? `@ghost`
Take MIR dataflow analyses by mutable reference
The main motivation here is any analysis requiring dynamically sized scratch memory to work. One concrete example would be pointer target tracking, where tracking the results of a dereference can result in multiple possible targets. This leads to processing multi-level dereferences requiring the ability to handle a changing number of potential targets per step. A (simplified) function for this would be `fn apply_deref(potential_targets: &mut Vec<Target>)` which would use the scratch space contained in the analysis to send arguments and receive the results.
The alternative to this would be to wrap everything in a `RefCell`, which is what `MaybeRequiresStorage` currently does. This comes with a small perf cost and loses the compiler's guarantee that we don't try to take multiple borrows at the same time.
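As a standalone sketch of the idea (this is not the rustc dataflow API; the type and method names below are invented for illustration), an analysis stepped through `&mut self` can keep reusable scratch storage without interior mutability:
```rust
// Invented types for illustration only.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Target(usize);

struct PointerTracking {
    // Reusable scratch buffer; its required size varies per transfer step.
    scratch: Vec<Target>,
}

impl PointerTracking {
    // One dereference step may turn each target into several; the scratch
    // buffer absorbs the intermediate results, then swaps them back in.
    fn apply_deref(&mut self, potential_targets: &mut Vec<Target>) {
        self.scratch.clear();
        for Target(t) in potential_targets.drain(..) {
            self.scratch.push(Target(t));
            self.scratch.push(Target(t + 1));
        }
        potential_targets.append(&mut self.scratch);
    }
}

fn main() {
    let mut analysis = PointerTracking { scratch: Vec::new() };
    let mut targets = vec![Target(0)];
    // `&mut` access means no RefCell and no risk of overlapping borrows.
    analysis.apply_deref(&mut targets);
    assert_eq!(targets, vec![Target(0), Target(1)]);
}
```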
For the implementation:
* `AnalysisResults` is an unfortunate requirement to avoid an unconstrained type parameter error.
* `CloneAnalysis` could just be `Clone` instead, but that would result in more work than is required to have multiple cursors over the same result set.
* `ResultsVisitor` now takes the results type in each function, as there's no other way to access the analysis without cloning it. This could use an associated type rather than a type parameter, but the current approach makes it easier to not care about the type when it's not necessary.
* `MaybeRequiresStorage` now no longer uses a `RefCell`, but the graphviz formatter now does. It could be removed, but that would require even more changes and doesn't really seem necessary.
Optimize dataflow-const-prop place-tracking infra
Optimization opportunities found while investigating https://github.com/rust-lang/rust/pull/110719
Computing places breadth-first ensures that we create short projections before deep projections, since the former are more likely to be propagated.
The most relevant change is the pre-computation of flooded places. Callgrind showed that the `flood_*` methods, and especially `preorder_invoke`, were hot. This PR attempts to pre-compute the set of `ValueIndex` that `preorder_invoke` would visit.
Using this information, we make some `PlaceIndex` values inaccessible when they contain no `ValueIndex`, allowing us to skip computations for those places.
cc `@jachris` as original author
Add `rustc_fluent_macro` to decouple fluent from `rustc_macros`
Fluent, with all the icu4x it brings in, takes quite some time to compile. `fluent_messages!` is only needed in further downstream rustc crates, but is blocking more upstream crates like `rustc_index`. By splitting it out, we allow `rustc_macros` to be compiled earlier, which speeds up `x check compiler` by about 5 seconds (and even more after the needless dependency on `serde_json` is removed from `rustc_data_structures`).
Unify the terminology used in unwind actions and terminators, and reflect
the fact that a nounwind panic, rather than an immediate abort, is
triggered for this terminator.
Drop array patterns using subslices
Fixes #109004
Drops contiguous subslices of an array when moving elements out with a pattern, which improves perf for large arrays.
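For example (an illustrative source-level view, not the generated MIR), moving a single element out of an array leaves a contiguous tail that can now be dropped as one subslice:
```rust
fn first_of(arr: [String; 4]) -> String {
    // Moves element 0 out; elements 1..4 form a contiguous subslice that
    // still needs dropping, which this change handles as a single subslice drop.
    let [first, ..] = arr;
    first
}

fn main() {
    let strings = [String::from("a"), String::from("b"),
                   String::from("c"), String::from("d")];
    assert_eq!(first_of(strings), "a");
}
```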
r? `@compiler-errors`
Update `ty::VariantDef` to use `IndexVec<FieldIdx, FieldDef>`
And while doing the updates for that, also uses `FieldIdx` in `ProjectionKind::Field` and `TypeckResults::field_indices`.
There's more places that could use it (like `rustc_const_eval` and `LayoutS`), but I tried to keep this PR from exploding to *even more* places.
Part 2/? of https://github.com/rust-lang/compiler-team/issues/606
Partial stabilization of `once_cell`
This PR aims to stabilize a portion of the `once_cell` feature:
- `core::cell::OnceCell`
- `std::cell::OnceCell` (re-export of the above)
- `std::sync::OnceLock`
This will leave `LazyCell` and `LazyLock` unstabilized, which have been moved to the `lazy_cell` feature flag.
Tracking issue: https://github.com/rust-lang/rust/issues/74465 (does not fully close, but it may make sense to move to a new issue)
Future steps for separate PRs:
- ~~Add `#[inline]` to many methods~~ #105651
- Update cranelift usage of the `once_cell` crate
- Update rust-analyzer usage of the `once_cell` crate
- Update error messages discussing once_cell
## To be stabilized API summary
```rust
// core::cell (in core/cell/once.rs)
pub struct OnceCell<T> { .. }
impl<T> OnceCell<T> {
    pub const fn new() -> OnceCell<T>;
    pub fn get(&self) -> Option<&T>;
    pub fn get_mut(&mut self) -> Option<&mut T>;
    pub fn set(&self, value: T) -> Result<(), T>;
    pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
    pub fn into_inner(self) -> Option<T>;
    pub fn take(&mut self) -> Option<T>;
}
impl<T: Clone> Clone for OnceCell<T>;
impl<T: Debug> Debug for OnceCell<T>;
impl<T> Default for OnceCell<T>;
impl<T> From<T> for OnceCell<T>;
impl<T: PartialEq> PartialEq for OnceCell<T>;
impl<T: Eq> Eq for OnceCell<T>;
```
```rust
// std::sync (in std/sync/once_lock.rs)
impl<T> OnceLock<T> {
    pub const fn new() -> OnceLock<T>;
    pub fn get(&self) -> Option<&T>;
    pub fn get_mut(&mut self) -> Option<&mut T>;
    pub fn set(&self, value: T) -> Result<(), T>;
    pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
    pub fn into_inner(self) -> Option<T>;
    pub fn take(&mut self) -> Option<T>;
}
impl<T: Clone> Clone for OnceLock<T>;
impl<T: Debug> Debug for OnceLock<T>;
impl<T> Default for OnceLock<T>;
impl<#[may_dangle] T> Drop for OnceLock<T>;
impl<T> From<T> for OnceLock<T>;
impl<T: PartialEq> PartialEq for OnceLock<T>;
impl<T: Eq> Eq for OnceLock<T>;
impl<T: RefUnwindSafe + UnwindSafe> RefUnwindSafe for OnceLock<T>;
unsafe impl<T: Send> Send for OnceLock<T>;
unsafe impl<T: Sync + Send> Sync for OnceLock<T>;
impl<T: UnwindSafe> UnwindSafe for OnceLock<T>;
```
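A brief usage sketch of the two stabilized types (not taken from the PR, just standard usage):
```rust
use std::cell::OnceCell;
use std::sync::OnceLock;

// Thread-safe lazy global: initialized at most once, readable from any thread.
static CONFIG: OnceLock<String> = OnceLock::new();

fn main() {
    // Single-threaded lazy initialization with OnceCell.
    let cell: OnceCell<u32> = OnceCell::new();
    let value = *cell.get_or_init(|| 42);
    assert_eq!(value, 42);
    // A later `set` fails because the cell is already initialized.
    assert!(cell.set(7).is_err());

    // Thread-safe lazy initialization with OnceLock.
    let config = CONFIG.get_or_init(|| String::from("loaded once"));
    assert_eq!(config, "loaded once");
}
```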
No longer planned as part of this PR, and moved to the `rust_cell_try` feature gate:
```rust
impl<T> OnceCell<T> {
    pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
impl<T> OnceLock<T> {
    pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
```
I am new to this process so would appreciate mentorship wherever needed.
The first PR for https://github.com/rust-lang/compiler-team/issues/606
This is just the move-and-rename, because it's plenty big-and-bitrotty already. Future PRs will start using `FieldIdx` more broadly, and concomitantly removing `FieldIdx::new`s.
Since structs are always `VariantIdx(0)`, there's a bunch of files where the only reason they had `VariantIdx` or `vec::Idx` imported at all was to get the first variant.
So this uses a constant for that, and adds some doc-comments to `VariantIdx` while I'm there, since it doesn't have any today.
Instead of building two kinds of drop pair loops, of which only one will
eventually be used at runtime in a given monomorphization, always use an
index-based loop.
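A rough source-level picture of an index-based drop loop (illustrative only; the real change emits MIR directly, and the helper below is invented):
```rust
use std::ptr;

// Drop `len` initialized elements starting at `base`, walking by index.
// Safety: `base..base + len` must point to initialized, uniquely owned `T`s
// that nothing else will drop afterwards.
unsafe fn drop_by_index<T>(base: *mut T, len: usize) {
    let mut i = 0;
    while i < len {
        ptr::drop_in_place(base.add(i));
        i += 1;
    }
}

fn main() {
    let mut values = vec![String::from("a"), String::from("b")];
    let (base, len) = (values.as_mut_ptr(), values.len());
    unsafe {
        values.set_len(0); // hand ownership of the elements to the loop
        drop_by_index(base, len);
    }
}
```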
This makes it easier to open the messages file while developing features.
The commit was the result of automated changes:
```
for p in compiler/rustc_*; do mv $p/locales/en-US.ftl $p/messages.ftl; rmdir $p/locales; done
for p in compiler/rustc_*; do sed -i "s#\.\./locales/en-US.ftl#../messages.ftl#" $p/src/lib.rs; done
```