Commit Graph

279 Commits

Author SHA1 Message Date
Florian Groult
3224ea4424 Export AnalysisResults trait in rustc_mir_dataflow 2023-06-27 11:35:32 +02:00
bors
b9ad9b78a2 Auto merge of #112693 - ericmarkmartin:use-more-placeref, r=spastorino
Use PlaceRef abstractions more often

Associated issue: https://github.com/rust-lang/rust/issues/80647

r? `@spastorino`
2023-06-27 00:34:49 +00:00
Eric Mark Martin
c07c10d1e4 use PlaceRef abstractions more consistently 2023-06-25 20:38:01 -04:00
Ziru Niu
8fb4c41f35 merge BorrowKind::Unique into BorrowKind::Mut 2023-06-20 20:55:31 +08:00
Michael Goulet
31d1fbf8d2
Rollup merge of #112232 - fee1-dead-contrib:match-eq-const-msg, r=b-naber
Better error for non const `PartialEq` call generated by `match`

Resolves #90237
2023-06-19 17:53:33 -07:00
bors
c911e08514 Auto merge of #112617 - lqd:dump-mir-dataflow, r=tmiasko
make mir dataflow graphviz dumps opt-in

This should save some MIR traversals and allocations that are not really needed.

Small win but noticeable locally. Let's see what LTO/PGO say.

r? `@ghost`
2023-06-19 01:17:40 +00:00
Deadbeef
89c24af133 Better error for non const PartialEq call generated by match 2023-06-18 05:24:38 +00:00
DrMeepster
a5c6cb888e remove box_free and replace with drop impl 2023-06-16 13:41:06 -07:00
Rémy Rakic
9395e2771a make mir dataflow graphviz dumps opt-in 2023-06-14 12:30:13 +00:00
bors
68c8fdaac0 Auto merge of #108293 - Jarcho:mut_analyses, r=eholk
Take MIR dataflow analyses by mutable reference

The main motivation here is any analysis that requires dynamically sized scratch memory to work. One concrete example is pointer target tracking, where a single dereference can resolve to multiple possible targets, so processing multi-level dereferences requires handling a changing number of potential targets at each step. A (simplified) function for this would be `fn apply_deref(potential_targets: &mut Vec<Target>)`, which would use the scratch space contained in the analysis to send arguments and receive the results. A sketch of this shape follows this entry.

The alternative to this would be to wrap everything in a `RefCell`, which is what `MaybeRequiresStorage` currently does. This comes with a small perf cost and loses the compiler's guarantee that we don't try to take multiple borrows at the same time.

For the implementation:
* `AnalysisResults` is an unfortunate requirement to avoid an unconstrained type parameter error.
* `CloneAnalysis` could just be `Clone` instead, but that would result in more work than is required to have multiple cursors over the same result set.
* `ResultsVisitor` now takes the results type in each function, as there's no other way to have access to the analysis without cloning it. This could use an associated type rather than a type parameter, but the current approach makes it easier to not care about the type when it's not necessary.
* `MaybeRequiresStorage` no longer uses a `RefCell`, but the graphviz formatter now does. It could be removed, but that would require even more changes and doesn't really seem necessary.
2023-06-08 23:58:44 +00:00
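
A minimal sketch of the motivation above, using the hypothetical `Target` and `apply_deref` names from the PR description rather than the real `rustc_mir_dataflow` API: an analysis that owns reusable scratch storage has to be driven through `&mut self`, which is exactly what the `&self` + `RefCell` alternative gives up.

```rust
// Illustrative sketch only; `Target`, `PointerTrackingAnalysis` and
// `apply_deref` are hypothetical names, not the rustc_mir_dataflow API.

#[derive(Clone, Copy, Debug)]
struct Target(usize);

struct PointerTrackingAnalysis {
    // Scratch buffer reused across transfer-function steps, so each step can
    // produce a varying number of potential targets without reallocating.
    scratch: Vec<Target>,
}

impl PointerTrackingAnalysis {
    fn new() -> Self {
        Self { scratch: Vec::new() }
    }

    // Taking `&mut self` lets the transfer function use the owned scratch
    // space directly; the `&self` + `RefCell` alternative would add a small
    // runtime cost and lose the compile-time guarantee against overlapping
    // borrows.
    fn apply_deref(&mut self, potential_targets: &mut Vec<Target>) {
        self.scratch.clear();
        for t in potential_targets.drain(..) {
            // Pretend each dereference can fan out to several targets.
            self.scratch.push(Target(t.0 * 2));
            self.scratch.push(Target(t.0 * 2 + 1));
        }
        potential_targets.extend(self.scratch.iter().copied());
    }
}

fn main() {
    let mut analysis = PointerTrackingAnalysis::new();
    let mut targets = vec![Target(1)];
    analysis.apply_deref(&mut targets);
    println!("{targets:?}"); // [Target(2), Target(3)]
}
```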
lcnr
cfd0623411 unique borrows are mutating uses 2023-05-29 17:15:48 +02:00
Guillaume Gomez
ddb5424569
Rollup merge of #111952 - cjgillot:drop-replace, r=WaffleLapkin
Remove DesugaringKind::Replace.

A simple boolean flag is enough.
2023-05-27 13:38:31 +02:00
clubby789
f97fddab91 Ensure Fluent messages are in alphabetical order 2023-05-25 23:49:35 +00:00
Camille GILLOT
844c1cc5fe Remove DesugaringKind::Replace. 2023-05-25 17:40:46 +00:00
Jason Newcomb
eaddc37075 Take MIR dataflow analyses by mutable reference. 2023-05-18 17:46:39 -04:00
Dylan DPC
828caa80a9
Rollup merge of #110930 - b-naber:normalize-elaborate-drops, r=cjgillot
Don't expect normalization to succeed in elaborate_drops

Fixes https://github.com/rust-lang/rust/issues/110682

This was exposed through https://github.com/rust-lang/rust/pull/109247, which causes more things to be inlined. Inlining can happen before monomorphization, so we can't expect normalization to succeed. In the elaborate_drops analysis we currently have [this call](033aa092ab/compiler/rustc_mir_dataflow/src/elaborate_drops.rs (L278)) to `normalize_erasing_regions`, which ICEs when normalization fails. The types are used to infer [whether the type needs a drop](033aa092ab/compiler/rustc_mir_dataflow/src/elaborate_drops.rs (L374)), where `needs_drop` itself [uses `try_normalize_erasing_regions`](033aa092ab/compiler/rustc_middle/src/ty/util.rs (L1121)). A simplified model of this fallible-normalization pattern follows this entry.

~[`instance_mir`](https://doc.rust-lang.org/stable/nightly-rustc/rustc_middle/ty/context/struct.TyCtxt.html#method.instance_mir) isn't explicit about whether it expects the instances corresponding to the `InstanceDef`s to be monomorphized (though I think in all other contexts the function is used post-monomorphization), so the use of `instance_mir` in inlining doesn't necessarily seem wrong to me.~
2023-05-17 19:11:53 +05:30
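
A minimal, self-contained model of the pattern described above (toy types, not the actual rustc calls): normalization is attempted with the fallible variant, and failure falls back to a conservative answer instead of an ICE.

```rust
// Minimal model of the pattern (toy types, not the actual rustc API):
// before monomorphization, normalization can fail on still-generic types,
// so the fallible variant is used and failure falls back to a conservative
// answer instead of an ICE.

#[derive(Clone)]
enum Ty {
    Concrete(&'static str),
    Projection(&'static str), // e.g. `<T as Trait>::Assoc`, unresolved pre-mono
}

// Stand-in for `try_normalize_erasing_regions`: succeeds only for types that
// can actually be resolved at this point.
fn try_normalize(ty: &Ty) -> Result<Ty, ()> {
    match ty {
        Ty::Concrete(_) => Ok(ty.clone()),
        Ty::Projection(_) => Err(()),
    }
}

// Stand-in for `needs_drop`: when normalization fails, conservatively assume
// the type needs drop rather than calling an infallible normalize that ICEs.
fn needs_drop(ty: &Ty) -> bool {
    match try_normalize(ty) {
        Ok(Ty::Concrete(name)) => name == "String",
        Ok(Ty::Projection(_)) | Err(()) => true,
    }
}

fn main() {
    assert!(!needs_drop(&Ty::Concrete("u32")));
    assert!(needs_drop(&Ty::Concrete("String")));
    assert!(needs_drop(&Ty::Projection("<T as Trait>::Assoc")));
}
```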
bors
9a767b6b9e Auto merge of #110820 - cjgillot:faster-dcp, r=oli-obk
Optimize dataflow-const-prop place-tracking infra

Optimization opportunities found while investigating https://github.com/rust-lang/rust/pull/110719

Computing places breadth-first ensures that we create short projections before deep projections, since the former are more likely to be propagated.

The most relevant change is the pre-computation of flooded places. Callgrind showed that the `flood_*` methods, and `preorder_invoke` in particular, were especially hot. This PR attempts to pre-compute the set of `ValueIndex` that `preorder_invoke` would visit.

Using this information, we make some `PlaceIndex` inaccessible when they contain no `ValueIndex`, which lets us skip computations for those places entirely; a simplified sketch of this pre-computation follows this entry.

cc `@jachris` as original author
2023-05-10 20:54:31 +00:00
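
A simplified sketch of the idea, with hypothetical `PlaceIndex`/`ValueIndex` structures standing in for the real dataflow-const-prop code: each place caches every value index in its subtree, so flooding becomes a flat loop and a place with an empty set can be skipped entirely.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct PlaceIndex(usize);
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct ValueIndex(usize);

struct PlaceInfo {
    value: Option<ValueIndex>,
    children: Vec<PlaceIndex>,
    // Filled in by `precompute_flood_sets`.
    flooded_values: Vec<ValueIndex>,
}

struct Map {
    places: Vec<PlaceInfo>,
}

impl Map {
    // Post-order walk: a place's flood set is its own value plus the flood
    // sets of its children, computed once up front.
    fn precompute_flood_sets(&mut self, root: PlaceIndex) {
        let children = self.places[root.0].children.clone();
        let mut set = Vec::new();
        if let Some(v) = self.places[root.0].value {
            set.push(v);
        }
        for child in children {
            self.precompute_flood_sets(child);
            set.extend(self.places[child.0].flooded_values.iter().copied());
        }
        self.places[root.0].flooded_values = set;
    }

    // Flooding (e.g. on an opaque write) is now a flat loop instead of a
    // preorder traversal; a place with an empty flood set tracks no values
    // and can be skipped, as the PR does for inaccessible places.
    fn flood(&self, place: PlaceIndex, values: &mut [Option<i64>]) {
        for v in &self.places[place.0].flooded_values {
            values[v.0] = None; // "Top": the value is no longer known
        }
    }
}

fn main() {
    let mut map = Map {
        places: vec![
            // place 0 = the whole local; places 1 and 2 = two of its fields
            PlaceInfo { value: None, children: vec![PlaceIndex(1), PlaceIndex(2)], flooded_values: vec![] },
            PlaceInfo { value: Some(ValueIndex(0)), children: vec![], flooded_values: vec![] },
            PlaceInfo { value: Some(ValueIndex(1)), children: vec![], flooded_values: vec![] },
        ],
    };
    map.precompute_flood_sets(PlaceIndex(0));

    let mut values = vec![Some(1), Some(2)];
    map.flood(PlaceIndex(0), &mut values);
    assert_eq!(values, vec![None, None]);
}
```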
b-naber
e7a2f52ba1 don't inline polymorphic adt instances whose fields contain projections
in DropGlue.
2023-05-10 16:03:52 +00:00
Camille GILLOT
38612f5ec7 Explicitly skip arguments. 2023-05-09 17:59:35 +00:00
Camille GILLOT
3490375570 Implement SSA-based reference propagation. 2023-05-09 17:59:34 +00:00
Camille GILLOT
ccc1da247b Prevent stack overflow. 2023-05-09 17:27:58 +00:00
Camille GILLOT
2aa1c23fed Add a few comments. 2023-05-09 17:27:58 +00:00
Camille GILLOT
79c073746b Do not flood on copy_nonoverlapping. 2023-05-09 17:27:58 +00:00
Camille GILLOT
add5124dce Extract handle_set_discriminant. 2023-05-09 17:27:58 +00:00
Camille GILLOT
2b0bf3cf59 Trim the places that will not be used. 2023-05-09 17:27:58 +00:00
Camille GILLOT
38fa676330 Precompute values to flood. 2023-05-09 17:27:58 +00:00
Camille GILLOT
7c3d55150d Create tracked places breadth first. 2023-05-09 17:27:58 +00:00
Camille GILLOT
71138e9933 Make HasTop and HasBottom consts. 2023-05-09 17:27:58 +00:00
Camille GILLOT
9325a254f0 Make PlaceMention a non-mutating use. 2023-04-29 16:14:33 +00:00
b-naber
930c39aa8f dont expect normalization to succeed in elaborate_drops 2023-04-28 07:17:19 +00:00
Maybe Waffle
e496fbec92 Split {Idx, IndexVec, IndexSlice} into their own modules 2023-04-24 13:53:35 +00:00
DrMeepster
511e457c4b offset_of 2023-04-21 02:14:02 -07:00
bors
d7f9e81650 Auto merge of #110407 - Nilstrieb:fluent-macro, r=davidtwco
Add `rustc_fluent_macro` to decouple fluent from `rustc_macros`

Fluent, with all the icu4x it brings in, takes quite some time to compile. `fluent_messages!` is only needed in further downstream rustc crates, but is blocking more upstream crates like `rustc_index`. By splitting it out, we allow `rustc_macros` to be compiled earlier, which speeds up `x check compiler` by about 5 seconds (and even more after the needless dependency on `serde_json` is removed from `rustc_data_structures`).
2023-04-19 08:26:47 +00:00
Nilstrieb
b5d3d970fa Add rustc_fluent_macro to decouple fluent from rustc_macros
Fluent, with all the icu4x it brings in, takes quite some time to
compile. `fluent_messages!` is only needed in further downstream rustc
crates, but is blocking more upstream crates like `rustc_index`. By
splitting it out, we allow `rustc_macros` to be compiled earlier, which
speeds up `x check compiler` by about 5 seconds (and even more after the
needless dependency on `serde_json` is removed from
`rustc_data_structures`).
2023-04-18 18:56:22 +00:00
Josh Soref
e09d0d2a29 Spelling - compiler
* account
* achieved
* advising
* always
* ambiguous
* analysis
* annotations
* appropriate
* build
* candidates
* cascading
* category
* character
* clarification
* compound
* conceptually
* constituent
* consts
* convenience
* corresponds
* debruijn
* debug
* debugable
* debuggable
* deterministic
* discriminant
* display
* documentation
* doesn't
* ellipsis
* erroneous
* evaluability
* evaluate
* evaluation
* explicitly
* fallible
* fulfill
* getting
* has
* highlighting
* illustrative
* imported
* incompatible
* infringing
* initialized
* into
* intrinsic
* introduced
* javascript
* liveness
* metadata
* monomorphization
* nonexistent
* nontrivial
* obligation
* obligations
* offset
* opaque
* opportunities
* opt-in
* outlive
* overlapping
* paragraph
* parentheses
* poisson
* precisely
* predecessors
* predicates
* preexisting
* propagated
* really
* reentrant
* referent
* responsibility
* rustonomicon
* shortcircuit
* simplifiable
* simplifications
* specify
* stabilized
* structurally
* suggestibility
* translatable
* transmuting
* two
* unclosed
* uninhabited
* visibility
* volatile
* workaround

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2023-04-17 16:09:18 -04:00
DaniPopes
677357d32b
Fix typos in compiler 2023-04-10 22:02:52 +02:00
Gary Guo
5ae3a53a44 Revert box_free unwind action 2023-04-06 09:34:16 +01:00
Gary Guo
bf6b84b10a Fix new usage of old api 2023-04-06 09:34:16 +01:00
Gary Guo
e3f2edc75b Rename Abort terminator to Terminate
Unify the terminology used in unwind actions and terminators, and reflect
the fact that a nounwind panic is triggered rather than an immediate abort
for this terminator.
2023-04-06 09:34:16 +01:00
Gary Guo
0a5dac3062 Add UnwindAction::Terminate 2023-04-06 09:34:16 +01:00
Gary Guo
5e6ed132fa Add UnwindAction::Unreachable
This also makes the eval machine's `StackPopUnwind`
redundant, so it is replaced.
2023-04-06 09:34:16 +01:00
Gary Guo
daeb844e0c Refactor unwind from Option to a new enum 2023-04-06 09:34:16 +01:00
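
A sketch of the shape implied by this series of commits (the exact `rustc_middle` definition may differ): the old `Option<BasicBlock>` unwind field becomes an enum so that the "no cleanup block" case can distinguish several behaviours.

```rust
// Sketch of the shape implied by these commits; the exact rustc_middle
// definition may differ.

#[derive(Clone, Copy, Debug)]
struct BasicBlock(u32);

#[derive(Clone, Copy, Debug)]
enum UnwindAction {
    /// Previously `None`: keep unwinding into the caller.
    Continue,
    /// Unwinding from this point is impossible.
    Unreachable,
    /// Trigger a nounwind panic (the renamed `Terminate` terminator)
    /// instead of continuing to unwind.
    Terminate,
    /// Previously `Some(bb)`: branch to a cleanup block.
    Cleanup(BasicBlock),
}

// What the `Option<BasicBlock>` refactor looks like at a call site.
fn from_option(unwind: Option<BasicBlock>) -> UnwindAction {
    match unwind {
        Some(bb) => UnwindAction::Cleanup(bb),
        None => UnwindAction::Continue,
    }
}

fn main() {
    println!("{:?}", from_option(Some(BasicBlock(3)))); // Cleanup(BasicBlock(3))
    println!("{:?}", from_option(None));                // Continue
    println!("{:?}", UnwindAction::Terminate);
    println!("{:?}", UnwindAction::Unreachable);
}
```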
Yuki Okushi
4e0662c8a7
Rollup merge of #109847 - clubby789:graphviz-reachable, r=oli-obk
Only create graphviz nodes for reachable MIR bb's

Fixes #109832
2023-04-05 20:47:22 +09:00
clubby789
422c33030f Disable path trimming during graphviz output 2023-04-05 03:06:37 +01:00
Scott McMurray
a2ee7592d6 Use &IndexSlice instead of &IndexVec where possible
All the same reasons as for `[T]`: more general, less pointer chasing, and `&mut IndexSlice` emphasizes that it doesn't change *length*.
2023-04-02 17:35:37 -07:00
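
The same argument, shown with plain `Vec`/slice types rather than the compiler's `IndexVec`/`IndexSlice`:

```rust
// The `&[T]` vs `&Vec<T>` argument with plain std types: a slice parameter
// is more general, avoids a double indirection, and a `&mut` slice can
// mutate elements but can never change the length.

fn sum_by_slice(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

fn sum_by_vec(xs: &Vec<u32>) -> u32 {
    // Works, but only accepts `&Vec<u32>`, and every access goes through
    // the extra Vec pointer.
    xs.iter().sum()
}

fn scale(xs: &mut [u32], k: u32) {
    // `&mut [u32]` makes it clear the length cannot change here, which is
    // the same signal `&mut IndexSlice` gives in the compiler.
    for x in xs.iter_mut() {
        *x *= k;
    }
}

fn main() {
    let mut v = vec![1, 2, 3];
    let a = [4u32, 5, 6];

    assert_eq!(sum_by_slice(&v), 6);  // a Vec coerces to a slice...
    assert_eq!(sum_by_slice(&a), 15); // ...and so does an array
    assert_eq!(sum_by_vec(&v), 6);    // but this one only takes a Vec

    scale(&mut v, 10);
    assert_eq!(v, vec![10, 20, 30]);
}
```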
bors
a5a690cf4b Auto merge of #109008 - clubby789:drop-elaborate-array, r=davidtwco
Drop array patterns using subslices

Fixes #109004
Drops contiguous subslices of an array when moving elements out with a pattern, which improves perf for large arrays. A source-level example follows this entry.
r? `@compiler-errors`
2023-04-02 12:17:52 +00:00
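
A source-level example of the pattern this PR optimizes (illustrative only): moving a few elements out of an array by pattern leaves a contiguous middle subslice behind, which can now be dropped as a unit instead of tracking each element separately.

```rust
fn take_ends(arr: [String; 8]) -> (String, String) {
    // `first` and `last` are moved out; elements 1..7 remain and are dropped
    // here as one contiguous subslice rather than element by element.
    let [first, .., last] = arr;
    (first, last)
}

fn main() {
    let arr: [String; 8] = std::array::from_fn(|i| i.to_string());
    let (first, last) = take_ends(arr);
    assert_eq!((first.as_str(), last.as_str()), ("0", "7"));
}
```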
clubby789
47ae42ee10 Only create graphviz nodes for reachable MIR bb's 2023-04-01 23:14:18 +01:00
bors
eb3e9c1f45 Auto merge of #109762 - scottmcm:variantdef-indexvec, r=WaffleLapkin
Update `ty::VariantDef` to use `IndexVec<FieldIdx, FieldDef>`

And while doing the updates for that, also uses `FieldIdx` in `ProjectionKind::Field` and `TypeckResults::field_indices`.

There are more places that could use it (like `rustc_const_eval` and `LayoutS`), but I tried to keep this PR from exploding to *even more* places.

Part 2/? of https://github.com/rust-lang/compiler-team/issues/606
2023-03-31 03:36:18 +00:00
Scott McMurray
4abb455529 Update ty::VariantDef to use IndexVec<FieldIdx, FieldDef>
And while doing the updates for that, also uses `FieldIdx` in `ProjectionKind::Field` and `TypeckResults::field_indices`.

There are more places that could use it (like `rustc_const_eval` and `LayoutS`), but I tried to keep this PR from exploding to *even more* places.

Part 2/? of https://github.com/rust-lang/compiler-team/issues/606
2023-03-30 09:23:40 -07:00
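
A plain-Rust model of what the typed index buys (hypothetical types, not the actual `rustc_index` API): a dedicated `FieldIdx` newtype can only index its own table, so it cannot be mixed up with any other `usize`-based index.

```rust
use std::marker::PhantomData;
use std::ops::Index;

#[derive(Clone, Copy, Debug)]
struct FieldIdx(usize);

struct FieldDef {
    name: &'static str,
}

// A vector that can only be indexed by its dedicated index type,
// loosely modelling IndexVec<FieldIdx, FieldDef>.
struct IdxVec<I, T> {
    raw: Vec<T>,
    _marker: PhantomData<I>,
}

impl<T> Index<FieldIdx> for IdxVec<FieldIdx, T> {
    type Output = T;
    fn index(&self, idx: FieldIdx) -> &T {
        &self.raw[idx.0]
    }
}

fn main() {
    let fields: IdxVec<FieldIdx, FieldDef> = IdxVec {
        raw: vec![FieldDef { name: "x" }, FieldDef { name: "y" }],
        _marker: PhantomData,
    };

    let idx = FieldIdx(1);
    println!("{}", fields[idx].name); // prints "y"

    // `fields[1usize]` would not compile: only a `FieldIdx` can index this
    // table, which is the type-safety the typed index provides.
}
```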
bors
8a7ca936e6 Auto merge of #105587 - tgross35:once-cell-min, r=m-ou-se
Partial stabilization of `once_cell`

This PR aims to stabilize a portion of the `once_cell` feature:

- `core::cell::OnceCell`
- `std::cell::OnceCell` (re-export of the above)
- `std::sync::OnceLock`

This will leave `LazyCell` and `LazyLock` unstabilized, which have been moved to the `lazy_cell` feature flag.

Tracking issue: https://github.com/rust-lang/rust/issues/74465 (does not fully close, but it may make sense to move to a new issue)

Future steps for separate PRs:
- ~~Add `#[inline]` to many methods~~ #105651
- Update cranelift usage of the `once_cell` crate
- Update rust-analyzer usage of the `once_cell` crate
- Update error messages discussing once_cell

## To be stabilized API summary

```rust
// core::cell (in core/cell/once.rs)

pub struct OnceCell<T> { .. }

impl<T> OnceCell<T> {
    pub const fn new() -> OnceCell<T>;
    pub fn get(&self) -> Option<&T>;
    pub fn get_mut(&mut self) -> Option<&mut T>;
    pub fn set(&self, value: T) -> Result<(), T>;
    pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
    pub fn into_inner(self) -> Option<T>;
    pub fn take(&mut self) -> Option<T>;
}

impl<T: Clone> Clone for OnceCell<T>;
impl<T: Debug> Debug for OnceCell<T>;
impl<T> Default for OnceCell<T>;
impl<T> From<T> for OnceCell<T>;
impl<T: PartialEq> PartialEq for OnceCell<T>;
impl<T: Eq> Eq for OnceCell<T>;
```

```rust
// std::sync (in std/sync/once_lock.rs)

impl<T> OnceLock<T> {
    pub const fn new() -> OnceLock<T>;
    pub fn get(&self) -> Option<&T>;
    pub fn get_mut(&mut self) -> Option<&mut T>;
    pub fn set(&self, value: T) -> Result<(), T>;
    pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
    pub fn into_inner(self) -> Option<T>;
    pub fn take(&mut self) -> Option<T>;
}

impl<T: Clone> Clone for OnceLock<T>;
impl<T: Debug> Debug for OnceLock<T>;
impl<T> Default for OnceLock<T>;
impl<#[may_dangle] T> Drop for OnceLock<T>;
impl<T> From<T> for OnceLock<T>;
impl<T: PartialEq> PartialEq for OnceLock<T>;
impl<T: Eq> Eq for OnceLock<T>;
impl<T: RefUnwindSafe + UnwindSafe> RefUnwindSafe for OnceLock<T>;
unsafe impl<T: Send> Send for OnceLock<T>;
unsafe impl<T: Sync + Send> Sync for OnceLock<T>;
impl<T: UnwindSafe> UnwindSafe for OnceLock<T>;
```

No longer planned as part of this PR, and moved to the `rust_cell_try` feature gate:

```rust
impl<T> OnceCell<T> {
    pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}

impl<T> OnceLock<T> {
    pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
```

I am new to this process so would appreciate mentorship wherever needed.
2023-03-30 10:12:23 +00:00
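
A short usage example of the API summarized above: `OnceLock` for a lazily initialized global, `OnceCell` for the single-threaded case.

```rust
use std::cell::OnceCell;
use std::sync::OnceLock;

// Lazily initialized global: `get_or_init` runs the closure at most once,
// even if several threads race to initialize it.
static CONFIG: OnceLock<String> = OnceLock::new();

fn config() -> &'static str {
    CONFIG.get_or_init(|| "default-config".to_string())
}

fn main() {
    assert_eq!(config(), "default-config");

    // Single-threaded counterpart: OnceCell is not Sync, but has the same
    // write-once semantics.
    let cell: OnceCell<u32> = OnceCell::new();
    assert!(cell.get().is_none());
    assert_eq!(cell.set(42), Ok(()));
    assert_eq!(cell.set(7), Err(7)); // already initialized
    assert_eq!(cell.get(), Some(&42));
}
```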