Run various queries from other queries instead of explicitly in phases
These are just legacy leftovers from when rustc didn't have a query system. While there are more cleanups of this sort that can be done here, I want to land them in smaller steps.
This phased order of query invocations was already a lie, as any query that looks at types (e.g. the wf checks that run before) can invoke e.g. const eval, which invokes borrowck, which invokes typeck, ...
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
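For illustration, here is the kind of dependently-sized conversion this unlocks; `mem::transmute` rejects it in `hir_typeck` because the (equal) sizes depend on `N`. A sketch, assuming the new intrinsic is `core::intrinsics::transmute_unchecked`:
```rust
#![feature(core_intrinsics)]

// `mem::transmute::<[u8; N], [i8; N]>` fails the hir_typeck size check
// because N is generic; the unchecked variant lowers straight to
// `CastKind::Transmute` and leaves the size obligation to the caller.
unsafe fn bytes_to_signed<const N: usize>(a: [u8; N]) -> [i8; N] {
    unsafe { core::intrinsics::transmute_unchecked(a) }
}
```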
Evaluate place expression in `PlaceMention`
https://github.com/rust-lang/rust/pull/102256 introduces a `PlaceMention(place)` MIR statement which keeps track of `let _ = place` statements from surface Rust, but has no semantics.
This PR proposes to change the behaviour of `let _ =` patterns with respect to the borrow-checker to verify that the bound place is live.
Specifically, consider this code:
```rust
let _ = {
    let a = 5;
    &a
};
```
This passes borrowck without error on stable. Meanwhile, replacing `_` by `_: _` or `_p` errors with "error[E0597]: `a` does not live long enough", [see playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=c448d25a7c205dc95a0967fe96bccce8).
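The same example with a named binding, annotated in the test suite's style:
```rust
let _p = {
    let a = 5;
    &a
    //~^ ERROR `a` does not live long enough
};
```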
This PR *does not* change how `_` patterns behave with respect to initializedness: it remains ok to bind a moved-from place to `_`.
The relevant test is `tests/ui/borrowck/let_underscore_temporary.rs`. Crater check found no regression.
For consistency, this PR changes Miri to evaluate the place found in `PlaceMention`, and report any dangling pointers found within it.
r? `@RalfJung`
Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built-in macro `core::mem::offset_of`. Two of the future possibilities are also implemented (a usage sketch follows the list):
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
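A quick sketch of the macro in use, including the nested-field form (the offsets shown hold for these `repr(C)` layouts; requires the unstable feature gate):
```rust
#![feature(offset_of)]
use core::mem::offset_of;

#[repr(C)]
struct Inner { a: u8, b: u32 }

#[repr(C)]
struct Outer { x: u16, inner: Inner }

fn main() {
    assert_eq!(offset_of!(Inner, b), 4);       // u8, then padding to align u32
    assert_eq!(offset_of!(Outer, inner.b), 8); // nested field access
}
```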
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
cc `@thomcc` (RFC author)
Deduplicate unreachable blocks, for real this time
In https://github.com/rust-lang/rust/pull/106428 (in particular 41eda69516) we noticed that inlining `unreachable_unchecked` can produce duplicate unreachable blocks. So we improved two MIR optimizations: `SimplifyCfg` was taught to deduplicate unreachable blocks, and `InstCombine` was given a combiner to deduplicate switch targets that point at the same block. The problem is that the change doesn't actually work.
Our current pass order is
```
SimplifyCfg (does nothing relevant to this situation)
Inline (produces multiple unreachable blocks)
InstCombine (doesn't do anything here, oops)
SimplifyCfg (produces the duplicate SwitchTargets that InstCombine is looking for)
```
So here, I have factored the specific function out of `InstCombine` and placed it inside the simplification that produces the case it is looking for. This should ensure that it runs in the scenario it was designed for.
Fixes https://github.com/rust-lang/rust/issues/110551
r? `@cjgillot`
Don't allocate in SimplifyCfg/Locals/Const on every MIR pass
Hey! 👋🏾 This is a first PR attempt to see if I could speed up some rustc internals.
Thought process:
```rust
pub struct SimplifyCfg {
    label: String,
}
```
in `compiler/rustc_mir_transform/src/simplify.rs` fires multiple times per MIR analysis. This means a `String` allocation likely happens on each of those runs, which can add up, as the labels are neither lazily allocated nor cached between the different passes.
...yes, I know that adding a global static array is probably not the future-proof solution, but I wanted to lob this now as a proof of concept to see if it's worth shaving off a few cycles and then making more robust.
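A minimal sketch of that direction, assuming a borrowed static label is acceptable everywhere the pass name is used:
```rust
// Before: `label: String` was allocated on every construction of the pass.
// After (sketch): borrow a static label, so constructing the pass on each
// MIR analysis run allocates nothing.
pub struct SimplifyCfg {
    label: &'static str,
}
```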
Encode hashes as bytes, not varint
In a few places, we store hashes as `u64` or `u128` and then apply `derive(Decodable, Encodable)` to the enclosing struct/enum. It is more efficient to encode hashes directly than to apply some varint encoding. This PR adds two new types `Hash64` and `Hash128` which are produced by `StableHasher` and replace every use of storing a `u64` or `u128` that represents a hash.
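A sketch of the idea, with a plain `Vec<u8>` standing in for rustc's actual encoder:
```rust
// Hash bits are uniformly distributed, so a varint encoding saves nothing
// on average and pays extra continuation bytes instead.
pub struct Hash64(u64);

impl Hash64 {
    fn encode(&self, out: &mut Vec<u8>) {
        // Emit the raw 8 little-endian bytes directly.
        out.extend_from_slice(&self.0.to_le_bytes());
    }
}
```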
Distribution of the byte lengths of leb128 encodings, from `x build --stage 2` with `incremental = true`
Before:
```
( 1) 373418203 (53.7%, 53.7%): 1
( 2) 196240113 (28.2%, 81.9%): 3
( 3) 108157958 (15.6%, 97.5%): 2
( 4) 17213120 ( 2.5%, 99.9%): 4
( 5) 223614 ( 0.0%,100.0%): 9
( 6) 216262 ( 0.0%,100.0%): 10
( 7) 15447 ( 0.0%,100.0%): 5
( 8) 3633 ( 0.0%,100.0%): 19
( 9) 3030 ( 0.0%,100.0%): 8
( 10) 1167 ( 0.0%,100.0%): 18
( 11) 1032 ( 0.0%,100.0%): 7
( 12) 1003 ( 0.0%,100.0%): 6
( 13) 10 ( 0.0%,100.0%): 16
( 14) 10 ( 0.0%,100.0%): 17
( 15) 5 ( 0.0%,100.0%): 12
( 16) 4 ( 0.0%,100.0%): 14
```
After:
```
( 1) 372939136 (53.7%, 53.7%): 1
( 2) 196240140 (28.3%, 82.0%): 3
( 3) 108014969 (15.6%, 97.5%): 2
( 4) 17192375 ( 2.5%,100.0%): 4
( 5) 435 ( 0.0%,100.0%): 5
( 6) 83 ( 0.0%,100.0%): 18
( 7) 79 ( 0.0%,100.0%): 10
( 8) 50 ( 0.0%,100.0%): 9
( 9) 6 ( 0.0%,100.0%): 19
```
The remaining 9- or 10-byte and 18- or 19-byte encodings are `u64`s and `u128`s respectively that have their high bits set. As far as I can tell these are coming primarily from `SwitchTargets`.
Check freeze with right param-env in `deduced_param_attrs`
We're checking if a trait (`Freeze`) holds in a polymorphic function, but not using that function's own (reveal-all) param-env. This causes us to try to eagerly normalize a specializable projection type that has no default value, which causes an ICE.
Fixes #110171
Permit MIR inlining without #[inline]
I noticed that there are at least a handful of portable-simd functions that have no `#[inline]` but compile to an assign + return.
I locally benchmarked inlining thresholds between 0 and 50 in increments of 5, and 50 seems to be the best. Interesting. That didn't include check builds though, ~maybe perf will have something to say about that~.
Perf has little useful to say about this. We generally regress all the check builds, as best as I can tell, due to a number of small codegen changes in a particular hot function in the compiler. Probably this is because we've nudged the inlining outcomes all over, and uses of `#[inline(always)]`/`#[inline(never)]` might need to be adjusted.
This allows us to get rid of the `rustc_const_eval->rustc_borrowck`
dependency edge which was delaying the compilation of borrowck.
The added utils in `rustc_middle` are small and should not affect
compile times there.
Preserve argument indexes when inlining MIR
We store argument indexes on VarDebugInfo. Unlike the previous method of relying on the variable index to know whether a variable is an argument, this survives MIR inlining.
We also no longer check if var.source_info.scope is the outermost scope. When a function gets inlined, the arguments to the inner function will no longer be in the outermost scope. What we care about though is whether they were in the outermost scope prior to inlining, which we know by whether we assigned an argument index.
Fixes #83217
I considered using `Option<NonZeroU16>` instead of `Option<u16>` to store the index. I didn't because `TypeFoldable` isn't implemented for `NonZeroU16` and because it looks like, due to padding, it currently wouldn't make any difference. But I indexed from 1 anyway because (a) it'll make it easier if it later becomes worthwhile to use a `NonZeroU16`, and (b) the arguments were previously indexed from 1, so it made for a smaller change.
This is my first PR on rust-lang/rust, so apologies if I've gotten anything not quite right.
Make elaboration generic over input
Combines all the `elaborate_*` family of functions into just one, which is an iterator over the same type that you pass in (e.g. elaborating `Predicate` gives `Predicate`s, elaborating `Obligation`s gives `Obligation`s, etc.)
Refactor unwind in MIR
This changes the representation of unwinding from the current `Option<BasicBlock>` into
```rust
enum UnwindAction {
    /// Continue unwinding into the caller (previously `None`).
    Continue,
    /// Run the given cleanup block when unwinding (previously `Some(bb)`).
    Cleanup(BasicBlock),
    /// Unwinding can never happen here.
    Unreachable,
    /// Unwinding triggers a nounwind panic rather than an immediate abort.
    Terminate,
```
cc `@JakobDegen` `@RalfJung` `@Amanieu`
Unify terminology used in unwind action and terminator, and reflect
the fact that a nounwind panic, rather than an immediate abort, is
triggered for this terminator.
Move a const-prop-lint specific hack from mir interpret to const-prop-lint and make it fallible
Fixes #109743
This hack didn't need to live in the MIR interpreter. For const-prop-lint it is entirely correct to avoid doing any const prop if normalization fails at this stage. Most likely we couldn't const propagate anything anyway, and if revealing was needed (so opaque types were involved), we wouldn't want to be too smart and leak the hidden type.
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block, using `MirPatch`. (if nobody else backs up this point I'm checking this as "not actually a good idea" because the code looks pretty clean to me after rearranging it a bit)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking (sketched below).
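In surface-Rust terms, the inserted check is roughly the following sketch (the real thing is emitted as MIR statements plus a panic terminator, and the pointer-to-`usize` step is a `CastKind::Transmute`, not an exposing cast):
```rust
fn check_align<T>(ptr: *const T) {
    let addr = ptr as usize; // in the emitted MIR this is a transmute
    let align = core::mem::align_of::<T>();
    if addr % align != 0 {
        // Panic message modeled on slice bounds checking, per the checklist
        // above; exact wording here is illustrative.
        panic!("misaligned pointer dereference: address must be a multiple of {align:#x} but is {addr:#x}");
    }
}
```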
r? `@oli-obk`
Update `ty::VariantDef` to use `IndexVec<FieldIdx, FieldDef>`
And while doing the updates for that, also uses `FieldIdx` in `ProjectionKind::Field` and `TypeckResults::field_indices`.
There's more places that could use it (like `rustc_const_eval` and `LayoutS`), but I tried to keep this PR from exploding to *even more* places.
Part 2/? of https://github.com/rust-lang/compiler-team/issues/606
Partial stabilization of `once_cell`
This PR aims to stabilize a portion of the `once_cell` feature:
- `core::cell::OnceCell`
- `std::cell::OnceCell` (re-export of the above)
- `std::sync::OnceLock`
This will leave `LazyCell` and `LazyLock` unstabilized, which have been moved to the `lazy_cell` feature flag.
Tracking issue: https://github.com/rust-lang/rust/issues/74465 (does not fully close, but it may make sense to move to a new issue)
Future steps for separate PRs:
- ~~Add `#[inline]` to many methods~~ #105651
- Update cranelift usage of the `once_cell` crate
- Update rust-analyzer usage of the `once_cell` crate
- Update error messages discussing once_cell
## To be stabilized API summary
```rust
// core::cell (in core/cell/once.rs)
pub struct OnceCell<T> { .. }
impl<T> OnceCell<T> {
    pub const fn new() -> OnceCell<T>;
    pub fn get(&self) -> Option<&T>;
    pub fn get_mut(&mut self) -> Option<&mut T>;
    pub fn set(&self, value: T) -> Result<(), T>;
    pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
    pub fn into_inner(self) -> Option<T>;
    pub fn take(&mut self) -> Option<T>;
}
impl<T: Clone> Clone for OnceCell<T>;
impl<T: Debug> Debug for OnceCell<T>;
impl<T> Default for OnceCell<T>;
impl<T> From<T> for OnceCell<T>;
impl<T: PartialEq> PartialEq for OnceCell<T>;
impl<T: Eq> Eq for OnceCell<T>;
```
```rust
// std::sync (in std/sync/once_lock.rs)
impl<T> OnceLock<T> {
    pub const fn new() -> OnceLock<T>;
    pub fn get(&self) -> Option<&T>;
    pub fn get_mut(&mut self) -> Option<&mut T>;
    pub fn set(&self, value: T) -> Result<(), T>;
    pub fn get_or_init<F>(&self, f: F) -> &T where F: FnOnce() -> T;
    pub fn into_inner(self) -> Option<T>;
    pub fn take(&mut self) -> Option<T>;
}
impl<T: Clone> Clone for OnceLock<T>;
impl<T: Debug> Debug for OnceLock<T>;
impl<T> Default for OnceLock<T>;
impl<#[may_dangle] T> Drop for OnceLock<T>;
impl<T> From<T> for OnceLock<T>;
impl<T: PartialEq> PartialEq for OnceLock<T>;
impl<T: Eq> Eq for OnceLock<T>;
impl<T: RefUnwindSafe + UnwindSafe> RefUnwindSafe for OnceLock<T>;
unsafe impl<T: Send> Send for OnceLock<T>;
unsafe impl<T: Sync + Send> Sync for OnceLock<T>;
impl<T: UnwindSafe> UnwindSafe for OnceLock<T>;
```
No longer planned as part of this PR, and moved to the `rust_cell_try` feature gate:
```rust
impl<T> OnceCell<T> {
    pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
impl<T> OnceLock<T> {
    pub fn get_or_try_init<F, E>(&self, f: F) -> Result<&T, E> where F: FnOnce() -> Result<T, E>;
}
```
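A brief usage sketch of the APIs proposed for stabilization:
```rust
use std::cell::OnceCell;
use std::sync::OnceLock;

static GLOBAL: OnceLock<String> = OnceLock::new();

fn main() {
    // OnceLock is thread-safe and can back a static; set() wins only once.
    GLOBAL.set("initialized".to_string()).unwrap();
    assert!(GLOBAL.set("again".to_string()).is_err());
    assert_eq!(GLOBAL.get().map(String::as_str), Some("initialized"));

    // OnceCell is the single-threaded counterpart, handy for lazy init.
    let cell: OnceCell<u32> = OnceCell::new();
    let v = cell.get_or_init(|| 42);
    assert_eq!(*v, 42);
}
```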
I am new to this process so would appreciate mentorship wherever needed.
Move `mir::Field` → `abi::FieldIdx`
The first PR for https://github.com/rust-lang/compiler-team/issues/606
This is just the move-and-rename, because it's plenty big already. Future PRs will start using `FieldIdx` more broadly, and concomitantly removing `FieldIdx::new`s.
Support TLS access into dylibs on Windows
This allows access to `#[thread_local]` in upstream dylibs on Windows by introducing a MIR shim to return the address of the thread local. Accesses that go into an upstream dylib will call the MIR shim to get the address of it.
`convert_tls_rvalues` is introduced in `rustc_codegen_ssa` which rewrites MIR TLS accesses to dummy calls which are replaced with calls to the MIR shims when the dummy calls are lowered to backend calls.
A new `dll_tls_export` target option enables this behavior when set to `false`, which is the value used for Windows platforms.
This fixes https://github.com/rust-lang/rust/issues/84933.
As I've been trying to replace a `Vec` with an `IndexVec`, having `last` exist on both but returning very different types makes the transition a bit awkward -- the errors are later, where you get things like "there's no `ty` method on `mir::Field`" rather than a more localized error like "hey, there's no `last` on `IndexVec`".
So I propose renaming `last` to `last_index` to help distinguish `Vec::last`, which returns an element, and `IndexVec::last_index`, which returns an index.
(Similarly, `Iterator::last` also returns an element, not an index.)
Thanks to the combination of #108246 and #108442 it could already remove identity transmutes.
With this PR, it can also simplify them to `IntToInt` casts, `Discriminant` reads, or `Field` projections.
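Some example transmutes that can now be lowered to cheaper MIR, sketched (which form each becomes follows the list above):
```rust
#[repr(transparent)]
struct Wrapper(u32);

#[repr(u8)]
enum Flag { Off = 0, On = 1 }

unsafe fn examples(x: u32, w: Wrapper, f: Flag) -> (i32, u32, u8) {
    let a: i32 = core::mem::transmute(x); // same-size integers: IntToInt cast
    let b: u32 = core::mem::transmute(w); // single-field struct: Field projection
    let c: u8 = core::mem::transmute(f);  // fieldless enum to int: Discriminant read
    (a, b, c)
}
```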
Add a builtin `FnPtr` trait that is implemented for all function pointers
r? `@ghost`
Rebased version of https://github.com/rust-lang/rust/pull/99531 (plus adjustments mentioned in the PR).
If perf is happy with this version, I would like to land it, even if the diagnostics fix in 9df8e1befb5031a5bf9d8dfe25170620642d3c59 only works for `FnPtr` specifically, and does not generally improve blanket impls.
Don't elaborate non-obligations into obligations
It's suspicious to elaborate a `PolyTraitRef` or `Predicate` into an `Obligation`, since the former does not have a param-env associated with it, but the latter does. This is a footgun that, while not being misused *currently* in the compiler, easily could be misused by someone less familiar with the elaborator's inner workings.
This PR just changes the API -- ideally, the elaborator wouldn't even have to deal with obligations if we're not elaborating obligations, but that would require a bit more abstraction than I could be bothered with today.
Permit the MIR inliner to inline diverging functions
This heuristic prevents inlining of `hint::unreachable_unchecked`, which in turn makes `Option/Result::unwrap_unchecked` a bad inlining candidate. I looked through the changes to `core`, `alloc`, `std`, and `hashbrown` by hand and they all seem reasonable. Let's see how this looks in perf...
---
Based on rustc-perf it looks like this regresses ctfe-stress, and the cachegrind diff indicates that this regression is in `InterpCx::statement`. I don't know how to do any deeper analysis because that function is _enormous_ in the try toolchain, which has no debuginfo in it. And a local build produces significantly different codegen for that function, even with LTO.
Since structs are always `VariantIdx(0)`, there's a bunch of files where the only reason they had `VariantIdx` or `vec::Idx` imported at all was to get the first variant.
So this uses a constant for that, and adds some doc-comments to `VariantIdx` while I'm there, since it doesn't have any today.
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
Detect uninhabited types early in const eval
r? `@RalfJung`
implements https://github.com/rust-lang/rust/pull/108442#discussion_r1143003840
This is a breaking change, as some UB during const eval is now detected instead of being silently ignored. Users can see this and other UB that may cause future breakage with `-Zextra-const-ub-checks` or just by running miri on their code, which sets that flag by default.
move Option::as_slice to intrinsic
`@scottmcm` suggested on #109095 that I use a direct approach of unpacking the operation in MIR lowering, so here's the implementation.
cc `@nikic` as this should hopefully unblock #107224 (though perhaps other changes to the prior implementation, which I left for bootstrapping, are needed).
Only clear written-to locals in ConstProp
This aims to revert the regression in https://github.com/rust-lang/rust/pull/108872
Clearing all locals at the end of each bb is very costly.
This PR goes back to the original implementation of recording which locals are modified.
Wrap the whole LocalInfo in ClearCrossCrate.
MIR contains a lot of information about locals. The primary purpose of this information is the quality of borrowck diagnostics.
This PR aims to drop this information after MIR analyses are finished, i.e. starting from post-cleanup runtime MIR.
Ensure `ptr::read` gets all the same LLVM `load` metadata that dereferencing does
I was looking into `array::IntoIter` optimization, and noticed that it wasn't annotating the loads with `noundef` for simple things like `array::IntoIter<i32, N>`. Trying to narrow it down, it seems that was because `MaybeUninit::assume_init_read` isn't marking the load as initialized (<https://rust.godbolt.org/z/Mxd8TPTnv>), which is unfortunate since that's basically its reason to exist.
The root cause is that `ptr::read` is currently implemented via the *untyped* `copy_nonoverlapping`, and thus the `load` doesn't get any type-aware metadata: no `noundef`, no `!range`. This PR solves that by lowering `ptr::read(p)` to `copy *p` in MIR, for which the backends already do the right thing.
Fortuitously, this also improves the IR we give to LLVM for things like `mem::replace`, and fixes a couple of long-standing bugs where `ptr::read` on `Copy` types was worse than `*`ing them.
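As a concrete illustration, both of these should now lower to the same load, with the same metadata:
```rust
// Both become a `copy *p`-style MIR assignment, so the backends emit an
// LLVM `load` carrying the type-aware metadata (`noundef`, `!range`, ...).
pub fn via_deref(p: &i32) -> i32 {
    *p
}

pub unsafe fn via_read(p: *const i32) -> i32 {
    unsafe { core::ptr::read(p) }
}
```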
Zulip conversation: <https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Move.20array.3A.3AIntoIter.20to.20ManuallyDrop/near/341189936>
cc `@erikdesjardins` `@JakobDegen` `@workingjubilee` `@the8472`
Fixes #106369
Fixes #73258
Remove `box_syntax`
r? `@Nilstrieb`
This removes the feature `box_syntax`, which allows the use of `box <expr>` to create a Box, and finalises removing use of the feature from the compiler. `box_patterns` (allowing the use of `box <pat>` in a pattern) is unaffected.
It also removes `ast::ExprKind::Box` - the only way to create a 'box' expression now is with the rustc-internal `#[rustc_box]` attribute.
As a temporary measure to help users move away, `box <expr>` now parses the inner expression, and emits a `MachineApplicable` lint to replace it with `Box::new`.
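The migration, sketched:
```rust
fn main() {
    // before: let b = box 5; // `box <expr>` still parses, but only to lint
    let b = Box::new(5); // the suggested MachineApplicable replacement
    println!("{b}");
}
```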
Closes #49733
Strengthen state tracking in const-prop
Some/many of the changes are replicated between both the const-prop lint and the const-prop optimization.
Behaviour changes:
- const-prop opt does not give a span to propagated values. This was useless as that span's primary purpose is to diagnose evaluation failure in codegen.
- we remove the `OnlyPropagateInto` mode. It was only used for function arguments, which are better modeled by a write before entry.
- the tracking of assignments and discriminants makes it clearer that we do nothing in `NoPropagation` mode or on indirect places.
Ensure value is on the on-disk cache before returning from `ensure()`.
The current logic for `ensure()` on a query just checks that the node is green in the dependency graph.
However, a lot of places use `ensure()` to prevent the query from being called later. This is the case before stealing a query result.
If the query is actually green but the value is not available in the on-disk cache, `ensure` would return, but a subsequent call to the full query would run the code, and attempt to read from a stolen value.
This PR brings the query system in line with that usage by checking whether the queried value is loadable from disk before returning.
Sadly, I can't manage to craft a proper test...
Should fix all instances of "attempted to read from stolen value".
Avoid unnecessary hashing
I noticed some stable hashing being done in a non-incremental build. It turns out that some of this is necessary to compute the crate hash, but some of it is not. Removing the unnecessary hashing is a perf win.
r? `@cjgillot`
Rollup of 8 pull requests
Successful merges:
- #108754 (Retry `pred_known_to_hold_modulo_regions` with fulfillment if ambiguous)
- #108759 (1.41.1 supported 32-bit Apple targets)
- #108839 (Canonicalize root var when making response from new solver)
- #108856 (Remove DropAndReplace terminator)
- #108882 (Tweak E0740)
- #108898 (Set `LIBC_CHECK_CFG=1` when building Rust code in bootstrap)
- #108911 (Improve rustdoc-gui/tester.js code a bit)
- #108916 (Remove an unused return value in `rustc_hir_typeck`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Do not consider `&mut *x` as mutating `x` in `CopyProp`
This PR removes an unfortunate overly cautious case from the current implementation.
Found by https://github.com/rust-lang/rust/pull/105274 cc `@saethlin`
Do not implement HashStable for HashSet (MCP 533)
This PR removes all occurrences of `HashSet` in query results, replacing it either with `FxIndexSet` or with `UnordSet`, and then removes the `HashStable` implementation of `HashSet`. This is part of implementing [MCP 533](https://github.com/rust-lang/compiler-team/issues/533), that is, removing the `HashStable` implementations of all collection types with unstable iteration order.
The changes are mostly mechanical. The only place where additional sorting is happening is in Miri's override implementation of the `exported_symbols` query.
The crate hash is needed:
- if debug assertions are enabled, or
- if incr. comp. is enabled, or
- if metadata is being generated, or
- if `-C instrument-coverage` is enabled.
This commit avoids computing the crate hash when these conditions are
all false, such as when doing a release build of a binary crate.
It uses `Option` to store the hashes when needed, rather than
computing them on demand, because some of them are needed in multiple
places and computing them on demand would make compilation slower.
The commit also removes `Owner::hash_without_bodies`. There is no
benefit to pre-computing that one, it can just be done in the normal
fashion.
Check for free regions in MIR validation
This turns https://github.com/rust-lang/rust/issues/108720 into a MIR validation failure that will reproduce without debug-assertions enabled.
```
error: internal compiler error: broken MIR in Item(WithOptConstParam { did: DefId(0:296 ~ futures_util[3805]::future::future::remote_handle::{impl#3}::poll), const_param_did: None }) (after pass ScalarReplacementOfAggregates) at bb0[0]:
Free regions in optimized runtime-post-cleanup MIR
--> /home/ben/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.26/src/future/future/remote_handle.rs:96:13
|
96 | let this = self.project();
| ^^^^
```
Desugaring of drop and replace at MIR build
This commit desugars the drop and replace deriving from an
assignment at MIR build, avoiding the construction of the
`DropAndReplace` terminator (which will be removed in a following PR).
In order to retain the same error messages for replaces, a new
`DesugaringKind::Replace` variant is introduced.
The changes in the borrowck are also useful for future work in moving drop elaboration
before borrowck, as no `DropAndReplace` would be present there anymore.
Notes on test diffs:
* `tests/ui/borrowck/issue-58776-borrowck-scans-children`: the assignment deriving from the desugaring kills the borrow.
* `tests/ui/async-await/async-fn-size-uninit-locals.rs`, `tests/mir-opt/issue_41110.test.ElaborateDrops.after.mir`, `tests/mir-opt/issue_41888.main.ElaborateDrops.after.mir`: drop elaboration generates (or reads from) a useless drop flag due to an issue with the dataflow analysis. Will be fixed independently by https://github.com/rust-lang/rust/pull/106430.
See https://github.com/rust-lang/rust/pull/104488 for more context
Unify validity checks into a single query
Previously, there were two queries to check whether a type allows the 0x01 or zeroed bitpattern.
I am planning on adding a further initness to check in #100423 (truly uninit, for `MaybeUninit`), which would make this three queries. This seems like overkill for such a small feature, so this PR unifies them into one.
I am not entirely happy with the naming and key type, and am open to improvements.
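A sketch of the unified shape implied by the description (names here are assumptions, not necessarily the final API):
```rust
// One query keyed by which requirement is being asked about, instead of
// one query per bit pattern.
pub enum ValidityRequirement {
    Zero,                    // is the all-zero bit pattern valid?
    UninitMitigated0x01Fill, // is the 0x01-filled bit pattern valid?
    Uninit,                  // planned third case: truly uninit (#100423)
}
```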
r? oli-obk
(This is a large commit. The changes to
`compiler/rustc_middle/src/ty/context.rs` are the most important ones.)
The current naming scheme is a mess, with a mix of `_intern_`, `intern_`
and `mk_` prefixes, with little consistency. In particular, in many
cases it's easy to use an iterator interner when a (preferable) slice
interner is available.
The guiding principles of the new naming system:
- No `_intern_` prefixes.
- The `intern_` prefix is for internal operations.
- The `mk_` prefix is for external operations.
- For cases where there is a slice interner and an iterator interner,
the former is `mk_foo` and the latter is `mk_foo_from_iter`.
Also, `slice_interners!` and `direct_interners!` can now be `pub` or
non-`pub`, which helps enforce the internal/external operations
division.
It's not perfect, but I think it's a clear improvement.
The following lists show everything that was renamed.
slice_interners
- const_list
  - mk_const_list -> mk_const_list_from_iter
  - intern_const_list -> mk_const_list
- substs
  - mk_substs -> mk_substs_from_iter
  - intern_substs -> mk_substs
  - check_substs -> check_and_mk_substs (this is a weird one)
- canonical_var_infos
  - intern_canonical_var_infos -> mk_canonical_var_infos
- poly_existential_predicates
  - mk_poly_existential_predicates -> mk_poly_existential_predicates_from_iter
  - intern_poly_existential_predicates -> mk_poly_existential_predicates
  - _intern_poly_existential_predicates -> intern_poly_existential_predicates
- predicates
  - mk_predicates -> mk_predicates_from_iter
  - intern_predicates -> mk_predicates
  - _intern_predicates -> intern_predicates
- projs
  - intern_projs -> mk_projs
- place_elems
  - mk_place_elems -> mk_place_elems_from_iter
  - intern_place_elems -> mk_place_elems
- bound_variable_kinds
  - mk_bound_variable_kinds -> mk_bound_variable_kinds_from_iter
  - intern_bound_variable_kinds -> mk_bound_variable_kinds
direct_interners
- region
  - intern_region (unchanged)
- const
  - mk_const_internal -> intern_const
- const_allocation
  - intern_const_alloc -> mk_const_alloc
- layout
  - intern_layout -> mk_layout
- adt_def
  - intern_adt_def -> mk_adt_def_from_data (unusual case, hard to avoid)
  - alloc_adt_def(!) -> mk_adt_def
- external_constraints
  - intern_external_constraints -> mk_external_constraints
Other
- type_list
  - mk_type_list -> mk_type_list_from_iter
  - intern_type_list -> mk_type_list
- tup
  - mk_tup -> mk_tup_from_iter
  - intern_tup -> mk_tup
As a part of drop elaboration, we identify dead unwinds, i.e., unwind
edges on a drop terminators which are known to be unreachable, because
there is no need to drop anything.
Previously, the data flow framework was informed about the dead unwinds,
and it assumed those edges are absent from MIR. Unfortunately, the data
flow framework wasn't consistent in maintaining this assumption.
In particular, if a block was reachable only through a dead unwind edge,
its state was still propagated to other blocks. This became an issue in
the context of the change that removes the DropAndReplace terminator,
since it introduces initialization into cleanup blocks.
To avoid this issue, remove unreachable unwind edges before the drop
elaboration, and elaborate only blocks that remain reachable.
Correctly handle aggregates in DataflowConstProp
The previous implementation from https://github.com/rust-lang/rust/pull/107411 flooded the target of an aggregate assignment with `Bottom`, corresponding to the `deinit` that the interpreter does.
As a consequence, when assigning `target = Enum::Variant#i(...)` all the `(target as Variant#j)` were at `Bottom` while they should have been `Top`.
This PR replaces that flooding with `Top`.
As an aside, it corrects a second bug where the wrong place would be used when assigning to enum variant fields, resulting in nothing happening.
Fixes https://github.com/rust-lang/rust/issues/108166
Enable instcombine for mutable reborrows
`instcombine` used to contain this comment, which is no longer accurate, because it is in fact fine to copy `&mut _` in MIR:
```rust
// The dereferenced place must have type `&_`, so that we don't copy `&mut _`.
```
So let's try replacing that check with something much more permissive...
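The shape of code this now permits combining, sketched:
```rust
// In MIR this is `_0 = &mut (*_1)` where `_1: &mut u32`; the combiner may
// now replace the reborrow with a copy of `_1`, since copying `&mut _` is
// fine in MIR even though surface Rust forbids it.
fn reborrow(x: &mut u32) -> &mut u32 {
    &mut *x
}
```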
There are several `mk_foo`/`intern_foo` pairs, where the former takes an
iterator and the latter takes a slice. (This naming convention is bad,
but that's a fix for another PR.)
This commit changes several `mk_foo` occurrences into `intern_foo`,
avoiding the need for some `.iter()`/`.into_iter()` calls. Affected
cases:
- mk_type_list
- mk_tup
- mk_substs
- mk_const_list
Switch to `EarlyBinder` for `type_of` query
Part of the work to finish #105779 and implement https://github.com/rust-lang/types-team/issues/78.
Several queries `X` have a `bound_X` variant that wraps the output in `EarlyBinder`. This adds `EarlyBinder` to the return type of the `type_of` query and removes `bound_type_of`.
r? `@lcnr`
Implement partial support for non-lifetime binders
This implements support for non-lifetime binders. It's pretty useless currently, but I wanted to put this up so the implementation can be discussed.
Specifically, this piggybacks off of the late-bound lifetime collection code in `rustc_hir_analysis::collect::lifetimes`. This seems like a necessary step given the fact that we don't resolve late-bound regions until this point, and binders are sometimes merged.
Q: I'm not sure if I should go down this route, or try to modify the earlier nameres code to compute the right bound var indices for type and const binders eagerly... If so, I'll need to rename all these queries to something more appropriate (I've done this for `resolve_lifetime::Region` -> `resolve_lifetime::ResolvedArg`).
cc rust-lang/types-team#81
r? `@ghost`
Rollup of 7 pull requests
Successful merges:
- #106347 (More accurate spans for arg removal suggestion)
- #108057 (Prevent some attributes from being merged with others on reexports)
- #108090 (`if $c:expr { Some($r:expr) } else { None }` =>> `$c.then(|| $r)`)
- #108092 (note issue for feature(packed_bundled_libs))
- #108099 (use chars instead of strings where applicable)
- #108115 (Do not ICE on unmet trait alias bounds)
- #108125 (Add new people to the compiletest review rotation)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Optimize `mk_region`
PR #107869 avoided some interning under `mk_ty` by special-casing `Ty` variants with simple (integer) bodies. This PR does something similar for regions.
r? `@compiler-errors`
Don't ICE in `might_permit_raw_init` if reference is polymorphic
Emitting optimized MIR for a polymorphic function may require computing layout of a type that isn't (yet) known. This happens in the instcombine pass, for example. Let's fail gracefully in that condition.
cc `@saethlin`
Fixes #107999
Handle discriminant in DataflowConstProp
cc `@jachris`
r? `@JakobDegen`
This PR attempts to extend the DataflowConstProp pass to handle propagation of discriminants. We handle this by adding 2 new variants to `TrackElem`: `TrackElem::Variant` for enum variants and `TrackElem::Discriminant` for the enum discriminant pseudo-place.
The difficulty is that the enum discriminant and enum variants may alias each another. This is the issue of the `Option<NonZeroUsize>` test, which is the equivalent of https://github.com/rust-lang/unsafe-code-guidelines/issues/84 with a direct write.
To handle that, we generalize the flood process to flood all the potentially aliasing places. In particular:
- any write to `(PLACE as Variant)`, either direct or through a projection, floods `(PLACE as OtherVariant)` for all other variants and `discriminant(PLACE)`;
- `SetDiscriminant(PLACE)` floods `(PLACE as Variant)` for each variant.
This implies that flooding is not hierarchical any more, and that an assignment to a non-tracked place may need to flood a tracked place. This is handled by `for_each_aliasing_place` which generalizes `preorder_invoke`.
As we deaggregate enums by putting `SetDiscriminant` last, this allows propagating the value of the discriminant.
This refactor will allow https://github.com/rust-lang/rust/pull/107009 to handle discriminants too.
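The kind of code this lets the pass fold, sketched:
```rust
// After `x = Some(3)`, both `discriminant(x)` and the variant field are
// tracked, so the match can be folded to return the constant 3.
fn f() -> u8 {
    let x = Some(3u8);
    match x {
        Some(v) => v,
        None => 0,
    }
}
```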
Mir-Opt for copying enums with large discrepancies
I have been meaning to make this for quite a while, based off of this [hackmd](https://hackmd.io/@ft4bxUsFT5CEUBmRKYHr7w/rJM8BBPzD).
I'm not sure where to put this opt now that I've made it, so I'd appreciate suggestions on that!
It's also one long chain of statements, not sure if there's a more friendly format to make it.
r? `@tmiasko`
I would `r` oli but he's on leave so he suggested I `r` tmiasko or wesleywiser.
Treat Drop as a rmw operation
Previously, a Drop terminator was considered a move in MIR. This commit changes the behavior to only treat Drop as a mutable access to the dropped place.
In order for this change to be correct, we need to guarantee that:
1. A dropped value won't be used again.
2. Places that appear in a drop won't be used again before a subsequent initialization.
We can ensure this at MIR construction because Drop will only be emitted when a variable goes out of scope, thus having:
* (1), as there is no way of reaching the old value; drop elaboration will also remove any uninitialized drop.
* (2), as the place can't be named following the end of the scope.
However, the initialization status, previously tracked by moves, should also be tied to the execution of a Drop, hence the additional logic in the dataflow analyses.
From discussion in [this thread](https://rust-lang.zulipchat.com/#narrow/stream/233931-t-compiler.2Fmajor-changes/topic/.60DROP.60.20to.20.60DROP_IF.60.20compiler-team.23558), originating from https://github.com/rust-lang/compiler-team/issues/558.
See also https://github.com/rust-lang/rust/pull/104488#discussion_r1085556010
Turn projections into copies in CopyProp.
The current implementation can leave behind projections that are moved out several times.
This PR widens the check to turn such moves into copies: a move out of a projection of a copy is equivalent to a copy of the original projection.
There is a distinction between running this on wasm and i686, even though they should be
identical. This technically is not _incorrect_, it's just an unexpected difference, which is
worth investigating, but not for correctness.
Remove both StorageLive and StorageDead in CopyProp.
Fixes https://github.com/rust-lang/rust/issues/107511
https://github.com/rust-lang/rust/pull/106908 removed StorageDead without the accompanying StorageLive. In loops, execution would see repeated StorageLive, without any StorageDead, which is UB.
So when removing storage statements, we have to remove both StorageLive and StorageDead.
~I also added a MIR validation pass for StorageLive. It may be a bit overzealous.~
Currently, deriving on packed structs has some non-trivial limitations,
related to the fact that taking references on unaligned fields is UB.
The current approach to field accesses in derived code:
- Normal case: `&self.0`
- In a packed struct that derives `Copy`: `&{self.0}`
- In a packed struct that doesn't derive `Copy`: `&self.0`
Plus, we disallow deriving any builtin traits other than `Default` for any
packed generic type, because it's possible that there might be
misaligned fields. This is a fairly broad restriction.
Plus, we disallow deriving any builtin traits other than `Default` for most
packed types that don't derive `Copy`. (The exceptions are those where the
alignments inherently satisfy the packing, e.g. in a type with
`repr(packed(N))` where all the fields have alignments of `N` or less
anyway. Such types are pretty strange, because the `packed` attribute is
not having any effect.)
This commit introduces a new, simpler approach to field accesses:
- Normal case: `&self.0`
- In a packed struct: `&{self.0}`
In the latter case, this requires that all fields impl `Copy`, which is
a new restriction. This means that the following example compiles under
the old approach and doesn't compile under the new approach.
```
#[derive(Debug)]
struct NonCopy(u8);
#[derive(Debug)]
#[repr(packed)]
struct MyType(NonCopy);
```
(Note that the old approach's support for cases like this was brittle.
Changing the `u8` to a `u16` would be enough to stop it working. So not
much capability is lost here.)
However, the other constraints from the old rules are removed. We can now
derive builtin traits for packed generic structs like this:
```
trait Trait { type A; }
#[derive(Hash)]
#[repr(packed)]
pub struct Foo<T: Trait>(T, T::A);
```
To allow this, we add a `T: Copy` bound in the derived impl and a `T::A:
Copy` bound in where clauses. So `T` and `T::A` must impl `Copy`.
We can now also derive builtin traits for packed structs that don't derive
`Copy`, so long as the fields impl `Copy`:
```
#[derive(Hash)]
#[repr(packed)]
pub struct Foo(u32);
```
This includes types that hand-impl `Copy` rather than deriving it, such as the
following, that show up in winapi-0.2:
```
#[derive(Clone)]
#[repr(packed)]
struct MyType(i32);
impl Copy for MyType {}
```
The new approach is simpler to understand and implement, and it avoids
the need for the `unsafe_derive_on_repr_packed` check.
One exception is required for backwards-compatibility: we allow `[u8]`
fields for now. There is a new lint for this,
`byte_slice_in_packed_struct_with_derive`.
Implement simple CopyPropagation based on SSA analysis
This PR extracts the "copy propagation" logic from https://github.com/rust-lang/rust/pull/106285.
MIR may produce chains of assignment between locals, like `_x = move? _y`.
This PR attempts to remove such chains by unifying locals.
The current implementation is a bit overzealous in turning moves into copies, and in removing storage statements.
Use stable metric for const eval limit instead of current terminator-based logic
This patch adds a `MirPass` that inserts a new MIR instruction `ConstEvalCounter` to any loops and function calls in the CFG. This instruction is used during Const Eval to count against the `const_eval_limit`, and emit the `StepLimitReached` error, replacing the current logic which uses Terminators only.
The new method of counting loops and function calls should be more stable across compiler versions (i.e., not cause crates that compiled successfully before, to no longer compile when changes to the MIR generation/optimization are made).
Also see: #103877
InstCombine away intrinsic validity assertions
This optimization (currently) fires 246 times on the standard library. It seems to fire hardly at all on the big crates in the benchmark suite. Interesting.
This patch adds a `MirPass` that tracks the number of back-edges and
function calls in the CFG, adds a new MIR instruction to increment a
counter every time they are encountered during Const Eval, and emits a
warning if a configured limit is breached.
There are a number of APIs that answer dominance queries. Previously they
were named either "dominates" or "is_dominated_by". Consistently use the
"dominates" form.
No functional changes.
- Eliminates all the `get_context` calls that async lowering created.
- Replace all `Local` `ResumeTy` types with `&mut Context<'_>`.
The `Local`s that have their types replaced are:
- The `resume` argument itself.
- The argument to `get_context`.
- The yielded value of a `yield`.
The `ResumeTy` hides a `&mut Context<'_>` behind an unsafe raw pointer, and the
`get_context` function is being used to convert that back to a `&mut Context<'_>`.
Ideally the async lowering would not use the `ResumeTy`/`get_context` indirection,
but rather directly use `&mut Context<'_>`, however that would currently
lead to higher-kinded lifetime errors.
See <https://github.com/rust-lang/rust/issues/105501>.
The async lowering step and the type / lifetime inference / checking are
still using the `ResumeTy` indirection for the time being, and that indirection
is removed here. After this transform, the generator body only knows about `&mut Context<'_>`.
Always permit ConstProp to exploit arithmetic identities
Fixes https://github.com/rust-lang/rust/issues/72751
Initially, I thought I would need to enable operand propagation then do something else, but actually https://github.com/rust-lang/rust/pull/74491 already has the fix for the issue in question! It looks like this optimization was put under MIR opt level 3 due to possible soundness/stability implications, then demoted further to MIR opt level 4 when MIR opt level 2 became associated with `--release`.
Perhaps in the past we were doing CTFE on optimized MIR? We aren't anymore, so this optimization has no stability implications.
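The sort of identity in question, sketched:
```rust
// Even with `x` unknown, the result is fully determined by one operand,
// so ConstProp may now always fold these:
fn mul_zero(x: u32) -> u32 {
    x * 0 // 0
}

fn and_zero(x: u32) -> u32 {
    x & 0 // 0
}
```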
r? `@oli-obk`