Uplift `EarlyBinder` into `rustc_type_ir`
We also need to give `EarlyBinder` a `'tcx` param so that we can carry the `Interner` in the `EarlyBinder` too. This is necessary because otherwise we would have an unconstrained `I: Interner` parameter in many of `EarlyBinder`'s inherent impls.
I also generally think that this is desirable to have, in case we later want to track some state in the `EarlyBinder`.
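To illustrate the "unconstrained parameter" problem with a simplified model (not the real rustc types): if the interner does not appear in `EarlyBinder` itself, an inherent impl that is generic over `I: Interner` is rejected with E0207. Carrying the interner, here as `PhantomData`, constrains `I` and makes such impls well-formed.

```rust
use std::marker::PhantomData;

trait Interner {}

// Simplified stand-in for the uplifted type: the interner parameter is part of the struct.
struct EarlyBinder<I: Interner, T> {
    value: T,
    _interner: PhantomData<I>,
}

// Because `I` appears in the self type, this impl compiles; with a plain
// `EarlyBinder<T>` the same impl would fail with E0207 (unconstrained parameter).
impl<I: Interner, T> EarlyBinder<I, T> {
    fn bind(value: T) -> Self {
        EarlyBinder { value, _interner: PhantomData }
    }

    fn skip_binder(self) -> T {
        self.value
    }
}

fn main() {
    struct Tcx;
    impl Interner for Tcx {}

    let bound: EarlyBinder<Tcx, u32> = EarlyBinder::bind(7);
    assert_eq!(bound.skip_binder(), 7);
}
```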
r? lcnr
Refactor float `Primitive`s to a separate `Float` type
Now that there are four of them, it makes sense to refactor `F16`, `F32`, `F64` and `F128` out of `Primitive` and into a separate `Float` type (like integers already are). This allows patterns like `F16 | F32 | F64 | F128` to be simplified into `Float(_)`, and is consistent with `ty::FloatTy`.
As a side effect, this PR also makes the `Ty::primitive_size` method work with `f16` and `f128`.
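A minimal sketch of the shape described above (the real `rustc_abi` definitions carry more data; the `Int` and `Pointer` variants below are simplified stand-ins):

```rust
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum Float {
    F16,
    F32,
    F64,
    F128,
}

#[allow(dead_code)]
#[derive(Clone, Copy)]
enum Primitive {
    Int { signed: bool }, // stand-in for the existing `Int(Integer, bool)` variant
    Float(Float),         // replaces the four separate float variants
    Pointer,              // stand-in for `Pointer(AddressSpace)`
}

// Patterns that used to spell out `F16 | F32 | F64 | F128` collapse into one arm.
fn is_float(p: Primitive) -> bool {
    matches!(p, Primitive::Float(_))
}

fn main() {
    assert!(is_float(Primitive::Float(Float::F128)));
    assert!(!is_float(Primitive::Int { signed: true }));
}
```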
Tracking issue: #116909
`@rustbot` label +F-f16_and_f128
deref patterns: lower deref patterns to MIR
This lowers deref patterns to MIR. This is a bit tricky because this is the first kind of pattern that requires storing a value in a temporary. Thanks to https://github.com/rust-lang/rust/pull/123324 false edges are no longer a problem.
The thing I'm not confident about is the handling of fake borrows. This PR ignores any fake borrows inside a deref pattern. We are guaranteed to at least fake borrow the place of the first pointer value, which could be enough, but I'm not certain.
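For reference, this is the kind of source pattern being lowered (a nightly-only sketch; it assumes the unstable `deref_patterns` feature and the placeholder `deref!(..)` syntax used in the in-tree tests):

```rust
#![feature(deref_patterns)]
#![allow(incomplete_features)]

fn main() {
    let v = vec![1, 2, 3];
    match v {
        // The `Vec` is dereferenced into a temporary slice place, and the inner
        // slice pattern is then matched against that temporary.
        deref!([ref first, ..]) => assert_eq!(*first, 1),
        _ => unreachable!(),
    }
}
```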
Add simple async drop glue generation
This is a prototype of async drop glue generation for some simple types. Async drop glue is intended to behave very similarly to the regular drop glue, except that it is asynchronous. Currently it does not execute synchronous drops; it only calls user implementations of the `AsyncDrop::async_drop` associated function and awaits the returned future. It is not complete: it only recurses into arrays, slices, tuples, and structs, and it does not yet have the same sensible restrictions as the old `Drop` trait implementation (such as requiring the same bounds as the type definition), even though code assumes their existence (this requires future work).
The current design uses a workaround: it does not create custom async destructor state machine types for ADTs, but instead uses future combinator types defined in the std library (`deferred_async_drop`, `chain`, `ready_unit`).
I also recommend reading my [explainer](https://zetanumbers.github.io/book/async-drop-design.html).
This is a part of the [MCP: Low level components for async drop](https://github.com/rust-lang/compiler-team/issues/727) work.
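As a purely illustrative model (this is not the real `core` trait or the generated glue; the names below are self-contained stand-ins), the shape of the feature is roughly "a user-provided async destructor that the glue awaits before the value goes away":

```rust
#[allow(async_fn_in_trait)]
trait AsyncDropLike {
    // The real prototype uses a pinned receiver; this simplified model takes `&mut self`.
    async fn async_drop(&mut self);
}

struct Connection;

impl AsyncDropLike for Connection {
    async fn async_drop(&mut self) {
        // Flush buffers or send a goodbye message here, awaiting as needed.
    }
}

// Conceptually, the generated glue awaits the user-provided async destructor and
// then lets the ordinary synchronous drop of the value run.
async fn async_drop_glue(mut value: Connection) {
    value.async_drop().await;
    drop(value);
}

fn main() {
    // No executor here; this only shows that the glue is an ordinary future.
    let _future = async_drop_glue(Connection);
}
```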
Feature completeness:
- [x] `AsyncDrop` trait
- [ ] `async_drop_in_place_raw`/async drop glue generation support for
  - [x] Trivially destructible types (integers, bools, floats, string slices, pointers, references, etc.)
  - [x] Arrays and slices (array pointer is unsized into slice pointer)
  - [x] ADTs (enums, structs, unions)
  - [x] Tuple-like types (tuples, closures)
  - [ ] Dynamic types (`dyn Trait`, see explainer's [proposed design](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#async-drop-glue-for-dyn-trait))
  - [ ] Coroutines (https://github.com/rust-lang/rust/pull/123948)
- [x] Async drop glue includes sync drop glue code
- [x] Cleanup branch generation for `async_drop_in_place_raw`
- [ ] Union rejects non-trivially async destructible fields
- [ ] `AsyncDrop` implementation requires same bounds as type definition
- [ ] Skip trivially destructible fields (optimization)
- [ ] New [`TyKind::AdtAsyncDestructor`](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#adt-async-destructor-types) and get rid of combinators
- [ ] [Synchronously undroppable types](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#exclusively-async-drop)
- [ ] Automatic async drop at the end of the scope in async context
Add support for intrinsics fallback body
Before this fix, the call to `body()` would crash, since `has_body()` would return true, but we would try to retrieve the body of an intrinsic, which is not allowed.
Instead, the `Instance::body()` function will now convert an Intrinsic into an Item before retrieving its body.
Note: I also changed how we monomorphize the instance body. Unfortunately, the call still ICEs for some shims.
r? `@oli-obk`
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance
As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).
The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)
The new name nicely matches `ptr::without_provenance`.
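A round-trip sketch under the new name (at the time of this PR these APIs are nightly-only; the feature-gate line below is an assumption about the current gate names and may be unnecessary on later toolchains):

```rust
#![feature(exposed_provenance, strict_provenance)]

fn main() {
    let x = 42u32;
    let ptr: *const u32 = &x;

    // "Expose the provenance and return the address."
    let addr: usize = ptr.expose_addr();

    // Later: reconstruct a pointer that picks up the previously exposed provenance.
    let ptr2: *const u32 = std::ptr::with_exposed_provenance(addr);
    assert_eq!(unsafe { *ptr2 }, 42);
}
```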
Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for where we got to recently that made it ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches. Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
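For concreteness, here is the source-level pattern referenced above, one of many equivalent ways to spell a three-way integer compare; with `BinOp::Cmp` in MIR, the choice of instruction sequence is left to the backend rather than being baked into library code like this:

```rust
use std::cmp::Ordering;

// Today's library-level formulation of a three-way compare for integers.
fn three_way(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}

fn main() {
    assert_eq!(three_way(1, 2), Ordering::Less);
    assert_eq!(three_way(2, 2), Ordering::Equal);
    assert_eq!(three_way(3, 2), Ordering::Greater);
}
```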
---
r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
Rollup of 8 pull requests
Successful merges:
- #114009 (compiler: allow transmute of ZST arrays with generics)
- #122195 (Note that the caller chooses a type for type param)
- #122651 (Suggest `_` for missing generic arguments in turbofish)
- #122784 (Add `tag_for_variant` query)
- #122839 (Split out `PredicatePolarity` from `ImplPolarity`)
- #122873 (Merge my contributor emails into one using mailmap)
- #122885 (Adjust better spastorino membership to triagebot's adhoc_groups)
- #122888 (add a couple more tests)
r? `@ghost`
`@rustbot` modify labels: rollup
Add methods to create StableMIR constant
I've been experimenting with transforming the StableMIR to instrument the code with potential UB checks.
The modified body will only be used by our analysis tool; however, constants in StableMIR must be backed by rustc constants. Thus, I'm adding a few functions to build constants, such as strings and other primitives.
One question I have is whether we should instead create a global allocation for strings.
r? `@oli-obk`
Add `intrinsic_name` to get plain intrinsic name
Add an `intrinsic_name` API to retrieve the plain intrinsic name. The plain name does not include type arguments (as `trimmed_name` does), which is more convenient to match with intrinsic symbols.
Add asm goto support to `asm!`
Tracking issue: #119364
This PR implements asm-goto support, using the syntax described in the "future possibilities" section of [RFC2873](https://rust-lang.github.io/rfcs/2873-inline-asm.html#asm-goto).
Currently I have only implemented the `label` part, not the `fallthrough` part (i.e. fallthrough is implicit). This doesn't reduce the expressiveness, though, since you can use label-break to get arbitrary control flow or simply set a value and rely on the jump threading optimisation to get the desired control flow. I can add that later if deemed necessary.
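A rough x86_64-only sketch of the `label` operand (nightly-only, behind the unstable `asm_goto` feature; the syntax follows the in-tree tests as I understand them):

```rust
#![feature(asm_goto)]

use std::arch::asm;

fn jump_to_label() -> bool {
    let mut jumped = false;
    unsafe {
        asm!(
            "jmp {}", // branch to the labelled Rust block below
            label {
                jumped = true; // runs only when the asm jumps here
            }
        );
    }
    jumped
}

fn main() {
    assert!(jump_to_label());
}
```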
r? `@Amanieu`
cc `@ojeda`
`CompilerError` has `CompilationFailed` and `ICE` variants, which seems
reasonable at first. But the way it identifies them is flawed:
- If compilation errors out, i.e. `RunCompiler::run` returns an `Err`,
it uses `CompilationFailed`, which is reasonable.
- If compilation panics with `FatalError`, it catches the panic and uses
`ICE`. This is sometimes right, because ICEs do cause `FatalError`
panics, but sometimes wrong, because certain compiler errors also
cause `FatalError` panics. (The compiler/rustdoc/clippy/whatever just
catches the `FatalError` with `catch_with_exit_code` in `main`.)
In other words, certain non-ICE compilation failures get miscategorized
as ICEs. It's not possible to reliably distinguish the two cases, so
this commit merges them. It also renames the combined variant as just
`Failed`, to better match the existing `Interrupted` and `Skipped`
variants.
Here is an example of a non-ICE failure that causes a `FatalError`
panic, from `tests/ui/recursion_limit/issue-105700.rs`:
```
#![recursion_limit="4"]
#![invalid_attribute]
#![invalid_attribute]
#![invalid_attribute]
#![invalid_attribute]
#![invalid_attribute]
//~^ERROR recursion limit reached while expanding
fn main() {{}}
```
The internal function was unsound: it could cause UB in rare cases where
the user inadvertently stored the returned object in a location that
could outlive the TyCtxt.
In order to make it safe, we now take a type context as an argument to
the internal fn, and we ensure that interned items are lifted using the
provided context.
Thus, this change ensures that the compiler can properly enforce
that the object does not outlive the type context it was lifted to.
Make tcx optional from StableMIR run macro and extend it to accept closures
Change the `run` macro to avoid a sometimes unnecessary dependency on `TyCtxt`, and introduce `run_with_tcx` to capture use cases where `tcx` is required. Additionally, extend both macros to accept closures that may capture variables.
I've also modified the `internal()` method to make it safer, by accepting the type context to force the `'tcx` lifetime to match the context lifetime.
These are non-backward compatible changes, but they only affect internal APIs which are provided today as helper functions until we have a stable API to start the compiler.
I added a `tcx` argument to `internal` to force `'tcx` to be the same
lifetime as `TyCtxt`. The only other solution I could think of was to
change this function to be `unsafe`.
Simplify the `run` macro to avoid a sometimes unnecessary dependency
on `TyCtxt`. Instead, users can use the new internal method `tcx()`.
Additionally, extend the macro to accept closures that may capture
variables.
Add method to get instance instantiation arguments
Add a method to get the instance instantiation arguments, and include that information in the instance's debug output.
Add function ABI and type layout to StableMIR
This change introduces a new module to StableMIR named `abi`, with information from `rustc_target::abi` and `rustc_abi`, that allows users to retrieve more low-level information required to perform bit-precise analysis.
The layout of a type can be retrieved via `Ty::layout`, and the instance ABI can be retrieved via `Instance::fn_abi()`.
To properly handle errors while retrieving layout information, we had to implement a few layout-related traits.
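A minimal sketch using the two entry points named above, meant to run inside a tool that drives the compiler through StableMIR; the module paths and error handling are my assumptions about the unstable API rather than guaranteed signatures:

```rust
use stable_mir::mir::mono::Instance;
use stable_mir::ty::Ty;

fn inspect(instance: Instance, ty: Ty) {
    // ABI of a monomorphic function instance.
    if let Ok(fn_abi) = instance.fn_abi() {
        println!("fn ABI: {fn_abi:?}");
    }

    // Layout of an arbitrary type.
    if let Ok(layout) = ty.layout() {
        println!("layout: {layout:?}");
    }
}
```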
r? `@compiler-errors`