Commit Graph

1930 Commits

Author SHA1 Message Date
Michael Goulet
568703c4bd Use a helper to zip together parent and child captures for coroutine-closures 2024-04-10 13:39:52 -04:00
Michael Goulet
69b690f0f6 Only assert for child/parent projection compatibility AFTER checking that they're coming from the same place 2024-04-10 10:13:24 -04:00
Matthias Krüger
df068dba0a
Rollup merge of #123668 - oli-obk:by_move_body_golfing, r=compiler-errors
async closure coroutine by move body MirPass refactoring

Unsure about the last commit, but I think the other changes help in simplifying the control flow
2024-04-10 04:27:41 +02:00
Oli Scherer
e14e7954ae Iterate over parent captures first, as there is a 1:N mapping of parent captures to child captures 2024-04-09 19:51:11 +00:00
Oli Scherer
bf497b8347 Add a FIXME 2024-04-09 19:51:11 +00:00
Guillaume Gomez
cfe1faa75d
Rollup merge of #123658 - compiler-errors:stop-assuming, r=oli-obk
Stop making any assumption about the projections applied to the upvars in the `ByMoveBody` pass

So it turns out that because of subtle optimizations like [`truncate_capture_for_optimization`](ab5bda1aa7/compiler/rustc_hir_typeck/src/upvar.rs (L2351)), we simply cannot make any assumptions about the shape of the projections applied to the upvar locals in a coroutine body.

So stop doing that -- the code is resilient to such projections, so the assertion really existed only to "protect against the unknown".
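
As a hedged illustration (not taken from the PR, and using a plain closure rather than a coroutine-closure for simplicity): precise capture nominally picks a deep place such as `o.inner.x`, but optimizations like `truncate_capture_for_optimization` may shorten the captured place, so the projections seen on the upvar local can vary.

```
struct Inner { x: i32 }
struct Outer { inner: Inner }

fn main() {
    let o = Outer { inner: Inner { x: 1 } };
    // Precise capture nominally captures the place `o.inner.x`; upvar
    // optimizations may truncate that to a shorter prefix, so MIR passes
    // cannot rely on a particular projection shape on the upvar local.
    let c = move || println!("{}", o.inner.x);
    c();
}
```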

r? oli-obk
Fixes #123650
2024-04-09 13:39:23 +02:00
Oli Scherer
f014e91a38 Shrink a loop to its looping part and move out the part that runs after the loop 2024-04-09 07:53:12 +00:00
Oli Scherer
626fd4b258 prefer expect over let else bug! 2024-04-09 07:48:55 +00:00
Oli Scherer
688e531925 Split out a complex if condition into a named function 2024-04-09 07:47:40 +00:00
bors
59c808fcd9 Auto merge of #122387 - DianQK:re-enable-early-otherwise-branch, r=cjgillot
Re-enable the early otherwise branch optimization

Closes #95162. Fixes #119014.

This is the first part of #121397.

An invalid enum discriminant can come from anywhere, so we have to check that all successors contain the discriminant statement. Hoisting these instructions should be handled by a separate pass.
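
A hedged sketch (not from the PR) of the shape `EarlyOtherwiseBranch` targets: nested switches on two discriminants that can be collapsed into a single comparison, which is only sound when every successor actually reads the second discriminant, hence the check above.

```
fn nested(a: Option<u32>, b: Option<u32>) -> u32 {
    // Lowers to a switch on `a`'s discriminant whose arms each switch on
    // `b`'s discriminant; the optimization compares the two discriminants
    // directly when the soundness check described above succeeds.
    match (a, b) {
        (Some(x), Some(y)) => x + y,
        (None, None) => 0,
        _ => 1,
    }
}

fn main() {
    assert_eq!(nested(Some(2), Some(3)), 5);
}
```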

r? cjgillot
2024-04-09 01:02:29 +00:00
Michael Goulet
54a93ab11e Actually, stop making any assumption about the projections applied to the upvar 2024-04-08 19:47:52 -04:00
bors
ab5bda1aa7 Auto merge of #123645 - matthiaskrgr:rollup-yd8d7f1, r=matthiaskrgr
Rollup of 9 pull requests

Successful merges:

 - #122781 (Fix argument ABI for overaligned structs on ppc64le)
 - #123367 (Safe Transmute: Compute transmutability from `rustc_target::abi::Layout`)
 - #123518 (Fix `ByMove` coroutine-closure shim (for 2021 precise closure capturing behavior))
 - #123547 (bootstrap: remove unused pub fns)
 - #123564 (Don't emit divide-by-zero panic paths in `StepBy::len`)
 - #123578 (Restore `pred_known_to_hold_modulo_regions`)
 - #123591 (Remove unnecessary cast from `LLVMRustGetInstrProfIncrementIntrinsic`)
 - #123632 (parser: reduce visibility of unnecessary public `UnmatchedDelim`)
 - #123635 (CFI: Fix ICE in KCFI non-associated function pointers)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-08 20:31:08 +00:00
Matthias Krüger
36d1915449
Rollup merge of #123518 - compiler-errors:by-move-fixes, r=oli-obk
Fix `ByMove` coroutine-closure shim (for 2021 precise closure capturing behavior)

This PR reworks the way that we perform the `ByMove` coroutine-closure shim to account for the fact that the upvars of the outer coroutine-closure and the inner coroutine might not line up due to edition-2021 closure capture rules changes.

Specifically, the number of upvars may differ *and/or* the inner coroutine may have additional projections applied to an upvar. This PR reworks the information we pass into the `ByMoveBody` MIR visitor to account for both of these facts.
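
A hedged illustration of the mismatch described above (an assumed example, not from the PR; async closures were nightly-only at the time):

```
#![feature(async_closure)] // nightly-only at the time of this PR

struct S {
    a: String,
    b: String,
}

fn main() {
    let s = S { a: String::from("a"), b: String::from("b") };
    // With edition-2021 precise capture, the outer coroutine-closure may
    // capture `s.a` and `s.b` as separate upvars, while the inner coroutine
    // reaches them through additional projections on the outer closure's
    // captures, so the two upvar lists need not line up one-to-one.
    let c = async move || {
        println!("{} {}", s.a, s.b);
    };
    drop(c);
}
```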

I tried to leave comments explaining exactly what everything is doing, but let me know if you have questions.

r? oli-obk
2024-04-08 22:06:21 +02:00
bors
211518e5fb Auto merge of #120614 - DianQK:simplify-switch-int, r=cjgillot
Transforms match into an assignment statement

Fixes #106459.

We should be able to do some similar transformations, like `enum` to `enum`.
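
A hedged sketch (not from the PR) of the kind of match that can become a single assignment: every arm yields a value that mirrors the discriminant.

```
enum E { A, B, C }

fn to_int(e: E) -> u8 {
    // Each arm returns the variant's index, so the whole switch can be
    // replaced by one assignment derived from `discriminant(e)` in MIR.
    match e {
        E::A => 0,
        E::B => 1,
        E::C => 2,
    }
}

fn main() {
    assert_eq!(to_int(E::B), 1);
}
```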

r? mir-opt
2024-04-08 18:28:50 +00:00
Oli Scherer
84acfe86de Actually create ranged int types in the type system. 2024-04-08 12:02:19 +00:00
DianQK
166bb1bd46
Don't change the otherwise of the switch 2024-04-08 19:20:07 +08:00
DianQK
f4407370db
Change the return type of can_simplify to Option<()> 2024-04-08 19:09:57 +08:00
DianQK
032bb742ab
Add comments for CompareType 2024-04-08 19:07:53 +08:00
DianQK
254289a16e
Updating the MIR with MirPatch 2024-04-08 19:04:11 +08:00
DianQK
e752af765e
Transforms a match containing negative numbers into an assignment statement as well 2024-04-08 19:00:57 +08:00
DianQK
1f061f47e2
Transforms match into an assignment statement 2024-04-08 19:00:53 +08:00
DianQK
7af7458453
Refactor MatchBranchSimplification 2024-04-08 18:54:42 +08:00
DianQK
a334848ba3
Re-enable the early otherwise branch optimization 2024-04-07 21:14:29 +08:00
DianQK
31e74771f0
Resolve unsound hoisting of discriminant in EarlyOtherwiseBranch 2024-04-07 21:14:26 +08:00
Ben Kimock
a7912cb421 Put checks that detect UB under their own flag below debug_assertions 2024-04-06 11:21:47 -04:00
Michael Goulet
ad0fcac72b Account for an additional reborrow inserted by UniqueImmBorrow and MutBorrow 2024-04-05 17:35:03 -04:00
Michael Goulet
49c4ebcc40 Check the base of the place too! 2024-04-05 16:48:45 -04:00
Michael Goulet
0f13bd436b Add some helpful comments 2024-04-05 16:29:49 -04:00
Michael Goulet
3674032eb2 Rework the ByMoveBody shim to actually work correctly 2024-04-05 15:28:13 -04:00
Guillaume Gomez
02ee8a8cee
Rollup merge of #123350 - compiler-errors:async-closure-by-move, r=oli-obk
Actually use the inferred `ClosureKind` from signature inference in coroutine-closures

A follow-up to https://github.com/rust-lang/rust/pull/123349, which fixes another subtle bug: We were not taking into account the async closure kind we infer during closure signature inference.

When I pass a closure directly to an arg like `fn(x: impl async FnOnce())`, that should have the side-effect of artificially restricting the kind of the async closure to `ClosureKind::FnOnce`. We weren't doing this -- that's a quick fix; however, it uncovers a second, more subtle bug with the way that `move`, async closures, and `FnOnce` interact.

Specifically, when we have an async closure like:
```
let x = Struct;
let c = infer_as_fnonce(async move || {
  println!("{x:?}");
});
```

The outer closure captures `x` by move, but the inner coroutine still immutably borrows `x` from the outer closure. Since we've forced the closure to be `async FnOnce()`, we can't actually *do* a self borrow, since the signature of `AsyncFnOnce::call_once` doesn't have a borrowed lifetime. This means that all `async move` closures that are constrained to `FnOnce` will fail borrowck.

We can fix that by detecting this case specifically, and making the *inner* async closure `move` as well. This is always beneficial to closure analysis, since if we have an `async FnOnce()` that's `move`, there's no reason to ever borrow anything, so `move` isn't artificially restrictive.
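
For reference, a hypothetical signature for the `infer_as_fnonce` used above (an assumption, not from the PR; the `async Fn` trait-bound syntax was nightly-only at the time):

```
#![feature(async_closure)] // nightly-only at the time of this PR

// Passing a closure to a parameter like this constrains the inferred kind
// of the async closure to FnOnce during signature inference.
fn infer_as_fnonce(f: impl async FnOnce()) {
    drop(f);
}

fn main() {}
```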
2024-04-05 16:38:51 +02:00
bors
c0ddaef075 Auto merge of #123444 - saethlin:const-eval-inline-cycles, r=tmiasko
Teach MIR inliner query cycle avoidance about const_eval_select

Fixes https://github.com/rust-lang/rust/issues/122659

r? tmiasko
2024-04-05 04:34:05 +00:00
Jacob Pratt
58eb6e5803
Rollup merge of #123464 - fmease:rn-has-proj-to-has-aliases, r=compiler-errors
Cleanup: Rename `HAS_PROJECTIONS` to `HAS_ALIASES` etc.

The name of the bitflag `HAS_PROJECTIONS` and of its corresponding method `has_projections` is quite historical, dating back to a time when projections were the only kind of alias type.

I think it's time to update it to clear up any potential confusion for newcomers and to reduce unnecessary friction during contributor onboarding.

r? types
2024-04-04 21:16:58 -04:00
Michael Goulet
55e46612c1 Force move async-closures that are FnOnce to make their inner coroutines also move 2024-04-04 19:44:51 -04:00
León Orell Valerian Liehr
6f17b7f0ab
Rename HAS_PROJECTIONS to HAS_ALIASES etc. 2024-04-04 19:26:17 +02:00
Matthias Krüger
4ba3f46be3
Rollup merge of #123439 - Zalathar:constants, r=oli-obk
coverage: Remove useless constants

After #122972 and #123419, these constants don't serve any useful purpose, so get rid of them.

`@rustbot` label +A-code-coverage
2024-04-04 14:51:18 +02:00
bors
29fe618f75 Auto merge of #123052 - maurer:addr-taken, r=compiler-errors
CFI: Support function pointers for trait methods

Adds support for both CFI and KCFI for function pointers to trait methods by attaching both concrete and abstract types to functions.

KCFI does this by generating a `ReifyShim` for any function pointer to a method that could go into a vtable, and keeping these separate from `ReifyShim`s that are *intended* for vtable use by setting a `ReifyReason` on them.

CFI does this by setting both the concrete and abstract type on every instance.
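
A hedged sketch (not from the PR) of the two situations involved: a function pointer to a trait method, which goes through a `ReifyShim` and needs the concrete type, versus a call through a vtable, which uses the abstract type.

```
trait Greet {
    fn hello(&self);
}

struct S;

impl Greet for S {
    fn hello(&self) {
        println!("hello");
    }
}

fn main() {
    let fp: fn(&S) = <S as Greet>::hello; // function pointer to a trait method
    fp(&S);
    let obj: &dyn Greet = &S; // the same method reached through a vtable
    obj.hello();
}
```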

This should land after #123024 or a similar PR, as it diverges the implementation of CFI vs KCFI.

r? `@compiler-errors`
2024-04-04 06:40:30 +00:00
Ben Kimock
b0b7c860e1 Teach MIR inliner query cycle avoidance about const_eval_select 2024-04-04 00:10:52 -04:00
Zalathar
e08fdb0f2f coverage: Remove useless constants 2024-04-04 11:07:59 +11:00
Matthias Krüger
25b0e84170
Rollup merge of #123419 - petrochenkov:zeroindex, r=compiler-errors
rustc_index: Add a `ZERO` constant to index types

It is commonly used.
2024-04-03 22:11:02 +02:00
Vadim Petrochenkov
b40ea03f8a rustc_index: Add a ZERO constant to index types
It is commonly used.
2024-04-03 19:06:22 +03:00
joboet
989660c3e6
rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
bors
99c42d2340 Auto merge of #123322 - matthewjasper:remove-mir-unsafeck, r=lcnr,compiler-errors
Remove MIR unsafe check

Now that THIR unsafeck is enabled by default in stable, I think we can remove MIR unsafeck entirely. This PR also removes safety information from MIR.
2024-04-03 10:30:34 +00:00
Matthew Jasper
a277c901d9 Remove MIR unsafe check
This also removes safety information from MIR.
2024-04-03 08:50:12 +00:00
bors
c7491b9733 Auto merge of #123402 - workingjubilee:rollup-0j5ihn6, r=workingjubilee
Rollup of 4 pull requests

Successful merges:

 - #122411 ( Provide cabi_realloc on wasm32-wasip2 by default )
 - #123349 (Fix capture analysis for by-move closure bodies)
 - #123359 (Link against libc++abi and libunwind as well when building LLVM wrappers on AIX)
 - #123388 (use a consistent style for links)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-03 08:28:48 +00:00
bors
76cf07d5df Auto merge of #122225 - DianQK:nits-120268, r=cjgillot
Rename `UninhabitedEnumBranching` to `UnreachableEnumBranching`

Per [#120268](https://github.com/rust-lang/rust/pull/120268#discussion_r1517492060), I renamed `UninhabitedEnumBranching` to `UnreachableEnumBranching`.

I resolved some nits by adding some comments.

I adjusted the workaround restrictions. This should be useful for `a <= b` and `if let Some/Ok(v)`. For enums with few variants, `early-tailduplication` should not cause compile-time overhead.
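
A hedged sketch (not from the PR) of a branch the renamed pass can prune: a variant holding an uninhabited type can never be constructed, so its arm's block becomes unreachable in MIR.

```
enum Never {}

enum E {
    A(u32),
    B(Never), // uninhabited: this variant can never exist at runtime
}

fn get(e: E) -> u32 {
    match e {
        E::A(x) => x,
        E::B(_) => unreachable!(),
    }
}

fn main() {
    println!("{}", get(E::A(7)));
}
```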

r? RalfJung
2024-04-03 06:22:23 +00:00
Michael Goulet
ec74a304bb Comments, comments, comments 2024-04-02 20:07:49 -04:00
Michael Goulet
a1a1f41027 Fix capture analysis for by-move closure bodies 2024-04-02 20:07:48 -04:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent developments that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
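
A hedged, minimal version of the kind of test case mentioned above (an assumed example, not the actual test file): a derived `Ord` on a simple 3-field struct, whose field comparisons are what shrink once three-way compare is a single MIR operation.

```
use std::cmp::Ordering;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Triple(u16, i8, u64);

fn demo(a: &Triple, b: &Triple) -> Ordering {
    // The derived Ord chains the field comparisons; with BinOp::Cmp each
    // field's three-way compare is one MIR operation instead of a branchy
    // expansion.
    a.cmp(b)
}

fn main() {
    assert_eq!(demo(&Triple(1, 2, 3), &Triple(1, 2, 4)), Ordering::Less);
}
```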

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
Matthew Maurer
6aa89f684e Track reason for creating a ReifyShim
KCFI needs to be able to tell which kind of `ReifyShim` it is examining
in order to decide whether to use a concrete type (`FnPtr` case) or an
abstract type (`Vtable` case). You can *almost* tell this from context,
but there is one case where you can't: if a trait has a method which is
*not* `#[track_caller]`, with an impl that *is* `#[track_caller]`, both
the vtable and a function pointer created from that method will be
`ReifyShim(def_id)`.
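
A hedged sketch of the ambiguous case described above (an assumed example, not from the commit):

```
trait T {
    fn m(&self);
}

struct S;

impl T for S {
    #[track_caller] // the impl is #[track_caller], the trait method is not
    fn m(&self) {}
}

fn main() {
    let fp: fn(&S) = <S as T>::m; // FnPtr case: needs the concrete type
    fp(&S);
    let obj: &dyn T = &S; // Vtable case: needs the abstract type
    obj.m();
}
```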

Currently, the reason is optional to ensure no additional unique
`ReifyShim`s are added without KCFI on. However, the case in which an
extra `ReifyShim` is created is sufficiently rare that this may be worth
revisiting to reduce complexity.
2024-04-02 19:11:16 +00:00
Oli Scherer
6f3cc0903c Avoid an is_empty() followed by an index op in favor of a single fallible op 2024-04-02 14:13:05 +00:00
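
A hedged sketch (not the actual compiler change) of the pattern this commit message describes:

```
fn first_checked(v: &[u32]) -> Option<u32> {
    // before: an emptiness check followed by an index op
    if !v.is_empty() { Some(v[0]) } else { None }
}

fn first_fallible(v: &[u32]) -> Option<u32> {
    // after: a single fallible op
    v.first().copied()
}

fn main() {
    assert_eq!(first_checked(&[3, 4]), first_fallible(&[3, 4]));
}
```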