Commit Graph

1814 Commits

Ralf Jung
dba1849c22 interpret: hide some reexports in rustdoc 2024-05-02 18:47:36 +02:00
Ralf Jung
173d1bd36b properly fill a promoted's required_consts
then we can also make all_required_consts_are_checked a constant instead of a function
2024-04-23 23:02:54 +02:00
Ralf Jung
bf021ea625 interpret: sanity-check that required_consts captures all consts that can fail 2024-04-23 22:52:44 +02:00
bors
40dcd796d0 Auto merge of #124302 - matthiaskrgr:rollup-2aya8n8, r=matthiaskrgr
Rollup of 3 pull requests

Successful merges:

 - #124003 (Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics))
 - #124169 (Don't fatal when calling `expect_one_of` when recovering arg in `parse_seq`)
 - #124286 (Subtree sync for rustc_codegen_cranelift)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-23 18:23:46 +00:00
Matthias Krüger
918304b190
Rollup merge of #124003 - WaffleLapkin:dellvmization, r=scottmcm,RalfJung,antoyo
Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics)

This implements https://github.com/rust-lang/compiler-team/issues/693 minus what was implemented in #123226.

Note: I decided to _not_ change `shl`/... builder methods, as it just doesn't seem worth it.

r? ``@scottmcm``
2024-04-23 20:17:51 +02:00
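As context for the `u32`-vs-`Self` change, here is a short sketch of the stable integer APIs these intrinsics back: shift and rotate counts are `u32` regardless of the integer's own width, which is what the intrinsic signatures now match.

```rust
fn main() {
    let x: u64 = 0x0123_4567_89ab_cdef;
    // The count parameter is `u32` even though `x` is `u64`.
    assert_eq!(x.rotate_left(8), 0x2345_6789_abcd_ef01);
    // Same for wrapping shifts: the `u32` amount is masked modulo the width.
    assert_eq!(x.wrapping_shl(64), x);
}
```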
León Orell Valerian Liehr
332cac2c6d
Rollup merge of #122598 - Nadrieril:full-derefpats, r=matthewjasper
deref patterns: lower deref patterns to MIR

This lowers deref patterns to MIR. This is a bit tricky because this is the first kind of pattern that requires storing a value in a temporary. Thanks to https://github.com/rust-lang/rust/pull/123324 false edges are no longer a problem.

The thing I'm not confident about is the handling of fake borrows. This PR ignores any fake borrows inside a deref pattern. We are guaranteed to at least fake borrow the place of the first pointer value, which could be enough, but I'm not certain.
2024-04-23 17:25:15 +02:00
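For orientation, a hedged sketch of the surface feature being lowered (nightly-only `feature(deref_patterns)`; the `deref!` syntax was still in flux at the time, so this may not compile on every nightly):

```rust
#![feature(deref_patterns)]
#![allow(incomplete_features)]

fn first(v: Vec<u32>) -> Option<u32> {
    match v {
        // Dereferences the Vec to a slice before matching; lowering this
        // requires storing the scrutinee in a temporary, which is what
        // makes this PR tricky.
        deref!([first, ..]) => Some(first),
        _ => None,
    }
}

fn main() {
    assert_eq!(first(vec![1, 2, 3]), Some(1));
}
```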
Matthias Krüger
8039488e59
Rollup merge of #124220 - RalfJung:interpret-wrong-vtable, r=oli-obk
Miri: detect wrong vtables in wide pointers

Fixes https://github.com/rust-lang/miri/issues/3497.
Needed to catch the UB that https://github.com/rust-lang/rust/pull/123572 will start exploiting.

r? `@oli-obk`
2024-04-23 06:24:57 +02:00
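A hedged illustration of the kind of wide-pointer UB this detects (the transmute here is made up for demonstration; any wide pointer carrying a vtable for the wrong trait qualifies):

```rust
use std::fmt::{Debug, Display};

fn main() {
    let x = 42u32;
    let as_debug: &dyn Debug = &x;
    // Both trait objects are (data, vtable) pairs of the same size, so this
    // transmute compiles, but the resulting pointer carries a `Debug` vtable
    // where a `Display` vtable is required: UB that Miri can now flag.
    let bad: &dyn Display = unsafe { std::mem::transmute(as_debug) };
    println!("{bad}");
}
```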
bors
aca749eefc Auto merge of #121801 - zetanumbers:async_drop_glue, r=oli-obk
Add simple async drop glue generation

This is a prototype of async drop glue generation for some simple types. Async drop glue is intended to behave very similarly to the regular drop glue, except for being asynchronous. Currently it does not execute synchronous drops, but only calls user implementations of the `AsyncDrop::async_drop` associated function and awaits the returned future. It is not complete: it only recurses into arrays, slices, tuples, and structs, and it does not yet enforce the same sensible restrictions as the old `Drop` trait implementation (such as requiring the same bounds as the type definition), even though other code assumes those restrictions hold; that requires future work.

The current design uses a workaround: it does not create any custom async-destructor state-machine types for ADTs, but instead uses future-combinator types defined in the standard library (`deferred_async_drop`, `chain`, `ready_unit`).

Also I recommend reading my [explainer](https://zetanumbers.github.io/book/async-drop-design.html).

This is a part of the [MCP: Low level components for async drop](https://github.com/rust-lang/compiler-team/issues/727) work.

Feature completeness:

 - [x] `AsyncDrop` trait
 - [ ] `async_drop_in_place_raw`/async drop glue generation support for
   - [x] Trivially destructible types (integers, bools, floats, string slices, pointers, references, etc.)
   - [x] Arrays and slices (array pointer is unsized into slice pointer)
   - [x] ADTs (enums, structs, unions)
   - [x] tuple-like types (tuples, closures)
   - [ ] Dynamic types (`dyn Trait`, see explainer's [proposed design](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#async-drop-glue-for-dyn-trait))
   - [ ] coroutines (https://github.com/rust-lang/rust/pull/123948)
 - [x] Async drop glue includes sync drop glue code
 - [x] Cleanup branch generation for `async_drop_in_place_raw`
 - [ ] Union rejects non-trivially async destructible fields
 - [ ] `AsyncDrop` implementation requires same bounds as type definition
 - [ ] Skip trivially destructible fields (optimization)
 - [ ] New [`TyKind::AdtAsyncDestructor`](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#adt-async-destructor-types) and get rid of combinators
 - [ ] [Synchronously undroppable types](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#exclusively-async-drop)
 - [ ] Automatic async drop at the end of the scope in async context
2024-04-23 02:10:23 +00:00
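A purely illustrative sketch of the user-facing idea behind this prototype, written with stable async-fn-in-trait syntax. The trait below is a hypothetical stand-in, not the real unstable `AsyncDrop` trait, whose exact shape (an associated future type) differs:

```rust
// Hypothetical stand-in; NOT the actual unstable `core` API.
trait AsyncDropSketch {
    async fn async_drop(&mut self);
}

struct Connection;

impl AsyncDropSketch for Connection {
    async fn async_drop(&mut self) {
        // E.g., send a close frame without blocking the executor,
        // which a synchronous `Drop` cannot await.
    }
}

async fn use_conn() {
    let mut conn = Connection;
    // Under the proposal, dropping `conn` at the end of an async scope would
    // eventually await the async drop glue automatically (still an unchecked
    // item in the feature list above); this sketch calls it explicitly.
    conn.async_drop().await;
}

fn main() {}
```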
Guillaume Gomez
6a326d889a
Rollup merge of #124230 - reitermarkus:generic-nonzero-stable, r=dtolnay
Stabilize generic `NonZero`.

Tracking issue: https://github.com/rust-lang/rust/issues/120257

r? `@dtolnay`
2024-04-22 20:26:00 +02:00
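In practice, stabilization means the generic form below works on stable, replacing the per-width aliases like `NonZeroU32`:

```rust
use std::num::NonZero;

fn main() {
    let n: NonZero<u32> = NonZero::new(42).expect("42 is nonzero");
    assert_eq!(n.get(), 42);
    // The niche still applies: Option<NonZero<u32>> is u32-sized.
    assert_eq!(
        std::mem::size_of::<Option<NonZero<u32>>>(),
        std::mem::size_of::<u32>()
    );
}
```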
Markus Reiter
33e68aadc9
Stabilize generic NonZero. 2024-04-22 18:48:47 +02:00
Scott McMurray
bb8d6f790b Address PR feedback 2024-04-21 11:08:37 -07:00
Scott McMurray
de64ff76f8 Use it in the library, and InstSimplify it away in the easy places 2024-04-21 11:08:37 -07:00
Scott McMurray
4f4442655e Add an intrinsic that lowers to AggregateKind::RawPtr 2024-04-21 11:08:37 -07:00
Scott McMurray
e6b2b764ec Add AggregateKind::RawPtr and enough support to compile 2024-04-21 11:08:37 -07:00
Ralf Jung
875f0c2da0 Miri: detect wrong vtables in wide pointers 2024-04-21 13:04:51 +02:00
Nadrieril
50531806ee Add a non-shallow fake borrow 2024-04-20 16:01:35 +02:00
bors
ce3263e60e Auto merge of #124113 - RalfJung:interpret-scalar-ops, r=oli-obk
interpret: use ScalarInt for bin-ops; avoid PartialOrd for ScalarInt

Best reviewed commit-by-commit

r? `@oli-obk`
2024-04-19 17:00:28 +00:00
Ralf Jung
42220f0930 ScalarInt: add methods to assert being a (u)int of given size 2024-04-19 13:51:52 +02:00
Ralf Jung
5e6184cdb7 interpret/binary_int_op: avoid dropping to raw ints until we determined the sign 2024-04-18 14:25:06 +02:00
bors
c25473ff62 Auto merge of #124008 - nnethercote:simpler-static_assert_size, r=Nilstrieb
Simplify `static_assert_size`s.

We want to run them on all 64-bit platforms.

r? `@ghost`
2024-04-18 09:47:45 +00:00
Nicholas Nethercote
0d97669a17 Simplify static_assert_sizes.
We want to run them on all 64-bit platforms.
2024-04-18 15:36:25 +10:00
bors
5260893724 Auto merge of #122684 - oli-obk:delay_interning_errors_to_after_validaiton, r=RalfJung
Delay interning errors to after validation

fixes https://github.com/rust-lang/rust/issues/122398
fixes #122548

This improves diagnostics, since validation errors are usually more helpful than interning errors, which just make broad statements about the entire constant.

r? `@RalfJung`
2024-04-18 02:34:04 +00:00
Matthias Krüger
6388167811
Rollup merge of #124030 - RalfJung:adjust_alloc_base_pointer, r=oli-obk
interpret: pass MemoryKind to adjust_alloc_base_pointer

Another puzzle piece for https://github.com/rust-lang/miri/pull/3475.

The 2nd commit renames base_pointer -> root_pointer; that's how Tree Borrows already calls them, and I think the term is clearer than "base pointer". In particular, this distinguishes it from "base address", since a root pointer can point anywhere into an allocation, not just its base address.

https://github.com/rust-lang/rust/pull/124018 has been rolled up already so I couldn't add it there any more.

r? ```@oli-obk```
2024-04-17 18:01:38 +02:00
Oli Scherer
126dcc618d Use less fragile error handling 2024-04-17 09:50:44 +00:00
Oli Scherer
77fe9f0a72 Validate before reporting interning errors.
Validation produces much higher-quality errors and already handles most of the cases.
2024-04-17 09:50:44 +00:00
Oli Scherer
8b2a4f8b43 Simplify alloc id mutability check 2024-04-17 09:50:44 +00:00
Oli Scherer
140c9e10bb Deduplicate logic for checking the mutability of allocations 2024-04-17 09:50:44 +00:00
Oli Scherer
d87e9636d5 Run the "is this static mutable" logic the same way as in in_mutable_memory 2024-04-17 09:50:44 +00:00
Oli Scherer
8c9cba2be7 Validate nested static items 2024-04-17 09:50:15 +00:00
Ralf Jung
ae7b07f2dc interpret: rename base_pointer -> root_pointer
also in Miri, "base tag" -> "root tag"
2024-04-17 07:35:48 +02:00
Ralf Jung
9e239bdc76 interpret: pass MemoryKind to adjust_alloc_base_pointer 2024-04-17 07:35:48 +02:00
Matthias Krüger
45940fe6d8
Rollup merge of #122813 - nnethercote:nicer-quals, r=compiler-errors
Qualifier tweaking

Adding and removing qualifiers in some cases that make things nicer. Details in individual commits.

r? `@compiler-errors`
2024-04-17 05:44:52 +02:00
Guillaume Gomez
4885ddfa92
Rollup merge of #123675 - oli-obk:static_wf_ice, r=compiler-errors
Taint const qualifs if a static is referenced that didn't pass wfcheck

It is correct to only check the signature here, as the ICE is caused by `USE_WITH_ERROR` trying to allocate memory to store the result of `WITH_ERROR` before evaluating it.

fixes #123153
2024-04-17 00:00:22 +02:00
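A hedged reconstruction of the failure mode (the statics' names come from the PR description; the ill-formed signature here is invented for illustration):

```rust
trait Tr {
    const N: usize;
}

// Fails wfcheck: `()` does not implement `Tr`, so the type is ill-formed.
static WITH_ERROR: [u8; <() as Tr>::N] = [0; <() as Tr>::N];

// Previously this could ICE: const-eval tried to allocate space for
// WITH_ERROR's result before evaluating it. Tainting the const qualifs
// makes the compiler bail out with the wfcheck error instead.
static USE_WITH_ERROR: &[u8] = &WITH_ERROR;
```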
zetanumbers
24a24ec6ba Add simple async drop glue generation
Explainer: https://zetanumbers.github.io/book/async-drop-design.html

https://github.com/rust-lang/rust/pull/121801
2024-04-16 20:45:07 +03:00
Matthias Krüger
4971d9ffe4
Rollup merge of #124024 - RalfJung:interpret-comment, r=oli-obk
interpret: remove outdated comment

In https://github.com/rust-lang/rust/pull/107756, allocation became generally fallible, so the "only panic if there is provenance" no longer applies.

r? ``@oli-obk``
2024-04-16 17:54:46 +02:00
Ralf Jung
5b8b9cfaaa interpret: remove outdated comment 2024-04-16 17:33:12 +02:00
Ralf Jung
18bfca50f1 interpret: pass MemoryKind to before_memory_deallocation 2024-04-16 16:37:34 +02:00
Maybe Waffle
7ce867f552 Add an assertion in const eval 2024-04-16 11:56:21 +00:00
Maybe Waffle
ceead1bda6 Change intrinsic types to use u32 instead of T to match stable reexports 2024-04-16 11:53:04 +00:00
Oli Scherer
801413ecd1 Taint const qualifs if a static is referenced that didn't pass wfcheck 2024-04-16 10:43:41 +00:00
Nicholas Nethercote
b03ae74c84 Rename a tiny module.
So it doesn't clash with `rustc_middle::ty`.
2024-04-16 16:37:28 +10:00
Nicholas Nethercote
e93f754289 Always use ty:: qualifier for TyKind enum variants.
Because that's the way it should be done.
2024-04-16 16:29:13 +10:00
Oli Scherer
84acfe86de Actually create ranged int types in the type system. 2024-04-08 12:02:19 +00:00
Ben Kimock
a7912cb421 Put checks that detect UB under their own flag below debug_assertions 2024-04-06 11:21:47 -04:00
Jacob Pratt
4332498a6d
Rollup merge of #123401 - Zalathar:assert-size-aarch64, r=fmease
Check `x86_64` size assertions on `aarch64`, too

(Context: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Checking.20size.20assertions.20on.20aarch64.3F)

Currently the compiler has around 30 sets of `static_assert_size!` for various size-critical data structures (e.g. various IR nodes), guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`.

(Presumably this cfg avoids having to maintain separate size values for 32-bit targets and unusual 64-bit targets. Apparently it may have been necessary before the i128/u128 alignment changes, too.)

This is slightly inconvenient for people on aarch64 workstations (e.g. Macs), because the assertions normally aren't checked until we push to a PR. So this PR adds `aarch64` to the `#[cfg(..)]` guarding all of those assertions in the compiler.

---

Implemented with a simple find/replace. Verified by manually inspecting each `static_assert_size!` in `compiler/`, and checking that either the replacement succeeded, or adding aarch64 wouldn't have been appropriate.
2024-04-03 20:17:06 -04:00
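A minimal self-contained re-creation of the pattern under discussion (the real `static_assert_size!` lives in rustc's internals; this sketch only shows the mechanism and the cfg change):

```rust
// Illustrative re-creation, not the actual rustc macro: the array lengths
// only unify when the size matches, so a size drift is a compile error.
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); std::mem::size_of::<$ty>()];
    };
}

// Before: x86_64-only, so aarch64 contributors missed violations locally.
// #[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
// After this PR added aarch64 (and #124008 later simplified the guard to
// all 64-bit platforms):
#[cfg(target_pointer_width = "64")]
static_assert_size!(Option<Box<u64>>, 8);

fn main() {}
```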
joboet
989660c3e6
rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
Zalathar
2d47cd77ac Check x86_64 size assertions on aarch64, too
This makes it easier for contributors on aarch64 workstations (e.g. Macs) to
notice when these assertions have been violated.
2024-04-03 16:53:03 +11:00
Jacob Pratt
e9ef8e1efa
Rollup merge of #122935 - RalfJung:with-exposed-provenance, r=Amanieu
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance

As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).

The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)

The new name nicely matches `ptr::without_provenance`.
2024-04-02 20:37:39 -04:00
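A hedged sketch of the renamed pair in use (both were unstable behind `feature(exposed_provenance)` at the time of these commits; the `expose_addr` method itself was renamed to `expose_provenance` by the joboet commit above):

```rust
#![feature(exposed_provenance)]

fn main() {
    let x = 7i32;
    let p: *const i32 = &x;
    // Expose the pointer's provenance and get its address back.
    let addr: usize = p.expose_provenance(); // formerly `expose_addr`
    // Re-create a pointer from an address whose provenance was exposed.
    let q: *const i32 = std::ptr::with_exposed_provenance(addr); // formerly `from_exposed_addr`
    unsafe { assert_eq!(*q, 7) };
}
```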
bors
88c2f4f5f5 Auto merge of #123385 - matthiaskrgr:rollup-v69vjbn, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #123198 (Add fn const BuildHasherDefault::new)
 - #123226 (De-LLVM the unchecked shifts [MCP#693])
 - #123302 (Make sure to insert `Sized` bound first into clauses list)
 - #123348 (rustdoc: add a couple of regression tests)
 - #123362 (Check that nested statics in thread locals are duplicated per thread.)
 - #123368 (CFI: Support non-general coroutines)
 - #123375 (rustdoc: synthetic auto trait impls: accept unresolved region vars for now)
 - #123378 (Update sysinfo to 0.30.8)

Failed merges:

 - #123349 (Fix capture analysis for by-move closure bodies)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-02 21:23:53 +00:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago.  See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent developments that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
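For concreteness, the source-level shape the PR keeps referring to, one of the many equivalent formulations that `BinOp::Cmp` now abstracts over in MIR:

```rust
use std::cmp::Ordering;

// Before: derived `PartialOrd` effectively inlined a chain like this for
// every field. After: MIR carries a single `BinOp::Cmp`, leaving the
// lowering strategy to each backend.
fn cmp_i32(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}

fn main() {
    assert_eq!(cmp_i32(1, 2), Ordering::Less);
    assert_eq!(cmp_i32(3, 2), 3.cmp(&2)); // agrees with Ord::cmp
}
```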