Commit Graph

1001 Commits

Ralf Jung
173d1bd36b properly fill a promoted's required_consts
then we can also make all_required_consts_are_checked a constant instead of a function
2024-04-23 23:02:54 +02:00
Ralf Jung
bf021ea625 interpret: sanity-check that required_consts captures all consts that can fail 2024-04-23 22:52:44 +02:00
Matthias Krüger
918304b190
Rollup merge of #124003 - WaffleLapkin:dellvmization, r=scottmcm,RalfJung,antoyo
Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics)

This implements https://github.com/rust-lang/compiler-team/issues/693 minus what was implemented in #123226.

Note: I decided to _not_ change `shl`/... builder methods, as it just doesn't seem worth it.

r? `@scottmcm`
2024-04-23 20:17:51 +02:00
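
A rough sketch of the signature change the rollup above describes; the exact intrinsic declarations are assumptions, but the stable wrapper shown is real:

```rust
// Affected integer intrinsics previously took the shift/rotate amount as
// `Self`; after this change they take `u32`, matching stable wrappers
// such as `u8::rotate_left(self, n: u32)`.
//
// before (assumed shape): fn rotate_left<T: Copy>(x: T, shift: T) -> T;
// after  (assumed shape): fn rotate_left<T: Copy>(x: T, shift: u32) -> T;

fn main() {
    // The stable surface API is unchanged either way:
    assert_eq!(0b0001_u8.rotate_left(2), 0b0100_u8);
}
```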
Matthias Krüger
8039488e59
Rollup merge of #124220 - RalfJung:interpret-wrong-vtable, r=oli-obk
Miri: detect wrong vtables in wide pointers

Fixes https://github.com/rust-lang/miri/issues/3497.
Needed to catch the UB that https://github.com/rust-lang/rust/pull/123572 will start exploiting.

r? `@oli-obk`
2024-04-23 06:24:57 +02:00
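
A minimal sketch of the kind of UB the commit above detects, assuming a deliberately wrong vtable; run it under Miri to trigger the report:

```rust
use std::mem::transmute;

trait Cat { fn meow(&self) {} }
trait Dog { fn woof(&self) {} }

struct Pet;
impl Cat for Pet {}
impl Dog for Pet {}

fn main() {
    let cat: &dyn Cat = &Pet;
    // This pairs Pet's data pointer with a vtable for the wrong trait:
    // undefined behavior that Miri can now flag.
    let dog: &dyn Dog = unsafe { transmute(cat) };
    dog.woof();
}
```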
bors
aca749eefc Auto merge of #121801 - zetanumbers:async_drop_glue, r=oli-obk
Add simple async drop glue generation

This is a prototype of async drop glue generation for some simple types. Async drop glue is intended to behave very similarly to the regular drop glue, except for being asynchronous. Currently it does not execute synchronous drops; it only calls user implementations of the `AsyncDrop::async_drop` associated function and awaits the returned future. It is not complete: it only recurses into arrays, slices, tuples, and structs, and it does not yet have the same sensible restrictions as the old `Drop` trait implementation (such as having the same bounds as the type definition), even though the code assumes their existence (this requires future work).

The current design uses a workaround: rather than creating custom async destructor state machine types for ADTs, it uses future-combinator types defined in the standard library (`deferred_async_drop`, `chain`, `ready_unit`).

Also I recommend reading my [explainer](https://zetanumbers.github.io/book/async-drop-design.html).

This is a part of the [MCP: Low level components for async drop](https://github.com/rust-lang/compiler-team/issues/727) work.

Feature completeness:

 - [x] `AsyncDrop` trait
 - [ ] `async_drop_in_place_raw`/async drop glue generation support for
   - [x] Trivially destructible types (integers, bools, floats, string slices, pointers, references, etc.)
   - [x] Arrays and slices (array pointer is unsized into slice pointer)
   - [x] ADTs (enums, structs, unions)
   - [x] tuple-like types (tuples, closures)
   - [ ] Dynamic types (`dyn Trait`, see explainer's [proposed design](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#async-drop-glue-for-dyn-trait))
   - [ ] coroutines (https://github.com/rust-lang/rust/pull/123948)
 - [x] Async drop glue includes sync drop glue code
 - [x] Cleanup branch generation for `async_drop_in_place_raw`
 - [ ] Union rejects non-trivially async destructible fields
 - [ ] `AsyncDrop` implementation requires same bounds as type definition
 - [ ] Skip trivially destructible fields (optimization)
 - [ ] New [`TyKind::AdtAsyncDestructor`](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#adt-async-destructor-types) and get rid of combinators
 - [ ] [Synchronously undroppable types](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#exclusively-async-drop)
 - [ ] Automatic async drop at the end of the scope in async context
2024-04-23 02:10:23 +00:00
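
A hypothetical sketch of the user-facing shape described above. The trait and method names here are stand-ins (the real `AsyncDrop` API is unstable and may differ); the point is that the generated glue awaits a user-provided future instead of running a synchronous `Drop`:

```rust
#![allow(dead_code)]

struct Connection;

// Stand-in for the unstable `AsyncDrop` trait (assumed shape, not the
// actual std definition):
trait AsyncDropSketch {
    async fn async_drop(self);
}

impl AsyncDropSketch for Connection {
    async fn async_drop(self) {
        // e.g. flush buffers or send a goodbye message before the value
        // goes away; the generated async drop glue would await this future
    }
}

fn main() {}
```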
Scott McMurray
bb8d6f790b Address PR feedback 2024-04-21 11:08:37 -07:00
Scott McMurray
de64ff76f8 Use it in the library, and InstSimplify it away in the easy places 2024-04-21 11:08:37 -07:00
Ralf Jung
875f0c2da0 Miri: detect wrong vtables in wide pointers 2024-04-21 13:04:51 +02:00
bors
ce3263e60e Auto merge of #124113 - RalfJung:interpret-scalar-ops, r=oli-obk
interpret: use ScalarInt for bin-ops; avoid PartialOrd for ScalarInt

Best reviewed commit-by-commit

r? `@oli-obk`
2024-04-19 17:00:28 +00:00
Ralf Jung
42220f0930 ScalarInt: add methods to assert being a (u)int of given size 2024-04-19 13:51:52 +02:00
Ralf Jung
5e6184cdb7 interpret/binary_int_op: avoid dropping to raw ints until we determined the sign 2024-04-18 14:25:06 +02:00
bors
c25473ff62 Auto merge of #124008 - nnethercote:simpler-static_assert_size, r=Nilstrieb
Simplify `static_assert_size`s.

We want to run them on all 64-bit platforms.

r? `@ghost`
2024-04-18 09:47:45 +00:00
Nicholas Nethercote
0d97669a17 Simplify static_assert_sizes.
We want to run them on all 64-bit platforms.
2024-04-18 15:36:25 +10:00
bors
5260893724 Auto merge of #122684 - oli-obk:delay_interning_errors_to_after_validaiton, r=RalfJung
Delay interning errors to after validation

fixes https://github.com/rust-lang/rust/issues/122398
fixes #122548

This improves diagnostics, since validation errors are usually more helpful than interning errors, which just make broad statements about the entire constant.

r? `@RalfJung`
2024-04-18 02:34:04 +00:00
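
An assumed example of a constant where the ordering matters: validation pinpoints the invalid value, whereas an interning error could only make a broad statement about the whole constant:

```rust
// This constant fails const-eval. With validation running first, the
// error names the precise problem ("dangling reference") instead of a
// generic interning failure for the entire constant.
const DANGLING: &i32 = unsafe { &*(0x8 as *const i32) };

fn main() {
    println!("{DANGLING}");
}
```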
Oli Scherer
126dcc618d Use less fragile error handling 2024-04-17 09:50:44 +00:00
Oli Scherer
77fe9f0a72 Validate before reporting interning errors.
Validation produces much higher-quality errors and already handles most of the cases.
2024-04-17 09:50:44 +00:00
Oli Scherer
8b2a4f8b43 Simplify alloc id mutability check 2024-04-17 09:50:44 +00:00
Oli Scherer
140c9e10bb Deduplicate logic for checking the mutability of allocations 2024-04-17 09:50:44 +00:00
Oli Scherer
d87e9636d5 Run the "is this static mutable" logic the same way as in in_mutable_memory 2024-04-17 09:50:44 +00:00
Oli Scherer
8c9cba2be7 Validate nested static items 2024-04-17 09:50:15 +00:00
Ralf Jung
ae7b07f2dc interpret: rename base_pointer -> root_pointer
also in Miri, "base tag" -> "root tag"
2024-04-17 07:35:48 +02:00
Ralf Jung
9e239bdc76 interpret: pass MemoryKind to adjust_alloc_base_pointer 2024-04-17 07:35:48 +02:00
zetanumbers
24a24ec6ba Add simple async drop glue generation
Explainer: https://zetanumbers.github.io/book/async-drop-design.html

https://github.com/rust-lang/rust/pull/121801
2024-04-16 20:45:07 +03:00
Matthias Krüger
4971d9ffe4
Rollup merge of #124024 - RalfJung:interpret-comment, r=oli-obk
interpret: remove outdated comment

In https://github.com/rust-lang/rust/pull/107756, allocation became generally fallible, so the "only panic if there is provenance" no longer applies.

r? `@oli-obk`
2024-04-16 17:54:46 +02:00
Ralf Jung
5b8b9cfaaa interpret: remove outdated comment 2024-04-16 17:33:12 +02:00
Ralf Jung
18bfca50f1 interpret: pass MemoryKind to before_memory_deallocation 2024-04-16 16:37:34 +02:00
Maybe Waffle
7ce867f552 Add an assertion in const eval 2024-04-16 11:56:21 +00:00
Maybe Waffle
ceead1bda6 Change intrinsic types to use u32 instead of T to match stable reexports 2024-04-16 11:53:04 +00:00
Oli Scherer
84acfe86de Actually create ranged int types in the type system. 2024-04-08 12:02:19 +00:00
Ben Kimock
a7912cb421 Put checks that detect UB under their own flag below debug_assertions 2024-04-06 11:21:47 -04:00
Jacob Pratt
4332498a6d
Rollup merge of #123401 - Zalathar:assert-size-aarch64, r=fmease
Check `x86_64` size assertions on `aarch64`, too

(Context: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Checking.20size.20assertions.20on.20aarch64.3F)

Currently the compiler has around 30 sets of `static_assert_size!` for various size-critical data structures (e.g. various IR nodes), guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`.

(Presumably this cfg avoids having to maintain separate size values for 32-bit targets and unusual 64-bit targets. Apparently it may have been necessary before the i128/u128 alignment changes, too.)

This is slightly inconvenient for people on aarch64 workstations (e.g. Macs), because the assertions normally aren't checked until we push to a PR. So this PR adds `aarch64` to the `#[cfg(..)]` guarding all of those assertions in the compiler.

---

Implemented with a simple find/replace. Verified by manually inspecting each `static_assert_size!` in `compiler/`, and checking that either the replacement succeeded, or adding aarch64 wouldn't have been appropriate.
2024-04-03 20:17:06 -04:00
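
A sketch of the guard change described above, plus a standalone stand-in for `static_assert_size!` (assumption: the real macro in `rustc_data_structures` is essentially a compile-time size check like this):

```rust
// Before: only checked on x86_64 hosts.
// #[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
//
// After this PR: aarch64 workstations check the assertions too.
// #[cfg(all(any(target_arch = "x86_64", target_arch = "aarch64"),
//           target_pointer_width = "64"))]

// Stand-in macro: fails to compile if the size does not match exactly.
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); std::mem::size_of::<$ty>()];
    };
}

static_assert_size!(u64, 8);

fn main() {}
```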
joboet
989660c3e6
rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
Zalathar
2d47cd77ac Check x86_64 size assertions on aarch64, too
This makes it easier for contributors on aarch64 workstations (e.g. Macs) to
notice when these assertions have been violated.
2024-04-03 16:53:03 +11:00
Jacob Pratt
e9ef8e1efa
Rollup merge of #122935 - RalfJung:with-exposed-provenance, r=Amanieu
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance

As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).

The old name, `from_exposed_addr`, makes little sense, as it's not the address that is exposed but the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)

The new name nicely matches `ptr::without_provenance`.
2024-04-02 20:37:39 -04:00
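
A small usage sketch of the renamed pair (unstable strict-provenance APIs at the time of this commit; they have since been stabilized):

```rust
fn main() {
    let x = 42_i32;
    let p: *const i32 = &x;

    // "expose the provenance and return the address"
    let addr: usize = p.expose_provenance();

    // Re-create a pointer from the exposed address; its provenance is
    // recovered from the earlier expose.
    let q: *const i32 = std::ptr::with_exposed_provenance(addr);

    assert_eq!(unsafe { *q }, 42);
}
```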
bors
88c2f4f5f5 Auto merge of #123385 - matthiaskrgr:rollup-v69vjbn, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #123198 (Add fn const BuildHasherDefault::new)
 - #123226 (De-LLVM the unchecked shifts [MCP#693])
 - #123302 (Make sure to insert `Sized` bound first into clauses list)
 - #123348 (rustdoc: add a couple of regression tests)
 - #123362 (Check that nested statics in thread locals are duplicated per thread.)
 - #123368 (CFI: Support non-general coroutines)
 - #123375 (rustdoc: synthetic auto trait impls: accept unresolved region vars for now)
 - #123378 (Update sysinfo to 0.30.8)

Failed merges:

 - #123349 (Fix capture analysis for by-move closure bodies)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-02 21:23:53 +00:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent progress that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches. Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and Cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR. The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks. Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues. (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
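
A sketch of what `BinOp::Cmp` computes. The lowering below is illustrative only; leaving the choice to each backend is exactly the point of the PR:

```rust
use std::cmp::Ordering;

// One of many equivalent lowerings of a three-way compare:
fn three_way(a: i32, b: i32) -> Ordering {
    match (a > b) as i8 - (a < b) as i8 {
        -1 => Ordering::Less,
        0 => Ordering::Equal,
        _ => Ordering::Greater,
    }
}

fn main() {
    // With this PR, `Ord::cmp` on integers becomes a single MIR
    // `BinOp::Cmp` instead of an inlined branch cascade.
    assert_eq!(three_way(1, 2), 1_i32.cmp(&2));
    assert_eq!(three_way(2, 2), Ordering::Equal);
    assert_eq!(three_way(3, 2), Ordering::Greater);
}
```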
Oli Scherer
64b75f736d Forbid implicit nested statics in thread local statics 2024-04-02 13:00:46 +00:00
Michael Goulet
4ff8a9bd6b Don't inherit codegen attrs from parent static 2024-03-31 22:34:00 -04:00
Scott McMurray
3da115a93b Add+Use mir::BinOp::Cmp 2024-03-23 23:23:41 -07:00
Ralf Jung
f2cff5ebb9 also rename the SIMD intrinsic 2024-03-23 23:03:37 +01:00
bors
2f090c30dd Auto merge of #122629 - RalfJung:assert-unsafe-precondition, r=saethlin
refactor check_{lang,library}_ub: use a single intrinsic

This enacts the plan I laid out [here](https://github.com/rust-lang/rust/pull/122282#issuecomment-1996917998): use a single intrinsic, called `ub_checks` (in anticipation of https://github.com/rust-lang/compiler-team/issues/725), that just exposes the value of `debug_assertions` (consistently implemented in both codegen and the interpreter). Put the language-vs-library UB logic into the library.

This makes it easier to do something like https://github.com/rust-lang/rust/pull/122282 in the future: that just slightly alters the semantics of `ub_checks` (making it more approximate when crates built with different flags are mixed), but it no longer affects whether these checks can happen in Miri or at compile time.

The first commit just moves things around; I don't think these macros and functions belong into `intrinsics.rs` as they are not intrinsics.

r? `@saethlin`
2024-03-23 21:11:00 +00:00
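
A rough sketch of the described split, with the intrinsic faked via `cfg!`; the helper names below are assumptions, not the actual `core` internals:

```rust
// Stand-in for the `ub_checks` intrinsic: codegen and the interpreter
// both resolve it to the value of `debug_assertions`.
fn ub_checks() -> bool {
    cfg!(debug_assertions)
}

// Library-side policy: language UB must also be caught by Miri and
// const-eval, while library-UB checks may be skipped there. (Sketched as
// a plain function; the real split lives in library macros.)
fn check_language_ub() -> bool {
    ub_checks()
}

fn example_precondition(ptr: *const u8) {
    if check_language_ub() {
        assert!(!ptr.is_null(), "precondition violated: null pointer");
    }
}

fn main() {
    example_precondition(&0_u8);
}
```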
Ralf Jung
6177530420 refactor check_{lang,library}_ub: use a single intrinsic, put policy into library 2024-03-23 18:45:05 +01:00
bors
020bbe46bd Auto merge of #122947 - matthiaskrgr:rollup-10j7orh, r=matthiaskrgr
Rollup of 11 pull requests

Successful merges:

 - #120577 (Stabilize slice_split_at_unchecked)
 - #122698 (Cancel `cargo update` job if there's no updates)
 - #122780 (Rename `hir::Local` into `hir::LetStmt`)
 - #122915 (Delay a bug if no RPITITs were found)
 - #122916 (docs(sync): normalize dot in fn summaries)
 - #122921 (Enable more mir-opt tests in debug builds)
 - #122922 (-Zprint-type-sizes: print the types of awaitees and unnamed coroutine locals.)
 - #122927 (Change an ICE regression test to use the original reproducer)
 - #122930 (add panic location to 'panicked while processing panic')
 - #122931 (Fix some typos in the pin.rs)
 - #122933 (tag_for_variant follow-ups)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-03-23 15:58:17 +00:00
bors
d6eb0f5a09 Auto merge of #122582 - scottmcm:swap-intrinsic-v2, r=oli-obk
Let codegen decide when to `mem::swap` with immediates

Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.

Thus this PR introduces a new `typed_swap` intrinsic with a fallback body, and replaces that fallback implementation when swapping immediates or scalar pairs.

r? oli-obk

Replaces #111744, and means we'll never need more libs PRs like #111803 or #107140
2024-03-23 13:57:55 +00:00
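
A sketch of the "intrinsic with a fallback body" pattern described above; the attribute machinery is omitted and the fallback shown is an assumption about its rough shape:

```rust
// Generic fallback body; a backend that recognizes the call (e.g. for
// immediates or scalar pairs) replaces it with a better lowering.
pub unsafe fn typed_swap_sketch<T>(x: *mut T, y: *mut T) {
    unsafe { std::ptr::swap_nonoverlapping(x, y, 1) };
}

fn main() {
    let (mut a, mut b) = (1, 2);
    unsafe { typed_swap_sketch(&mut a, &mut b) };
    assert_eq!((a, b), (2, 1));
}
```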
Ralf Jung
038e7c6c38 rename MIR int2ptr casts to match library name 2024-03-23 13:18:33 +01:00
Ralf Jung
928bd3b4e0 tag_for_variant follow-ups 2024-03-23 10:45:42 +01:00
bors
0ad5e0d2de Auto merge of #122900 - matthiaskrgr:rollup-nls90mb, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #114009 (compiler: allow transmute of ZST arrays with generics)
 - #122195 (Note that the caller chooses a type for type param)
 - #122651 (Suggest `_` for missing generic arguments in turbofish)
 - #122784 (Add `tag_for_variant` query)
 - #122839 (Split out `PredicatePolarity` from `ImplPolarity`)
 - #122873 (Merge my contributor emails into one using mailmap)
 - #122885 (Adjust better spastorino membership to triagebot's adhoc_groups)
 - #122888 (add a couple more tests)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-03-22 22:35:11 +00:00
Matthias Krüger
96be3e7cc8
Rollup merge of #122784 - jswrenn:tag_for_variant, r=compiler-errors
Add `tag_for_variant` query

This query allows for sharing code between `rustc_const_eval` and `rustc_transmutability`. It's a precursor to a PR I'm working on to entirely replace the bespoke layout computations in `rustc_transmutability`.

r? `@compiler-errors`
2024-03-22 20:31:29 +01:00
Jack Wrenn
2de9010f66 Add tag_for_variant query
This query allows for sharing code between `rustc_const_eval` and
`rustc_transmutability`.

Also moves `DummyMachine` to `rustc_const_eval`.
2024-03-22 17:01:49 +00:00
Michael Goulet
7be0dbe772 Make RawPtr take Ty and Mutbl separately 2024-03-22 11:13:29 -04:00