Commit Graph

1797 Commits

Author SHA1 Message Date
Scott McMurray
4f4442655e Add an intrinsic that lowers to AggregateKind::RawPtr 2024-04-21 11:08:37 -07:00
Scott McMurray
e6b2b764ec Add AggregateKind::RawPtr and enough support to compile 2024-04-21 11:08:37 -07:00
bors
ce3263e60e Auto merge of #124113 - RalfJung:interpret-scalar-ops, r=oli-obk
interpret: use ScalarInt for bin-ops; avoid PartialOrd for ScalarInt

Best reviewed commit-by-commit

r? `@oli-obk`
2024-04-19 17:00:28 +00:00
Ralf Jung
42220f0930 ScalarInt: add methods to assert being a (u)int of given size 2024-04-19 13:51:52 +02:00
Ralf Jung
5e6184cdb7 interpret/binary_int_op: avoid dropping to raw ints until we determined the sign 2024-04-18 14:25:06 +02:00
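A plain-Rust illustration of why the commit above keeps the sign in view before dropping to raw bits: the same bit pattern orders differently depending on signedness. This is a generic sketch, not interpreter code.

```rust
fn main() {
    let raw: u128 = 0xFF; // a raw payload such as a ScalarInt might carry

    let as_u8 = raw as u8;       // 255
    let as_i8 = raw as u8 as i8; // -1

    assert!(as_u8 > 0); // unsigned: greater than zero
    assert!(as_i8 < 0); // signed: less than zero, so comparing raw bits misleads
}
```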
bors
c25473ff62 Auto merge of #124008 - nnethercote:simpler-static_assert_size, r=Nilstrieb
Simplify `static_assert_size`s.

We want to run them on all 64-bit platforms.

r? `@ghost`
2024-04-18 09:47:45 +00:00
Nicholas Nethercote
0d97669a17 Simplify static_assert_sizes.
We want to run them on all 64-bit platforms.
2024-04-18 15:36:25 +10:00
bors
5260893724 Auto merge of #122684 - oli-obk:delay_interning_errors_to_after_validaiton, r=RalfJung
Delay interning errors to after validation

fixes https://github.com/rust-lang/rust/issues/122398
fixes #122548

This improves diagnostics, since validation errors are usually more helpful than interning errors, which just make broad statements about the entire constant

r? `@RalfJung`
2024-04-18 02:34:04 +00:00
Matthias Krüger
6388167811 Rollup merge of #124030 - RalfJung:adjust_alloc_base_pointer, r=oli-obk
interpret: pass MemoryKind to adjust_alloc_base_pointer

Another puzzle piece for https://github.com/rust-lang/miri/pull/3475.

The 2nd commit renames base_pointer -> root_pointer; that's what Tree Borrows already calls them, and I think the term is clearer than "base pointer". In particular, this distinguishes it from "base address", since a root pointer can point anywhere into an allocation, not just at its base address.

https://github.com/rust-lang/rust/pull/124018 has been rolled up already so I couldn't add it there any more.

r? `@oli-obk`
2024-04-17 18:01:38 +02:00
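A small plain-Rust illustration of the distinction drawn above, outside any interpreter internals: a root pointer may target any offset inside an allocation, while the base address is where the allocation starts.

```rust
fn main() {
    let buf = [0u8; 16];
    let base = buf.as_ptr();           // the allocation's base address
    let root = unsafe { base.add(4) }; // a "root" pointer into the middle
    assert_eq!(root as usize - base as usize, 4);
}
```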
Oli Scherer
126dcc618d Use less fragile error handling 2024-04-17 09:50:44 +00:00
Oli Scherer
77fe9f0a72 Validate before reporting interning errors.
Validation produces much higher-quality errors and already handles most of the cases.
2024-04-17 09:50:44 +00:00
Oli Scherer
8b2a4f8b43 Simplify alloc id mutability check 2024-04-17 09:50:44 +00:00
Oli Scherer
140c9e10bb Deduplicate logic for checking the mutability of allocations 2024-04-17 09:50:44 +00:00
Oli Scherer
d87e9636d5 Run the "is this static mutable" logic the same way as in in_mutable_memory 2024-04-17 09:50:44 +00:00
Oli Scherer
8c9cba2be7 Validate nested static items 2024-04-17 09:50:15 +00:00
Ralf Jung
ae7b07f2dc interpret: rename base_pointer -> root_pointer
also in Miri, "base tag" -> "root tag"
2024-04-17 07:35:48 +02:00
Ralf Jung
9e239bdc76 interpret: pass MemoryKind to adjust_alloc_base_pointer 2024-04-17 07:35:48 +02:00
Matthias Krüger
45940fe6d8 Rollup merge of #122813 - nnethercote:nicer-quals, r=compiler-errors
Qualifier tweaking

Adding and removing qualifiers in some cases where that makes things nicer. Details in individual commits.

r? `@compiler-errors`
2024-04-17 05:44:52 +02:00
Guillaume Gomez
4885ddfa92 Rollup merge of #123675 - oli-obk:static_wf_ice, r=compiler-errors
Taint const qualifs if a static is referenced that didn't pass wfcheck

It is correct to only check the signature here, as the ICE is caused by `USE_WITH_ERROR` trying to allocate memory to store the result of `WITH_ERROR` before evaluating it.

fixes #123153
2024-04-17 00:00:22 +02:00
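A hypothetical reconstruction of the shape described above, for orientation only: `WITH_ERROR`'s signature fails wfcheck, and `USE_WITH_ERROR` must allocate room for its value. The types here are invented, not the real test from #123153, and the sketch is deliberately ill-formed, i.e. it is expected to be rejected rather than to compile.

```rust
// Hypothetical repro sketch; intentionally does not compile.
static WITH_ERROR: Vec<str> = todo!();                   // fails wfcheck: `str: Sized` is not satisfied
static USE_WITH_ERROR: &'static Vec<str> = &WITH_ERROR;  // evaluating this used to ICE
```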
Matthias Krüger
4971d9ffe4 Rollup merge of #124024 - RalfJung:interpret-comment, r=oli-obk
interpret: remove outdated comment

In https://github.com/rust-lang/rust/pull/107756, allocation became generally fallible, so the "only panic if there is provenance" no longer applies.

r? `@oli-obk`
2024-04-16 17:54:46 +02:00
Ralf Jung
5b8b9cfaaa interpret: remove outdated comment 2024-04-16 17:33:12 +02:00
Ralf Jung
18bfca50f1 interpret: pass MemoryKind to before_memory_deallocation 2024-04-16 16:37:34 +02:00
Oli Scherer
801413ecd1 Taint const qualifs if a static is referenced that didn't pass wfcheck 2024-04-16 10:43:41 +00:00
Nicholas Nethercote
b03ae74c84 Rename a tiny module.
So it doesn't clash with `rustc_middle::ty`.
2024-04-16 16:37:28 +10:00
Nicholas Nethercote
e93f754289 Always use ty:: qualifier for TyKind enum variants.
Because that's the way it should be done.
2024-04-16 16:29:13 +10:00
Oli Scherer
84acfe86de Actually create ranged int types in the type system. 2024-04-08 12:02:19 +00:00
Ben Kimock
a7912cb421 Put checks that detect UB under their own flag below debug_assertions 2024-04-06 11:21:47 -04:00
Jacob Pratt
4332498a6d Rollup merge of #123401 - Zalathar:assert-size-aarch64, r=fmease
Check `x86_64` size assertions on `aarch64`, too

(Context: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Checking.20size.20assertions.20on.20aarch64.3F)

Currently the compiler has around 30 sets of `static_assert_size!` for various size-critical data structures (e.g. various IR nodes), guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`.

(Presumably this cfg avoids having to maintain separate size values for 32-bit targets and unusual 64-bit targets. Apparently it may have been necessary before the i128/u128 alignment changes, too.)

This is slightly inconvenient for people on aarch64 workstations (e.g. Macs), because the assertions normally aren't checked until we push to a PR. So this PR adds `aarch64` to the `#[cfg(..)]` guarding all of those assertions in the compiler.

---

Implemented with a simple find/replace. Verified by manually inspecting each `static_assert_size!` in `compiler/`, and checking that either the replacement succeeded, or adding aarch64 wouldn't have been appropriate.
2024-04-03 20:17:06 -04:00
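A sketch of what the find/replace amounts to: widening the cfg guard quoted above so the assertions also run on aarch64. The macro here is a stand-in modeled on rustc's `static_assert_size!`, and the asserted type and size are illustrative, not one of the compiler's ~30 real assertions.

```rust
// Stand-in modeled on rustc's `static_assert_size!`: if the sizes
// disagree, the two array lengths differ and compilation fails.
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); std::mem::size_of::<$ty>()];
    };
}

// Before: #[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]
// After: aarch64 is added to the guard.
#[cfg(all(any(target_arch = "x86_64", target_arch = "aarch64"), target_pointer_width = "64"))]
static_assert_size!(Option<Box<u32>>, 8);

fn main() {}
```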
joboet
989660c3e6 rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
Zalathar
2d47cd77ac Check x86_64 size assertions on aarch64, too
This makes it easier for contributors on aarch64 workstations (e.g. Macs) to
notice when these assertions have been violated.
2024-04-03 16:53:03 +11:00
Jacob Pratt
e9ef8e1efa Rollup merge of #122935 - RalfJung:with-exposed-provenance, r=Amanieu
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance

As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).

The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)

The new name nicely matches `ptr::without_provenance`.
2024-04-02 20:37:39 -04:00
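A short usage sketch of the renamed pair, combining this PR's `with_exposed_provenance` with the `expose_addr` -> `expose_provenance` rename a few entries above. These were nightly-gated APIs at the time of these commits; the sketch assumes the post-rename names.

```rust
fn main() {
    let x = 42u32;
    let p: *const u32 = &x;

    // "Expose the provenance and return the address."
    let addr: usize = p.expose_provenance();

    // Reconstruct a pointer whose provenance comes from the exposed set.
    let q: *const u32 = std::ptr::with_exposed_provenance(addr);
    assert_eq!(unsafe { *q }, 42);
}
```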
bors
88c2f4f5f5 Auto merge of #123385 - matthiaskrgr:rollup-v69vjbn, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #123198 (Add fn const BuildHasherDefault::new)
 - #123226 (De-LLVM the unchecked shifts [MCP#693])
 - #123302 (Make sure to insert `Sized` bound first into clauses list)
 - #123348 (rustdoc: add a couple of regression tests)
 - #123362 (Check that nested statics in thread locals are duplicated per thread.)
 - #123368 (CFI: Support non-general coroutines)
 - #123375 (rustdoc: synthetic auto trait impls: accept unresolved region vars for now)
 - #123378 (Update sysinfo to 0.30.8)

Failed merges:

 - #123349 (Fix capture analysis for by-move closure bodies)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-02 21:23:53 +00:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent developments that made this ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
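Two of the "dozens of reasonable ways" mentioned above to three-way-compare integers at the Rust level; with `BinOp::Cmp`, the choice between such lowerings moves to the backend. Both functions are illustrative, not rustc's code.

```rust
use std::cmp::Ordering;

// The careful-`if` form quoted above.
fn cmp_branchy(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}

// A branchless bit-trick form: (a > b) - (a < b) is -1, 0, or 1.
fn cmp_bitops(a: i32, b: i32) -> Ordering {
    match (a > b) as i8 - (a < b) as i8 {
        -1 => Ordering::Less,
        0 => Ordering::Equal,
        _ => Ordering::Greater,
    }
}

fn main() {
    for (a, b) in [(1, 2), (2, 2), (3, 2)] {
        assert_eq!(cmp_branchy(a, b), cmp_bitops(a, b));
        assert_eq!(cmp_branchy(a, b), a.cmp(&b));
    }
}
```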
Oli Scherer
64b75f736d Forbid implicit nested statics in thread local statics 2024-04-02 13:00:46 +00:00
Michael Goulet
4ff8a9bd6b Don't inherit codegen attrs from parent static 2024-03-31 22:34:00 -04:00
Matthias Krüger
45cec32ac3 Rollup merge of #123133 - xiaoxiangxianzi:master, r=fmease
chore: fix some comments
2024-03-27 23:27:24 +01:00
xiaoxiangxianzi
3157114f0b chore: fix some comments
Signed-off-by: xiaoxiangxianzi <zhaoyizheng@outlook.com>
2024-03-27 22:32:53 +08:00
Guillaume Gomez
bffeb052d1 Rollup merge of #123021 - compiler-errors:coroutine-layout-lol, r=oli-obk
Make `TyCtxt::coroutine_layout` take coroutine's kind parameter

For coroutines that come from coroutine-closures (i.e. async closures), we may have two kinds of bodies stored in the coroutine: one that takes the closure's captures by reference, and one that takes the captures by move.

These currently have identical layouts, but if we do any optimization for these layouts that are related to the upvars, then they will diverge -- e.g. https://github.com/rust-lang/rust/pull/120168#discussion_r1536943728.

This PR relaxes the assertion I added in #121122, and instead makes the `TyCtxt::coroutine_layout` method take the `coroutine_kind_ty` argument from the coroutine, which allows us to differentiate these by-move and by-ref bodies.
2024-03-27 10:13:43 +01:00
bors
c98ea0d808 Auto merge of #111769 - saethlin:ctfe-backtrace-ctrlc, r=RalfJung
Print a backtrace in const eval if interrupted

Demo:
```rust
#![feature(const_eval_limit)]
#![const_eval_limit = "0"]

const OW: u64 = {
    let mut res: u64 = 0;
    let mut i = 0;
    while i < u64::MAX {
        res = res.wrapping_add(i);
        i += 1;
    }
    res
};

fn main() {
    println!("{}", OW);
}
```
```
╭ ➜ ben@archlinux:~/rust
╰ ➤ rustc +stage1 spin.rs
^Cerror[E0080]: evaluation of constant value failed
 --> spin.rs:8:33
  |
8 |         res = res.wrapping_add(i);
  |                                 ^ Compilation was interrupted

note: erroneous constant used
  --> spin.rs:15:20
   |
15 |     println!("{}", OW);
   |                    ^^

note: erroneous constant used
  --> spin.rs:15:20
   |
15 |     println!("{}", OW);
   |                    ^^
   |
   = note: this note originates in the macro `$crate::format_args_nl` which comes from the expansion of the macro `println` (in Nightly builds, run with -Z macro-backtrace for more info)

error: aborting due to previous error

For more information about this error, try `rustc --explain E0080`.
```
2024-03-26 00:04:03 +00:00
Michael Goulet
9bda9ac76e Relax validation now 2024-03-24 21:15:23 -04:00
Michael Goulet
b7d67eace7 Require coroutine kind type to be passed to TyCtxt::coroutine_layout 2024-03-24 21:12:49 -04:00
Matthias Krüger
19d3827efe Rollup merge of #122937 - Zalathar:unbox, r=oli-obk
Unbox and unwrap the contents of `StatementKind::Coverage`

The payload of coverage statements was historically a structure with several fields, so it was boxed to avoid bloating `StatementKind`.

Now that the payload is a single relatively-small enum, we can replace `Box<Coverage>` with just `CoverageKind`.

This patch also adds a size assertion for `StatementKind`, to avoid accidentally bloating it in the future.

`@rustbot` label +A-code-coverage
2024-03-24 17:08:16 +01:00
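The size reasoning above in miniature: a large payload bloats every variant of an enum, boxing it restores a small enum, and once the payload is itself small the box only adds indirection. The types below are illustrative stand-ins, not the real `StatementKind`/`CoverageKind`.

```rust
use std::mem::size_of;

enum BigPayload { A(u8), B([u64; 4]) }        // large payload bloats the enum
enum BoxedPayload { A(u8), B(Box<[u64; 4]>) } // boxing keeps the enum small
enum SmallPayload { A(u8), B(u32) }           // small payload: no box needed

fn main() {
    assert!(size_of::<BigPayload>() > size_of::<BoxedPayload>());
    assert!(size_of::<SmallPayload>() <= size_of::<BoxedPayload>());
    println!(
        "big: {}, boxed: {}, small: {}",
        size_of::<BigPayload>(),
        size_of::<BoxedPayload>(),
        size_of::<SmallPayload>()
    );
}
```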
Scott McMurray
3da115a93b Add+Use mir::BinOp::Cmp 2024-03-23 23:23:41 -07:00
Matthias Krüger
3d9ee88ea2 Rollup merge of #122168 - compiler-errors:inline-coroutine-body-validation, r=cjgillot
Fix validation on substituted callee bodies in MIR inliner

When inlining a coroutine, we will substitute the MIR body with the args of the call. There is code in the MIR validator that attempts to prevent query cycles, and will use the coroutine body directly when it detects that it is the body being validated. That means that when inlining a coroutine body that has been substituted, it may no longer be parameterized over the original args of the coroutine, which will lead to substitution ICEs.

Fixes #119064
2024-03-24 01:05:51 +01:00
Ralf Jung
f2cff5ebb9 also rename the SIMD intrinsic 2024-03-23 23:03:37 +01:00
bors
2f090c30dd Auto merge of #122629 - RalfJung:assert-unsafe-precondition, r=saethlin
refactor check_{lang,library}_ub: use a single intrinsic

This enacts the plan I laid out [here](https://github.com/rust-lang/rust/pull/122282#issuecomment-1996917998): use a single intrinsic, called `ub_checks` (in anticipation of https://github.com/rust-lang/compiler-team/issues/725), that just exposes the value of `debug_assertions` (consistently implemented in both codegen and the interpreter). Put the language vs library UB logic into the library.

This makes it easier to do something like https://github.com/rust-lang/rust/pull/122282 in the future: that just slightly alters the semantics of `ub_checks` (making it more approximate when crates built with different flags are mixed), but it no longer affects whether these checks can happen in Miri or at compile time.

The first commit just moves things around; I don't think these macros and functions belong into `intrinsics.rs` as they are not intrinsics.

r? `@saethlin`
2024-03-23 21:11:00 +00:00
Ralf Jung
6177530420 refactor check_{lang,library}_ub: use a single intrinsic, put policy into library 2024-03-23 18:45:05 +01:00
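A minimal model of the split described above, with hypothetical names throughout: one knob reports whether UB checks are enabled, and the library decides per call site what to do when a precondition is violated. The `ub_checks` function below is a plain stand-in for the real intrinsic, and `check_precondition!` is not libcore's actual macro.

```rust
// Stand-in for the intrinsic: it just exposes `debug_assertions`.
#[inline]
const fn ub_checks() -> bool {
    cfg!(debug_assertions)
}

// Hypothetical library-side macro: the policy for a violated
// precondition lives here, in the library, not in the intrinsic.
macro_rules! check_precondition {
    ($msg:literal, $cond:expr) => {
        if ub_checks() && !$cond {
            panic!(concat!("unsafe precondition violated: ", $msg));
        }
    };
}

unsafe fn get_unchecked(s: &[u8], i: usize) -> u8 {
    check_precondition!("index must be in bounds", i < s.len());
    unsafe { *s.as_ptr().add(i) }
}

fn main() {
    let data = [1u8, 2, 3];
    assert_eq!(unsafe { get_unchecked(&data, 1) }, 2);
}
```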
bors
020bbe46bd Auto merge of #122947 - matthiaskrgr:rollup-10j7orh, r=matthiaskrgr
Rollup of 11 pull requests

Successful merges:

 - #120577 (Stabilize slice_split_at_unchecked)
 - #122698 (Cancel `cargo update` job if there's no updates)
 - #122780 (Rename `hir::Local` into `hir::LetStmt`)
 - #122915 (Delay a bug if no RPITITs were found)
 - #122916 (docs(sync): normalize dot in fn summaries)
 - #122921 (Enable more mir-opt tests in debug builds)
 - #122922 (-Zprint-type-sizes: print the types of awaitees and unnamed coroutine locals.)
 - #122927 (Change an ICE regression test to use the original reproducer)
 - #122930 (add panic location to 'panicked while processing panic')
 - #122931 (Fix some typos in the pin.rs)
 - #122933 (tag_for_variant follow-ups)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-03-23 15:58:17 +00:00
bors
d6eb0f5a09 Auto merge of #122582 - scottmcm:swap-intrinsic-v2, r=oli-obk
Let codegen decide when to `mem::swap` with immediates

Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.

Thus this PR introduces a new `typed_swap` intrinsic with a fallback body, and replaces that fallback implementation when swapping immediates or scalar pairs.

r? oli-obk

Replaces #111744, and means we'll never need more libs PRs like #111803 or #107140
2024-03-23 13:57:55 +00:00
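A sketch of the "intrinsic with a fallback body" idea: the generic body below is what a backend gets unless it replaces the operation with something better for immediates or scalar pairs. Simplified, not rustc's actual intrinsic.

```rust
// Fallback-body sketch: semantically a swap of two non-overlapping places.
unsafe fn typed_swap<T>(x: *mut T, y: *mut T) {
    // SAFETY: caller guarantees x and y are valid and non-overlapping.
    unsafe {
        let tmp = std::ptr::read(x);
        std::ptr::write(x, std::ptr::read(y));
        std::ptr::write(y, tmp);
    }
}

fn main() {
    let mut a = 1;
    let mut b = 2;
    unsafe { typed_swap(&mut a, &mut b) };
    assert_eq!((a, b), (2, 1));
}
```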
Ralf Jung
038e7c6c38 rename MIR int2ptr casts to match library name 2024-03-23 13:18:33 +01:00