Commit Graph

33 Commits

Author SHA1 Message Date
Nicholas Nethercote
72800d3b89 Run rustfmt on tests/codegen/.
Except for `simd-intrinsic/`, which has a lot of files containing
types like `u8x64` that read better when hand-formatted.

There is a surprising amount of two-space indenting in this directory.

Non-trivial changes:
- `rustfmt::skip` needed in `debug-column.rs` to preserve meaning of the
  test.
- `rustfmt::skip` used in a few places where hand-formatting reads more
  nicely: `enum/enum-match.rs` (a minimal sketch of the opt-out follows
  this list).
- Line number adjustments needed for the expected output of
  `debug-column.rs` and `coroutine-debug.rs`.
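
A minimal sketch of the `rustfmt::skip` opt-out mentioned above (hypothetical constant, not taken from the actual tests):

```rust
// Without the attribute, rustfmt would collapse this 2x2 grid onto one line.
#[rustfmt::skip]
const LANE_MASKS: [u8; 4] = [
    0b0001, 0b0010,
    0b0100, 0b1000,
];
```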
2024-05-31 15:56:43 +10:00
Scott McMurray
459ce3f6bb Add an intrinsic for ptr::metadata 2024-05-28 09:28:51 -07:00
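A rough sketch of the API this intrinsic backs (unstable `ptr_metadata` feature; the slice case is shown):

```rust
#![feature(ptr_metadata)]

// For a slice pointer, the metadata half of the wide pointer is its length.
fn slice_len(p: *const [u8]) -> usize {
    std::ptr::metadata(p)
}
```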
Scott McMurray
f60f2e8cb0 Fix ICE in non-operand aggregate_raw_ptr intrinsic codegen 2024-05-16 09:43:42 -07:00
Gary Guo
cfee72aa24 Fix tests and bless 2024-04-24 13:12:33 +01:00
bors
29a56a3b1c Auto merge of #122053 - erikdesjardins:alloca, r=nikic
Stop using LLVM struct types for alloca

The alloca type has no semantic meaning, only the size (and alignment, but we specify it explicitly) matter. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout. It is likely that a future LLVM version will change to an untyped alloca representation.
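
A sketch of the shape change (hypothetical function; the IR in the comments is illustrative, with exact sizes and alignment depending on the target):

```rust
// The escaping local needs a stack slot. Its alloca is now emitted as a
// plain byte array, e.g. `%local = alloca [16 x i8], align 8`, instead of
// the struct-typed `%local = alloca { i64, i64 }, align 8`.
#[no_mangle]
pub fn escapes(sink: fn(&(u64, u64))) {
    let local: (u64, u64) = (1, 2);
    sink(&local);
}
```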

Split out from #121577.

r? `@ghost`
2024-04-24 03:00:44 +00:00
Matthias Krüger
918304b190
Rollup merge of #124003 - WaffleLapkin:dellvmization, r=scottmcm,RalfJung,antoyo
Dellvmize some intrinsics (use `u32` instead of `Self` in some integer intrinsics)

This implements https://github.com/rust-lang/compiler-team/issues/693 minus what was implemented in #123226.

Note: I decided to _not_ change `shl`/... builder methods, as it just doesn't seem worth it.
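
For context, a sketch of the shape through the public integer API, whose surface signatures already used `u32`; the underlying intrinsics now agree with them:

```rust
// Bit counts and rotation amounts are `u32` regardless of the integer's width.
fn normalize(x: u64) -> u64 {
    let lz: u32 = x.leading_zeros(); // count returned as u32, not u64
    x.rotate_left(lz)                // rotation amount taken as u32
}
```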

r? ``@scottmcm``
2024-04-23 20:17:51 +02:00
Markus Reiter
33e68aadc9
Stabilize generic NonZero. 2024-04-22 18:48:47 +02:00
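A quick sketch of the now-stable generic form (the per-width `NonZeroU32`-style names remain as aliases):

```rust
use std::num::NonZero;

// The zero check happens once, up front; `get` is then free.
fn halve(n: NonZero<u32>) -> u32 {
    n.get() / 2
}

fn demo() -> Option<u32> {
    NonZero::new(6).map(halve)
}
```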
Maybe Waffle
c2046c4b09 Add codegen tests for changed intrinsics 2024-04-16 12:35:22 +00:00
Erik Desjardins
f4426c189f use [N x i8] for alloca types 2024-04-11 21:42:35 -04:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago.  See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent progress that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on Broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
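
A sketch of the kind of code that benefits (hypothetical struct): each field's three-way compare in the `derive` expansion can now lower to a single `BinOp::Cmp` statement instead of an inlined branch cascade.

```rust
use std::cmp::Ordering;

#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Triple(u32, i16, u8);

// Previously this inlined a branchy `partial_cmp` per field; now each
// field comparison is one MIR `Cmp` whose lowering is the backend's choice.
fn compare(a: &Triple, b: &Triple) -> Ordering {
    a.cmp(b)
}
```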

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
clubby789
b500693ad7 Don't emit load metadata in debug mode 2024-03-25 18:32:45 +00:00
Scott McMurray
3da115a93b Add+Use mir::BinOp::Cmp 2024-03-23 23:23:41 -07:00
Scott McMurray
6d2cb39ac5 Stop whining, tidy 2024-03-17 12:51:58 -07:00
Scott McMurray
7d537106a1 Let codegen decide when to mem::swap with immediates
Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.

So introduce a new `typed_swap` intrinsic with a fallback body, but replace that implementation for immediates and scalar pairs.
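
A sketch of the caller side, assuming the intrinsic is reached through `mem::swap`:

```rust
use std::mem;

// For an immediate-sized type like this, the backend can now swap through
// registers instead of materializing temporaries in memory.
fn swap_ids(a: &mut u64, b: &mut u64) {
    mem::swap(a, b);
}
```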
2024-03-17 11:59:18 -07:00
Ben Kimock
2eb9c6d49e Lower transmutes from int to pointer type as gep on null 2024-03-11 18:19:17 -04:00
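A sketch of the affected pattern (the IR in the comment is illustrative):

```rust
// Previously lowered roughly as `inttoptr i64 %addr to ptr`; now as
// `getelementptr i8, ptr null, i64 %addr`.
fn addr_to_ptr(addr: usize) -> *const u8 {
    unsafe { std::mem::transmute(addr) }
}
```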
Markus Reiter
b2fbb8a053
Use generic NonZero in tests. 2024-02-25 12:03:48 +01:00
许杰友 Jieyou Xu (Joe)
6e48b96692
[AUTO_GENERATED] Migrate compiletest to use ui_test-style //@ directives 2024-02-22 16:04:04 +00:00
Nikita Popov
c2fd26a115 Separate immediate and in-memory ScalarPair representation
Currently, we assume that ScalarPair is always represented using
a two-element struct, both as an immediate value and when stored
in memory.

This currently works fairly well, but runs into problems with
https://github.com/rust-lang/rust/pull/116672, where a ScalarPair
involving an i128 type can no longer be represented as a two-element
struct in memory. For example, the tuple `(i32, i128)` needs to be
represented in-memory as `{ i32, [3 x i32], i128 }` to satisfy the
alignment requirement. Using `{ i32, i128 }` instead will result in
the second element being stored at the wrong offset (prior to
LLVM 18).

Resolve this issue by no longer requiring that the immediate and
in-memory type for ScalarPair are the same. The in-memory type
will now look the same as for normal struct types (and will include
padding filler and similar), while the immediate type stays a
simple two-element struct type. This also means that booleans in
immediate ScalarPair are now represented as i1 rather than i8,
just like we do everywhere else.

The core change here is to llvm_type (which now treats ScalarPair
as a normal struct) and immediate_llvm_type (which returns the
two-element struct that llvm_type used to produce). The rest is
fixing things up to no longer assume these are the same. In
particular, this switches places that try to get pointers to the
ScalarPair elements to use byte-geps instead of struct-geps.
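
As a concrete sketch, the `(i32, i128)` case from above:

```rust
// As an immediate, this stays a simple two-element pair; in memory it now
// lays out like an ordinary struct, roughly `{ i32, [3 x i32], i128 }` on
// targets where i128 is 16-byte-aligned.
#[no_mangle]
pub fn make_pair(a: i32, b: i128) -> (i32, i128) {
    (a, b)
}
```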
2023-12-15 17:42:05 +01:00
Scott McMurray
502af03445 Add a new compare_bytes intrinsic instead of calling memcmp directly 2023-08-06 15:47:40 -07:00
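A sketch of the intrinsic's shape (nightly-only `core_intrinsics`; user code normally reaches it indirectly, e.g. through slice comparisons):

```rust
#![allow(internal_features)]
#![feature(core_intrinsics)]

// memcmp-style result: negative, zero, or positive depending on the first
// differing byte among the given count.
fn three_way(a: &[u8; 8], b: &[u8; 8]) -> i32 {
    unsafe { std::intrinsics::compare_bytes(a.as_ptr(), b.as_ptr(), 8) }
}
```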
Josh Stone
da47736f42 CHECK only for opaque ptr 2023-07-27 14:44:13 -07:00
Josh Stone
190ded8443 Update the minimum external LLVM to 15 2023-07-27 14:07:08 -07:00
Camille GILLOT
d7983a2f23 Always name the return place. 2023-07-08 15:38:40 +02:00
Scott McMurray
bf36193ef6 Add a distinct OperandValue::ZeroSized variant for ZSTs
These tend to have special handling in a bunch of places anyway, so the variant helps remember that.  And I think it's easier to grok than non-Scalar Aggregates sometimes being `Immediates` (a mistake I made that caused #109992).  As a minor bonus, it means we don't need to generate poison LLVM values for them to pass around in `OperandValue::Immediate`s.
2023-05-31 19:10:28 -07:00
Scott McMurray
e1da77c76d Also use mir::Offset for pointer add 2023-04-27 22:44:42 -07:00
Scott McMurray
1de2257c3f Add intrinsics::transmute_unchecked
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.

Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested in it for dependently-sized array-vector conversions.

It also simplifies a couple places in `core`.
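
A sketch of the difference from plain `transmute` (nightly-only; the generic signature here is hypothetical):

```rust
#![allow(internal_features)]
#![feature(core_intrinsics)]
use std::intrinsics::transmute_unchecked;

// `transmute` rejects this in `hir_typeck` because the sizes of `[T; N]`
// and `U` cannot be proven equal for every instantiation;
// `transmute_unchecked` compiles, and mismatched sizes are the caller's UB.
unsafe fn reinterpret<T, U, const N: usize>(x: [T; N]) -> U {
    transmute_unchecked(x)
}
```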
2023-04-22 17:22:03 -07:00
Scott McMurray
1bcb0ec28c assume value ranges in transmute
Fixes #109958
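
A sketch of the effect (the emitted IR varies by target and opt level):

```rust
/// Safety: `x` must be 0 or 1.
unsafe fn byte_to_bool(x: u8) -> bool {
    // Codegen now emits an `assume` that the result lies in bool's 0..=1
    // validity range, so downstream range checks can fold away.
    std::mem::transmute(x)
}
```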
2023-04-13 00:12:39 -07:00
Scott McMurray
d757c4b904 Handle not all immediates having abi::Scalars 2023-04-09 11:16:50 -07:00
Scott McMurray
454bca514a Check CastKind::Transmute sizes in a better way
Fixes #110005
2023-04-06 13:53:10 -07:00
Scott McMurray
9aa9a846b6 Allow transmutes to produce OperandValues instead of always using allocas
LLVM can usually optimize these away, but especially for things like transmutes of newtypes it's silly to generate the `alloca`+`store`+`load` at all when it's actually a nop at the LLVM level.
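
A sketch of the newtype case (hypothetical wrapper):

```rust
pub struct Wrapper(pub u32);

// A nop at the LLVM level: the value can stay an `OperandValue` immediate
// rather than taking the alloca + store + load detour.
pub fn into_bits(x: Wrapper) -> u32 {
    unsafe { std::mem::transmute(x) }
}
```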
2023-04-04 18:44:29 -07:00
Scott McMurray
64cce5fc7d Add CastKind::Transmute to MIR
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.

Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
2023-03-22 15:15:41 -07:00
Nilstrieb
f1255380ac Add more codegen tests 2023-01-17 16:23:22 +01:00
Nilstrieb
645c0fddd2 Put noundef on all scalars that don't allow uninit
Previously, it was only put on scalars with range validity invariants
like bool, where uninit was obviously invalid.

Since then, we have normatively declared all uninit primitives to be
undefined behavior and can therefore put `noundef` on them.

The remaining concern was the `mem::uninitialized` function, which
caused quite a lot of UB in older parts of the ecosystem. This function
no longer returns uninit values, making its users safe from this
change.

The only real sources of UB where people could encounter uninit
primitives are `MaybeUninit::uninit().assume_init()`, which the docs
have always been clear is UB, and heap allocations (like reading from
the spare capacity of a vec). This is hopefully rare enough not to
break anything.
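
A codegen-test-style sketch of the result (hypothetical CHECK pattern; the exact attribute placement differs by target and signature):

```rust
// CHECK: define{{.*}}noundef{{.*}}@add_one({{.*}}noundef{{.*}})
#[no_mangle]
pub fn add_one(x: u32) -> u32 {
    x + 1
}
```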
2023-01-17 08:14:35 +01:00
Albert Larsan
cf2dff2b1e
Move /src/test to /tests 2023-01-11 09:32:08 +00:00