Commit Graph

151 Commits

Author SHA1 Message Date
joboet
989660c3e6
rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
Jacob Pratt
e9ef8e1efa
Rollup merge of #122935 - RalfJung:with-exposed-provenance, r=Amanieu
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance

As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).

The old name, `from_exposed_addr`, makes little sense as it's not the address that is exposed, it's the provenance. (`ptr.expose_addr()` stays unchanged as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)

The new name nicely matches `ptr::without_provenance`.
2024-04-02 20:37:39 -04:00
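Illustrating the rename above — a minimal sketch of the new API pair, assuming a toolchain where these methods are available (they started out unstable behind `feature(exposed_provenance)`):

```rust
fn main() {
    let x = 42u32;
    let ptr: *const u32 = &x;

    // Expose the pointer's provenance and get its address (was `expose_addr`).
    let addr: usize = ptr.expose_provenance();

    // Reconstruct a pointer from the address, picking up exposed provenance
    // (was `ptr::from_exposed_addr`).
    let back: *const u32 = std::ptr::with_exposed_provenance(addr);

    // `back` may be dereferenced because provenance for `x` was exposed above.
    assert_eq!(unsafe { *back }, 42);
}
```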
Ralf Jung
f2cff5ebb9 also rename the SIMD intrinsic 2024-03-23 23:03:37 +01:00
Matthew Maurer
7967915c7b CFI: Use Instance at callsites
We already use `Instance` at declaration sites when available to glean
additional information about possible abstractions of the type in use.
This does the same when possible at callsites as well.

The primary purpose of this change is to allow CFI to alter how it
generates type information for indirect calls through `Virtual`
instances.
2024-03-23 18:30:39 +00:00
Michael Goulet
7be0dbe772 Make RawPtr take Ty and Mutbl separately 2024-03-22 11:13:29 -04:00
Michael Goulet
ff0c31e6b9 Programmatically convert some of the pat ctors 2024-03-22 11:13:29 -04:00
Oli Scherer
adda9da604 Avoid various uses of Option<Span> in favor of using DUMMY_SP in the few cases that used None 2024-03-18 09:34:08 +00:00
Matthias Krüger
d774fbea7c
Rollup merge of #119365 - nbdd0121:asm-goto, r=Amanieu
Add asm goto support to `asm!`

Tracking issue: #119364

This PR implements asm-goto support, using the syntax described in "future possibilities" section of [RFC2873](https://rust-lang.github.io/rfcs/2873-inline-asm.html#asm-goto).

Currently I have only implemented the `label` part, not the `fallthrough` part (i.e. fallthrough is implicit). This doesn't reduce the expressiveness, though, since you can use labelled block breaks to get arbitrary control flow, or simply set a value and rely on the jump-threading optimisation to get the desired control flow. I can add that later if deemed necessary.

r? `@Amanieu`
cc `@ojeda`
2024-03-08 08:19:17 +01:00
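A minimal sketch of the `label` operand this PR adds, assuming a nightly toolchain with the `asm_goto` feature gate (as required when this landed) and x86_64 assembly:

```rust
#![feature(asm_goto)]

use std::arch::asm;

fn main() {
    unsafe {
        asm!(
            // Jump to the block bound to the label operand; once that block
            // finishes, execution continues after the asm! statement.
            "jmp {}",
            label {
                println!("took the asm-goto edge");
            }
        );
    }
    println!("continuing after asm!");
}
```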
Trevor Gross
02778b3e0e Add f16 and f128 LLVM intrinsics 2024-03-01 13:59:06 -05:00
Trevor Gross
e3f63d9375 Add f16 and f128 to rustc_type_ir::FloatTy and rustc_abi::Primitive
Make changes necessary to support these types in the compiler.
2024-02-28 12:58:32 -05:00
Matthias Krüger
d95c321062
Rollup merge of #121598 - RalfJung:catch_unwind, r=oli-obk
rename 'try' intrinsic to 'catch_unwind'

The intrinsic has nothing to do with `try` blocks, and corresponds to the stable `catch_unwind` function, so this makes a lot more sense IMO.

Also rename Miri's special function while we are at it, to reflect the level of abstraction it works on: it's an unwinding mechanism, on which Rust implements panics.
2024-02-27 00:40:00 +01:00
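For context, the stable surface this intrinsic backs is `std::panic::catch_unwind`, which is exactly what the new name points at:

```rust
fn main() {
    // An unwinding panic inside the closure is converted into an Err
    // (the default panic hook still prints the panic message).
    let caught = std::panic::catch_unwind(|| {
        panic!("boom");
    });
    assert!(caught.is_err());

    // A closure that returns normally comes back as Ok.
    let ok = std::panic::catch_unwind(|| 1 + 1);
    assert_eq!(ok.ok(), Some(2));
}
```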
Ralf Jung
b4ca582b89 rename 'try' intrinsic to 'catch_unwind' 2024-02-26 11:10:18 +01:00
Matthias Krüger
7c88ea2842
Rollup merge of #121060 - clubby789:bool-newtypes, r=cjgillot
Add newtypes for bool fields/params/return types

Fixed all the cases of this found with some simple searches for `*/ bool` and `bool /*`; probably many more remain.
2024-02-25 17:05:20 +01:00
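A minimal sketch of the bool-newtype pattern this PR applies (the type and variant names here are illustrative, not the actual newtypes added):

```rust
// A dedicated two-variant type instead of a bare `bool` parameter makes the
// call site self-describing, with no `/* include padding */ true` comments.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
enum IncludePadding {
    Yes,
    No,
}

fn describe(padding: IncludePadding) -> &'static str {
    match padding {
        IncludePadding::Yes => "with padding",
        IncludePadding::No => "without padding",
    }
}

fn main() {
    // The intent is visible at the call site, unlike a bare `true`.
    assert_eq!(describe(IncludePadding::Yes), "with padding");
}
```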
Gary Guo
5e4fd6bc23 Implement asm goto for LLVM and GCC backend 2024-02-24 18:50:09 +00:00
Ralf Jung
8e0dd993d6 check that simd_insert/extract indices are in-bounds 2024-02-23 19:43:59 +01:00
Ralf Jung
07b6240947 remove simd_reduce_{min,max}_nanless 2024-02-21 20:50:47 +01:00
Ralf Jung
3dc631a61a make simd_reduce_{mul,add}_unordered use only the 'reassoc' flag, not all fast-math flags 2024-02-21 16:28:20 +01:00
Ben Kimock
cc73b71e8e Add "algebraic" versions of the fast-math intrinsics 2024-02-20 12:39:03 -05:00
clubby789
3377dac31e Add newtype for signedness in LLVM SIMD 2024-02-20 13:32:58 +00:00
Oli Scherer
9a0743747f Teach llvm backend how to fall back to default bodies 2024-02-12 17:50:39 +00:00
Lukas Markeffsky
0c1f401d98 old solver: improve normalization of Pointee::Metadata 2024-02-05 15:37:21 +01:00
Alex Huang
7bdf705dd7 Avoid ICE when is_val_statically_known is not of a supported type 2024-01-29 21:01:15 -05:00
Catherine Flores
5a4561749a Add new intrinsic is_constant and optimize pow
Fix overflow check

Make MIRI choose the path randomly and rename the intrinsic

Add back test

Add miri test and make it operate on `ptr`

Define `llvm.is.constant` for primitives

Update MIRI comment and fix test in stage2

Add const eval test

Clarify that both branches must have the same side effects

guaranteed non guarantee

use immediate type instead

Co-Authored-By: Ralf Jung <post@ralfj.de>
2024-01-19 13:46:27 -05:00
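A minimal sketch of how an intrinsic like this is meant to be used. It ended up as the unstable `is_val_statically_known`, so a nightly toolchain is assumed; as the commit notes, both branches must be observably equivalent because the intrinsic may return `false` even for constants:

```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

use std::intrinsics::is_val_statically_known;

fn pow_fast(base: u32, exp: u32) -> u32 {
    // The compiler may answer `true` only when `exp` is known at compile time.
    // Both branches compute exactly the same value.
    if is_val_statically_known(exp) && exp == 2 {
        base * base
    } else {
        base.pow(exp)
    }
}

fn main() {
    assert_eq!(pow_fast(3, 2), 9);
    assert_eq!(pow_fast(2, 10), 1024);
}
```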
bors
432fffa8af Auto merge of #118991 - nikic:scalar-pair, r=nagisa
Separate immediate and in-memory ScalarPair representation

Currently, we assume that ScalarPair is always represented using a two-element struct, both as an immediate value and when stored in memory.

This currently works fairly well, but runs into problems with https://github.com/rust-lang/rust/pull/116672, where a ScalarPair involving an i128 type can no longer be represented as a two-element struct in memory. For example, the tuple `(i32, i128)` needs to be represented in-memory as `{ i32, [3 x i32], i128 }` to satisfy alignment requirements. Using `{ i32, i128 }` instead will result in the second element being stored at the wrong offset (prior to LLVM 18).

Resolve this issue by no longer requiring that the immediate and in-memory type for ScalarPair are the same. The in-memory type will now look the same as for normal struct types (and will include padding filler and similar), while the immediate type stays a simple two-element struct type. This also means that booleans in immediate ScalarPair are now represented as i1 rather than i8, just like we do everywhere else.

The core change here is to llvm_type (which now treats ScalarPair as a normal struct) and immediate_llvm_type (which returns the two-element struct that llvm_type used to produce). The rest is fixing things up to no longer assume these are the same. In particular, this switches places that try to get pointers to the ScalarPair elements to use byte-geps instead of struct-geps.
2024-01-05 14:31:56 +00:00
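A small sketch of the offset arithmetic behind the `(i32, i128)` example above; the 16-byte alignment for `i128` is taken from the commit's example, and this is plain arithmetic rather than rustc's actual layout code:

```rust
// Round `offset` up to the next multiple of `align`.
fn align_up(offset: usize, align: usize) -> usize {
    (offset + align - 1) / align * align
}

fn main() {
    let i32_size = 4;
    let i128_align = 16; // assumed target alignment of i128

    // The second element must start at offset 16, leaving 12 bytes of padding
    // after the i32 -- exactly the `[3 x i32]` filler in `{ i32, [3 x i32], i128 }`.
    let second_offset = align_up(i32_size, i128_align);
    assert_eq!(second_offset, 16);
    assert_eq!(second_offset - i32_size, 12);
}
```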
Nicholas Nethercote
99472c7049 Remove Session methods that duplicate DiagCtxt methods.
Also add some `dcx` methods to types that wrap `TyCtxt`, for easier
access.
2023-12-24 08:05:28 +11:00
Nikita Popov
c2fd26a115 Separate immediate and in-memory ScalarPair representation
Currently, we assume that ScalarPair is always represented using
a two-element struct, both as an immediate value and when stored
in memory.

This currently works fairly well, but runs into problems with
https://github.com/rust-lang/rust/pull/116672, where a ScalarPair
involving an i128 type can no longer be represented as a two-element
struct in memory. For example, the tuple `(i32, i128)` needs to be
represented in-memory as `{ i32, [3 x i32], i128 }` to satisfy
alignment requirements. Using `{ i32, i128 }` instead will result in
the second element being stored at the wrong offset (prior to
LLVM 18).

Resolve this issue by no longer requiring that the immediate and
in-memory type for ScalarPair are the same. The in-memory type
will now look the same as for normal struct types (and will include
padding filler and similar), while the immediate type stays a
simple two-element struct type. This also means that booleans in
immediate ScalarPair are now represented as i1 rather than i8,
just like we do everywhere else.

The core change here is to llvm_type (which now treats ScalarPair
as a normal struct) and immediate_llvm_type (which returns the
two-element struct that llvm_type used to produce). The rest is
fixing things up to no longer assume these are the same. In
particular, this switches places that try to get pointers to the
ScalarPair elements to use byte-geps instead of struct-geps.
2023-12-15 17:42:05 +01:00
Jakub Okoński
95b5a80f47
Fix alignment passed down to LLVM for simd_masked_load 2023-12-12 13:11:59 +01:00
bors
8b1ba11cb1 Auto merge of #117116 - calebzulawski:repr-simd-packed, r=workingjubilee
Implement repr(packed) for repr(simd)

This allows creating vectors with non-power-of-2 lengths that do not have padding.  See rust-lang/portable-simd#319
2023-12-11 08:07:20 +00:00
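A minimal sketch of what this enables, assuming a nightly toolchain with the unstable `repr_simd` feature and the array-field form of SIMD structs; the exact sizes printed depend on the target:

```rust
#![feature(repr_simd)]
#![allow(dead_code)]

// Without `packed`, a 3-lane vector is rounded up to a padded size
// (16 bytes on typical targets).
#[repr(simd)]
struct PaddedF32x3([f32; 3]);

// With `packed`, the vector occupies exactly 3 * 4 = 12 bytes, no tail padding.
#[repr(simd, packed)]
struct PackedF32x3([f32; 3]);

fn main() {
    println!("padded: {} bytes", std::mem::size_of::<PaddedF32x3>());
    println!("packed: {} bytes", std::mem::size_of::<PackedF32x3>());
}
```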
Jakub Okoński
97ae5095f5
Add simd_masked_{load,store} platform-intrinsics
This maps to the LLVM intrinsics: llvm.masked.load and llvm.masked.store
2023-12-09 12:36:08 +01:00
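The semantics these intrinsics provide, sketched as a scalar model in plain Rust rather than via the intrinsics themselves (per the commit, the real lowering goes to `llvm.masked.load`/`llvm.masked.store` and operates on whole vectors):

```rust
// For each lane, read from `src` if the mask bit is set, otherwise keep the
// fallback value. The real intrinsic never touches memory for disabled lanes.
fn masked_load<const N: usize>(mask: [bool; N], src: &[i32; N], fallback: [i32; N]) -> [i32; N] {
    let mut out = fallback;
    for i in 0..N {
        if mask[i] {
            out[i] = src[i];
        }
    }
    out
}

fn main() {
    let data = [10, 20, 30, 40];
    let mask = [true, false, true, false];
    assert_eq!(masked_load(mask, &data, [0; 4]), [10, 0, 30, 0]);
}
```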
Caleb Zulawski
329412dad3 Implement repr(packed) for repr(simd) 2023-12-02 10:53:19 -05:00
Oli Scherer
f8372df631 Merge simd size and type extraction into checking whether a type is simd, as these always go together. 2023-10-31 11:23:39 +00:00
Oli Scherer
9a49ef38c7 Simplify all require_simd invocations by moving all of the shared invocation arguments into the macro 2023-10-31 11:23:39 +00:00
Oli Scherer
7c673db195 don't use the moral equivalent of assert!(false, "foo") 2023-10-31 11:23:39 +00:00
Oli Scherer
1a01e57d27 Prototype using const generic for simd_shuffle IDX array 2023-09-18 15:10:28 +00:00
Ralf Jung
7740476a43 explain PassMode::Cast 2023-09-15 10:43:44 +02:00
scottmcm
75277a6606 Apply suggestions from code review
Co-authored-by: Ralf Jung <post@ralfj.de>
2023-08-06 15:47:40 -07:00
Scott McMurray
502af03445 Add a new compare_bytes intrinsic instead of calling memcmp directly 2023-08-06 15:47:40 -07:00
Oli Scherer
4457ef2c6d Forbid old-style simd_shuffleN intrinsics 2023-08-03 09:29:00 +00:00
bors
abd3637e42 Auto merge of #105545 - erikdesjardins:ptrclean, r=bjorn3
cleanup: remove pointee types

This can't be merged until the oldest LLVM version we support uses opaque pointers, which will be the case after #114148. (Also note `-Cllvm-args="-opaque-pointers=0"` can technically be used in LLVM 15, though I don't think we should support that configuration.)

I initially hoped this would provide some minor perf win, but in https://github.com/rust-lang/rust/pull/105412#issuecomment-1341224450 it had very little impact, so this is only valuable as a cleanup.

As a followup, this will enable #96242 to be resolved.

r? `@ghost`

`@rustbot` label S-blocked
2023-08-01 19:44:17 +00:00
bors
3be07c1161 Auto merge of #114266 - calebzulawski:simd-bswap, r=compiler-errors
Fix simd_bswap for i8/u8

#114156 missed this test case ☹️
cc `@workingjubilee`
2023-07-31 04:43:48 +00:00
Caleb Zulawski
77ed437de8 Fix simd_bswap for i8/u8 2023-07-30 15:40:32 -04:00
Matthias Krüger
3ce90b1649 inline format!() args up to and including rustc_codegen_llvm 2023-07-30 14:22:50 +02:00
Erik Desjardins
55800123b7 cg_llvm: simplify llvm.masked.gather/scatter naming with opaque pointers
With opaque pointers, there's no longer a need to generate a chain
of pointer types in the intrinsic name when arguments are pointers to
pointers.
2023-07-29 16:56:27 -04:00
Erik Desjardins
b6540777fe cg_llvm: remove pointee types and pointercast/bitcast-of-ptr 2023-07-29 13:18:17 -04:00
Caleb Zulawski
ce4a48f41f Use i1 instead of bool 2023-07-28 09:46:16 -04:00
Caleb Zulawski
4c02b4cf4c Add SIMD bitreverse, ctlz, cttz intrinsics 2023-07-27 23:53:45 -04:00
Caleb Zulawski
3ea0e6e3fb Add simd_bswap intrinsic 2023-07-27 23:04:14 -04:00
Mahdi Dibaiee
e55583c4b8 refactor(rustc_middle): Substs -> GenericArg 2023-07-14 13:27:35 +01:00
Boxy
12138b8e5e Move TyCtxt::mk_x to Ty::new_x where applicable 2023-07-05 20:27:07 +01:00
Jan-Mirko Otter
744ec64c93 fix comment (review change)
Co-authored-by: bjorn3 <17426603+bjorn3@users.noreply.github.com>
2023-06-07 17:48:33 +02:00