Commit Graph

523 Commits

bors
88c2f4f5f5 Auto merge of #123385 - matthiaskrgr:rollup-v69vjbn, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #123198 (Add fn const BuildHasherDefault::new)
 - #123226 (De-LLVM the unchecked shifts [MCP#693])
 - #123302 (Make sure to insert `Sized` bound first into clauses list)
 - #123348 (rustdoc: add a couple of regression tests)
 - #123362 (Check that nested statics in thread locals are duplicated per thread.)
 - #123368 (CFI: Support non-general coroutines)
 - #123375 (rustdoc: synthetic auto trait impls: accept unresolved region vars for now)
 - #123378 (Update sysinfo to 0.30.8)

Failed merges:

 - #123349 (Fix capture analysis for by-move closure bodies)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-02 21:23:53 +00:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago.  See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent developments that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and Cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
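
For a concrete sense of that design space, here is a sketch of two of the equivalent formulations (illustrative only, not what any backend actually emits):

```rust
// Two of the many equivalent ways to three-way-compare integers; which
// shape is fastest is exactly the backend-specific detail `BinOp::Cmp`
// leaves open.
fn cmp_branchy(a: i32, b: i32) -> i8 {
    if a < b { -1 } else if a == b { 0 } else { 1 }
}

fn cmp_bitwise(a: i32, b: i32) -> i8 {
    // Each comparison is 0 or 1, so this yields -1, 0, or 1 branchlessly.
    (a > b) as i8 - (a < b) as i8
}
```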

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)
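
A source-level sketch of that `Cmp == 0` shape (an illustrative example, not the PR's MIR test):

```rust
use std::cmp::Ordering;

// Once `a.cmp(&b)` is a single `BinOp::Cmp`, this comparison against
// `Ordering::Equal` (discriminant 0) is a visible `Cmp == 0` pattern
// that an InstCombine could fold down to a plain `BinOp::Eq`.
fn is_eq(a: u32, b: u32) -> bool {
    a.cmp(&b) == Ordering::Equal
}
```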

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
2024-04-02 19:21:44 +00:00
Scott McMurray
0601f0c66d De-LLVM the unchecked shifts [MCP#693]
This is just one part of the MCP, but it's the one that IMHO removes the most noise from the standard library code.

Seems net simpler this way, since MIR already supported heterogeneous shifts anyway, and thus it's not more work for backends than before.
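
A sketch of the heterogeneous form in question: the shift amount's type need not match the shifted value's, so no cast is needed in library code (the `assert!` stands in for the caller's safety obligation):

```rust
fn shl64(x: u64, n: u32) -> u64 {
    assert!(n < 64); // callers must guarantee an in-range shift amount
    // May require #![feature(unchecked_shifts)] depending on toolchain.
    unsafe { x.unchecked_shl(n) }
}
```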
2024-03-30 03:32:11 -07:00
bors
877d36b192 Auto merge of #122976 - caibear:optimize_reserve_for_push, r=cuviper
Remove len argument from RawVec::reserve_for_push

Removes `RawVec::reserve_for_push`'s `len` argument since it's always the same as capacity.
Also makes `Vec::insert` use `RawVec::reserve_for_push`.
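
A sketch of why the argument was redundant (not the real `RawVec` code): the slow path is reached exactly when the vector is full:

```rust
// Shape of the call site: growth is requested only when len == capacity,
// so passing `len` separately carried no extra information.
fn push_shape<T>(vec: &mut Vec<T>, value: T) {
    if vec.len() == vec.capacity() {
        vec.reserve(1); // stand-in for RawVec::reserve_for_push
    }
    vec.push(value);
}
```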
2024-03-30 00:29:24 +00:00
Cai Bear
4500c83c62 Fix test. 2024-03-29 15:37:43 -07:00
bors
58dcd1fdb9 Auto merge of #123071 - rcvalle:rust-cfi-fix-method-fn-ptr-cast, r=compiler-errors
CFI: Fix methods as function pointer cast

Fix casting between methods and function pointers by assigning a secondary type id to methods with their concrete self so they can be used as function pointers.
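
The pattern being fixed, as a sketch: a method with its concrete `self` type coerced to an ordinary function pointer:

```rust
struct S;

impl S {
    fn method(&self) -> i32 { 42 }
}

fn main() {
    // Indirect calls through `f` now check against a secondary type id
    // matching the concrete-self signature.
    let f: fn(&S) -> i32 = S::method;
    assert_eq!(f(&S), 42);
}
```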

This was split off from #116404.

cc `@compiler-errors` `@workingjubilee`
2024-03-29 09:04:05 +00:00
bors
db2f9759f4 Auto merge of #122671 - Mark-Simulacrum:const-panic-msg, r=Nilstrieb
Codegen const panic messages as function calls

This skips emitting extra arguments at every callsite (of which there
can be many). For a librustc_driver build with overflow checks enabled,
this cuts 0.7MB from the resulting shared library (see [perf]).

A sample improvement from nightly:

```
        leaq    str.0(%rip), %rdi
        leaq    .Lalloc_d6aeb8e2aa19de39a7f0e861c998af13(%rip), %rdx
        movl    $25, %esi
        callq   *_ZN4core9panicking5panic17h17cabb89c5bcc999E@GOTPCREL(%rip)
```

to this PR:

```
        leaq    .Lalloc_d6aeb8e2aa19de39a7f0e861c998af13(%rip), %rdi
        callq   *_RNvNtNtCsduqIKoij8JB_4core9panicking11panic_const23panic_const_div_by_zero@GOTPCREL(%rip)
```

[perf]: https://perf.rust-lang.org/compare.html?start=a7e4de13c1785819f4d61da41f6704ed69d5f203&end=64fbb4f0b2d621ff46d559d1e9f5ad89a8d7789b&stat=instructions:u
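
A source-level sketch of code that reaches the `panic_const_div_by_zero` helper shown above:

```rust
// When b == 0 this calls the shared const-panic helper instead of
// materializing the message arguments at every call site.
fn div(a: u32, b: u32) -> u32 {
    a / b
}
```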
2024-03-29 00:24:01 +00:00
DianQK
ec359f7d9f
Restore the test checks for wider_reduce_into_iter
The minimum supported LLVM version is now 17.
2024-03-28 21:28:45 +08:00
Ramon de C Valle
8e6b4e91b6 CFI: Fix methods as function pointer cast
Fix casting between methods and function pointers by assigning a
secondary type id to methods with their concrete self so they can be
used as function pointers.
2024-03-27 16:19:17 -07:00
Matthias Krüger
6464e5b78c
Rollup merge of #123075 - rcvalle:rust-cfi-fix-drop-drop-in-place, r=compiler-errors
CFI: Fix drop and drop_in_place

Fix drop and drop_in_place by transforming self of drop and drop_in_place methods into Drop trait objects.

This was split off from https://github.com/rust-lang/rust/pull/116404.

cc `@compiler-errors` `@workingjubilee`
2024-03-27 23:27:22 +01:00
Ramon de C Valle
0b860818e6 CFI: Fix drop and drop_in_place
Fix drop and drop_in_place by transforming self of drop and
drop_in_place methods into Drop trait objects.
2024-03-27 12:52:14 -07:00
clubby789
b500693ad7 Don't emit load metadata in debug mode 2024-03-25 18:32:45 +00:00
Scott McMurray
3da115a93b Add+Use mir::BinOp::Cmp 2024-03-23 23:23:41 -07:00
Jubilee
b9b65f816d
Rollup merge of #122875 - maurer:cfi-transparent-termination, r=workingjubilee
CFI: Support self_cell-like recursion

Current `transform_ty` attempts to avoid cycles when normalizing `#[repr(transparent)]` types to their interior, but runs afoul of this pattern used in `self_cell`:

```
struct X<T> {
  x: u8,
  p: PhantomData<T>,
}

 #[repr(transparent)]
struct Y(X<Y>);
```

When attempting to normalize `Y`, it will still cycle indefinitely. By using a types-visited list, this will instead get expanded exactly one layer deep to `X<Y>`, and then stop, not attempting to normalize `Y` any further.
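
A miniature sketch of the types-visited idea (not the actual `transform_ty` code), modeling `#[repr(transparent)]` wrappers as a name-to-interior map:

```rust
use std::collections::{HashMap, HashSet};

fn peel<'a>(
    ty: &'a str,
    interior_of: &HashMap<&'a str, &'a str>, // transparent wrapper -> interior
    seen: &mut HashSet<&'a str>,
) -> &'a str {
    if !seen.insert(ty) {
        return ty; // already visited: expanding again would cycle (Y -> X<Y> -> Y ...)
    }
    match interior_of.get(ty) {
        Some(&inner) => peel(inner, interior_of, seen),
        None => ty,
    }
}
```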

This PR was split off from #121962 as part of fixing the larger vtable compatibility issues.

r? `@workingjubilee`
2024-03-23 22:59:42 -07:00
bors
d6eb0f5a09 Auto merge of #122582 - scottmcm:swap-intrinsic-v2, r=oli-obk
Let codegen decide when to `mem::swap` with immediates

Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.

Thus this PR introduces a new `typed_swap` intrinsic with a fallback body, and replaces that fallback implementation when swapping immediates or scalar pairs.
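
A sketch of what such a fallback body can look like (names and exact form are assumptions, not the real intrinsic):

```rust
use std::ptr;

// Safety: `x` and `y` must each point to a valid, aligned T, and must
// not overlap.
unsafe fn typed_swap_fallback<T>(x: *mut T, y: *mut T) {
    let tmp = ptr::read(x);
    ptr::copy_nonoverlapping(y, x, 1);
    ptr::write(y, tmp);
}
```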

r? oli-obk

Replaces #111744, and means we'll never need more libs PRs like #111803 or #107140
2024-03-23 13:57:55 +00:00
Matthew Maurer
dec36c3d6e CFI: Support self_cell-like recursion
Current `transform_ty` attempts to avoid cycles when normalizing
`#[repr(transparent)]` types to their interior, but runs afoul of this
pattern used in `self_cell`:

```
struct X<T> {
  x: u8,
  p: PhantomData<T>,
}

 #[repr(transparent)]
struct Y(X<Y>);
```

When attempting to normalize Y, it will still cycle indefinitely. By
using a types-visited list, this will instead get expanded exactly
one layer deep to X<Y>, and then stop, not attempting to normalize `Y`
any further.
2024-03-22 23:02:05 +00:00
Mark Rousskov
00f4daa276 Codegen const panic messages as function calls
This skips emitting extra arguments at every callsite (of which there
can be many). For a librustc_driver build with overflow checks enabled,
this cuts 0.7MB from the resulting binary.
2024-03-22 09:55:50 -04:00
bors
7762adccb2 Auto merge of #122456 - maurer:cfi-nonpassed, r=workingjubilee
CFI: Skip non-passed arguments

Rust will occasionally rely on `fn((), X) -> Y` being compatible with `fn(X) -> Y`, since `()` is a non-passed argument. Relax CFI by choosing not to encode non-passed arguments.
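
A sketch of that compatibility pattern; the transmute relies on unspecified ABI details and is shown only to illustrate what existing code does:

```rust
fn f(_: (), x: i32) -> i32 { x }

fn main() {
    let g: fn((), i32) -> i32 = f;
    // `()` is never passed, so both signatures lower to the same ABI;
    // with non-passed arguments skipped, they share a CFI type id too.
    let h: fn(i32) -> i32 = unsafe { std::mem::transmute(g) };
    assert_eq!(h(5), 5);
}
```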

This PR was split off from #121962 as part of fixing the larger vtable compatibility issues.

r? `@workingjubilee`
2024-03-22 06:09:40 +00:00
Matthew Maurer
f2f0d255df CFI: Skip non-passed arguments
Rust will occasionally rely on fn((), X) -> Y being compatible with
fn(X) -> Y, since () is a non-passed argument. Relax CFI by choosing not
to encode non-passed arguments.
2024-03-21 22:26:26 +00:00
clubby789
5f254d8b66 Remove SpecOptionPartialEq 2024-03-19 16:32:01 +00:00
bors
148a41c6b5 Auto merge of #122375 - rcvalle:rust-cfi-break-tests-into-smaller-files, r=compiler-errors
CFI: Break tests into smaller files

Break the type metadata identifier tests into a smaller set of tests/files, and move the CFI (and KCFI) codegen tests to a cfi (and kcfi) subdirectory.
2024-03-19 02:17:52 +00:00
Scott McMurray
6d2cb39ac5 Stop whining, tidy 2024-03-17 12:51:58 -07:00
Scott McMurray
7d537106a1 Let codegen decide when to mem::swap with immediates
Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.

So introduce a new `typed_swap` intrinsic with a fallback body, but replace that implementation for immediates and scalar pairs.
2024-03-17 11:59:18 -07:00
Josh Stone
d9132de4ab Remove an obsolete ignore-llvm-version 2024-03-17 10:52:00 -07:00
Josh Stone
29430554f6 Update the minimum external LLVM to 17 2024-03-17 10:11:04 -07:00
bors
c563f2ee79 Auto merge of #122371 - oli-obk:visit_nested_body, r=tmiasko
Stop walking the bodies of statics for reachability, and evaluate them instead

cc `@saethlin` `@RalfJung`

cc #119214

This reuses the `DefIdVisitor` from `rustc_privacy`, because they basically try to do the same thing.

This PR's changes can probably be extended to constants, too, but let's tackle that separately; it's likely more involved.
2024-03-16 04:35:02 +00:00
Matthias Krüger
722514f466
Rollup merge of #122212 - erikdesjardins:byval-align2, r=wesleywiser
Copy byval argument to alloca if alignment is insufficient

Fixes #122211

"Ignore whitespace" recommended.
2024-03-14 20:00:18 +01:00
Oli Scherer
8332b47cae Stop walking the bodies of statics for reachability, and evaluate them instead 2024-03-14 14:10:45 +00:00
Oli Scherer
54d83beb38 Add test 2024-03-14 14:10:45 +00:00
Ramon de C Valle
6bd85c4de4 CFI: Break tests into smaller files
Break the type metadata identifier tests into a smaller set of tests/files,
and move the CFI (and KCFI) codegen tests to a cfi (and kcfi) subdirectory.
2024-03-14 00:56:29 -07:00
bors
3cbb93223f Auto merge of #121668 - erikdesjardins:commonprim, r=scottmcm,oli-obk
Represent `Result<usize, Box<T>>` as ScalarPair(i64, ptr)

This allows types like `Result<usize, std::io::Error>` (and integers of differing sign, e.g. `Result<u64, i64>`) to be passed in a pair of registers instead of through memory, like `Result<u64, u64>` or `Result<Box<T>, Box<U>>` are today.
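
A sketch of signatures that benefit; both now pass and return in a pair of registers rather than through memory:

```rust
fn ok_or_boxed(x: Result<usize, Box<u8>>) -> Result<usize, Box<u8>> { x }
fn mixed_sign(x: Result<u64, i64>) -> Result<u64, i64> { x }
```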

Fixes #97540.

r? `@ghost`
2024-03-13 15:25:35 +00:00
Erik Desjardins
9f55200a42 refine common_prim test
Co-authored-by: Scott McMurray <scottmcm@users.noreply.github.com>
2024-03-13 01:17:15 -04:00
Ben Kimock
81d630453b Avoid lowering code under dead SwitchInt targets 2024-03-12 19:01:04 -04:00
bors
0fa7feaf3f Auto merge of #121282 - saethlin:gep-null-means-no-provenance, r=scottmcm
Lower transmutes from int to pointer type as gep on null

I thought of this while looking at https://github.com/rust-lang/rust/pull/121242. See that PR's description for why this lowering is preferable.

The UI test that's being changed here crashes without changing the transmutes into casts. Based on that, this PR should not be merged without a crater build-and-test run.
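
The kind of transmute this lowering affects, as a sketch; see the linked PR description for the full rationale:

```rust
// Now lowered as a getelementptr from a null base rather than inttoptr,
// making it explicit to LLVM that the result carries no provenance.
fn to_ptr(addr: usize) -> *const u8 {
    unsafe { std::mem::transmute(addr) }
}
```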
2024-03-12 04:11:37 +00:00
bors
dc2ffa4054 Auto merge of #122036 - alexcrichton:test-wasm-with-wasi, r=oli-obk
Test wasm32-wasip1 in CI, not wasm32-unknown-unknown

This commit changes CI to no longer test the `wasm32-unknown-unknown` target and instead test the `wasm32-wasip1` target. There was some discussion of this in a [Zulip thread], and the motivations for this PR are:

* Runtime failures on `wasm32-unknown-unknown` print nothing, meaning all you get is "something failed". In contrast `wasm32-wasip1` can print to stdout/stderr.

* The unknown-unknown target is missing lots of pieces of libstd, and while `wasm32-wasip1` is also missing some pieces (e.g. threads) it's missing fewer pieces. This means that many more tests can be run.

Overall my hope is to improve the debuggability of wasm failures on CI and ideally be a bit less of a maintenance burden.

This commit specifically removes the testing of `wasm32-unknown-unknown` and replaces it with testing of `wasm32-wasip1`. Along the way there were a number of other architectural changes made as well, including:

* A new `target.*.runtool` option can now be specified in `config.toml` which is passed as `--runtool` to `compiletest`. This is used to reimplement execution of WebAssembly in a less-wasm-specific fashion.

* The default value for `runtool` is an ambiently located WebAssembly runtime found on the system, if any. I've implemented logic for Wasmtime.

* Existing testing support for `wasm32-unknown-unknown` and Emscripten has been removed. I'm not aware of Emscripten testing being run any time recently and otherwise `wasm32-wasip1` is in theory the focus now.

* I've added a new `//@ needs-threads` directive for `compiletest` and classified a bunch of wasm-ignored tests as needing threads. In theory these tests can run on `wasm32-wasi-preview1-threads`, for example.

* I've tried to audit all existing tests that are either `ignore-emscripten` or `ignore-wasm*`. Many now run on `wasm32-wasip1` due to being able to emit error messages, for example. Many are updated with comments as to why they can't run as well.

* The `compiletest` output matching for `wasm32-wasip1` automatically uses "match a subset" mode implemented in `compiletest`. This is because WebAssembly runtimes often add extra information on failure, such as the `unreachable` instruction in `panic!`, which isn't able to be matched against the golden output from native platforms.

* I've ported most existing `run-make` tests that use custom Node.js wrapper scripts to the new Rust-based run-make infrastructure. To do this I added `wasmparser` as a dependency of `run-make-support` so the various wasm tests can use it to parse wasm files. The one test that executed WebAssembly now uses the `wasmtime` CLI to execute the test instead. I have not ported over an exception-handling test, as Wasmtime doesn't implement this yet.

* I've updated the `test` crate to print out timing information for WASI targets as it can do that (gets a previously ignored test now passing).

* The `test-various` image now builds a WASI sysroot for the WASI target and additionally downloads a fixed release of Wasmtime, currently the latest one at 18.0.2, and uses that for testing.

[Zulip thread]: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Have.20wasm.20tests.20ever.20caused.20problems.20on.20CI.3F/near/424317944
2024-03-12 00:03:54 +00:00
Ben Kimock
2eb9c6d49e Lower transmutes from int to pointer type as gep on null 2024-03-11 18:19:17 -04:00
Alex Crichton
cf6d6050f7 Update test directives for wasm32-wasip1
* The WASI targets deal with the `main` symbol a bit differently than
  native so some `codegen` and `assembly` tests have been ignored.
* All `ignore-emscripten` directives have been updated to
  `ignore-wasm32` to be more clear that all wasm targets are ignored and
  it's not just Emscripten.
* Most `ignore-wasm32-bare` directives are now gone.
* Some ignore directives for wasm were switched to `needs-unwind`
  instead.
* Many `ignore-wasm32*` directives are removed as the tests work with
  WASI as opposed to `wasm32-unknown-unknown`.
2024-03-11 09:36:35 -07:00
Jubilee
028e2600c9
Rollup merge of #122320 - erikdesjardins:vtable, r=nikic
Use ptradd for vtable indexing

Extension of #121665.

After this, the only remaining usages of GEP are [this](cd81f5b27e/compiler/rustc_codegen_llvm/src/intrinsic.rs (L909-L920)) kinda janky Emscripten EH code, which I'll change in a future PR, and array indexing / pointer offsets, where there isn't yet a canonical `ptradd` form. (Out of curiosity I tried converting the latter to `ptradd(ptr, mul(size, index))`, but that causes codegen regressions right now.)

r? `@nikic`
2024-03-11 09:29:38 -07:00
Erik Desjardins
207fe38630 copy byval argument to alloca if alignment is insufficient 2024-03-11 09:38:54 -04:00
bors
a6d93acf5f Auto merge of #122050 - erikdesjardins:sret, r=nikic
Stop using LLVM struct types for byval/sret

For `byval` and `sret`, the type has no semantic meaning, only the size matters*†. Using `[N x i8]` is a more direct way to specify that we want `N` bytes, and avoids relying on LLVM's struct layout.

*: The alignment would matter, if we didn't explicitly specify it. From what I can tell, we always specified the alignment for `sret`; for `byval`, we didn't until #112157.

†: For `byval`, the hidden copy may be impacted by padding in the LLVM struct type, i.e. padding bytes may not be copied. (I'm not sure if this is done today, but I think it would be legal.) But we manually pad our LLVM struct types specifically to avoid there ever being LLVM-visible padding, so that shouldn't be an issue.
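
A sketch of where these attributes show up: a struct too large for registers is returned through an `sret` out-pointer (and passed `byval` on some ABIs), whose attribute type is now `[N x i8]`:

```rust
#[repr(C)]
pub struct Big {
    a: [u64; 8],
}

// The sret attribute for this return slot is now typed as [64 x i8]
// instead of an LLVM struct type; only the size (and alignment) matter.
#[no_mangle]
pub extern "C" fn make_big() -> Big {
    Big { a: [0; 8] }
}
```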

Split out from #121577.

r? `@nikic`
2024-03-11 04:45:27 +00:00
Erik Desjardins
a7cd803d02 use ptradd for vtable indexing
Like field offsets, these are always constant.
2024-03-10 22:47:30 -04:00
Erik Desjardins
f18c2f83e9 add -O to some tests which depend on attributes being added 2024-03-10 16:04:12 -04:00
Matthias Krüger
e8e41877a2
Rollup merge of #121642 - TimNN:test-v0, r=Mark-Simulacrum
Update a test to support Symbol Mangling V0

Note that since this is a symbol from `std`, overriding the symbol mangling version via the `compile-flags` directive does not work.
2024-03-10 10:58:15 +01:00
Erik Desjardins
8fdd5e044b convert codegen/repr/transparent-* tests to no_core, fix discrepancies 2024-03-09 23:16:02 -05:00
Guillaume Boisseau
e3c0158788
Rollup merge of #120504 - kornelski:try_with_capacity, r=Amanieu
Vec::try_with_capacity

Related to #91913

Implements try_with_capacity for `Vec`, `VecDeque`, and `String`. I can follow it up with more collections if desired.

`Vec::try_with_capacity()` is functionally equivalent to the current stable:

```rust
let mut v = Vec::new();
v.try_reserve_exact(n)?
```

However, `try_reserve` calls non-inlined `finish_grow`, which requires old and new `Layout`, and is designed to reallocate memory. There is benefit to using `try_with_capacity`, besides syntax convenience, because it generates much smaller code at the call site with a direct call to the allocator. There's a codegen test included.

It's also a very desirable functionality for users of `no_global_oom_handling` (Rust-for-Linux), since it makes a very commonly used function available in that environment (`with_capacity` is used much more frequently than all `(try_)reserve(_exact)`).
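
A usage sketch, assuming the API lands as described (the error type follows the existing `try_reserve` family):

```rust
use std::collections::TryReserveError;

fn alloc_buffer(n: usize) -> Result<Vec<u8>, TryReserveError> {
    // One direct call to the allocator on the success path, instead of
    // routing through the generic finish_grow machinery.
    // May require #![feature(try_with_capacity)] until stabilized.
    Vec::try_with_capacity(n)
}
```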
2024-03-09 21:40:06 +01:00
bors
1b2c53a15d Auto merge of #122182 - matthiaskrgr:rollup-gzimi4c, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #118623 (Improve std::fs::read_to_string example)
 - #119365 (Add asm goto support to `asm!`)
 - #120608 (Docs for std::ptr::slice_from_raw_parts)
 - #121832 (Add new Tier-3 target: `loongarch64-unknown-linux-musl`)
 - #121938 (Fix quadratic behavior of repeated vectored writes)
 - #122099 (Add  `#[inline]` to `BTreeMap::new` constructor)
 - #122103 (Make TAITs and ATPITs capture late-bound lifetimes in scope)
 - #122143 (PassWrapper: update for llvm/llvm-project@a331937197)

Failed merges:

 - #122076 (Tweak the way we protect in-place function arguments in interpreters)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-03-08 09:34:05 +00:00
Matthias Krüger
d774fbea7c
Rollup merge of #119365 - nbdd0121:asm-goto, r=Amanieu
Add asm goto support to `asm!`

Tracking issue: #119364

This PR implements asm-goto support, using the syntax described in the "future possibilities" section of [RFC2873](https://rust-lang.github.io/rfcs/2873-inline-asm.html#asm-goto).

Currently I have only implemented the `label` part, not the `fallthrough` part (i.e. fallthrough is implicit). This doesn't reduce the expressiveness, though, since you can use label-break to get arbitrary control flow or simply set a value and rely on jump threading optimisation to get the desired control flow. I can add that later if deemed necessary.
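
A sketch of the `label` syntax as implemented (nightly-only behind a feature gate; x86_64 assembly assumed):

```rust
#![feature(asm_goto)]

use std::arch::asm;

fn jump_demo() {
    unsafe {
        asm!(
            "jmp {}",
            label {
                // Control arrives here when the asm jumps to the label;
                // falling through past the asm is also allowed (implicit).
                println!("reached via asm goto");
            }
        );
    }
}
```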

r? `@Amanieu`
cc `@ojeda`
2024-03-08 08:19:17 +01:00
bors
14fbc3c005 Auto merge of #120268 - DianQK:otherwise_is_last_variant_switchs, r=oli-obk
Replace the default branch with an unreachable branch if it is the last variant

Fixes #119520. Fixes #110097.

LLVM currently has limited ability to eliminate dead branches in switches, even with the patch of https://github.com/llvm/llvm-project/issues/73446.

The main reasons are as follows:

- Additional costs are required to calculate the range of values, and there exist many scenarios that cannot be analyzed accurately.
- Matching values by bitwise calculation cannot handle odd branches, nor can it handle values like `-1, 0, 1`. See [SimplifyCFG.cpp#L5424](https://github.com/llvm/llvm-project/blob/llvmorg-17.0.6/llvm/lib/Transforms/Utils/SimplifyCFG.cpp#L5424) and https://llvm.godbolt.org/z/qYMqhvMa8
- The current range information is continuous, even if the metadata for the range is submitted. See [ConstantRange.cpp#L1869-L1870](https://github.com/llvm/llvm-project/blob/llvmorg-17.0.6/llvm/lib/IR/ConstantRange.cpp#L1869-L1870).
- The metadata of the range may be lost in passes such as SROA. See https://rust.godbolt.org/z/e7f87vKMK.

Although we can make improvements to LLVM, I think it would be more appropriate to address this in rustc first. After all, rustc can easily know the possible values.
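
The shape of code this helps, as a sketch:

```rust
enum E { A, B, C }

fn tag(e: E) -> u8 {
    // Before: a switch on the discriminant tests A and B, with C as the
    // default target. After: all three are explicit cases and the default
    // is unreachable, so LLVM knows the discriminant is only 0, 1, or 2.
    match e {
        E::A => 0,
        E::B => 1,
        E::C => 2,
    }
}
```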

Note that we've currently found a slow compilation problem in the presence of unreachable branches. See
https://github.com/llvm/llvm-project/issues/78578.

r? compiler
2024-03-08 07:18:17 +00:00
bors
79d246112d Auto merge of #122048 - erikdesjardins:inbounds, r=oli-obk
Use GEP inbounds for ZST and DST field offsets

ZST field offsets have been non-`inbounds` since I made [this old layout change](https://github.com/rust-lang/rust/pull/73453/files#diff-160634de1c336f2cf325ff95b312777326f1ab29fec9b9b21d5ee9aae215ecf5). Before that, they would have been `inbounds` due to using `struct_gep`. Using `inbounds` for ZSTs likely doesn't matter for performance, but I'd like to remove the special case.

DST field offsets have been non-`inbounds` since the alignment-aware DST field offset computation was first [implemented](a2557d472e (diff-04fd352da30ca186fe0bb71cc81a503d1eb8a02ca17a3769e1b95981cd20964aR1188)) in 1.6 (back then `GEPi()` would be used for `inbounds`), but I don't think there was any reason for it.

Split out from #121577 / #121665.

r? `@oli-obk`

cc `@RalfJung` -- is there some weird situation where field offsets can't be `inbounds`?

Note that it's fine for `inbounds` offsets to be one-past-the-end, so it's okay even if there's a ZST as the last field in the layout:

> The base pointer has an in bounds address of an allocated object, which means that it points into an allocated object, or to its end. [(link)](https://llvm.org/docs/LangRef.html#getelementptr-instruction)

For https://github.com/rust-lang/unsafe-code-guidelines/issues/93, zero-offset GEP is (now) always `inbounds`:

> Note that getelementptr with all-zero indices is always considered to be inbounds, even if the base pointer does not point to an allocated object. [(link)](https://llvm.org/docs/LangRef.html#getelementptr-instruction)
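
A layout sketch of that one-past-the-end case:

```rust
// With repr(C), `marker` is at offset 8, equal to the struct's size: a
// ZST field offset can be exactly one past the end, which `inbounds`
// still permits per the LangRef quote above.
#[repr(C)]
struct Tail {
    data: u64,
    marker: (),
}
```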
2024-03-08 02:01:51 +00:00
DianQK
08ae8380ce
Replace the default branch with an unreachable branch if it is the last variant 2024-03-07 22:58:51 +08:00