Mirror of https://github.com/rust-lang/rust.git
`slice.get(i)` should use a slice projection in MIR, like `slice[i]` does

`slice[i]` is built-in magic, so it ends up looking quite different from `slice.get(i)` in MIR, even though the two perform nearly identical operations: check the length of the slice, then get a ref/ptr to the element if it's in bounds. This PR adds a `slice_get_unchecked` intrinsic for the `impl SliceIndex for usize` to use, so that `get` no longer needs a bunch of lines of pointer math and instead lowers to the obvious single statement. (A sketch of the shared shape appears below.)

(This is *not* used for the range versions, since `slice[i..]` and `slice[..k]` can't use the MIR Slice projection: they operate on fenceposts, not indices.)

I originally tried to do this with some kind of GVN pattern, but realized that I'm pretty sure it's not legal to optimize `BinOp::Offset` to `PlaceElem::Index` without an extremely complicated side condition. Basically, the problem is that the `Index` projection on a dereferenced slice pointer *cares about the metadata*, since it's UB to `PlaceElem::Index` outside the range described by that metadata. But when you cast the fat pointer to a thin pointer and then offset it, that *ignores* the slice-length metadata, so it's possible to write things that are legal with `Offset` but would be UB if translated in the obvious way to `Index` (a sketch of such a case follows below). Checking, or even determining, the necessary conditions for that rewrite would be complicated and error-prone, whereas the intrinsic-based approach is quite straightforward.

Zero backend changes, because the intrinsic just lowers to MIR, so it's already supported naturally by CTFE/Miri/cg_llvm/cg_clif.
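To make the "nearly identical operations" concrete, here is a minimal sketch of the bounds-check-then-access shape that both `slice[i]` and `slice.get(i)` boil down to; `get_sketch` is a hypothetical stand-in, not the standard library's actual implementation:

```rust
// Minimal sketch of the shared shape; not the real `<[T]>::get`.
fn get_sketch<T>(slice: &[T], i: usize) -> Option<&T> {
    if i < slice.len() {
        // `slice[i]` compiles to a built-in MIR `Index` projection here;
        // with this PR, `slice.get(i)` reaches the same projection via the
        // `slice_get_unchecked` intrinsic instead of raw pointer math.
        Some(&slice[i])
    } else {
        None
    }
}
```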
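On the range exclusion: range endpoints are fenceposts, so an endpoint equal to the slice length is valid even though it is not a valid element index. A small illustration:

```rust
fn main() {
    let s: &[i32] = &[10, 20, 30];
    // `3` is a valid fencepost: it may equal `s.len()`...
    let tail: &[i32] = &s[3..];
    assert!(tail.is_empty());
    // ...but it is not a valid element index: `s[3]` would panic, and a MIR
    // `Index` projection with 3 would be out of bounds. This is why the
    // range impls can't reuse the same projection-based lowering.
    assert!(s.get(3).is_none());
}
```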
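And for the `Offset`-to-`Index` legality point, here is a hedged sketch (the lengths and offsets are illustrative) of a program that is fine with raw-pointer offsetting, because the offset stays inside the allocation, but would be UB if the offset were naively rewritten as an `Index` projection through the fat pointer:

```rust
fn main() {
    let array = [0u8; 10];
    // A raw slice pointer whose metadata records length 2, even though the
    // underlying allocation holds 10 bytes.
    let fat: *const [u8] = std::ptr::slice_from_raw_parts(array.as_ptr(), 2);
    let thin = fat as *const u8; // casting away the length metadata
    unsafe {
        // Legal: `add` only requires staying within the allocation, and it
        // ignores the slice-length metadata entirely.
        let p = thin.add(5);
        let v = *p; // reads `array[5]`, which is fine
        assert_eq!(v, 0);
    }
    // By contrast, the obvious `Index` translation, `(*fat)[5]`, would be
    // UB, because 5 is outside the length (2) recorded in `fat`'s metadata.
}
```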