Add `Ord::cmp` for primitives as a `BinOp` in MIR
Update: most of this OP was written months ago. See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for the recent progress that made this ready for review.
---
There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparisons, bit-ops, and branches. Those differences are irrelevant at the Rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:
1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and Cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.
Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic. Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical. Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)? But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)? And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers. Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.
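To make that design space concrete, here is a minimal sketch (mine, not from the PR) of two of those equivalent source-level formulations; the function names are illustrative:

```rust
use std::cmp::Ordering;

// Illustrative only: two of the many ways to three-way-compare integers.
// Both compute the same `Ordering`; which lowers to the best machine code
// depends on the backend's pattern matching.
fn cmp_branches(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a > b {
        Ordering::Greater
    } else {
        Ordering::Equal
    }
}

// The subtraction trick (#105840): `(a > b) as i8 - (a < b) as i8`
// yields -1/0/+1, which happen to match `Ordering`'s discriminants.
fn cmp_sub(a: i32, b: i32) -> Ordering {
    match (a > b) as i8 - (a < b) as i8 {
        -1 => Ordering::Less,
        0 => Ordering::Equal,
        _ => Ordering::Greater,
    }
}
```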
As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.
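For a sense of scale, consider the field-by-field chaining that a derived `Ord` performs. This is a hand-written sketch of the shape of that code, not the exact `derive` expansion; with `BinOp::Cmp`, each `cmp` call below becomes a single MIR operation instead of an inlined tree of comparisons and branches:

```rust
use std::cmp::Ordering;

struct Pair {
    a: u32,
    b: u32,
}

impl Pair {
    // Sketch of the lexicographic chaining a derived `Ord` produces:
    // compare the first field, and only fall through to the next field
    // when the earlier ones are equal.
    fn cmp_like_derive(&self, other: &Self) -> Ordering {
        match self.a.cmp(&other.a) {
            Ordering::Equal => self.b.cmp(&other.b),
            ord => ord,
        }
    }
}
```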