Use `append` instead of `extend(drain(..))`
The first commit adds `IndexVec::append` that forwards to `Vec::append`, and uses it in a couple places.
The second commit updates `indexmap` for its new `IndexMap::append`, and also uses that in a couple places.
These changes are similar to what [`clippy::extend_with_drain`](https://rust-lang.github.io/rust-clippy/master/index.html#/extend_with_drain) would suggest, just for other collection types.
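For plain `Vec`s, the pattern the lint describes looks like this (a minimal sketch, not the rustc-internal `IndexVec`/`IndexMap` code):

```rust
fn main() {
    let mut dst = vec![1, 2, 3];
    let mut src = vec![4, 5, 6];

    // What `clippy::extend_with_drain` warns about: drain `src` element by
    // element and push each one onto `dst`.
    // dst.extend(src.drain(..));

    // Preferred: move the elements over in one operation; `src` is left empty.
    dst.append(&mut src);

    assert_eq!(dst, [1, 2, 3, 4, 5, 6]);
    assert!(src.is_empty());
}
```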
add test ensuring simd codegen checks don't run when a static assertion failed
stdarch relies on this to ensure that SIMD indices are in bounds.
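For reference, the kind of "static assertion" involved looks roughly like this (a minimal sketch using an inline `const` block, not the actual stdarch code):

```rust
// Sketch only: stdarch guards its SIMD intrinsics with assertions like this,
// so an out-of-bounds lane index fails to evaluate and the function body
// (with its SIMD codegen checks) is never codegenned at all.
fn extract_lane<const LANE: usize>(v: [u32; 4]) -> u32 {
    const { assert!(LANE < 4, "lane index out of bounds") };
    v[LANE]
}

fn main() {
    assert_eq!(extract_lane::<2>([10, 20, 30, 40]), 30);
    // `extract_lane::<9>(...)` would fail the inline-const assertion at
    // compile time: a "required-const" that does not evaluate.
}
```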
I would love to know why this works, but I can't figure out where codegen decides to not codegen a function if a required-const does not evaluate. `@oli-obk` `@bjorn3` do you have any idea?
rustc_monomorphize: fix outdated comment in partition
`max_cgu_count` was removed in 51821515b3, but the comment was not updated (its usage in `merge_codegen_units` was removed earlier).
r? `@nnethercote`
Use `bool` instead of `PartialOrd` as return value of the comparison closure in `{slice,Iterator}::is_sorted_by`
Changes the function signature of the closure given to `{slice,Iterator}::is_sorted_by` to return a `bool` instead of a `PartialOrd`, as suggested by the libs-api team here: https://github.com/rust-lang/rust/issues/53485#issuecomment-1766411980.
This means these functions now return `true` if the closure returns `true` for every pair of consecutive values.
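A small sketch of the new closure shape (both methods were unstable behind the `is_sorted` feature gate at the time; on a recent stable toolchain the snippet runs as-is):

```rust
fn main() {
    let v = [1, 2, 2, 9];

    // Old style: the closure returned something like `a.partial_cmp(b)`.
    // New style: it answers "is this pair in order?" with a bool.
    assert!(v.is_sorted_by(|a, b| a <= b));
    assert!(v.iter().is_sorted_by(|a, b| a <= b));

    // A strict comparison fails on the repeated `2`s.
    assert!(!v.is_sorted_by(|a, b| a < b));
}
```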
Fix cases where std accidentally relied on inline(never)
This PR increases the power of `-Zcross-crate-inline-threshold=always` so that it applies through `#[inline(never)]`. Note that although this is called "cross-crate inlining", in this case especially it is _just_ lazy per-CGU codegen. The MIR inliner and LLVM still respect the attribute as much as they ever have.
Trying to bootstrap with the new `-Zcross-crate-inline-threshold=always` change revealed two bugs:
We have special intrinsics `assert_inhabited`, `assert_zero_valid`, and `assert_mem_uninitialized_valid`, which codegen backends lower either to nothing or to a call to `panic_nounwind`. Since the MIR may contain no call to `panic_nounwind` yet we emit one anyway, we need to specially tell `MirUsedCollector` about this situation.
`#[lang = "start"]` is special-cased already so that `MirUsedCollector` will collect it, but then when we make it cross-crate-inlinable it is only assigned to a CGU based on whether `MirUsedCollector` saw a call to it, which of course we didn't.
---
I started looking into this because https://github.com/rust-lang/rust/pull/118683 revealed a case where we were accidentally relying on a function being `#[inline(never)]`, and cranking up cross-crate-inlinability seems like a way to find other situations like that.
r? `@nnethercote` because I don't like what I'm doing to the CGU partitioning code here but I can't come up with something much better
This query has a name that sounds general-purpose, but in fact it has
coverage-specific semantics, and (fortunately) is only used by coverage code.
Because it is only ever called once (from one designated CGU), it doesn't need
to be a query, and we can change it to a regular function instead.
This fixes the codegen test changes caused when effect params are added to
libcore, by not attempting to monomorphize functions that receive the host
param by virtue of being `const fn`.
Tweak CGU sorting in a couple of places.
In `base.rs`, tweak how the CGU size interleaving works. Since #113777, it's much more common to have multiple CGUs with identical sizes. With the existing code these same-sized items ended up in the opposite-to-desired order due to the stable sorting. The code now starts with a reverse sort (like is done in `partitioning.rs`) which gives the behaviour we want. This doesn't matter much for perf, but makes profiles in `samply` look more like what we expect.
In `partitioning.rs`, we can use `sort_by_key` instead of `sort_by_cached_key` because `CGU::size_estimate()` is cheap. (There is an identical CGU sort earlier in that function that already uses `sort_by_key`.)
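Both points can be illustrated with a standalone sketch (hypothetical data, not the rustc types): a descending sort via `cmp::Reverse`, using plain `sort_by_key` because the key is cheap to extract:

```rust
use std::cmp::Reverse;

fn main() {
    // (name, size estimate) pairs standing in for CGUs.
    let mut cgus = vec![("a", 9030), ("b", 11791), ("c", 9030), ("d", 7254)];

    // Reverse sort by size: largest first. The sort is stable, so
    // same-sized entries keep their relative order.
    // `sort_by_key` is enough here; `sort_by_cached_key` only pays off when
    // computing the key is expensive.
    cgus.sort_by_key(|&(_, size)| Reverse(size));

    assert_eq!(cgus, [("b", 11791), ("a", 9030), ("c", 9030), ("d", 7254)]);
}
```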
r? `@pnkfelix`
Instead of repeatedly merging the two smallest CGUs, we now use a
merging algorithm that aims to minimize the duplication of inlined
functions.
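The exact heuristic lives in `partitioning.rs`; the following is only a minimal sketch of the general idea (hypothetical types, greedy pairwise merging by inlined-item overlap), not the actual rustc implementation:

```rust
use std::collections::HashSet;

// A stand-in for a CGU: just the set of inlined items placed in it.
struct Cgu {
    inlined: HashSet<String>,
}

// Greedily merge CGUs until only `target` remain, always merging the pair
// whose inlined-item sets overlap the most, so each merge removes as many
// duplicate copies as possible.
fn merge_cgus(mut cgus: Vec<Cgu>, target: usize) -> Vec<Cgu> {
    while cgus.len() > target && cgus.len() > 1 {
        let (mut best, mut best_overlap) = ((0, 1), 0);
        for i in 0..cgus.len() {
            for j in (i + 1)..cgus.len() {
                let overlap = cgus[i].inlined.intersection(&cgus[j].inlined).count();
                if overlap >= best_overlap {
                    best = (i, j);
                    best_overlap = overlap;
                }
            }
        }
        // Merge j into i: the shared inlined items are now placed once, not twice.
        let (i, j) = best;
        let removed = cgus.swap_remove(j);
        cgus[i].inlined.extend(removed.inlined);
    }
    cgus
}

fn main() {
    let cgu = |items: &[&str]| Cgu {
        inlined: items.iter().map(|s| s.to_string()).collect(),
    };
    let merged = merge_cgus(vec![cgu(&["f", "g"]), cgu(&["f", "h"]), cgu(&["x"])], 2);
    // The two CGUs sharing `f` were merged, so `f` is now placed only once.
    assert_eq!(merged.iter().map(|c| c.inlined.len()).sum::<usize>(), 4);
}
```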
`exa-0.10.1` was one benchmark that saw particularly good results. The
old CGU stats:
```
INTERNALIZE
- unique items: 2774 (1216 root + 1558 inlined), unique size: 122065 (77219 root + 44846 inlined)
- placed items: 3834 (1216 root + 2618 inlined), placed size: 154552 (77219 root + 77333 inlined)
- placed/unique items ratio: 1.38, placed/unique size ratio: 1.27
- CGUs: 16, mean size: 9659.5, sizes: [11791, 11634, 11173, 10987, 10939, 10507, 9992, 9813, 9593, 9580, 9030, 8447, 7975, 7961, 7876, 7254]
```
The new CGU stats:
```
INTERNALIZE
- unique items: 2774 (1216 root + 1558 inlined), unique size: 122065 (77219 root + 44846 inlined)
- placed items: 3626 (1216 root + 2410 inlined), placed size: 147201 (77219 root + 69982 inlined)
- placed/unique items ratio: 1.31, placed/unique size ratio: 1.21
- CGUs: 16, mean size: 9200.1, sizes: [11634, 10939, 10227, 9555, 9178, 9167, 8879, 8804, 8604, 8603 (x3), 8602 (x2), 8601, 8600]
```
The difference is in the number of inlined items. There are 1558 unique
inlined items. With the old algorithm these were placed 2618 times,
resulting in 1060 duplicates. With the new algorithm these were placed
2410 times, resulting in 852 duplicates. Also, the mean CGU size dropped
from 9659.5 to 9200.1, and the CGU size distribution tightened, with the
biggest one a little smaller and the smallest ones a little bigger.
They're quite rare, and ignoring them simplifies things quite a bit; it also
reduces the number of calls to `MonoItem::size_estimate` to the number of
placed items (one per root item, and one or more per reachable inlined item).
This means we call `MonoItem::size_estimate` (which involves a query)
less often: just once per mono item, and then once more per inline item
placement. After that we can reuse the stored value as necessary. As a result,
`CodegenUnit::compute_size_estimate` is cheaper.
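A minimal sketch of the caching idea (hypothetical types and field names, not rustc's actual data structures): compute each item's estimate once, store it, and make the per-CGU total a cheap sum:

```rust
use std::collections::HashMap;

struct MonoItemData {
    size_estimate: usize, // computed once when the item is placed, then reused
}

struct CodegenUnit {
    items: HashMap<String, MonoItemData>,
    size_estimate: usize,
}

impl CodegenUnit {
    // Cheap: sums the stored per-item estimates instead of re-running a
    // query for every item each time the CGU's size is needed.
    fn compute_size_estimate(&mut self) {
        self.size_estimate = self.items.values().map(|d| d.size_estimate).sum();
    }
}

fn main() {
    let mut cgu = CodegenUnit {
        items: HashMap::from([
            ("root_fn".to_string(), MonoItemData { size_estimate: 7 }),
            ("inlined_fn".to_string(), MonoItemData { size_estimate: 5 }),
        ]),
        size_estimate: 0,
    };
    cgu.compute_size_estimate();
    assert_eq!(cgu.size_estimate, 12);
}
```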