Implement `alloc::vec::IsZero` for `Option<$NUM>` types
Fixes #106911
Mirrors the `NonZero$NUM` implementations with an additional `assert_zero_valid`.
`None::<i32>` doesn't strictly satisfy `IsZero`, but for the purpose of allocation it lets us produce more efficient codegen.
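As a rough illustration of the shape of these impls (a hypothetical sketch using a stand-in trait, since `alloc::vec::IsZero` is private and the real impls additionally guard the claim with `assert_zero_valid` as described above):

```rust
// Hypothetical sketch only: `IsZero` here stands in for the private
// `alloc::vec::IsZero` trait; the real impls also use `assert_zero_valid`.
trait IsZero {
    fn is_zero(&self) -> bool;
}

macro_rules! impl_is_zero_option_of_num {
    ($($t:ty,)+) => {$(
        impl IsZero for Option<$t> {
            #[inline]
            fn is_zero(&self) -> bool {
                // Treating `None` as "zero" lets `vec![None::<$t>; n]` be served
                // from a zeroed allocation instead of writing every element.
                self.is_none()
            }
        }
    )+};
}

impl_is_zero_option_of_num!(i8, i16, i32, i64, i128, u8, u16, u32, u64, u128,);

fn main() {
    assert!(None::<i32>.is_zero());
    assert!(!Some(0_i32).is_zero());
}
```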
Various cleanups around pre-TyCtxt queries and functions
part of #105462
based on https://github.com/rust-lang/rust/pull/106776 (everything starting at [0e2b39f](0e2b39fd1f) is new in this PR)
r? `@petrochenkov`
I think this should be most of the uncontroversial part of #105462.
Rollup of 8 pull requests
Successful merges:
- #105796 (rustdoc: simplify JS search routine by not messing with lev distance)
- #106753 (Make sure that RPITITs are not considered suggestable)
- #106917 (Encode const mir for closures if they're const)
- #107004 (Implement some candidates for the new solver (redux))
- #107023 (Stop using `BREAK` & `CONTINUE` in compiler)
- #107030 (Correct typo)
- #107042 (rustdoc: fix corner cases with "?" JS keyboard command)
- #107045 (rustdoc: remove redundant CSS rule `#settings .setting-line`)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
rustdoc: remove redundant CSS rule `#settings .setting-line`
Since the current version of settings.js always nests things below a div with ID `settings`, this rule always overrode the one above.
Stop using `BREAK` & `CONTINUE` in compiler
Switching them to `Break(())` and `Continue(())` instead.
Entirely search-and-replace, though there's one spot where rustfmt insisted on a reformatting too.
libs-api would like to remove these constants (https://github.com/rust-lang/rust/pull/102697#issuecomment-1385705202), so stop using them in compiler to make the removal PR later smaller.
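For illustration, the mechanical shape of the change (the compiler-internal call sites differ, but the pattern is the same):

```rust
use std::ops::ControlFlow;

// Before: the unstable associated constants that libs-api wants to remove.
//     if done { ControlFlow::BREAK } else { ControlFlow::CONTINUE }
// After: spell out the `()` payload explicitly.
fn step(done: bool) -> ControlFlow<()> {
    if done { ControlFlow::Break(()) } else { ControlFlow::Continue(()) }
}

fn main() {
    assert_eq!(step(true), ControlFlow::Break(()));
    assert_eq!(step(false), ControlFlow::Continue(()));
}
```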
Implement some candidates for the new solver (redux)
Based on #106718, so the diff is hard to read without it. See [here](98700cf481...compiler-errors:rust:new-solver-new-candidates-2) for an easier view until that one lands.
Of note:
* 44af916020fb43c12070125c45b6dee4ec303bbc fixes a bug where we need to make the query response *inside* of a probe, or else we make no inference progress (I think)
* 50daad5acd2f163d03e7ffab942534f09bc36e2e implements `consider_assumption` for traits and predicates. I'm not sure if using `sup` here is necessary or if `eq` is fine.
* We decided that all of the `instantiate_constituent_tys_for_*` functions are verbose but acceptable, since they need to be exhaustive and the logic isn't similar enough across them to be worth unifying, right?
r? ``@lcnr``
rustdoc: simplify JS search routine by not messing with lev distance
Since the sorting function accounts for an `index` field, there's not much reason to also be applying changes to the levenshtein distance. Instead, we can just not treat `lev` as a filter if there's already a non-sentinel value for `index`.
<details>
This change gives slightly more weight to the index and path part, as search criteria, than it used to. This changes some of the test cases, but not in any obviously-"worse" way, and, in particular, substring matches are a bigger deal than levenshtein distances (we're assuming that a typo is less likely than someone just not typing the entire name).
The biggest change is the addition of a `path_lev` field to result items. It's always zero when the search query has no parent path part, and for type queries, which makes the corresponding check in the `sortResults` function a no-op in those cases. When a parent path part is present, `path_lev` is used to give the parent path and the tail different precedence.
Consider the query `hashset::insert`, a test case [that already exists and can be found here](5c6a1681a9/src/test/rustdoc-js-std/path-ordering.js). We want the ordering shown in the test case:
```
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'insert' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert_with' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert_owned' },
{ 'path': 'std::collections::hash_map::HashMap', 'name': 'insert' },
```
We do not want this ordering, which is the ordering that would occur if substring position took priority over `path_lev`:
```
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'insert' },
{ 'path': 'std::collections::hash_map::HashMap', 'name': 'insert' }, // BAD
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert_with' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert_owned' },
```
We also do not want `HashSet::iter` to appear before `HashMap::insert`, which is what would happen if `path_lev` took priority over the appearance of any substring match. This is why the `sortResults` function has `path_lev` sandwiched between an `index < 0` check and an `index` comparison check:
```
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'insert' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert_with' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'get_or_insert_owned' },
{ 'path': 'std::collections::hash_set::HashSet', 'name': 'iter' }, // BAD
{ 'path': 'std::collections::hash_map::HashMap', 'name': 'insert' },
```
The old code implemented a similar feature by manipulating the `lev` member based on whether a substring match was found and by averaging in the path distance (`item.lev = name_lev + path_lev / 10`), so the path lev wound up acting like a tie breaker. The new approach gives slightly different results for `Vec::new`, [changing the test case](https://github.com/rust-lang/rust/pull/105796/files#diff-b346e2ef72a407915f438063c8c2c04f7a621df98923d441b41c0312211a5b21) because of the slight changes to ordering priority.
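To make the precedence concrete, here is a hypothetical sketch of that ordering expressed as a Rust comparator (the actual code is JavaScript in rustdoc's `search.js` and has more tie-breakers; the field names just follow the description above):

```rust
use std::cmp::Ordering;

// Hypothetical stand-in for a search result entry.
struct ResultItem {
    index: isize,    // substring-match position of the tail, or -1 for no match
    path_lev: usize, // edit distance of the parent path part (0 if no path part)
    lev: usize,      // edit distance of the tail
}

fn sort_results(a: &ResultItem, b: &ResultItem) -> Ordering {
    // 1. Anything with a substring match beats anything without one.
    (a.index < 0).cmp(&(b.index < 0))
        // 2. Then a closer parent path wins (`hash_set::` beats `hash_map::`).
        .then(a.path_lev.cmp(&b.path_lev))
        // 3. Then an earlier substring match wins (`insert` beats `get_or_insert`).
        .then(a.index.cmp(&b.index))
        // 4. Only then does the tail edit distance break ties.
        .then(a.lev.cmp(&b.lev))
}

fn main() {
    // Closer parent path sorts first, even with identical tail matches.
    let hash_set_insert = ResultItem { index: 0, path_lev: 0, lev: 0 };
    let hash_map_insert = ResultItem { index: 0, path_lev: 1, lev: 0 };
    assert_eq!(sort_results(&hash_set_insert, &hash_map_insert), Ordering::Less);
}
```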
</details>
Based on https://github.com/rust-lang/rust/pull/103710#issuecomment-1296894296
Previews:
* https://notriddle.com/notriddle-rustdoc-demos/rustdoc-search-stop-doing-demerits/std/index.html
* https://notriddle.com/notriddle-rustdoc-demos/rustdoc-search-stop-doing-demerits-compiler/index.html
Revert "Improve heuristics whether `format_args` string is a source literal"
This reverts commit e6c02aad93 (from #106195).
Keeps the code improvements from the PR and the test (as a known-bug).
Works around #106408 while a proper fix is discussed more thoroughly in #106505, as proposed by `@tmandry`.
Reopens #106191
r? compiler-errors
This prevents some strange blur-event-related bugs with the "?" command
by ensuring that the focus remains in the same spot when the settings
area closes.
Do not filter substs in `remap_generic_params_to_declaration_params`.
The relevant filtering should have been performed by borrowck.
Fixes https://github.com/rust-lang/rust/issues/105826
r? types
Don't do pointer arithmetic on pointers to deallocated memory
`vec::Splice` can invalidate the `slice::Iter` inside `vec::Drain`, so we replace its pointers with dangling ones, which, unlike pointers to deallocated memory, are allowed.
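A minimal sketch of why dangling pointers are fine here (hypothetical helper, not the actual `Drain`/`Splice` code): a dangling but well-aligned, non-null pointer is a valid base for a zero-length slice, so no pointer arithmetic ever touches freed memory.

```rust
use std::ptr::NonNull;
use std::slice;

// Hypothetical helper: build an empty `slice::Iter` from a dangling pointer.
fn empty_iter<'a, T>() -> slice::Iter<'a, T> {
    // A dangling, well-aligned, non-null pointer is valid for a zero-length slice.
    let empty: &'a [T] =
        unsafe { slice::from_raw_parts(NonNull::<T>::dangling().as_ptr(), 0) };
    empty.iter()
}

fn main() {
    let mut it = empty_iter::<u64>();
    assert!(it.next().is_none());
}
```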
Fixes miri test failures.
Fixes https://github.com/rust-lang/miri/issues/2759
finish trait solver skeleton work
### 648d661b4e0fcf55f7082894f577377eb451db4b
The previous implementation didn't remove provisional cache entries that depended on the current goal when we're forced to rerun because the provisional result of that goal differs from the new result. For reference, see https://rust-lang.github.io/chalk/book/recursive/search_graph.html.
We should also treat inductive cycles as overflow, not ordinary ambiguity.
### 219a5de2517cebfe20a2c3417bd302f7c12db70c 6a1912be539dd5a3b3c10be669787c4bf0c1868a
These two commits move canonicalization to the start of the queries, which simplifies a bunch of stuff. I originally intended to keep things canonicalized for a while because I expected us to add additional caches to the trait solver, either for candidate assembly or for projections. We ended up not adding (and expect to not need) any of them, so this just ends up being easier to understand.
### d78d5ad0979e965afde6500bccfa119b47063506
adds a special `eq` for the solver which doesn't care about obligations or spans
### 18704e6a78b7703e1bbb3856f015cb76c0a07a06
implements https://rust-lang.zulipchat.com/#narrow/stream/364551-t-types.2Ftrait-system-refactor/topic/projection.20cache
r? `@compiler-errors`
Lift `T: Sized` bounds from some `strict_provenance` pointer methods
This PR removes the requirement that `T` (the pointee type) be `Sized` in order to call `pointer::{addr, expose_addr, with_addr, map_addr}`. These functions don't use `T`'s size, so there is no reason for them to require it. Updated public API: these methods now accept unsized pointee types (`T: ?Sized`).
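A quick illustration of the effect (a hypothetical example, not taken from the PR; it assumes a nightly toolchain that includes this change, since these methods were still unstable at the time):

```rust
// These methods were gated behind `strict_provenance` when this PR landed.
#![feature(strict_provenance)]

fn main() {
    let bytes = [1u8, 2, 3];
    // An unsized pointee: `*const [u8]` now works with the strict-provenance helpers.
    let p: *const [u8] = &bytes[..];
    let addr = p.addr();
    // `with_addr`/`map_addr` keep both provenance and slice metadata.
    let q = p.with_addr(addr).map_addr(|a| a);
    assert_eq!(q.addr(), p.addr());
}
```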
cc `@Gankra`, #95228
r? libs-api
Add heapsort fallback in `select_nth_unstable`
Addresses #102451 and #106933.
`slice::select_nth_unstable` uses a quickselect implementation based on the same pattern-defeating quicksort algorithm that `slice::sort_unstable` uses. `slice::sort_unstable` uses a recursion limit and falls back to heapsort if there were too many bad pivot choices, ensuring an O(n log n) worst case running time (this combination is known as introsort). However, `slice::select_nth_unstable` has no such fallback strategy, which leaves it with a worst case running time of O(n²) instead. #102451 links to a playground which generates pathological inputs that show this quadratic behavior. On my machine, a randomly generated slice of length `1 << 19` takes ~200µs to calculate its median, whereas a pathological input of the same length takes over 2.5s.

This PR adds an iteration limit to `select_nth_unstable`, falling back to heapsort, which ensures an O(n log n) worst case running time (introselect). With this change there was no noticeable slowdown for the random input, but the same pathological input now takes only ~1.2ms.

In the future it might be worth implementing something like Median of Medians or Fast Deterministic Selection instead, which guarantee O(n) running time for all possible inputs. I've left this as a `FIXME` for now and only implemented the heapsort fallback to minimize the needed code changes.
I still think we should clarify in the `select_nth_unstable` docs that the worst case running time isn't currently O(n) (the original reason that #102451 was opened), but I think it's a lot better to be able to guarantee O(n log n) instead of O(n²) for the worst case.
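To make the fallback strategy concrete, here is a minimal, hypothetical sketch of the introselect pattern (not the actual library code, which chooses pivots far more carefully and falls back to a real heapsort rather than a full sort):

```rust
// Quickselect with a limit on partitioning rounds; once the limit is hit,
// fall back to `sort_unstable` as a stand-in for the heapsort fallback.
fn select_nth<T: Ord>(v: &mut [T], nth: usize) {
    assert!(nth < v.len());
    // Allow roughly 2 * log2(len) rounds before giving up on quickselect.
    let mut limit = 2 * (usize::BITS - v.len().leading_zeros());
    let (mut lo, mut hi) = (0usize, v.len());
    while hi - lo > 1 {
        if limit == 0 {
            // Pathological pivots: fall back to an O(n log n) algorithm.
            v[lo..hi].sort_unstable();
            return;
        }
        limit -= 1;
        // Naive last-element pivot with a Lomuto partition (for brevity only).
        let pivot = hi - 1;
        let mut store = lo;
        for i in lo..pivot {
            if v[i] < v[pivot] {
                v.swap(i, store);
                store += 1;
            }
        }
        v.swap(store, pivot);
        match nth.cmp(&store) {
            std::cmp::Ordering::Equal => return,
            std::cmp::Ordering::Less => hi = store,
            std::cmp::Ordering::Greater => lo = store + 1,
        }
    }
}

fn main() {
    let mut data = vec![9, 1, 8, 2, 7, 3, 6, 4, 5];
    let mid = data.len() / 2;
    select_nth(&mut data, mid);
    assert_eq!(data[mid], 5); // the median is now in place
}
```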