Inline and remove `place_contents_drop_state_cannot_differ`.
It has a single call site and is hot enough to be worth inlining. And make sure `is_terminal_path` is inlined, too.
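A minimal sketch of the shape of the change (hypothetical helper names, not the actual compiler code): the single-call-site helper's body moves into its caller so the helper can be deleted, and the remaining small hot function is marked `#[inline]`.
```rust
// Hypothetical names for illustration only.
#[inline]
fn is_terminal(path: &[u32]) -> bool {
    path.len() == 1
}

fn drop_state_cannot_differ(path: &[u32]) -> bool {
    // The body of the former single-call-site helper is written directly
    // here, so the helper itself can be deleted.
    path.is_empty() || is_terminal(path)
}

fn main() {
    assert!(drop_state_cannot_differ(&[]));
    assert!(drop_state_cannot_differ(&[0]));
    assert!(!drop_state_cannot_differ(&[0, 1]));
}
```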
r? `@ghost`
Point out the type of associated types in every method call of iterator chains
Partially address #105184 by pointing out the type of associated types in every method call of iterator chains:
```
note: the expression is of type `Map<std::slice::Iter<'_, {integer}>, [closure@src/test/ui/iterators/invalid-iterator-chain.rs:12:18: 12:21]>`
--> src/test/ui/iterators/invalid-iterator-chain.rs:12:14
|
10 | vec![0, 1]
| ---------- this expression has type `Vec<{integer}>`
11 | .iter()
| ------ associated type `std::iter::Iterator::Item` is `&{integer}` here
12 | .map(|x| { x; })
| ^^^^^^^^^^^^^^^ associated type `std::iter::Iterator::Item` is `()` here
```
We also reduce the number of impls we mention when any of the candidates is an "exact match". This greatly improves the output for cases involving numeric types.
Outstanding work would be to provide a structured suggestion for appropriate changes, like in this case detecting the spurious `;` in the closure.
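For reference, a snippet along these lines (assumed from the test path in the output above, not the exact test file) triggers the note; it intentionally fails to compile because the spurious `;` makes the closure return `()`:
```rust
fn main() {
    // The trailing `;` in the closure body turns `Iterator::Item` into `()`,
    // so the final `sum` call has no summable item type and compilation fails.
    let x: i32 = vec![0, 1]
        .iter()
        .map(|x| { x; }) // removing the `;` here fixes the chain
        .sum();
    println!("{x}");
}
```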
Use struct types during codegen in fewer places
This makes it easier to use cg_ssa from a backend like Cranelift that doesn't have any struct types at all. After this PR struct types are still used for function arguments and return values. Removing those usages is harder but should still be doable.
Remove `token::Lit` from `ast::MetaItemLit`.
Currently `ast::MetaItemLit` represents the literal kind twice. This PR removes that redundancy. Best reviewed one commit at a time.
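A minimal before/after sketch of the kind of redundancy being removed (field and type names are illustrative, not the exact rustc definitions):
```rust
#![allow(dead_code)]

// Illustrative only: the literal's kind used to be recorded twice, once
// inside the embedded token literal and once as a structured `kind` field.
enum TokenLitKind { Bool, Int, Str }

struct TokenLit {
    kind: TokenLitKind,
    symbol: String,
    suffix: Option<String>,
}

enum LitKind { Bool(bool), Int(u128), Str(String) }

// Before: `token_lit.kind` and `kind` both describe what kind of literal
// this is, so the information is stored twice.
struct MetaItemLitBefore {
    token_lit: TokenLit,
    kind: LitKind,
}

// After: keep only the token pieces that `kind` does not already imply.
struct MetaItemLitAfter {
    symbol: String,
    suffix: Option<String>,
    kind: LitKind,
}

fn main() {}
```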
r? `@petrochenkov`
Fix lint perf regressions
#104863 caused small but widespread regressions in lint performance. I tried to improve things in #105291 and #105416 with minimal success, before fully understanding what caused the regression. This PR effectively reverts all of #105291 and part of #104863 to fix the perf regression.
r? `@cjgillot`
Make encode_info_for_trait_item use queries instead of accessing the HIR
This change avoids accessing the HIR in `encode_info_for_trait_item` and uses queries instead. We will need to execute this function for items that have no HIR, and by using queries we will be able to feed results for definitions that have no HIR.
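A generic analogy of the difference, using toy types rather than rustc's actual query system: a result looked up through a query-like layer can be fed for definitions that have no HIR, while a direct HIR walk cannot.
```rust
use std::collections::HashMap;

// Toy stand-ins, not rustc types.
#[derive(Clone)]
struct TraitItemInfo {
    name: String,
    has_default: bool,
}

struct Tcx {
    hir: HashMap<u32, TraitItemInfo>, // definitions that have HIR
    fed: HashMap<u32, TraitItemInfo>, // fed results for definitions without HIR
}

impl Tcx {
    // The "query": serves fed values and HIR-derived values uniformly.
    fn trait_item_info(&self, def_id: u32) -> TraitItemInfo {
        self.fed
            .get(&def_id)
            .or_else(|| self.hir.get(&def_id))
            .cloned()
            .expect("no info recorded for def_id")
    }

    fn encode_info_for_trait_item(&self, def_id: u32) -> String {
        // Encoding via the query works even when `def_id` has no HIR entry.
        let info = self.trait_item_info(def_id);
        format!("{} (default: {})", info.name, info.has_default)
    }
}

fn main() {
    let mut tcx = Tcx { hir: HashMap::new(), fed: HashMap::new() };
    tcx.hir.insert(1, TraitItemInfo { name: "from_hir".into(), has_default: false });
    tcx.fed.insert(2, TraitItemInfo { name: "fed_only".into(), has_default: true });
    println!("{}", tcx.encode_info_for_trait_item(1));
    println!("{}", tcx.encode_info_for_trait_item(2));
}
```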
r? `@oli-obk`
This commit partly undoes #104863, which combined the builtin lints pass
with other lints. This caused a slowdown, because often there are no
other lints, and it's faster to do a pass with a single lint directly
than it is to do a combined pass with a `passes` vector containing a
single lint.
I removed the runtime-combined lint passes in #105291, and subsequently
learned they are necessary for performance.
This commit reinstates them with the new and more descriptive names
`RuntimeCombined{Early,Late}LintPass`, similar to the existing passes
like `BuiltinCombinedEarlyLintPass`. It also adds some comments,
particularly emphasising how we have ways to combine passes at both
compile-time and runtime. And it moves some comments around.
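A generic analogy of the two combination strategies (toy traits, not the actual rustc lint infrastructure): a compile-time combined pass runs its checks directly, while a runtime combined pass pays for a loop and dynamic dispatch even when its `passes` vector holds a single lint.
```rust
trait Pass {
    fn check(&self, input: &str);
}

struct PassA;
impl Pass for PassA {
    fn check(&self, input: &str) {
        if input.is_empty() {
            println!("A: empty input");
        }
    }
}

// Compile-time combination: one concrete pass whose body runs every check
// directly, with no per-pass loop or dynamic dispatch.
struct CombinedPass;
impl Pass for CombinedPass {
    fn check(&self, input: &str) {
        PassA.check(input);
        // ...other built-in checks would run directly here...
    }
}

// Runtime combination: flexible, but a `passes` vector that holds a single
// pass still pays for the loop and the dynamic dispatch on every call.
struct RuntimeCombinedPass {
    passes: Vec<Box<dyn Pass>>,
}
impl Pass for RuntimeCombinedPass {
    fn check(&self, input: &str) {
        for pass in &self.passes {
            pass.check(input);
        }
    }
}

fn main() {
    CombinedPass.check("");
    let passes: Vec<Box<dyn Pass>> = vec![Box::new(PassA)];
    RuntimeCombinedPass { passes }.check("");
}
```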
When encountering an unmet obligation that affects a method chain, like
in iterator chains where one of the links has the wrong associated
type, we point at every method call and mention its evaluated
associated type at that point, giving the user context on where
expectations diverged from the code as written.
```
note: the expression is of type `Map<std::slice::Iter<'_, {integer}>, [closure@$DIR/invalid-iterator-chain.rs:12:18: 12:21]>`
--> $DIR/invalid-iterator-chain.rs:12:14
|
LL | vec![0, 1]
| ---------- this expression has type `Vec<{integer}>`
LL | .iter()
| ------ associated type `std::iter::Iterator::Item` is `&{integer}` here
LL | .map(|x| { x; })
| ^^^^^^^^^^^^^^^ associated type `std::iter::Iterator::Item` is `()` here
```
Don't internalize __llvm_profile_counter_bias
Currently, LLVM profiling runtime counter relocation cannot be used by rust during LTO because symbols are being internalized before all symbol information is known.
This mode makes LLVM emit a __llvm_profile_counter_bias symbol which is referenced by the profiling initialization, which itself is pulled in by the rust driver here [1].
It is enabled with -Cllvm-args=-runtime-counter-relocation for platforms that opt in to this mode, like Linux. On those platforms there will be no link error, just surprising behavior for a user who requests runtime counter relocation: the profiling runtime will not see that symbol and will carry on as if it were never there. On Fuchsia, however, the profiling runtime must have this symbol, so internalizing it causes a hard link error.
As an aside, I don't have enough context as to why rust's LTO model is the way it is. AFAICT, the internalize pass is only safe to run at link time when all symbol information is actually known, this being an example of why. I think special-casing this symbol as a known one that LLVM can emit, and which should not have its visibility de-escalated, should be fine given how seldom this pattern of defining an undefined symbol to get initialization code pulled in is used. From a quick grep, __llvm_profile_runtime is the only symbol that rustc does this for.
[1] 0265a3e93b/compiler/rustc_codegen_ssa/src/back/linker.rs (L598)
compiler: remove unnecessary imports and qualified paths
Some of these imports were necessary before Edition 2021; others were already in the prelude.
I hope it's fine that this PR is so spread-out across files :/
Some method confirmation code nits
1. Make some pick methods take `&self` instead of `&mut self` to avoid some cloning (see the sketch after this list)
2. Pass some values by reference to avoid some cloning
3. Rename a few variables here and there
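A generic sketch of points 1 and 2 (hypothetical types, not the actual probe/confirmation code): taking `&self` and borrowing the inputs means callers no longer have to clone just to satisfy the signatures.
```rust
#[derive(Clone)]
struct Pick {
    name: String,
}

struct Probe {
    picks: Vec<Pick>,
}

impl Probe {
    // If this took `&mut self` (or took `Pick` by value), callers holding
    // only shared borrows would have to clone the probe or the pick first.
    fn describe(&self, pick: &Pick) -> String {
        format!("{} out of {} picks", pick.name, self.picks.len())
    }
}

fn main() {
    let probe = Probe { picks: vec![Pick { name: "by_ref".into() }] };
    let first = &probe.picks[0];
    println!("{}", probe.describe(first));
}
```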
Fix invalid codegen during debuginfo lowering
In order for LLVM to correctly generate debuginfo for msvc, we sometimes need to spill arguments to the stack and perform some direct & indirect offsets into the value. Previously, this code always performed those actions, even when not required as LLVM would clean it up during optimization.
However, when MIR inlining is enabled, this can cause problems as the operations occur prior to the spilled value being initialized. To solve this, we first calculate the necessary offsets using just the type, which is side-effect free and does not alter the LLVM IR. Then, if we are in a situation that requires us to generate the LLVM IR (this only occurs for arguments, not local variables), we perform the same calculation again, this time generating the appropriate LLVM IR as we go.
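A minimal sketch of the two-pass approach described above (toy types, not the actual rustc_codegen_llvm code): the first walk computes offsets from layout information only, and the IR-emitting walk runs only when it is actually needed.
```rust
// Hypothetical types for illustration.
struct Layout {
    field_offsets: Vec<usize>,
}

struct IrBuilder {
    instructions: Vec<String>,
}

impl IrBuilder {
    // Emitting IR is the side effect we want to avoid during the dry run.
    fn emit_field_access(&mut self, offset: usize) {
        self.instructions.push(format!("getelementptr +{offset}"));
    }
}

// Pass 1: compute the offset from layout information alone; no IR is touched.
fn offset_of(layout: &Layout, projection: &[usize]) -> usize {
    projection.iter().map(|&field| layout.field_offsets[field]).sum()
}

// Pass 2: repeat the walk, emitting IR for each step; only called when the
// value really needs to be spilled (arguments, not local variables).
fn emit_projection(builder: &mut IrBuilder, layout: &Layout, projection: &[usize]) -> usize {
    let mut total = 0;
    for &field in projection {
        let offset = layout.field_offsets[field];
        builder.emit_field_access(offset);
        total += offset;
    }
    total
}

fn main() {
    let layout = Layout { field_offsets: vec![0, 4, 8] };
    let projection = [2, 1];
    // Dry run: side-effect free.
    let offset = offset_of(&layout, &projection);
    // Emit only when required, and check both walks agree.
    let mut builder = IrBuilder { instructions: Vec::new() };
    assert_eq!(emit_projection(&mut builder, &layout, &projection), offset);
    println!("offset = {offset}, ir = {:?}", builder.instructions);
}
```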
r? `@tmiasko` but feel free to reassign if you want 🙂

Fixes #105386