Inline a bunch of trivial conditions in parser
It is often the case that these small, conditional functions, when inlined, reveal notable optimization opportunities to LLVM. While saethlin has done a lot of good work on making these kinds of small functions not need `#[inline]` tags as much, being clearer about what we want inlined will get both the MIR opts and LLVM to pursue it more aggressively.
On local perf runs, this seems fruitful. Let's see what rust-timer says.
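For illustration, a minimal sketch (hypothetical code, not the actual `rustc_parse` functions touched here) of the shape of predicate this PR tags:

```rust
// Hypothetical example: a tiny, hot condition where an explicit `#[inline]`
// lets MIR opts and LLVM fold the comparison straight into the caller's branch.
#[derive(PartialEq)]
enum TokenKind {
    Eof,
    Comma,
}

struct Parser {
    token: TokenKind,
}

impl Parser {
    #[inline]
    fn check(&self, kind: TokenKind) -> bool {
        self.token == kind
    }
}

fn main() {
    let p = Parser { token: TokenKind::Comma };
    assert!(p.check(TokenKind::Comma));
    assert!(!p.check(TokenKind::Eof));
}
```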
r? `@ghost`
recursively evaluate the constants in everything that is 'mentioned'
This is another attempt at fixing https://github.com/rust-lang/rust/issues/107503. The previous attempt at https://github.com/rust-lang/rust/pull/112879 seems stuck in figuring out where the [perf regression](https://perf.rust-lang.org/compare.html?start=c55d1ee8d4e3162187214692229a63c2cc5e0f31&end=ec8de1ebe0d698b109beeaaac83e60f4ef8bb7d1&stat=instructions:u) comes from. In https://github.com/rust-lang/rust/pull/122258 I learned some things, which informed the approach this PR is taking.
Quoting from the new collector docs, which explain the high-level idea:
```rust
//! One important role of collection is to evaluate all constants that are used by all the items
//! which are being collected. Codegen can then rely on only encountering constants that evaluate
//! successfully, and if a constant fails to evaluate, the collector has much better context to be
//! able to show where this constant comes up.
//!
//! However, the exact set of "used" items (collected as described above), and therefore the exact
//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
//! a function call that uses a failing constant, so an unoptimized build may fail where an
//! optimized build succeeds. This is undesirable.
//!
//! To fix this, the collector has the concept of "mentioned" items. Some time during the MIR
//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
//! dead code and get optimized away (which makes them no longer "used"), they are still
//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
//! whether we are visiting a used item or merely a mentioned item.
//!
//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
//! need to stay in sync in the following sense:
//!
//! - For every item that the collector gathers that could eventually lead to build failure (most
//! likely due to containing a constant that fails to evaluate), a corresponding mentioned item
//! must be added. This should use the exact same strategy as the collector to make sure they are
//! in sync. However, while the collector works on monomorphized types, mentioned items are
//! collected on generic MIR -- so any time the collector checks for a particular type (such as
//! `ty::FnDef`), we have to just unconditionally add this as a mentioned item.
//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
//! would have done during regular MIR visiting. Basically you can think of the collector having
//! two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
//! literally separated by a call to `self.monomorphize`); the pre-monomorphization stage is
//! duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
//! `visit_mentioned_item`.
//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
//! its MIR traversal with exactly what mentioned item gathering would have added in the same
//! situation. This detects mentioned items that have *not* been optimized away and hence don't
//! need a dedicated traversal.
enum CollectionMode {
/// Collect items that are used, i.e., actually needed for codegen.
///
/// Which items are used can depend on optimization levels, as MIR optimizations can remove
/// uses.
UsedItems,
/// Collect items that are mentioned. The goal of this mode is that it is independent of
/// optimizations: the set of "mentioned" items is computed before optimizations are run.
///
/// The exact contents of this set are *not* a stable guarantee. (For instance, it is currently
/// computed after drop-elaboration. If we ever do some optimizations even in debug builds, we
/// might decide to run them before computing mentioned items.) The key property of this set is
/// that it is optimization-independent.
MentionedItems,
}
```
And the `mentioned_items` MIR body field docs:
```rust
/// Further items that were mentioned in this function and hence *may* become monomorphized,
/// depending on optimizations. We use this to avoid optimization-dependent compile errors: the
/// collector recursively traverses all "mentioned" items and evaluates all their
/// `required_consts`.
///
/// This is *not* soundness-critical and the contents of this list are *not* a stable guarantee.
/// All that's relevant is that this set is optimization-level-independent, and that it includes
/// everything that the collector would consider "used". (For example, we currently compute this
/// set after drop elaboration, so some drop calls that can never be reached are not considered
/// "mentioned".) See the documentation of `CollectionMode` in
/// `compiler/rustc_monomorphize/src/collector.rs` for more context.
pub mentioned_items: Vec<Spanned<MentionedItem<'tcx>>>,
```
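For context, here is a hedged sketch of the optimization-dependent failure this addresses, modeled on the shape of #107503 (the names are invented; the program is expected to fail const-evaluation, and the point is that it now does so at every optimization level):

```rust
struct Fail<T>(T);

impl<T> Fail<T> {
    // Evaluating this constant panics; the error only surfaces once the
    // constant is actually evaluated for some concrete `T`.
    const C: () = panic!();
}

fn uses_failing_const<T>() {
    let _ = Fail::<T>::C;
}

fn called_from_main<T>() {
    if false {
        // Dead call: optimizations may remove it, so `uses_failing_const` may
        // or may not be *used*. It is still *mentioned*, though, so the
        // collector walks it recursively and evaluates `Fail::<i32>::C`
        // regardless of the optimization level.
        uses_failing_const::<T>();
    }
}

fn main() {
    called_from_main::<i32>();
}
```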
Fixes #107503
This commit fixes several issues with the format string parsing of the
`#[diagnostic::on_unimplemented]` attribute that were pointed out by
@ehuss.
In detail it fixes:
* Format specifiers (display, etc.) appearing in the string. For these we
generate a warning that the specifier is unsupported; otherwise we ignore
them.
* Positional arguments. For these we generate a warning that positional
arguments are unsupported in that location and replace them with the
literal text (`{}` or `{n}`, where n is the index of the positional
argument).
* Broken format strings with an unmatched `}`. For these we generate a
warning about the broken format string and emit the provided string
literally, without formatting.
* Unknown format specifiers. For these we generate an additional warning
about the unknown specifier; otherwise we emit the literal string as the
message.
This essentially makes those strings behave like `format!` with the
minor difference that we do not generate hard errors but only warnings.
After that we continue trying to do something unsurprising (mostly either
ignoring the broken parts or falling back to just giving back the
literal string as provided).
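As a hedged illustration (the attribute contents are invented for this example, and the warnings are paraphrased rather than quoted from the compiler), a format string exercising these cases now compiles with warnings instead of hard errors:

```rust
#[diagnostic::on_unimplemented(
    // `{Self}` is a supported named parameter and is interpolated as usual.
    message = "`{Self}` does not implement `Example`",
    // A positional argument (`{0}`) and a format specifier (`{Self:?}`) are
    // unsupported here: each produces a warning, and the text falls back to
    // the literal string instead of causing a hard error.
    note = "positional `{0}` and specifier `{Self:?}` only warn"
)]
trait Example {}

fn main() {}
```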
Fix #122391
Split an item's bounds and its super predicates
This is the moral equivalent of #107614, but instead of an item's predicates this applies to its **item bounds**. This PR splits out the item bounds (i.e. *all* predicates that are assumed to hold for the alias) from the item *super predicates*, which are the subset of item bounds that share the same self type as the alias.
## Why?
Much like #107614, there are places in the compiler where we *only* care about super predicates, and considering predicates that may have nothing to do with the alias is problematic. This includes things like closure signature inference (which is at its core a search for `Self: Fn(..)`-style bounds), but also lints like `#[must_use]`, error reporting for aliases, and computing type-outlives predicates.
Even in cases where considering all of the `item_bounds` doesn't lead to bugs, unnecessarily considering irrelevant bounds does lead to a regression (#121121) due to doing extra work in the solver.
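As a minimal, hedged illustration of the closure-signature-inference case (not compiler code): the only bound that matters below is the one whose self type is the opaque type itself.

```rust
fn adder() -> impl Fn(u32) -> u32 {
    // The opaque type's bound `<opaque>: Fn(u32) -> u32` has the alias itself
    // as its self type, so it belongs to the `item_super_predicates`; it is
    // what lets `x` be inferred as `u32` without an annotation.
    |x| x + 1
}

fn main() {
    assert_eq!(adder()(2), 3);
}
```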
## Example 1 - Trait Aliases
This is best explored via an example:
```
type TAIT<T> = impl TraitAlias<T>;
trait TraitAlias<T> = A + B where T: C;
```
The item bounds list for `TAIT<T>` will include:
* `TAIT<T>: A`
* `TAIT<T>: B`
* `T: C`
The `item_super_predicates` query, by contrast, will include just the first two predicates.
Side-note: you may wonder why `T: C` is included in the item bounds for `TAIT`. This is because when we elaborate `TraitAlias<T>`, we also elaborate all the predicates on the trait.
## Example 2 - Associated Type Bounds
```
type TAIT<T> = impl Iterator<Item: A>;
```
The `item_bounds` list for `TAIT<T>` will include:
* `TAIT<T>: Iterator`
* `<TAIT<T> as Iterator>::Item: A`
But the `item_super_predicates` will just include the first bound, since that's the only bound that is relevant to the *alias* itself.
## So what
This leads to some diagnostics duplication just like #107614, but none of it will be user-facing. We only see it in the UI test suite because we explicitly disable diagnostic deduplication.
Regarding naming, I went with `super_predicates` kind of arbitrarily; this can easily be changed, but I'd consider better names as long as we don't block this PR in perpetuity.
Fix bad span for explicit lifetime suggestions
Fixes #121267
The current explicit lifetime suggestions do not show correct spans for some lifetimes, e.g. elided lifetime generic parameters.
The span should now be computed correctly for each elided lifetime kind, as in the following code:
43fdd4916d/compiler/rustc_resolve/src/late/diagnostics.rs (L3015-L3044)
Backend and target selection is a mess: the target can override the
backend (via `Target::default_codegen_backend`), *and* the backend can
override the target (via `CodegenBackend::target_override`).
The code that handles this is ugly. It calls `build_target_config`
twice, once before getting the backend and once again afterward. It also
must check that both overrides aren't triggering at the same time.
This commit removes the latter override. It's used in rust-gpu but
@eddyb said via Zulip that removing it would be ok. This simplifies the
code greatly, and will allow some nice follow-up refactorings.
compiletest: Introduce `remove_and_create_dir_all()` helper
The code

```rust
let _ = fs::remove_dir_all(&dir);
create_dir_all(&dir).unwrap();
```
is duplicated in 7 places. Let's introduce a helper.
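A minimal sketch of what such a helper could look like (the real helper's name is given above; its exact signature and module may differ):

```rust
use std::fs;
use std::path::Path;

/// Removes `dir` if it exists, then (re)creates it, so callers always start
/// from an empty directory.
fn remove_and_create_dir_all(dir: &Path) {
    // Ignore the error: the directory may simply not exist yet.
    let _ = fs::remove_dir_all(dir);
    fs::create_dir_all(dir).unwrap();
}

fn main() {
    remove_and_create_dir_all(Path::new("target/tmp-example"));
}
```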
Add `usize::MAX` arg tests for Vec
Tests to prevent a recurrence of the UB from issue rust-lang/rust#122760.
I skipped the `with_capacity`, `drain`, `reserve`, etc. APIs because they actually had a good assortment of tests earlier in the same file.
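A hedged sketch of the shape of such a test (the name and the method exercised are illustrative, not copied from the actual test file):

```rust
#[test]
#[should_panic]
fn swap_remove_index_usize_max() {
    let mut v = vec![1, 2, 3];
    // An index of usize::MAX is far out of bounds; this must panic cleanly
    // rather than wrap around and cause UB.
    v.swap_remove(usize::MAX);
}
```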
r? Nilstrieb
coverage: Remove incorrect assertions from counter allocation
These assertions detect situations where a BCB node (in the coverage graph) would have both a physical counter and one or more in-edge counters/expressions.
For most BCBs that situation would indicate an implementation bug. However, it's perfectly fine in the case of a BCB having an edge that loops back to itself.
Given the complexity and risk involved in fixing the assertions, and the fact that nothing relies on them actually being true, this patch just removes them instead.
Fixes#122738.
`@rustbot` label +A-code-coverage
Relax SeqCst ordering in standard library.
Every single SeqCst in the standard library is unnecessary. In all cases, Relaxed or Release+Acquire was sufficient.
As I [wrote](https://marabos.nl/atomics/memory-ordering.html#common-misconceptions) in my book on atomics:
> [..] when reading code, SeqCst basically tells the reader: "this operation depends on the total order of every single SeqCst operation in the program," which is an incredibly far-reaching claim. The same code would likely be easier to review and verify if it used weaker memory ordering instead, if possible. For example, Release effectively tells the reader: "this relates to an acquire operation on the same variable," which involves far fewer considerations when forming an understanding of the code.
>
> It is advisable to see SeqCst as a warning sign. Seeing it in the wild often means that either something complicated is going on, or simply that the author did not take the time to analyze their memory ordering related assumptions, both of which are reasons for extra scrutiny.
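A minimal, hedged illustration of the Release/Acquire pairing the quote describes (not code from the standard library):

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    thread::scope(|s| {
        s.spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            // Release pairs with the Acquire load below; SeqCst would add
            // nothing but a far-reaching claim about a global total order.
            READY.store(true, Ordering::Release);
        });
        s.spawn(|| {
            if READY.load(Ordering::Acquire) {
                // The Acquire load synchronizes-with the Release store, so the
                // write to DATA is guaranteed to be visible here.
                assert_eq!(DATA.load(Ordering::Relaxed), 42);
            }
        });
    });
}
```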
r? `@Amanieu` `@joboet`
Ignore paths from expansion in `unused_qualifications`
Currently the lint is skipped if any of the path segments come from an expansion, but a path that is itself from an expansion, with all of its segments passed in by the caller, would not be skipped. This doesn't seem likely to occur, but it could happen.
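A hedged sketch of that situation (module and macro invented for illustration): the path node sits in a macro expansion, but every one of its segments is written by the caller.

```rust
#![warn(unused_qualifications)]
#![allow(unused_imports)]

mod foo {
    pub fn bar() {}
}

use foo::bar;

macro_rules! call {
    // `$path` is expanded here, so the resulting call expression comes from an
    // expansion even though all of the path's segments were passed in by the
    // caller; after this change `unused_qualifications` ignores such paths.
    ($path:path) => {
        $path()
    };
}

fn main() {
    call!(foo::bar);
}
```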
Changelog for Clippy 1.77 🏫
Roses are violets,
Red is blue,
Let's create a world,
Perfect for me and you
---
### The cat of this release is: *Luigi*
<img width=500 src="https://github.com/rust-lang/rust-clippy/assets/17087237/ea13d05c-e5ba-4189-9e16-49bf1b43c468" alt="The cats of this Clippy release" />
The cat for the next release can be voted on: [here](https://forms.gle/57gbrNvXtCUmrHYh6)
The cat for the next next release can be nominated in the comments and will be voted in the next changelog PR (Submission deadline is 2024-03-30 23:59CET)
---
changelog: none
LLD parses `@`-files like the command-line arguments on the platform it is
running on, so on Windows they need to follow the MSVC style to work
correctly. Otherwise builds can fail if the linker command gets too long and
the build path contains spaces.
It uses very old language that is more confusing today than helpful,
including references to `SubstNt`, which no longer exists. The comment
above `TokenStream` is better, and suffices for a basic understanding of
these types.
This commit combines `MatchedTokenTree` and `MatchedNonterminal`, which
are often considered together, into a single `MatchedSingle`. It shares
a representation with the newly-parameterized `ParseNtResult`.
This will also make things much simpler if/when variants from
`Interpolated` start being moved to `ParseNtResult`.
Disable `cast_lossless` when casting to u128 from any (u)int type
Fixes https://github.com/rust-lang/rust-clippy/issues/12492
Disables `cast_lossless` when casting to u128 from any int or uint type. The lint states that when casting to any int type, there can potentially be lossy behaviour if the source type ever exceeds the size of the destination type in the future, which is impossible with a destination of u128.
It's possible this is a bit of a niche edge case which is better addressed by just disabling the lint in code, but I personally couldn't think of any good reason to still lint in this specific case - maybe except if the source is a bool, for readability reasons :).
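A hedged illustration (the function is invented for this example) of the case that no longer lints:

```rust
fn widen(x: u64) -> u128 {
    // Previously `cast_lossless` suggested `u128::from(x)` here; since no
    // integer type can outgrow a `u128` destination, the cast can never become
    // lossy, so the lint is now skipped.
    x as u128
}

fn main() {
    assert_eq!(widen(7), 7u128);
}
```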
changelog: FP: `cast_lossless`: disable lint when casting to u128 from any (u)int type
This is implemented with the freshly released Wasmtime 19 and should prevent
the wasm-test beta breakage that was observed and fixed in #122640 from
happening again.