fix rustc_nonnull_optimization_guaranteed docs
As far as I can tell, even back when this was [added](https://github.com/rust-lang/rust/pull/60300) it never *enabled* any optimizations. It just indicates that the FFI compat lint should accept those types for NPO.
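For reference, a minimal sketch of the lint behavior the attribute feeds (the function and its name here are illustrative, not from the PR):
```rust
use std::num::NonZeroI32;

// The improper-ctypes lints accept `Option<NonZeroI32>` because the
// null-pointer/niche optimization guarantees it has the same layout as `i32`.
// The attribute only informs the lint; the layout guarantee itself comes from NPO.
pub extern "C" fn lookup(key: u32) -> Option<NonZeroI32> {
    NonZeroI32::new(key as i32)
}
```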
Rollup of 5 pull requests
Successful merges:
- #130648 (move enzyme flags from general cargo to rustc-specific cargo)
- #130650 (Fixup Apple target's description strings)
- #130664 (Generate line numbers for non-rust code examples as well)
- #130665 (Prevent Deduplication of `LongRunningWarn`)
- #130669 (tests: Test that `extern "C" fn` ptrs lint on slices)
r? `@ghost`
`@rustbot` modify labels: rollup
Prevent Deduplication of `LongRunningWarn`
Fixes #118612
As mentioned in the issue, `LongRunningWarn` is meant to be repeated multiple times.
Therefore, this PR stores a unique number in every instance of `LongRunningWarn` so that it's not hashed into the same value and omitted by the deduplication mechanism.
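A toy sketch of the mechanism (the real `LongRunningWarn` is a rustc diagnostic struct; the type and fields below are purely illustrative):
```rust
use std::collections::HashSet;

// Stand-in for the diagnostic. Without `instance`, two warnings with the same
// message hash to the same value and the second one is dropped by deduplication.
#[derive(Hash, PartialEq, Eq)]
struct LongRunningWarn {
    message: String,
    // Unique per emission, so repeated warnings survive hash-based deduplication.
    instance: usize,
}

fn main() {
    let mut seen = HashSet::new();
    for instance in 0..3 {
        let warn = LongRunningWarn {
            message: "constant evaluation is taking a long time".into(),
            instance,
        };
        // Every insert succeeds because `instance` differs each time.
        assert!(seen.insert(warn));
    }
}
```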
Fixup Apple target's description strings
Noticed this inconsistency in how the Apple targets had their new descriptions written while looking at https://github.com/rust-lang/rust/pull/130614, and figured it was easy enough to fix up quickly. I think prefixing every OS with `Apple` is clearer, especially for less known ones like `visionOS` and `watchOS`; so that's what was done here, along with making the architecture names more consistent and some other small tweaks.
~~r? `@thomcc`~~
cc `@madsmtm`
rustc_llvm: adapt to flattened CLI args in LLVM
This changed in llvm/llvm-project@e190d074a0. I decided to stick with more duplication between the ifdef blocks to make the code easier to read for the next two years before we can plausibly drop LLVM 19.
`@rustbot` label: +llvm-main
try-job: x86_64-msvc
rustc_expand: remember module `#[path]`s during expansion
During invocation collection, if a module item parsed from a `#[path]` attribute needed a second pass after parsing, its path wouldn't get added to the file path stack, so cycle detection broke. This checks the `#[path]` in such cases, so that it gets added appropriately. I think it should work identically to the case for external modules that don't need a second pass, but I'm not 100% sure.
Fixes #97589
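For context, the simplest shape of cycle the file-path stack exists to catch looks like this (a hypothetical snippet; the reproduction in #97589 additionally requires the module item to need a second expansion pass):
```rust
// Contents of `src/recursive.rs`: the file includes itself via `#[path]`,
// which must be reported as a module cycle rather than recursing forever.
#[path = "recursive.rs"]
mod recursive;
```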
Fix anon const def-creation when macros are involved take 2
Fixes #130321
There were two cases that #129137 did not handle correctly:
- Given a const argument `Foo<{ bar!() }>` in which `bar!()` expands to `N`, we would visit the anon const and then visit the `{ bar!() }` expression instead of visiting the macro call. This meant that we would build a def for the anon const, since `{ bar!() }` is not a trivial const argument: `bar!()` is not a path.
- Given a const argument `Foo<{ bar!() }>` in which `bar!()` expands to `{ qux!() }`, which in turn expands to `N`, it should not be considered a trivial const argument, since `{{ N }}` has two pairs of braces. If we only looked at `qux`'s expansion it would *look* like a trivial const argument even though it is not. We have to track whether we have already "unwrapped" a brace when recursing into the expansions of `bar`/`qux`/any macro. (Both shapes are sketched below.)
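A minimal sketch of the two shapes described above (`Foo`, `N`, `bar`, and `qux` follow the names in the description; the second outer macro is called `baz` here only to avoid redefining `bar`):
```rust
pub struct Foo<const M: usize>;
pub const N: usize = 1;

// Case 1: the macro expands to a bare path.
macro_rules! bar { () => { N } }
pub type A = Foo<{ bar!() }>; // `{ bar!() }` expands to `{ N }`

// Case 2: nested expansion adds a second pair of braces.
macro_rules! qux { () => { N } }
macro_rules! baz { () => { { qux!() } } }
pub type B = Foo<{ baz!() }>; // expands to `{ { N } }`, two pairs of braces
```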
r? `@camelid`
Assert that `explicit_super_predicates_of` and `explicit_item_super_predicates` truly only contain bounds for the type itself
We distinguish _implied_ predicates (anything that is implied from elaborating a trait bound) from _super_ predicates, which are the subset of implied predicates that share the same self type as the trait predicate we're elaborating. This was originally done in #107614, which fixed a large class of ICEs and strange errors where the compiler expected the self type of a trait predicate not to change when elaborating super predicates.
Specifically, super predicates are special for various reasons: they're the valid candidates for trait upcasting, are the only predicates we elaborate when doing closure signature inference, etc. So making sure that we get this list correct and don't accidentally "leak" any other predicates into this list is quite important.
This PR adds some debug assertions that we're in fact not doing so, and it fixes an oversight in the effect desugaring rework.
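A small illustration of the distinction (a sketch only; the trait names are made up):
```rust
pub trait Bar {}
pub trait Baz {}

// Elaborating `Self: Foo` yields two implied predicates:
//   `Self: Bar`        -- same self type, so it is also a *super* predicate
//                         (e.g. a valid upcasting target for `dyn Foo`)
//   `Self::Assoc: Baz` -- different self type, so it is *only* an implied predicate
pub trait Foo: Bar
where
    Self::Assoc: Baz,
{
    type Assoc;
}
```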
Implement Return Type Notation (RTN)'s path form in where clauses
Implement return type notation (RTN) in path position for where clauses. We already had RTN in associated type position ([e.g.](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=627a4fb8e2cb334863fbd08ed3722c09)), but per [the RFC](https://rust-lang.github.io/rfcs/3654-return-type-notation.html#where-rtn-can-be-used-for-now):
> As a standalone type, RTN can only be used as the Self type of a where-clause [...]
Specifically, in order to enable code like:
```rust
trait Foo {
    fn bar() -> impl Sized;
}

fn is_send(_: impl Send) {}

fn test<T>()
where
    T: Foo,
    T::bar(..): Send,
{
    is_send(T::bar());
}
```
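For contrast, the previously supported associated-item-bounds position looks like this (a sketch along the lines of the playground link above):
```rust
#![feature(return_type_notation)]

pub trait Foo {
    fn bar() -> impl Sized;
}

fn is_send(_: impl Send) {}

// Already supported before this PR: the RTN bound written where an
// associated type bound would go.
pub fn test<T: Foo<bar(..): Send>>() {
    is_send(T::bar());
}
```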
* In the resolver, when we see a `TyKind::Path` whose final segment is `GenericArgs::ParenthesizedElided` (i.e. `(..)`), resolve that path in the *value* namespace, since we're looking for a method.
* When lowering where clauses in HIR lowering, we first try to intercept an RTN self type via `lower_ty_maybe_return_type_notation`. If we find an RTN type, we lower it manually in a way that respects its higher-ranked-ness (see below) and resolves to the corresponding RPITIT. Anywhere else, we'll emit the same "return type notation not allowed in this position yet" error we do when writing RTN in every other position.
* In `resolve_bound_vars`, we add some special treatment for RTN types in where clauses. Specifically, we need to add new lifetime variables to our binders for the early- and late-bound vars we encounter on the method. This implements the higher-ranked desugaring [laid out in the RFC](https://rust-lang.github.io/rfcs/3654-return-type-notation.html#converting-to-higher-ranked-trait-bounds).
This PR also adds a bunch of tests, mostly negative ones (testing error messages).
In a follow-up PR, I'm going to mark RTN as no longer incomplete, since this PR basically finishes the impl surface that we should initially stabilize, and the RFC was accepted.
cc [RFC 3654](https://github.com/rust-lang/rfcs/pull/3654) and https://github.com/rust-lang/rust/issues/109417
add `extern "C-cmse-nonsecure-entry" fn`
tracking issue #75835
In https://github.com/rust-lang/rust/issues/75835#issuecomment-1183517255 it was decided that using an ABI, rather than an attribute, was the right way to go for this feature.
This PR adds that ABI and removes the `#[cmse_nonsecure_entry]` attribute. All relevant tests have been updated; some are now obsolete and have been removed.
Error E0775 is no longer generated. It contained the list of targets that support the CMSE feature, and maybe we still want to surface that list? Right now a generic "this ABI is not supported on this platform" error is returned when this ABI is used on an unsupported platform. On the other hand, users of this ABI are likely to be experienced Rust users, so maybe the generic error is good enough.
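For reference, a sketch of the new spelling (assuming the existing `cmse_nonsecure_entry` feature gate still applies, and a TrustZone-M target such as `thumbv8m.main-none-eabi`):
```rust
#![feature(cmse_nonsecure_entry)]
#![no_std]

// Replaces the old `#[cmse_nonsecure_entry] extern "C" fn` form: the secure
// gateway veneer is now requested through the ABI string itself.
#[no_mangle]
pub extern "C-cmse-nonsecure-entry" fn entry_function(input: u32) -> u32 {
    input + 1
}
```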
Correct outdated object size limit
The comment here about 48-bit addresses being enough was written in 2016, but 5-level paging made it incorrect in 2019, and it then persisted for another 5 years before being noticed and corrected.
The bolding of the "exclusive" part is merely to call attention to something I missed when reading it and double-checking the math.
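To double-check that arithmetic (the numbers below are general x86-64 facts, not taken from the comment being fixed): 4-level paging gives 48-bit virtual addresses, 5-level paging gives 57-bit.
```rust
fn main() {
    let tib: u128 = 1 << 40;
    let pib: u128 = 1 << 50;
    assert_eq!(1u128 << 48, 256 * tib); // 4-level paging: 2^48 = 256 TiB
    assert_eq!(1u128 << 57, 128 * pib); // 5-level paging: 2^57 = 128 PiB
}
```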
try-job: i686-msvc
try-job: test-various
Don't alloca for unused locals
We already have a concept of mono-unreachable basic blocks; this is primarily useful for ensuring that we do not compile code under an `if false`. But since we never gave locals the same analysis, a large local only used under an `if false` will still have stack space allocated for it.
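An illustrative shape of the problem (not a test from the PR): when `ENABLED` is `false` for a given instantiation, the branch is mono-unreachable, yet the large local previously still got a stack slot.
```rust
pub fn maybe_buffer<const ENABLED: bool>() -> usize {
    if ENABLED {
        // 1 MiB local that is only live under the mono-unreachable branch.
        let big = [0u8; 1 << 20];
        big.len()
    } else {
        0
    }
}
```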
There are 3 places we traverse MIR during monomorphization: Inside the collector, `non_ssa_locals`, and the walk to generate code. Unfortunately, https://github.com/rust-lang/rust/pull/129283#issuecomment-2297925578 indicates that we cannot afford the expense of tracking reachable locals during the collector's traversal, so we do need at least two mono-reachable traversals. And of course caching is of no help here because the benchmarks that regress are incr-unchanged; they don't do any codegen.
This fixes the second problem in https://github.com/rust-lang/rust/issues/129282, and brings us another step toward `const if` at home.
Explain why `non_snake_case` is skipped for binary crates and cleanup tests
- Explain that the `non_snake_case` lint is skipped for bin crate names because binaries are not intended to be distributed or consumed like library crates (#45127); see the sketch after this list.
- Coalesce the bunch of tests into a single one but with revisions, which is easier to compare the differences for `non_snake_case` behavior with respect to crate types.
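To illustrate the first point, a sketch (the crate name is made up):
```rust
// Compiled as a library (`rustc --crate-type lib`), the crate name below
// triggers the `non_snake_case` lint on the crate name; compiled as a binary
// (`--crate-type bin`), the lint is skipped, since binary names are
// user-facing rather than consumed like library crates.
#![crate_name = "myBinaryName"]

fn main() {}
```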
Follow-up to #121749 with some more comments and test cleanup.
cc `@saethlin` who bumped into one of the tests and was confused why it was `only-x86_64-unknown-linux-gnu`.
try-job: dist-i586-gnu-i586-i686-musl
compiler: factor out `OVERFLOWING_LITERALS` impl
This puts it into `rustc_lint/src/types/literal.rs`. It then takes advantage of the logic being easier to navigate to identify something that can easily be factored out, as a demonstration of why the move is worthwhile.
Normalize consts in writeback when GCE is enabled
GCE lazily normalizes its unevaluated consts. This PR ensures that, like the new solver with its lazy norm types, we can assume that the writeback results are fully normalized.
This is important since we're trying to eliminate unnecessary calls to `ty::Const::{eval,normalize}` since they won't work with mGCE. Previously, we'd keep those consts unnormalized in writeback all the way through MIR build, and they'd only get normalized if we explicitly called `ty::Const::{eval,normalize}`, or during codegen since that calls `normalize_erasing_regions` (which invokes the `QueryNormalizer`, which evaluates the const accordingly).
This hack can (hopefully obviously) be removed when mGCE is implemented and we yeet the old GCE; it's only reachable with the GCE flag anyways, so I'm not worried about the implications here.
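For context, the sort of code that exercises these lazily-normalized consts looks like this (a sketch, not a test from the PR):
```rust
#![feature(generic_const_exprs)]
#![allow(incomplete_features)]

// Under GCE, `N * 2` is kept as an unevaluated const during type checking;
// this change ensures writeback normalizes it rather than leaving it for later.
fn doubled<const N: usize>() -> [u8; N * 2]
where
    [(); N * 2]:, // the usual GCE where-clause idiom to make `N * 2` evaluatable
{
    [0u8; N * 2]
}

fn main() {
    assert_eq!(doubled::<3>().len(), 6);
}
```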
r? `@BoxyUwU`
Only expect valtree consts in codegen
Turn a bunch of `Const::eval_*` calls into `Const::try_to_*` calls, which implicitly assert that we only have valtrees by the time we get to codegen.
r? `@BoxyUwU`
Add recursion limit to FFI safety lint
Fixes #130310
Now we check against `tcx.recursion_limit()` and raise an error if the limit is reached, instead of overflowing the stack.