const_evaluatable_checked: Stop eagerly erroring in `is_const_evaluatable`
Fixes #82279
We don't want to emit errors inside of `is_const_evaluatable`, because we may call it during selection, where it should be able to fail silently.
There were two errors being emitted in `is_const_evaluatable`. The one causing the compile error in #82279 was inside the match arm for `FailureKind::MentionsParam`, but I moved the other error too since it made things cleaner imo.
The `NotConstEvaluatable` enum *should* have a fourth variant for when we fail to evaluate a concrete const, e.g. `0 - 1`, but that can't happen until #81339.
cc `@oli-obk` `@lcnr`
r? `@nikomatsakis`
* Use Markdown list syntax and unindent a bit to prevent Markdown from
interpreting the nested lists as code blocks
* A few more small typographical cleanups
Revert performance-sensitive change in #82436
This change was done in #82436, as an "optimization". Unfortunately I
missed that this code is not always executed, because of the "continue"
in the conditional above it.
This commit should solve the perf regressions introduced by #82436 as I
think there isn't anything else that could affect runtime performance in
that PR. The `Pick` type grows only one word, which I doubt can cause up
to 8.8% increase in RSS in some of the benchmarks.
---
Could someone with the rights start a perf job please?
Adjusted LLVM codegen for code compiled with `-Zinstrument-coverage` to
address multiple, somewhat related issues.
Fixed a significant flaw in the prior coverage solution: every counter
generated a new counter variable, but there should have been only one
counter variable per function. This appears to have bloated .profraw
files significantly. (For a small program, it increased the size by
about 40%. I have not tested large programs, but there is anecdotal
evidence that profraw files were way too large.) This is a good fix
regardless, but hopefully it also addresses related issues.
Fixes: #82144
Invalid LLVM coverage data produced when compiled with -C opt-level=1
Existing tests now work up to at least `opt-level=3`. This required a
detailed analysis of the LLVM IR, comparisons with Clang C++ LLVM IR
when compiled with coverage, and a lot of trial and error with codegen
adjustments.
The biggest hurdle was figuring out how to continue to support coverage
results for unused functions and generics. Rust's coverage results have
three advantages over Clang's coverage results:
1. Rust's coverage map does not include any overlapping code regions,
making coverage counting unambiguous.
2. Rust generates coverage results (showing zero counts) for all unused
functions, including generics. (Clang does not generate coverage for
uninstantiated template functions.)
3. Rust's unused functions produce minimal stubbed functions in LLVM IR,
sufficient for including in the coverage results; while Clang must
generate the complete LLVM IR for each unused function, even though
it will never be called.
This PR removes the previous hack of attempting to inject coverage into
some other existing function instance, and generates dedicated instances
for each unused function. This change, and a few other adjustments
(similar to what is required for `-C link-dead-code`, but with lower
impact), makes it possible to support LLVM optimizations.
Fixes: #79651
Coverage report: "Unexecuted instantiation:..." for a generic function
from multiple crates
Fixed by removing the aforementioned hack. Some "Unexecuted
instantiation" notices are unavoidable, as explained in the
`used_crate.rs` test, but `-Zinstrument-coverage` has new options to
back off support for either unused generics, or all unused functions,
which avoids the notice, at the cost of less coverage of unused
functions.
Fixes: #82875
Invalid LLVM coverage data produced with crate brotli_decompressor
Fixed by disabling the LLVM function attribute that forces inlining, if
`-Z instrument-coverage` is enabled. This attribute is applied to
Rust functions with `#[inline(always)]`, and in some cases, the forced
inlining breaks coverage instrumentation and reports.
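To illustrate the shape of the fix, a rough sketch (the function and values here are hypothetical, not from the affected crate):
```rust
// With `-Z instrument-coverage`, rustc no longer emits the LLVM attribute that
// forces inlining for `#[inline(always)]` functions, so this function keeps its
// own coverage counters and code regions in the report.
#[inline(always)]
fn tag(x: u32) -> u32 {
    x | 0x8000_0000
}

fn main() {
    assert_eq!(tag(1), 0x8000_0001);
}
```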
Mark early otherwise optimization unsound
r? `@oli-obk`
cc `@tmiasko`
Related to #78496 and #82905
Should I also bump this one to level 3 or 4, or does it not matter given that it is unsound?
Probably need to adjust some tests.
With this PR, we now lint for all cases where we perform some kind of
proc-macro back-compat hack.
The `js-sys` crate had an internal fix made to properly handle
`None`-delimited groups, so we need to manually check the version in the
filename. As a result, we no longer apply the back-compat hack to cases
where the version number is missing from the file path. This should not
affect any users of the `crates.io` crate.
Replace closures_captures and upvar_capture with closure_min_captures
Removed all uses of `closures_captures` and `upvar_capture` and refactored code to work with `closure_min_captures`. This also involved removing functions that were no longer needed, like the bridge.
Closes https://github.com/rust-lang/project-rfc-2229/issues/18
r? `@nikomatsakis`
Remove unnecessary `forward_inner_docs` hack
and replace it with the `extended_key_value_attributes` feature.
This is https://github.com/rust-lang/rust/pull/79150, but for compiler/.
Extend `proc_macro_back_compat` lint to `actix-web`
Unlike the other cases of this lint, there's no simple way to detect if
an old version of the relevant crate (`syn`) is in use. The `actix-web`
crate only depends on `pin-project` v1.0.0, so checking the version of
`actix-web` does not guarantee that a new enough version of
`pin-project` (and therefore `syn`) is in use.
Instead, we rely on the fact that virtually all of the regressed crates
are pinned to a pre-1.0 version of `pin-project`. When this is the case,
bumping the `actix-web` dependency will pull in the *latest* version of
`pin-project`, which has an explicit dependency on a newer version of
`syn`.
The lint message tells users to update `actix-web`, since that's what
they're most likely to have control over. We could potentially tell them
to run `cargo update -p syn`, but I think it's more straightforward to
suggest an explicit change to the `Cargo.toml`.
The `actori-web` fork had its last commit over a year ago, and appears
to just be a renamed fork of `actix-web`. Therefore, I've removed the
`actori-web` check entirely - any crates that actually get broken can
simply update `syn` themselves.
rustdoc: allow list syntax for #[doc(alias)] attributes
Fixes https://github.com/rust-lang/rust/issues/81205.
It now makes it possible to have:
```rust
#[doc(alias = "x")]
// and:
#[doc(alias("y", "z"))]
```
cc ``@jplatte``
r? ``@jyn514``
Remove unwrap_none/expect_none from compiler/.
We're not going to stabilize `Option::{unwrap_none, expect_none}`. (See https://github.com/rust-lang/rust/issues/62633.) This removes the usage of those unstable methods from `compiler/`.
make changes to liveness to use closure_min_captures
use different span
borrow check uses new structures
rename to CapturedPlace
stop using upvar_capture in regionck
remove the bridge
cleanup from rebase + remove the upvar_capture reference from mutability_errors.rs
remove line from liveness test
make our unused var checking more consistent
update tests
adding more warnings to the tests
move is_ancestor_or_same_capture to rustc_middle/ty
update names to reflect the closures
add FIXME
check that all captures are immutable borrows before returning
add surrounding if statement like the original
move var out of the loop and rename
Co-authored-by: Logan Mosier <logmosier@gmail.com>
Co-authored-by: Roxane Fruytier <roxane.fruytier@hotmail.com>
Allow registering tool lints with `register_tool`
Previously, there was no way to add a custom tool prefix, even if the tool
itself had registered a lint:
```rust
#![feature(register_tool)]
#![register_tool(xyz)]
#![warn(xyz::my_lint)]
```
```
$ rustc unknown-lint.rs --crate-type lib
error[E0710]: an unknown tool name found in scoped lint: `xyz::my_lint`
--> unknown-lint.rs:3:9
|
3 | #![warn(xyz::my_lint)]
| ^^^
```
This allows opting-in to lints from other tools using `register_tool`.
cc https://github.com/rust-lang/rust/issues/66079#issuecomment-788589193, ``@chorman0773``
r? ``@petrochenkov``
Extend `proc_macro_back_compat` lint to `procedural-masquerade`
We now lint on *any* use of the `procedural-masquerade` crate. While this
crate still exists, its main reverse dependency (`cssparser`) no longer
depends on it. Any crates still depending on it should stop doing so, as
it only exists to support very old Rust versions.
If a crate actually needs to support old versions of rustc via
`procedural-masquerade`, then they'll just need to accept the warning
until we remove it entirely (at the same time as the back-compat hack).
The latest version of `procedural-masquerade` does work with the
latest rustc, but trying to check for the version seems like more
trouble than it's worth.
While working on this, I realized that the `proc-macro-hack` check was
never actually doing anything. The corresponding enum variant in
`proc-macro-hack` is named `Value` or `Nested` - it has never been
called `Input`. Due to a strange Crater issue, the Crater run that
tested adding this did *not* end up testing it - some of the crates that
would have failed did not actually have their tests checked, making it
seem as though the `proc-macro-hack` check was working.
The Crater issue is being discussed at
https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/Nearly.20identical.20Crater.20runs.20processed.20a.20crate.20differently/near/230406661
Despite the `proc-macro-hack` check not actually doing anything, we
haven't gotten any reports from users about their build being broken.
I went ahead and removed it entirely, since it's clear that no one is
being affected by the `proc-macro-hack` regression in practice.
Make source-based code coverage compatible with MIR inlining
When codegenning code coverage, use the instance that coverage data was
originally generated for, to ensure a basic level of compatibility with
MIR inlining.
Fixes #83061
Emit error when trying to use assembler syntax directives in `asm!`
The `.intel_syntax` and `.att_syntax` assembler directives should not be used: Intel syntax is the default and needs no directive, and AT&T syntax should be requested through the explicit `att_syntax` option of `asm!`.
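For illustration, a minimal sketch of the intended usage (assumes x86_64 and a nightly toolchain from the time, when `asm!` still required `#![feature(asm)]`):
```rust
#![feature(asm)]

fn forty_two() -> u64 {
    let out: u64;
    unsafe {
        // AT&T syntax is requested through the `att_syntax` option rather than
        // a `.att_syntax` directive embedded in the template string.
        asm!("mov $42, {0}", out(reg) out, options(att_syntax));
    }
    out
}

fn main() {
    assert_eq!(forty_two(), 42);
}
```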
Closes #79869
ast/hir: Rename field-related structures
I always forget what `ast::Field` and `ast::StructField` mean despite working with the AST for a long time, so this PR changes the naming to something less confusing and more consistent.
- `StructField` -> `FieldDef` ("field definition")
- `Field` -> `ExprField` ("expression field", not "field expression")
- `FieldPat` -> `PatField` ("pattern field", not "field pattern")
Various visiting and other methods working with the fields are renamed correspondingly too.
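For reference, a small sketch of where each renamed node appears in surface syntax (the struct here is just an example):
```rust
struct S { f: u32 }        // `f: u32` is a field definition   -> FieldDef
fn main() {
    let s = S { f: 1 };    // `f: 1` is an expression field    -> ExprField
    let S { f } = s;       // `f` is a pattern field           -> PatField
    assert_eq!(f, 1);
}
```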
The second commit reduces the size of `ExprKind` by boxing fields of `ExprKind::Struct` in preparation for https://github.com/rust-lang/rust/pull/80080.
Don't warn about old rustdoc lint names (temporarily)
Since https://github.com/rust-lang/rust/pull/80527, rustdoc users have an unpleasant situation: they can either use the new tool lint names (`rustdoc::non_autolinks`) or they can use the old names (`non_autolinks`). If they use the tool lints, they get a hard error on stable compilers, because rustc rejects all tool names it doesn't recognize (https://github.com/rust-lang/rust/issues/66079#issuecomment-788589193). If they use the old name, they get a warning to rename the lint to the new name. The only way to compile without warnings is to add `#[allow(renamed_removed_lints)]`, which defeats the whole point of the change: we *want* people to switch to the new name.
To avoid people silencing the lint and never migrating to the tool lint, this avoids warning about the old name, while still allowing you to use the new name. Once the new `rustdoc` tool name makes it to the stable channel, we can change these lints to warn again.
This adds the new lint functions `register_alias` and `register_ignored` - I didn't see an existing way to do this.
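A sketch of the user-visible effect, assuming a toolchain that registers the `rustdoc` tool:
```rust
// Both spellings now compile without a rename warning; once the `rustdoc`
// tool name reaches stable, the old name will warn again.
#![allow(non_autolinks)]           // old name: temporarily not warned about
#![allow(rustdoc::non_autolinks)]  // new tool-scoped name
```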
r? `@Manishearth` cc `@rust-lang/rustdoc`
Do not insert impl_trait_in_bindings opaque definitions twice.
The reference to the item already appears inside the `OpaqueDef`. It does not need to be repeated as a statement.
More precise spans for HIR paths
`Ty::assoc_item` is lowered to `<Ty>::assoc_item` in HIR, but `Ty` got its span from the whole path.
This PR fixes that, and adjusts some diagnostic code that relied on `Ty` having the whole path span.
This is a pre-requisite for https://github.com/rust-lang/rust/pull/82868 (we cannot report suggestions like `Tr::assoc` -> `<dyn Tr>::assoc` with the current imprecise spans).
r? ``@estebank``
Adjust `-Ctarget-cpu=native` handling in cg_llvm
When cg_llvm encounters `-Ctarget-cpu=native`, it computes an
explicit set of features that applies to the target in order to
correctly compile code for the host CPU (because e.g. `skylake` alone is
not sufficient to tell whether some of the instructions are available or
not).
However there were a couple of issues with how we did this. Firstly, the
order in which features were overridden wasn't quite right – conceptually
you'd expect the `-Ctarget-cpu=native` option to override the features that
are implicitly set by the target definition. However, due to how other
`-Ctarget-cpu` values are handled, we must adopt the following order
of priority:
* Features from `-Ctarget-cpu=*`; are overridden by
* Features implied by `--target`; are overridden by
* Features from `-Ctarget-feature`; are overridden by
* function-specific features.
Another problem was that the function-level `target-features`
attribute would overwrite the entire set of the globally enabled
features, rather than just the features that
`#[target_feature(enable/disable)]` specified. With something like
`-Ctarget-cpu=native` we'd end up in a situation wherein a function
without a `#[target_feature(enable)]` annotation would have a broader
set of features compared to a function with one such attribute. This
turned out to be a cause of heavy run-time regressions in some code
using these function-level attributes in conjunction with
`-Ctarget-cpu=native`, for example.
With this PR rustc is more careful about specifying the entire set of
features for functions that use `#[target_feature(enable/disable)]` or
`#[instruction_set]` attributes.
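A rough illustration of the attribute interaction (x86_64 is assumed; the functions are hypothetical):
```rust
// Built with `-Ctarget-cpu=native` on an AVX2-capable host, the annotated
// function previously ended up with a *narrower* feature set than the plain
// one, because its `target-features` attribute replaced the global set.
// With this PR, the annotation only adds `avx2` on top of the global features.
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

fn sum_plain(xs: &[f32]) -> f32 {
    xs.iter().sum()
}

fn main() {
    let xs = [1.0f32, 2.0, 3.0];
    // Safety: a real caller would gate this on a runtime CPU feature check.
    let total = unsafe { sum_avx2(&xs) };
    assert_eq!(total, sum_plain(&xs));
}
```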
Sadly testing the original reproducer for this behaviour is quite
impossible – we cannot rely on `-Ctarget-cpu=native` to be anything in
particular on developer or CI machines.
cc https://github.com/rust-lang/rust/issues/83027 `@BurntSushi`
Implement (but don't use) valtree and refactor in preparation of use
This PR does not cause any functional change. It refactors various things that are needed to make valtrees possible. This refactoring got big enough that I decided I'd want it reviewed as a PR instead of trying to make one huge PR with all the changes.
cc `@rust-lang/wg-const-eval` on the following commits:
* 2027184 implement valtree
* eeecea9 fallible Scalar -> ScalarInt
* 042f663 ScalarInt convenience methods
cc `@eddyb` on ef04a6d
cc `@rust-lang/wg-mir-opt` for cf1700c (`mir::Constant` can now represent either a `ConstValue` or a `ty::Const`, and it is totally possible to have two different representations for the same value)
As far as I can tell, what we've been getting is `llvm::MaybeAlign()`, so
just use that for now. This is required sometime after
24539f1ef2471d07bd87f833cb0288fc0f251f4b.
This changed in 54fb3ca96e261f7107cb1b5778c34cb0e0808be6 - I'm not
entirely sure it's correct that we're leaving config empty, but the one
case in LLVM that looked similar did that.
2229: Handle patterns within closures correctly when `capture_disjoint_fields` is enabled
This PR fixes several issues related to handling patterns within closures when `capture_disjoint_fields` is enabled.
1. Matching is always considered a use of the place, even with `_` patterns
2. Compiler ICE when capturing fields in closures through `let` assignments
To do so, we
- Introduced new Fake Reads
- Delayed use of `Place` in favor of `PlaceBuilder`
- Ensured that `PlaceBuilder` can be resolved before attempting to extract `Place` in any of the pattern matching code
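A rough sketch of the two cases above (assumes a nightly with the incomplete `capture_disjoint_fields` feature; the types are made up for illustration):
```rust
#![feature(capture_disjoint_fields)]

struct Point { x: i32, y: i32 }

fn main() {
    let p = Point { x: 1, y: 2 };

    // Case 1: even a wildcard pattern counts as a (fake) read of `p`,
    // so the closure captures the place instead of capturing nothing.
    let c1 = || match p { _ => () };

    // Case 2: destructuring `let`s inside closures, which previously
    // could ICE when only some precise places are captured.
    let c2 = || { let Point { x, .. } = p; x };

    c1();
    assert_eq!(c2(), 1);
    assert_eq!(p.y, 2);
}
```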
Closes https://github.com/rust-lang/project-rfc-2229/issues/27
Closes https://github.com/rust-lang/project-rfc-2229/issues/24
r? `@nikomatsakis`
Use delay_span_bug instead of panic in layout_scalar_valid_range
#83054 introduced validation of scalar range attributes, but panicking
code that uses the attribute remained reachable. Use `delay_span_bug`
instead to avoid the ICE.
Fixes #83180.
Allow rustdoc to handle asm! of foreign architectures
This allows rustdoc to process code containing `asm!` for architectures other than the current one. Since this never reaches codegen, we just replace target-specific registers and register classes with a dummy one.
Fixes #82869
StructField -> FieldDef ("field definition")
Field -> ExprField ("expression field", not "field expression")
FieldPat -> PatField ("pattern field", not "field pattern")
Also rename visiting and other methods working on them.
Add a `min_type_alias_impl_trait` feature gate
This new feature gate only permits type alias impl trait to be constrained by function and trait method return types. All other possible constraining sites like const/static types, closure return types and binding types are now forbidden and gated under the `type_alias_impl_trait` and `impl_trait_in_bindings` feature gates (which are both marked as incomplete, as they have various ways to ICE the compiler or cause query cycles where they shouldn't).
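A minimal sketch of the distinction, assuming a nightly with the new gate:
```rust
#![feature(min_type_alias_impl_trait)]

use std::fmt::Debug;

type Answer = impl Debug;

// Allowed: the opaque type is constrained by a function return type.
fn answer() -> Answer {
    42u32
}

// Not allowed under this gate alone: constraining the opaque type through a
// binding still requires the incomplete `impl_trait_in_bindings` feature.
// fn via_binding() {
//     let x: Answer = 42u32;
// }

fn main() {
    println!("{:?}", answer());
}
```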
r? `@nikomatsakis`
This is best reviewed commit-by-commit
Consider functions to be reachable for code coverage purposes either
when they reach code generation directly, or indirectly as an inlined
part of another function.
Don't encode file information for span with a dummy location
Fixes #83112
The location information for a dummy span isn't real, so don't encode
it. This brings the incr comp cache code into line with the Span
`StableHash` impl, which doesn't hash the location information for dummy
spans.
Previously, we would attempt to load the 'original' file from a dummy
span - if the file id changed (e.g. due to being moved on disk), we would get an
ICE, since the Span was still valid due to its hash being unchanged.
Introduce `proc_macro_back_compat` lint, and emit for `time-macros-impl`
Now that future-incompat-report support has landed in nightly Cargo, we
can start to make progress towards removing the various proc-macro
back-compat hacks that have accumulated in the compiler.
This PR introduces a new lint `proc_macro_back_compat`, which results in
a future-incompat-report entry being generated. All proc-macro
back-compat warnings will be grouped under this lint. Note that this
lint will never actually become a hard error - instead, we will remove
the special cases for various macros, which will cause older versions of
those crates to emit some other error.
I've added code to fire this lint for the `time-macros-impl` case. This
is the easiest case out of all of our current back-compat hacks - the
crate was renamed to `time-macros`, so seeing a filename with
`time-macros-impl` guarantees that an older version of the parent `time`
crate is in use.
When Cargo's future-incompat-report feature gets stabilized, affected
users will start to see future-incompat warnings when they build their
crates.
Remove unused `opt_local_def_id_to_hir_id` function
Found while investigating #82933 - all LocalDefIds are expected to have
HirIds, there's no point in pretending otherwise.
Find more invalid doc attributes
- Lint on `#[doc(123)]`, `#[doc("hello")]`, etc.
- Lint every attribute; e.g., will now report two warnings for `#[doc(foo, bar)]`
- Add hyphen to "crate level"
- Display paths like `#[doc(foo::bar)]` correctly instead of as an empty string
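For illustration, a rough sketch of inputs the extended lint now covers (assuming these stay warnings rather than hard errors):
```rust
#[doc(123)]        // literal argument: now linted
#[doc("hello")]    // string literal in list position: now linted
#[doc(foo, bar)]   // each invalid argument produces its own warning
#[doc(foo::bar)]   // the path is displayed in the message instead of an empty string
pub struct Example;
```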
Custom error on literal names from other languages
This detects all Java literal types and all single-word C data types, and suggests the corresponding Rust literal type.
Update to rustc-rayon 0.3.1
This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t `@ammaraskar`)
That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
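A rough sketch of the idea (not rustc's actual `DropArena` code; names and layout are simplified):
```rust
use std::ptr;

// A type-erased destructor record, roughly what `DropType` holds.
pub struct DropType {
    drop_fn: unsafe fn(*mut u8),
    obj: *mut u8,
}

impl DropType {
    // Requiring `T: Send` *before* erasure is what justifies the Send impl below.
    pub fn new<T: Send>(p: *mut T) -> Self {
        unsafe fn drop_erased<T>(raw: *mut u8) {
            ptr::drop_in_place(raw as *mut T);
        }
        DropType { drop_fn: drop_erased::<T>, obj: p as *mut u8 }
    }
}

impl Drop for DropType {
    fn drop(&mut self) {
        unsafe { (self.drop_fn)(self.obj) }
    }
}

// Sound only because construction demanded `T: Send` above
// (rustc applies this under `cfg(parallel_compiler)`).
unsafe impl Send for DropType {}
```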
Avoid sorting predicates by `DefId`
Fixes issue #82920
Even if an item does not change between compilation sessions, it may end
up with a different `DefId`, since inserting/deleting an item affects
the `DefId`s of all subsequent items. Therefore, we use a `DefPathHash`
in the incremental compilation system, which is stable in the face of
changes to unrelated items.
In particular, the query system will consider the inputs to a query to
be unchanged if any `DefId`s in the inputs have their `DefPathHash`es
unchanged. Queries are pure functions, so the query result should be
unchanged if the query inputs are unchanged.
Unfortunately, it's possible to inadvertently make a query result
incorrectly change across compilations, by relying on the specific value
of a `DefId`. Specifically, if the query result is a slice that gets
sorted by `DefId`, the precise order will depend on how the `DefId`s got
assigned in a particular compilation session. If some definitions end up
with different `DefId`s (but the same `DefPathHash`es) in a subsequent
compilation session, we will end up re-computing a *different* value for
the query, even though the query system expects the result to be unchanged
due to the unchanged inputs.
It turns out that we have been sorting the predicates computed during
`astconv` by their `DefId`. These predicates make their way into the
`super_predicates_that_define_assoc_type`, which ends up getting used to
compute the vtables of trait objects. Thus, re-ordering these predicates
between compilation sessions can lead to undefined behavior at runtime -
the query system will re-use code built with a *differently ordered*
vtable, resulting in the wrong method being invoked at runtime.
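A contrived sketch of the hazard (the numbers stand in for `DefId` indices; this is not rustc code):
```rust
fn supertrait_order(mut def_ids: Vec<u32>) -> Vec<u32> {
    // Sorting by a session-assigned id bakes that session's numbering into
    // the query result, even though the inputs are "unchanged".
    def_ids.sort_unstable();
    def_ids
}

fn main() {
    // Session 1 happens to assign the two supertraits ids 3 and 7.
    let first = supertrait_order(vec![3, 7]);
    // In session 2 an unrelated item shifted the numbering to 8 and 3.
    let second = supertrait_order(vec![8, 3]);
    // Same items, same DefPathHashes, but a different order: a vtable cached
    // from the first session no longer lines up with the second.
    assert_eq!(first, vec![3, 7]);
    assert_eq!(second, vec![3, 8]);
}
```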
This PR avoids sorting by `DefId` in `astconv`, fixing the
miscompilation. However, it's possible that other instances of this
issue exist - they could also be easily introduced in the future.
To fully fix this issue, we should
1. Turn on `-Z incremental-verify-ich` by default. This will cause the
compiler to ICE whenever an 'unchanged' query result changes between
compilation sessions, instead of causing a miscompilation.
2. Remove the `Ord` impls for `CrateNum` and `DefId`. This will make it
difficult to introduce ICEs in the first place.