Fix debuginfo for parameters passed via the ScalarPair ABI on Windows
Mark all of these as locals so the debugger does not try to interpret
them as a pointer to the value. This extends the approach used
in #81898.
Fixes #88625
Rollup of 12 pull requests
Successful merges:
- #88795 (Print a note if a character literal contains a variation selector)
- #89015 (core::ascii::escape_default: reduce struct size)
- #89078 (Cleanup: Remove needless reference in ParentHirIterator)
- #89086 (Stabilize `Iterator::map_while`)
- #89096 ([bootstrap] Improve the error message when `ninja` is not found to link to installation instructions)
- #89113 (don't `.ensure()` the `thir_abstract_const` query call in `mir_build`)
- #89114 (Fixes a technicality regarding the size of C's `char` type)
- #89115 (⬆️ rust-analyzer)
- #89126 (Fix ICE when `indirect_structural_match` is allowed)
- #89141 (Impl `Error` for `FromSecsError` without foreign type)
- #89142 (Fix match for placeholder region)
- #89147 (add case for checking const refs in check_const_value_eq)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Impl `Error` for `FromSecsError` without foreign type
Using it through the crate-local path in `std` means that it shouldn't make an "Implementations on Foreign Types" section in the `std::error::Error` docs.
Fix ICE when `indirect_structural_match` is allowed
Fixes #89088. The ICE is caused by `delay_good_path_bug()`, which is called (indirectly) from a `format!()` macro invocation. I have moved the macro invocation into the `decorate` closure of `struct_span_lint_hir()`, so that the macro is only invoked if the lint is not allowed (i.e., causes at least a warning, and thus prevents `delay_good_path_bug()` from firing).
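For illustration, here is a minimal standalone sketch (mine, not the actual rustc code) of the shape of the fix: the message is only formatted inside the decorate closure, so when the lint is allowed the `format!()` (and, in rustc, the `delay_good_path_bug()` it indirectly triggers) never runs.

```rust
// Standalone sketch of the "defer formatting into the decorate closure" pattern.
// `lint_with_decorate` stands in for rustc's `struct_span_lint_hir`; this is a
// simplified, hypothetical signature, not the real API.
fn lint_with_decorate(lint_enabled: bool, decorate: impl FnOnce() -> String) {
    if lint_enabled {
        eprintln!("warning: {}", decorate());
    }
    // If the lint is allowed, `decorate` never runs, so nothing is formatted.
}

fn main() {
    let ty = "NoDerive";
    // Before the fix: the message was built eagerly, even for an allowed lint.
    // After the fix: it is built lazily, only when a diagnostic is emitted.
    lint_with_decorate(false, || {
        format!("to use a constant of type `{}` in a pattern, it must be annotated with `#[derive(PartialEq, Eq)]`", ty)
    });
    lint_with_decorate(true, || {
        format!("to use a constant of type `{}` in a pattern, it must be annotated with `#[derive(PartialEq, Eq)]`", ty)
    });
}
```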
Fixes a technicality regarding the size of C's `char` type
Specifically, ISO/IEC 9899:2018 (better known as "C18") and at least
C11, C99 and C89 before it do not specify the size of a byte in bits.
Section 3.6 defines "byte" as an "addressable unit of data storage",
while section 6.2.5 ("Types") only defines "char" as "large enough to
store any member of the basic execution character set", giving it a
lower bound of 7 bits (since there are 96 characters in the basic
execution character set).
Together with section 6.5.3.4 paragraph 4, "When sizeof is applied to
an operand that has type char […] the result is 1", you could read this
as defining the size of `char` in bits to be exactly the number of bits
in a byte, but it is also valid to read that as an exception.
In general, implementations take `char` to be the smallest unit of
addressable memory, which for modern byte-addressed architectures is
overwhelmingly 8 bits, to the point that this convention is completely
cemented into just about all of our software.
So is any of this actually relevant at all? I hope not. I sincerely hope
that this never, ever comes up.
But if for some reason a poor rustacean has to interface with C code
running on a Cray X1 that, in 2003, is still doing word-addressed
memory with 64-bit chars, and she trusts the docs here blindly, it will
blow up in her face. And I'll be truly sorry for her to have to deal
with … all of that.
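For readers of the docs, a small illustration (mine, not from the PR) of the distinction:

```rust
// In C, sizeof(char) == 1 by definition, but that "1" is measured in C bytes
// of CHAR_BIT bits, and the standard does not pin CHAR_BIT to exactly 8.
use std::os::raw::c_char;

fn main() {
    // On every platform Rust currently targets, C's `char` is 8 bits, so this
    // prints 1 (one 8-bit byte)...
    println!("size_of::<c_char>() = {}", std::mem::size_of::<c_char>());
    // ...but on a word-addressed machine like the Cray X1 above, a C `char`
    // could be 64 bits wide, which is exactly why the docs should hedge.
}
```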
don't `.ensure()` the `thir_abstract_const` query call in `mir_build`
Might fix an ICE seen in #89022 (note: this PR does not close that issue) about attempting to read stolen THIR. I couldn't repro the ICE, but this `.ensure()` seems sus anyway.
r? `@lcnr`
[bootstrap] Improve the error message when `ninja` is not found to link to installation instructions
fixes #89091
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
Migrate in-tree crates to 2021
This replaces #89075 (cherry-picking some of the commits from there), closes #88637, and fixes #89074.
It excludes a migration of the library crates for now (see tidy diff) because we have some pending bugs around macro spans to fix there.
I instrumented bootstrap during the migration to make sure all crates moved from 2018 to 2021 had the compatibility warnings applied first.
Originally, the intent was to support `cargo fix --edition` within bootstrap, but this proved fairly difficult to pull off. We'd need to architect the check functionality to support running `cargo check` and `cargo fix` within the same `x.py` invocation, and only resetting sysroots on check. Further, it was found that `cargo fix` doesn't behave too well with "not quite workspaces", such as Clippy, which has several crates. Bootstrap runs with `--manifest-path ...` for all the tools, and this makes `cargo fix` only attempt migration for that crate. We can't use e.g. `--workspace` due to needing to maintain sysroots for different phases of compilation appropriately.
It is recommended to skip the mass migration of `Cargo.toml`s to 2021 for review purposes; you can also use `git diff d6cd2c6c87 -I'^edition = .20...$'` to ignore the `edition = 2018/21` lines in the diff.
rustc_codegen_llvm: make sse4.2 imply crc32 for LLVM 14
This fixes compiling things like the `snap` crate after
https://reviews.llvm.org/D105462. I added a test that verifies the
additional attribute gets specified, and confirmed that I can build
cargo with both LLVM 13 and 14 with this change applied.
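For context, here is a standalone sketch (not the actual `snap` code) of the kind of code this affects: the hardware CRC32 intrinsic is gated on `sse4.2` on the Rust side, and with LLVM 14 the backend additionally wants the `crc32` feature, which this change makes `sse4.2` imply.

```rust
// Sketch: call the SSE4.2 CRC32 intrinsic behind runtime feature detection.
#[cfg(target_arch = "x86_64")]
fn crc32c_byte(crc: u32, byte: u8) -> u32 {
    if is_x86_feature_detected!("sse4.2") {
        // SAFETY: SSE4.2 (and with it the CRC32 instructions) was just detected.
        unsafe { std::arch::x86_64::_mm_crc32_u8(crc, byte) }
    } else {
        unimplemented!("software fallback omitted in this sketch")
    }
}

#[cfg(target_arch = "x86_64")]
fn main() {
    println!("{:#010x}", crc32c_byte(!0, b'a'));
}

#[cfg(not(target_arch = "x86_64"))]
fn main() {}
```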
r? `@nagisa` cc `@nikic`
Don't use projection cache or candidate cache in intercrate mode
Fixes #88969
It appears that *just* disabling the evaluation cache (in #88994)
leads to other issues involving intercrate mode caching. I suspect
that since we now always end up performing the full evaluation
in intercrate mode, we end up 'polluting' the candidate and projection
caches with results that depend on being in intercrate mode in some way.
Previously, we might have hit a cached evaluation (stored during
non-intercrate mode), and skipped doing this extra work in
intercrate mode.
The whole situation with intercrate mode caching is turning into
a mess. Ideally, we would remove intercrate mode entirely; however,
this might require waiting on Chalk.
Register normalization obligations instead of immediately normalizing in opaque type instantiation
For lazy TAIT we will need to instantiate opaque types from within `rustc_infer`, which cannot invoke normalization methods (they are in `rustc_trait_selection`). So before we move the logic over to `rustc_infer`, we need to make sure no normalization happens anymore. This PR resolves that by just registering normalization obligations and continuing.
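As a tiny user-facing example (not from this PR) of what opaque type instantiation refers to: the return type below is an opaque `impl Trait`, and inferring its hidden type goes through the machinery changed here.

```rust
// The opaque return type's hidden type (an iterator adaptor over a closure)
// is instantiated and constrained by the compiler, not written by the user.
fn squares(n: u64) -> impl Iterator<Item = u64> {
    (0..n).map(|i| i * i)
}

fn main() {
    assert_eq!(squares(4).collect::<Vec<_>>(), vec![0u64, 1, 4, 9]);
}
```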
This PR is best reviewed commit by commit
I also included f7ad36e which is just an independent cleanup that touches the same code and reduces diagnostics noise a bit
r? `@nikomatsakis` cc `@spastorino`
Lower only one HIR owner at a time
Based on https://github.com/rust-lang/rust/pull/83723
Additional diff is here: https://github.com/cjgillot/rust/compare/ownernode...lower-mono
Lowering is very tangled and has a tendency to intertwine the transformation of different items. This PR aims at simplifying the logic by:
- moving global analyses to the resolver (`item_generics_num_lifetimes`, `proc_macros`, `trait_impls`);
- removing a few special cases (non-exported macros and use statements);
- restricting the amount of available information at any one time;
- avoiding back-and-forth between different owners: an item must now be lowered all at once, and its parent cannot refer to its nodes.
I also removed the sorting of bodies by span. The diagnostic ordering changes marginally, since definitions are pretty much sorted already according to the AST. This uncovered a subtlety in thir-unsafeck.
(While these items could logically be in different PRs, the dependency between commits and the amount of conflicts force a monolithic PR.)
This also adjusts the lint docs generation to accept (and ignore) an allow
attribute, rather than expecting the documentation to be immediately followed by
the lint name.
Suggest replacing braces for brackets on array-esque invalid block expr
Newcomers may write `{1, 2, 3}` when trying to create an array, and the current error message is not informative enough to quickly tell them what is needed to fix the error.
This PR implements a diagnostic for this case, and its output looks like this:
```text
error: this code is interpreted as a block expression, not an array
--> src/lib.rs:1:22
|
1 | const FOO: [u8; 3] = {
| ______________________^
2 | | 1, 2, 3
3 | | };
| |_^
|
= note: to define an array, one would use square brackets instead of curly braces
help: try using [] instead of {}
|
1 | const FOO: [u8; 3] = [
2 | 1, 2, 3
3 | ];
|
```
Fix #87672
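For reference, a minimal pair adapted from the diagnostic above: the block-expression form a newcomer might write, and the array form the new help message suggests.

```rust
// Rejected: curly braces make this a block expression, not an array.
// const FOO: [u8; 3] = {
//     1, 2, 3
// };

// Suggested by the new help message: square brackets build an array.
const FOO: [u8; 3] = [1, 2, 3];

fn main() {
    assert_eq!(FOO, [1u8, 2, 3]);
}
```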
This just applies the suggested fixes from the compatibility warnings,
leaving in any that are in practice spurious. This is primarily intended to
provide a starting point to identify possible fixes to the migrations (e.g., by
avoiding spurious warnings).
A secondary commit cleans these up where they are false positives (as is true in
many of the cases).
Add initial support for m68k
This patch series adds initial support for m68k making use of the new M68k
backend introduced with LLVM-13. Additional changes will be needed to be
able to actually use the backend for this target.