Fix the `FileEncoder` buffer size.
It allows a variable size, but in practice we always use the default of 8192 bytes. This commit fixes it to that size, which makes things slightly faster because the size can be hard-wired in generated code.
The commit also:
- Rearranges some buffer capacity checks so they're all in the same form (`x > BUFSIZE`).
- Removes some buffer capacity assertions and comments about them. With an 8192 byte buffer, we're not in any danger of overflowing a `usize`.
r? `@WaffleLapkin`
Stabilize const slice::split_at
This stabilizes the use of the following method in const context:
```rust
impl<T> [T] {
    pub const fn split_at(&self, mid: usize) -> (&[T], &[T]);
}
```
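For reference, a small usage example (not taken from the PR) showing `split_at` being called in a `const` initializer once this is stable:
```rust
// Assumes a toolchain where `slice::split_at` is callable in const context.
const DATA: &[u8] = &[1, 2, 3, 4, 5];
const SPLIT: (&[u8], &[u8]) = DATA.split_at(2);

fn main() {
    assert_eq!(SPLIT.0, &[1, 2]);
    assert_eq!(SPLIT.1, &[3, 4, 5]);
}
```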
cc tracking issue #101158
Rename const error methods for consistency
Renames `ty::Const`'s methods for creating a `ConstKind::Error` so they follow the same naming style as `ty::Ty`'s equivalent methods.
r? `@BoxyUwU`
Implement `AsHandle`/`AsSocket` for `Arc`/`Rc`/`Box` on Windows
Implement the Windows counterpart to #97437 and #107317: Implement `AsHandle` and `AsSocket` for `Arc<T>`, `Rc<T>`, and `Box<T>`.
Add midpoint function for all integers and floating-point numbers
This pull-request adds the `midpoint` function to `{u,i}{8,16,32,64,128,size}`, `NonZeroU{8,16,32,64,size}` and `f{32,64}`.
This new function is analogous to the [C++ midpoint](https://en.cppreference.com/w/cpp/numeric/midpoint) function, and basically computes `(a + b) / 2` with rounding towards ~~`a`~~ negative infinity in the case of integers. Put simply: `midpoint(a, b)` is `(a + b) >> 1` as if it were performed in a sufficiently-large signed integral type.
Note that unlike the C++ function, this pull-request does not implement this function on pointers (`*const T` or `*mut T`). This could be implemented in a future pull-request if desired.
### Implementation
For `f32` and `f64` the implementation is based on the `libcxx` [one](18ab892ff7/libcxx/include/__numeric/midpoint.h (L65-L77)). I originally tried many different approaches, but all of them either failed or left me with a poorer version of the `libcxx` one. Note that `libstdc++` has a very similar implementation, and the Microsoft STL implementation is also basically the same as `libcxx`'s. Unfortunately, it doesn't seem like a better way exists.
For unsigned integers I created the `midpoint_impl!` macro, which has two branches (see the sketch after this list):
- The first one takes only `$SelfT` and is used when there is no unsigned integer with at least double the bits. The code simply uses the formula `a + (b - a) / 2`, with the arguments in the correct order and signs to get the correct rounding.
- The second branch is used when a `$WideT` (with at least double the bits of `$SelfT`) is provided. Using a wider type means that no overflow can occur, which greatly improves the codegen (no branch and fewer instructions).
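A minimal sketch of the two strategies, using standalone functions and made-up names rather than the actual `midpoint_impl!` expansion:
```rust
// Narrow strategy (no wider unsigned type available, e.g. for u128):
// both branches compute floor((a + b) / 2) without overflowing.
fn midpoint_narrow(a: u128, b: u128) -> u128 {
    if a <= b { a + (b - a) / 2 } else { b + (a - b) / 2 }
}

// Wide strategy (a wider unsigned type exists, e.g. u16 for u8):
// the widened sum cannot overflow, so no branch is needed.
fn midpoint_wide(a: u8, b: u8) -> u8 {
    ((a as u16 + b as u16) / 2) as u8
}

fn main() {
    assert_eq!(midpoint_narrow(2, 5), 3);
    assert_eq!(midpoint_wide(0, 255), 127);
}
```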
For signed integers the code basically forwards to the unsigned version of midpoint by mapping the signed numbers to their unsigned counterparts (e.g. `i8`'s range `[-128, 127]` maps to `[0, 255]`) and vice versa.
I originally created a version that worked directly on the signed numbers, but the code was "ugly" and hard to understand. Despite this mapping "overhead", the codegen is better than my most optimized version that operated directly on signed integers.
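A rough sketch of that mapping (function names are illustrative, not the actual macro expansion):
```rust
// Flipping the sign bit maps i8's range [-128, 127] onto u8's [0, 255]
// while preserving order, so the unsigned midpoint can be reused.
fn to_unsigned(x: i8) -> u8 {
    (x as u8) ^ 0x80
}

fn to_signed(x: u8) -> i8 {
    (x ^ 0x80) as i8
}

fn midpoint_i8(a: i8, b: i8) -> i8 {
    let (ua, ub) = (to_unsigned(a), to_unsigned(b));
    // Unsigned midpoint rounding towards negative infinity, as above.
    let m = if ua <= ub { ua + (ub - ua) / 2 } else { ub + (ua - ub) / 2 };
    to_signed(m)
}

fn main() {
    assert_eq!(midpoint_i8(-1, 2), 0);   // floor(0.5) = 0
    assert_eq!(midpoint_i8(-3, -2), -3); // floor(-2.5) = -3
}
```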
~~Note that in the case of unsigned numbers I tried to be smart and used `#[cfg(target_pointer_width = "64")]` to determine whether using the wide version was better, by looking at the assembly on godbolt. This was applied to `u32`, `u64` and `usize`, and doesn't change the behavior, only the generated assembly.~~
Shorten lifetime of panic temporaries in panic_fmt case
This fixes an issue called out by `@fasterthanlime` in https://octodon.social/@fasterthanlime/109304454114856561. Macros like `todo!("…")` and `panic!("…", …)` drop their `format_args` temporary at the nearest enclosing semicolon **outside** the macro invocation, not inside the macro invocation. Due to the formatting internals being type-erased in a way that is not thread-safe, this blocks futures from being `Send` if there is an `await` anywhere between the panic call and the nearest enclosing semicolon.
**Example:**
```rust
#![allow(unreachable_code)]
async fn f(_: u8) {}
async fn g() {
    f(todo!("...")).await;
}
fn require_send(_: impl Send) {}
fn main() {
    require_send(g());
}
```
**Before:**
```console
error: future cannot be sent between threads safely
--> src/main.rs:15:18
|
15 | require_send(g());
| ^^^ future returned by `g` is not `Send`
|
= help: the trait `Sync` is not implemented for `core::fmt::Opaque`
note: future is not `Send` as this value is used across an await
--> src/main.rs:9:20
|
9 | f(todo!("...")).await;
| ------------ ^^^^^^ await occurs here, with `$crate::format_args!($($arg)+)` maybe used later
| |
| has type `ArgumentV1<'_>` which is not `Send`
note: `$crate::format_args!($($arg)+)` is later dropped here
--> src/main.rs:9:26
|
9 | f(todo!("...")).await;
| ^
note: required by a bound in `require_send`
--> src/main.rs:12:25
|
12 | fn require_send(_: impl Send) {}
| ^^^^ required by this bound in `require_send`
```
**After:** works.
Arguably there is a rustc fix that could work here too, instead of a standard library change. Rustc could be taught that the code shown above is fine to compile because the `await` is unreachable, so temporaries before the `await` do not need to be put in the anonymous compiler-generated `Future` struct, regardless of where the language semantics say they are syntactically supposed to be dropped.
I would be open to that, though my recollection is that in the past we have been very hesitant about introducing any smarts into Drop placement. People want the language rules about where temporaries are dropped to be as simplistic and predictable as possible.
Use dynamic dispatch for queries
This replaces most concrete query values `V` with `MaybeUninit<[u8; { size_of::<V>() }]>`, reducing the amount of code instantiated by queries. The compile time of `rustc_query_impl` is reduced by 27%. It is an alternative to https://github.com/rust-lang/rust/pull/107937, which uses unstable const generics, while this uses an `EraseType` trait which maps query values to their erased variants.
This is achieved by introducing an `Erased` type which does a sanity check under `cfg(debug_assertions)`. The query caches get instantiated with these erased types, leaving the code in `rustc_query_system` unaware of them. `rustc_query_system` is changed to use instances of `QueryConfig` so that `rustc_query_impl` can pass in `DynamicConfig`, which holds a pointer to a virtual table.
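For illustration, a minimal sketch of the erasure pattern with simplified names; this is not the actual rustc implementation:
```rust
use std::mem::{size_of, transmute_copy};

// Each value type names its own fixed-size byte representation through an
// associated type, so no unstable const generics are needed.
trait EraseType: Copy {
    type Erased: Copy;
}

impl EraseType for u32 {
    type Erased = [u8; 4];
}

fn erase<T: EraseType>(value: T) -> T::Erased {
    // Mirrors the `cfg(debug_assertions)` sanity check mentioned above.
    debug_assert_eq!(size_of::<T>(), size_of::<T::Erased>());
    unsafe { transmute_copy(&value) }
}

// Safety: callers must pass bytes produced by `erase::<T>`.
unsafe fn restore<T: EraseType>(erased: &T::Erased) -> T {
    transmute_copy(erased)
}

fn main() {
    let erased = erase(42u32);
    let value: u32 = unsafe { restore(&erased) };
    assert_eq!(value, 42);
}
```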
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check</td><td align="right">1.7055s</td><td align="right">1.6949s</td><td align="right"> -0.62%</td></tr><tr><td>🟣 <b>hyper</b>:check</td><td align="right">0.2547s</td><td align="right">0.2528s</td><td align="right"> -0.73%</td></tr><tr><td>🟣 <b>regex</b>:check</td><td align="right">0.9590s</td><td align="right">0.9553s</td><td align="right"> -0.39%</td></tr><tr><td>🟣 <b>syn</b>:check</td><td align="right">1.5457s</td><td align="right">1.5440s</td><td align="right"> -0.11%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check</td><td align="right">5.9092s</td><td align="right">5.9009s</td><td align="right"> -0.14%</td></tr><tr><td>Total</td><td align="right">10.3741s</td><td align="right">10.3479s</td><td align="right"> -0.25%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9960s</td><td align="right"> -0.40%</td></tr></table>
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check:initial</td><td align="right">2.0605s</td><td align="right">2.0575s</td><td align="right"> -0.15%</td></tr><tr><td>🟣 <b>hyper</b>:check:initial</td><td align="right">0.3218s</td><td align="right">0.3216s</td><td align="right"> -0.07%</td></tr><tr><td>🟣 <b>regex</b>:check:initial</td><td align="right">1.1848s</td><td align="right">1.1839s</td><td align="right"> -0.07%</td></tr><tr><td>🟣 <b>syn</b>:check:initial</td><td align="right">1.9409s</td><td align="right">1.9376s</td><td align="right"> -0.17%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check:initial</td><td align="right">7.3105s</td><td align="right">7.2928s</td><td align="right"> -0.24%</td></tr><tr><td>Total</td><td align="right">12.8185s</td><td align="right">12.7935s</td><td align="right"> -0.20%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">0.9986s</td><td align="right"> -0.14%</td></tr></table>
<table><tr><td rowspan="2">Benchmark</td><td colspan="1"><b>Before</b></th><td colspan="2"><b>After</b></th></tr><tr><td align="right">Time</td><td align="right">Time</td><td align="right">%</th></tr><tr><td>🟣 <b>clap</b>:check:unchanged</td><td align="right">0.4606s</td><td align="right">0.4617s</td><td align="right"> 0.24%</td></tr><tr><td>🟣 <b>hyper</b>:check:unchanged</td><td align="right">0.1335s</td><td align="right">0.1336s</td><td align="right"> 0.08%</td></tr><tr><td>🟣 <b>regex</b>:check:unchanged</td><td align="right">0.3324s</td><td align="right">0.3346s</td><td align="right"> 0.65%</td></tr><tr><td>🟣 <b>syn</b>:check:unchanged</td><td align="right">0.6268s</td><td align="right">0.6307s</td><td align="right"> 0.64%</td></tr><tr><td>🟣 <b>syntex_syntax</b>:check:unchanged</td><td align="right">1.8248s</td><td align="right">1.8508s</td><td align="right">💔 1.43%</td></tr><tr><td>Total</td><td align="right">3.3779s</td><td align="right">3.4113s</td><td align="right"> 0.99%</td></tr><tr><td>Summary</td><td align="right">1.0000s</td><td align="right">1.0061s</td><td align="right"> 0.61%</td></tr></table>
It's based on https://github.com/rust-lang/rust/pull/108167.
r? `@cjgillot`
allow mutating function args through `&raw const`
Fixes https://github.com/rust-lang/rust/issues/111502 by "turning off the sketchy optimization while we figure out if this is ok", like `@JakobDegen` said.
The first commit in this PR removes some suspicious looking logic from the same method, but should have no functional changes, since it doesn't modify the `context` outside of the method. Best reviewed commit by commit.
r? opsem
[rustdoc] Convert more GUI tests colors to their original format
Follow-up of https://github.com/rust-lang/rust/pull/111459.
The `browser-ui-test` version update brings improvements to color handling (in particular, alpha support for the hex format).
r? `@notriddle`
Allow MIR debuginfo to point to a variable's address
MIR optimizations currently do not operate on borrowed locals.
When enabling #106285, many borrows will be left as-is because they are used in debuginfo. This pass allows replacing this pattern directly in MIR debuginfo:
```rust
a => _1
_1 = &raw? mut? _2
```
becomes
```rust
a => &_2
// No statement to borrow _2.
```
This pass is implemented as a drive-by in the `ReferencePropagation` MIR pass.
This transformation allows the MIR opts that run afterwards to treat `_2` as an unborrowed local and optimize it as such, even in builds with debuginfo.
In codegen, when encountering `a => &..&_2`, we create a list of allocas:
```llvm
store ptr %_2.dbg.spill, ptr %a.ref0.dbg.spill
store ptr %a.ref0.dbg.spill, ptr %a.ref1.dbg.spill
...
call void @llvm.dbg.declare(metadata ptr %a.ref{n}.dbg.spill, /* ... */)
```
Caveat: this transformation loses the exact type; we do not differentiate whether `a` is an immutable reference, a mutable reference, or a raw pointer. Everything is declared as `*mut` to codegen. I'm not convinced this is a blocker.
Generate shell completions for bootstrap with Clap
Now that #110693 has been merged, we can look at generating shell completions for x.py with `clap_complete`. Leaving this as a draft for now, as I'm not sure of the best way to integrate the completion generator. Additionally, the generated completions for zsh are completely broken (this will need to be resolved upstream; it doesn't seem to handle subcommands + global arguments well).
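For reference, a minimal sketch of wiring up `clap_complete` (the `Flags` struct and flag name are illustrative, not bootstrap's actual types; assumes `clap` with the `derive` feature and `clap_complete` as dependencies):
```rust
use clap::{CommandFactory, Parser};
use clap_complete::{generate, Shell};

#[derive(Parser)]
struct Flags {
    /// Print completions for the given shell and exit.
    #[arg(long, value_enum)]
    generate_completions: Option<Shell>,
}

fn main() {
    let flags = Flags::parse();
    if let Some(shell) = flags.generate_completions {
        let mut cmd = Flags::command();
        let name = cmd.get_name().to_string();
        // Write the completion script for the requested shell to stdout.
        generate(shell, &mut cmd, name, &mut std::io::stdout());
    }
}
```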
I don't have Fish installed and would be interested to know how well completions work there.
Alternative to #107827
Add a tidy check to find unexpected files in UI tests, and clean up the results
While looking at UI tests, I noticed several weird files that were not being tested, some from even pre-1.0. I added a tidy check that fails if the UI test directories contain any files that are neither known to compiletest nor used by tests (via a manual list).
Unfortunately the root entry limit had to be raised by 1 to accommodate the stderr file for one of the tests.
r? `@fee1-dead`
Align unsized locals
Allocate extra space for unsized locals and manually align the storage, since `alloca` doesn't support dynamic alignment.
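A small sketch of the manual-alignment arithmetic involved (illustrative only, not the actual codegen change):
```rust
// Round `addr` up to the next multiple of `align` (a power of two), which is
// where the over-aligned local can be placed inside the over-sized allocation.
fn align_up(addr: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (addr + align - 1) & !(align - 1)
}

fn main() {
    // If the storage starts at 0x1003 and the local needs 8-byte alignment,
    // the local is placed at 0x1008.
    assert_eq!(align_up(0x1003, 8), 0x1008);
}
```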
Fixes #71416.
Fixes #71695.
Introduce `DynSend` and `DynSync` auto trait for parallel compiler
part of parallel-rustc #101566
This PR introduces the `DynSend` / `DynSync` traits and the `FromDyn` / `IntoDyn` structures in `rustc_data_structures::marker`. `FromDyn` can dynamically check data structures for thread safety when switching to parallel environments (such as calling `par_for_each_in`). This happens only when `-Z threads > 1`, so it doesn't affect compilation performance in single-threaded mode.
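A rough nightly-only sketch of the marker pattern (assumed shape; the real definitions in `rustc_data_structures::marker` differ in detail):
```rust
#![feature(auto_traits)]

// Auto traits mirroring Send/Sync; in the real implementation, specific types
// can opt out when they are only thread-safe under the parallel compiler.
pub auto trait DynSend {}
pub auto trait DynSync {}

// Wrapper asserting that a value may cross threads in the parallel compiler.
pub struct FromDyn<T>(T);

impl<T: DynSend> FromDyn<T> {
    pub fn from(value: T) -> Self {
        // The real implementation can add a dynamic check here, gated on
        // `-Z threads > 1`.
        FromDyn(value)
    }

    pub fn into_inner(self) -> T {
        self.0
    }
}

// Safety: the `DynSend` bound is what justifies sending the wrapped value.
unsafe impl<T: DynSend> Send for FromDyn<T> {}

fn main() {}
```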
r? `@cjgillot`
Apply simulate-remapped-rust-src-base even if remap-debuginfo is set in config.toml
This is really a mess. Here is the situation before this change:
- UI tests depend on not having `rust-src` available. In particular, <3f374128ee/tests/ui/tuple/wrong_argument_ice.stderr (L7-L8)> depends on the `note` being a single line and not showing the source code.
- When `download-rustc` is disabled, we pass `-Zsimulate-remapped-rust-src-base=/rustc/FAKE_PREFIX` `-Ztranslate-remapped-path-to-local-path=no`, which changes the diagnostic to something like ` --> /rustc/FAKE_PREFIX/library/alloc/src/collections/vec_deque/mod.rs:1657:12`
- When `download-rustc` is enabled, we still pass those flags, but they no longer have an effect. Instead rustc emits diagnostic paths like this: ` --> /rustc/39c6804b92aa202369e402525cee329556bc1db0/library/alloc/src/collections/vec_deque/mod.rs:1657:12`. Notice how there's a real commit and not `FAKE_PREFIX`. This happens because we set `CFG_VIRTUAL_RUST_SOURCE_BASE_DIR` during bootstrapping for CI artifacts, and rustc previously didn't allow for `simulate-remapped` to affect paths that had already been remapped.
- Pietro noticed this and decided the right thing was to normalize `/rustc/<commit>` to `$SRC_DIR` in compiletest: 470423c3d2
- After my change to `x test core`, which rebuilds stage 2 std from source so `build/stage2-std` and `build/stage2` use the same `.rlib` metadata, the compiler suddenly notices it has sources for `std` available and prints those in the diagnostic, causing the test to fail.
This changes `simulate-remapped-rust-src-base` to support remapping paths that have already been remapped, unblocking download-rustc.
Unfortunately, although this fixes the specific problem for download-rustc, it doesn't seem to affect all the compiler's diagnostics. In particular, various `mir-opt` tests are failing to respect `simulate-remapped-path-prefix` (I looked into fixing this but it seems non-trivial). As a result, we can't remove the normalization in compiletest that maps `/rustc/<commit>` to `$SRC_DIR`, so this change is currently untested anywhere except locally.
You can test this locally yourself by setting `rust.remap-debuginfo = true`, running any UI test with `ERROR` annotations, then rerunning the test manually with a dev toolchain to verify it prints `/rustc/FAKE_PREFIX`, not `/rustc/1.71.0`.
Helps with https://github.com/rust-lang/rust/issues/110352.
Rollup of 6 pull requests
Successful merges:
- #110454 (Require impl Trait in associated types to appear in method signatures)
- #111096 (Add support for `cfg(overflow_checks)`)
- #111451 (Note user-facing types of coercion failure)
- #111469 (Fix data race in llvm source code coverage)
- #111494 (Encode `VariantIdx` so we can decode ADT variants in the right order)
- #111499 (asm: loongarch64: Drop efiapi)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Encode `VariantIdx` so we can decode ADT variants in the right order
As far as I can tell, we don't guarantee anything about the ordering of `DefId`s and module children...
The code that motivated this PR (#111483) looks something like:
```rust
#[derive(Protocol)]
pub enum Data {
    #[protocol(discriminator(0x00))]
    Disconnect(Disconnect),
    EncryptionRequest,
    /* more variants... */
}
```
The specific macro ([`protocol`](https://github.com/dylanmckay/protocol)) doesn't really matter, but as far as I can tell (from calls to `build_reduced_graph`), the presence of that `#[protocol(..)]` helper attribute causes the def-id of the `Disconnect` enum variant to be collected *after* its siblings, and it shows up after the other variants in `module_children`.
When we decode the variants for `Data` in a child crate (an example test, in this case), this means that the `Disconnect` variant is moved to the end of the variants list, and all of the other variants now have incorrect relative discriminant data, causing the ICE.
This PR fixes this by sorting manually by variant index after they are decoded. I guess there are alternative ways of fixing this, such as not reusing `module_children_non_reexports` to encode the order-sensitive ADT variants, or doing some sorting in `rustc_resolve`... but none of those seemed particularly satisfying either.
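Minimal illustration of the idea (made-up stub types, not the actual rustc code): after decoding, sort the variants by their encoded `VariantIdx` so discriminant computation sees them in definition order:
```rust
struct VariantStub {
    idx: u32, // stand-in for VariantIdx
    name: &'static str,
}

fn main() {
    // Decoded in module_children order, where `Disconnect` ended up last.
    let mut decoded = vec![
        VariantStub { idx: 1, name: "EncryptionRequest" },
        VariantStub { idx: 0, name: "Disconnect" },
    ];
    decoded.sort_by_key(|v| v.idx);
    assert_eq!(decoded[0].name, "Disconnect");
}
```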
~I really struggled to create a reproduction here -- it required at least 3 crates, one of which is a proc macro, and then some code to actually compute discriminants in the child crate... Needless to say, I failed to repro this in a test, but I can confirm that it fixes the regression in #111483.~ Test exists now.
r? `@petrochenkov` but feel free to reassign. ~Again, sorry for no test, but I hope the explanation at least suggests why a fix like this is likely necessary.~ Feedback is welcome.
Fix data race in llvm source code coverage
Fixes #91092.
Before this patch, increment of counters for code coverage looks like this:
```
movq .L__profc__RNvCsd6wgJFC5r19_3lib6bugaga+8(%rip), %rax
addq $1, %rax
movq %rax, .L__profc__RNvCsd6wgJFC5r19_3lib6bugaga+8(%rip)
```
after this patch:
```
lock incq .L__profc__RNvCs3JgIB2SjHh2_3lib6bugaga+8(%rip)
```