Rollup of 11 pull requests
Successful merges:
- #128523 (Add release notes for 1.81.0)
- #129605 (Add missing `needs-llvm-components` directives for run-make tests that need target-specific codegen)
- #129650 (Clean up `library/profiler_builtins/build.rs`)
- #129651 (skip stage 0 target check if `BOOTSTRAP_SKIP_TARGET_SANITY` is set)
- #129684 (Enable Miri to pass pointers through FFI)
- #129762 (Update the `wasm-component-ld` binary dependency)
- #129782 (couple more crash tests)
- #129816 (tidy: say which feature gate has a stability issue mismatch)
- #129818 (make the const-unstable-in-stable error more clear)
- #129824 (Fix code examples buttons not appearing on click on mobile)
- #129826 (library: Fix typo in `core::mem`)
r? `@ghost`
`@rustbot` modify labels: rollup
make the const-unstable-in-stable error more clear
The default should be to add `rustc_const_unstable`, not `rustc_allow_const_fn_unstable`.
Also, I discovered our check for missing const stability attributes on stable functions -- but strangely, that check only kicks in for "reachable" functions. `check_missing_stability` checks for reachability since all reachable functions must have a stability attribute, but I would say that if a function has `#[stable]`, it should also have const-stability attributes regardless of reachability.
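For illustration, here is a hedged sketch of the intended default (feature names made up; these attributes only work in `staged_api` crates like the standard library):
```rust
#![feature(staged_api)]
#![allow(internal_features)]
#![stable(feature = "example_crate", since = "1.0.0")]

// A function that is stable but not yet const-stable should carry
// `#[rustc_const_unstable]` rather than reaching for
// `#[rustc_allow_const_fn_unstable]`.
#[stable(feature = "example_fn", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_example_fn", issue = "none")]
pub const fn example() -> u32 {
    42
}
```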
Enable Miri to pass pointers through FFI
Following https://github.com/rust-lang/rust/pull/126787, the purpose of this PR is to now enable Miri to execute native calls that make use of pointers.
> <details>
>
> <summary> Simple example </summary>
>
> ```rust
> extern "C" {
>     fn ptr_printer(ptr: *mut i32);
> }
>
> fn main() {
>     let ptr = &mut 42 as *mut i32;
>     unsafe {
>         ptr_printer(ptr);
>     }
> }
> ```
> ```c
> #include <stdio.h>
>
> void ptr_printer(int *ptr) {
>     printf("printing pointer dereference from C: %d\n", *ptr);
> }
> ```
> should now show `printing pointer dereference from C: 42`.
>
> </details>
Note that this PR does not yet implement any logic involved in updating Miri's "analysis" state (byte initialization, provenance) upon such a native call.
r? ``@RalfJung``
const fn stability checking: also check declared language features
Fixes https://github.com/rust-lang/rust/issues/129656
`@oli-obk` I assume it is just an oversight that this didn't use `features().declared()`? Or is there a deep reason that this must only check `declared_lib_features`?
interpret: do not make const-eval query result depend on tcx.sess
The check against calling functions with missing target features uses `tcx.sess` to determine which target features are available. However, this can differ between different crates in a crate graph, so the same const-eval query can come to different conclusions about whether a constant evaluates successfully or not -- which is bad, we should consistently get the same result everywhere.
const-eval: do not make UbChecks behavior depend on current crate's flags
Fixes https://github.com/rust-lang/rust/issues/129552
Let's see if we can get away with just always enabling these checks.
Stop storing a special inner body for the coroutine by-move body for async closures
...and instead, just synthesize an item which is treated mostly normally by the MIR pipeline.
This PR does a few things:
* We synthesize a new `DefId` for the by-move body of a closure, which has its `mir_built` fed with the output of the `ByMoveBody` MIR transformation, and some other relevant queries.
* This has the `DefKind::ByMoveBody`, which we use to distinguish it from "real" bodies (that come from HIR) which need to be borrowck'd. Introduce `TyCtxt::is_synthetic_mir` to skip over `mir_borrowck`, which is called by `mir_promoted`; borrowck isn't really possible to make work ATM since it heavily relies on being called on a body generated from HIR, and it is redundant by construction of the by-move body.
* Remove the special `PassManager` hacks for handling the inner `by_move_body` stored within the coroutine's MIR body. Instead, this body is fed like a regular MIR body, so it goes through all of the `tcx.*_mir` stages normally (build -> promoted -> ...etc... -> optimized) ✨.
* Remove the `InstanceKind::ByMoveBody` shim, since now we have a "regular" def id, we can just use `InstanceKind::Item`. This also allows us to remove the corresponding hacks from codegen, such as in `fn_sig_for_fn_abi` ✨.
Notable remarks:
* ~~I know it's kind of weird to be using `DefKind::Closure` here, since it's not a distinct closure but just a new MIR body. I don't believe it really matters, but I could also use a different `DefKind`... maybe one that we could use for synthetic MIR bodies in general?~~ edit: We're doing this now.
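For context, a hedged sketch of the user-facing feature involved (nightly-only `async_closure` syntax; whether a call goes through the by-move body depends on how the closure is used):
```rust
#![feature(async_closure)]

// The async closure's captures can be consumed by move when it is called;
// the MIR body that does so is the "by-move body" discussed above.
async fn demo() {
    let s = String::from("hi");
    let c = async move || println!("{s}");
    c().await;
}

fn main() {
    let _future = demo(); // not awaited; this example only needs to compile
}
```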
make it possible to enable const_precise_live_drops per-function
This makes const_precise_live_drops work with rustc_allow_const_fn_unstable so that we can stabilize individual functions that rely on const_precise_live_drops.
The goal is that we can use that to stabilize some of https://github.com/rust-lang/rust/issues/67441 without having to stabilize const_precise_live_drops.
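A hedged sketch of what this enables (feature names made up; the attributes only work in `staged_api` crates like the standard library):
```rust
#![feature(staged_api, const_precise_live_drops)]
#![allow(internal_features)]
#![stable(feature = "example_crate", since = "1.0.0")]

// Every arm moves out of `res`, which the precise live-drop analysis can
// prove; the allow attribute lets this const-stable function rely on that
// analysis without stabilizing const_precise_live_drops globally.
#[stable(feature = "example_fn", since = "1.0.0")]
#[rustc_const_stable(feature = "const_example_fn", since = "1.0.0")]
#[rustc_allow_const_fn_unstable(const_precise_live_drops)]
pub const fn into_ok<T>(res: Result<T, core::convert::Infallible>) -> T {
    match res {
        Ok(v) => v,
        Err(e) => match e {},
    }
}
```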
miri weak memory emulation: put previous value into initial store buffer
Fixes https://github.com/rust-lang/miri/issues/2164 by doing a read before each atomic write so that we can initialize the store buffer. The read suppresses memory access hooks and UB exceptions, to avoid otherwise influencing the program behavior. If the read fails, we store that as `None` in the store buffer, so that when an atomic read races with the first atomic write to some memory and previously the memory was uninitialized, we can report UB due to reading uninit memory.
``@cbeuw`` this changes the way we initialize the store buffers a bit. Not sure if you still remember all this code, but if you could have a look to make sure this still makes sense, that would be great. :)
r? ``@saethlin``
const checking: properly compute the set of transient locals
For const-checking the MIR of a const/static initializer, we have to know the set of "transient" locals. The reason for this is that creating a mutable (or interior mutable) reference to a transient local is "safe" in the sense that this reference cannot possibly end up in the final value of the const -- even if it is turned into a raw pointer and stored in a union, we will see that pointer during interning and reliably reject it as dangling.
So far, we determined the set of transient locals as "locals that have a `StorageDead` somewhere". But that's not quite right -- if we had MIR like
```rust
StorageLive(_5);
StorageDead(_5);
StorageLive(_5);
// no further storage annotations for _5
```
Then we'd consider `_5` to be "transient", but it is not actually transient.
We do not currently generate such MIR, but I feel uneasy relying on subtle invariants like this. So this PR implements a proper analysis for computing the set of "transient" locals: a local is "transient" if it is guaranteed dead at all `Return` terminators.
Cc `@cjgillot`
make writes_through_immutable_pointer a hard error
This turns the lint added in https://github.com/rust-lang/rust/pull/118324 into a hard error. This has been reported in cargo's future-compat reports since Rust 1.76 (released in February). Given that const_mut_refs is still unstable, it should be impossible to even hit this error on stable: we did accidentally stabilize some functions that can cause this error, but that got reverted in https://github.com/rust-lang/rust/pull/117905. Still, let's do a crater run just to be sure.
Given that this should only affect unstable code, I don't think it needs an FCP, but let's Cc ``@rust-lang/lang`` anyway -- any objection to making this unambiguous UB into a hard error during const-eval? This can be viewed as part of https://github.com/rust-lang/rust/pull/129195 which is already nominated for discussion.
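For reference, a hedged sketch of the kind of (nightly-only, `const_mut_refs`) code this now rejects outright:
```rust
#![feature(const_mut_refs)]

const C: i32 = {
    let x = 0;
    // The pointer is derived from a shared (immutable) reference, so writing
    // through it during const-eval is the UB that is now a hard error.
    let p = &x as *const i32 as *mut i32;
    unsafe { *p = 1 };
    x
};

fn main() {
    println!("{C}");
}
```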
Use `bool` in favor of `Option<()>` for diagnostics
We originally only supported `Option<()>` for optional notes/labels, but we now support `bool`. Let's use that, since it usually leads to more readable code.
I'm not removing the support from the derive macro, though I guess we could error on it... 🤔
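A hedged sketch of the pattern inside rustc (slug and field names made up; this only compiles as part of the compiler):
```rust
use rustc_macros::Diagnostic;
use rustc_span::Span;

#[derive(Diagnostic)]
#[diag(passes_example_error)]
pub(crate) struct ExampleError {
    #[primary_span]
    pub span: Span,
    // Previously this would have been `Option<()>`, set to `Some(())` to emit
    // the note; a plain `bool` reads better at the construction site.
    #[note]
    pub show_note: bool,
}
```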
Shrink `TyKind::FnPtr`.
By splitting the `FnSig` within `TyKind::FnPtr` into `FnSigTys` and `FnHeader`, which can be packed more efficiently. This reduces the size of the hot `TyKind` type from 32 bytes to 24 bytes on 64-bit platforms. This reduces peak memory usage by a few percent on some benchmarks. It also reduces cache misses and page faults similarly, though this doesn't translate to clear cycles or wall-time improvements on CI.
r? `@compiler-errors`
miri: make vtable addresses not globally unique
Miri currently gives vtables a unique global address. That doesn't actually match reality, though. So this PR enables Miri to generate different addresses for the same type-trait pair.
To avoid generating an unbounded number of `AllocId` (and consuming unbounded amounts of memory), we use the "salt" technique that we also already use for giving constants non-unique addresses: the cache is keyed on a "salt" value on top of the actually relevant key, and Miri picks a random salt (currently in the range `0..16`) each time it needs to choose an `AllocId` for one of these globals -- that means we'll get up to 16 different addresses for each vtable. The salt scheme is integrated into the global allocation deduplication logic in `tcx`, and also used for functions and string literals. (So this also fixes the problem that casting the same function to a fn ptr over and over will consume unbounded memory.)
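A hedged sketch of what this means for user code (run it under Miri): comparisons that include the vtable part of a wide pointer are not stable:
```rust
trait Trait {}
impl Trait for u8 {}

fn main() {
    let x = 0u8;
    let a: *const dyn Trait = &x;
    let b: *const dyn Trait = &x;
    // `==` on wide raw pointers compares the data address *and* the vtable
    // address; since the vtable parts may now differ under Miri, this can
    // print either true or false.
    println!("{}", a == b);
}
```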
r? `@saethlin`
Fixes https://github.com/rust-lang/miri/issues/3737
Normalize struct tail properly for `dyn` ptr-to-ptr casting in new solver
Realized that the new solver didn't handle ptr-to-ptr casting correctly.
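A hedged sketch of the kind of cast in question (a raw-pointer cast whose pointee's unsized tail is a trait object):
```rust
trait Trait {}

struct Wrapper<T: ?Sized>(T);

// Checking this cast requires normalizing the struct tail of the source
// pointee down to its `dyn` part to see that the metadata stays compatible.
fn cast(p: *const Wrapper<dyn Trait>) -> *const dyn Trait {
    p as *const dyn Trait
}

fn main() {
    let _ = cast;
}
```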
r? lcnr
Built on #128694
Pass the right `ParamEnv` to `might_permit_raw_init_strict`
Fixes #119620
`might_permit_raw_init_strict` currently passes an empty `ParamEnv` to the `InterpCx`, instead of the actual `ParamEnv` that was passed in to `check_validity_requirement` at callsite.
This leads to ICEs such as the linked issue, where for `UnsafeCell<*mut T>` we initially get the layout with the right `ParamEnv` (which succeeds because it can prove that `T: Sized` and therefore `UnsafeCell<*mut T>` has a known layout) but then do the rest with an empty `ParamEnv` where `T: Sized` is not known to hold, so getting the layout for `*mut T` later fails.
This runs into an assertion in other layout code where it's making the (valid) assumption that, when we already have a layout for a struct (`UnsafeCell<*mut T>`), getting the layout of one of its fields (`*mut T`) should also succeed, which wasn't the case here due to using the wrong `ParamEnv`.
So, this PR changes it to just use the same `ParamEnv` all the way throughout.
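A hedged sketch of the kind of generic code affected (not the exact repro from #119620), relevant when validity is checked with `-Zstrict-init-checks`:
```rust
use std::cell::UnsafeCell;
use std::mem;

// Checking whether the all-zero bit pattern is valid for `UnsafeCell<*mut T>`
// needs the ParamEnv that knows `T: Sized`; with an empty ParamEnv, computing
// the layout of the `*mut T` field fails and the compiler ICEs.
fn zeroed_cell<T>() -> UnsafeCell<*mut T> {
    unsafe { mem::zeroed() }
}

fn main() {
    let c = zeroed_cell::<u8>();
    assert!(unsafe { *c.get() }.is_null());
}
```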
MIR required_consts, mentioned_items: ensure we do not forget to fill these lists
Bodies initially get created with empty required_consts and mentioned_items, but at some point those should be filled. Make sure we notice when that is forgotten.
raw_eq: using it on bytes with provenance is not UB (outside const-eval)
The current behavior of raw_eq violates provenance monotonicity. See https://github.com/rust-lang/rust/pull/124921 for an explanation of provenance monotonicity. It is violated in raw_eq because comparing bytes without provenance is well-defined, but adding provenance makes the operation UB.
So remove the no-provenance requirement from raw_eq. However, the requirement stays in-place for compile-time invocations of raw_eq, that indeed cannot deal with provenance.
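A hedged sketch of what is now allowed at runtime (nightly-only `core_intrinsics`):
```rust
#![feature(core_intrinsics)]
#![allow(internal_features)]

use std::intrinsics::raw_eq;

fn main() {
    let x = 42u8;
    let a: *const u8 = &x;
    let b: *const u8 = &x;
    // The compared bytes are pointers and thus carry provenance; this is no
    // longer UB at runtime, but still rejected in const-eval.
    let equal = unsafe { raw_eq(&a, &b) };
    assert!(equal);
}
```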
Cc `@rust-lang/opsem`
miri: fix offset_from behavior on wildcard pointers
offset_from wouldn't behave correctly when the "end" pointer was a wildcard pointer (result of an int2ptr cast) just at the end of the allocation. Fix that by expressing the "same allocation" check in terms of two `check_ptr_access_signed` instead of something specific to offset_from, which is both more canonical and works better with wildcard pointers.
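A hedged sketch of the scenario (run it under Miri; the int2ptr cast gives `end` wildcard provenance):
```rust
fn main() {
    let buf = [0u8; 4];
    let begin = buf.as_ptr();
    // Recreate the one-past-the-end pointer from an integer, so it only has
    // wildcard provenance.
    let end = (begin as usize + buf.len()) as *const u8;
    let len = unsafe { end.offset_from(begin) };
    assert_eq!(len, 4);
}
```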
The second commit just improves diagnostics: I wanted the "pointer is dangling (has no provenance)" message to say how many bytes of memory it expected to see (since if it were 0 bytes, this would actually be legal, so it's good to tell the user that it's not 0 bytes). And then I was annoyed that the error looks so different when you deref a dangling pointer vs an out-of-bounds pointer, so I made them more similar.
Fixes https://github.com/rust-lang/miri/issues/3767
Use `#[rustfmt::skip]` on some `use` groups to prevent reordering.
`use` declarations will be reformatted in #125443. Very rarely, there is a desire to force a group of `use` declarations together in a way that auto-formatting will break up. E.g. when you want a single comment to apply to a group. #126776 dealt with all of these in the codebase, ensuring that no comments intended for multiple `use` declarations would end up in the wrong place. But some people were unhappy with it.
This commit uses `#[rustfmt::skip]` to create these custom `use` groups in an idiomatic way for a few of the cases changed in #126776. This works because rustfmt treats any `use` item annotated with `#[rustfmt::skip]` as a barrier and won't reorder other `use` items around it.
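A hedged sketch of the resulting pattern (module paths made up):
```rust
use std::collections::HashMap;
use std::fmt;

// These two imports belong together and the comment applies to both; the
// `#[rustfmt::skip]` acts as a barrier, so rustfmt will not reorder other
// `use` items across it.
#[rustfmt::skip]
use crate::front::lexer::Lexer;
use crate::front::parser::Parser;
```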
r? `@cuviper`
interpret: add sanity check in dyn upcast to double-check what codegen does
For dyn receiver calls, we already have two codepaths: look up the function to call by indexing into the vtable, or alternatively resolve the DefId given the dynamic type of the receiver. With debug assertions enabled, the interpreter does both and compares the results. (Without debug assertions we always use the vtable as it is simpler.)
This PR does the same for dyn trait upcasts. However, for casts, *not* using the vtable is the easier thing to do, so now the vtable path is the debug-assertion-only path. In particular, there are cases where the vtable does not contain a pointer for upcasts but instead reuses the old pointer: when the supertrait vtable is a prefix of the larger vtable. We don't want to expose this optimization and detect UB if people do a transmute assuming this optimization, so we cannot in general use the vtable indexing path.
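A hedged sketch of the operation in question (nightly-only `trait_upcasting` at the time):
```rust
#![feature(trait_upcasting)]

trait Super {}
trait Sub: Super {}

impl Super for u8 {}
impl Sub for u8 {}

fn main() {
    let x: &dyn Sub = &0u8;
    // The unsizing coercion below performs the dyn upcast; the interpreter now
    // resolves it from the dynamic type, with a debug-assertion-only check
    // that the vtable lookup agrees.
    let _y: &dyn Super = x;
}
```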
r? ``@oli-obk``
Clean up more comments near use declarations
#125443 will reformat all use declarations in the repository. There are a few edge cases involving comments on use declarations that require care. This PR fixes them up so #125443 can go ahead with a simple `x fmt --all`. A follow-up to #126717.
r? ``@cuviper``
There are some comments describing multiple subsequent `use` items. When
the big `use` reformatting happens some of these `use` items will be
reordered, possibly moving them away from the comment. With this
additional level of formatting it's not really feasible to have comments
of this type. This commit removes them in various ways:
- merging separate `use` items when appropriate;
- inserting blank lines between the comment and the first `use` item;
- outright deletion (for comments that are relatively low-value);
- adding a separate "top-level" comment.
We also entirely skip formatting for four library files that contain
nothing but `pub use` re-exports, where reordering would be painful.
offset_from: always allow pointers to point to the same address
This PR implements the last remaining part of the t-opsem consensus in https://github.com/rust-lang/unsafe-code-guidelines/issues/472: `offset_from` is always permitted when both pointers have the same address, no matter how they are computed. This is required to achieve *provenance monotonicity*.
Tracking issue: https://github.com/rust-lang/rust/issues/117945
### What is provenance monotonicity and why does it matter?
Provenance monotonicity is the property that adding arbitrary provenance to any no-provenance pointer must never make the program UB. More specifically, in the program state, data in memory is stored as a sequence of [abstract bytes](https://rust-lang.github.io/unsafe-code-guidelines/glossary.html#abstract-byte), where each byte can optionally carry provenance. When a pointer is stored in memory, all of the bytes it is stored in carry that provenance. Provenance monotonicity means: if we take some byte that does not have provenance, and give it some arbitrary provenance, then that cannot change program behavior or introduce UB into a UB-free program.
We care about provenance monotonicity because we want to allow the optimizer to remove provenance-stripping operations. Removing a provenance-stripping operation effectively means the program after the optimization has provenance where the program before the optimization did not -- since the provenance removal does not happen in the optimized program. IOW, the compiler transformation added provenance to previously provenance-free bytes. This is exactly what provenance monotonicity lets us do.
We care about removing provenance-stripping operations because `*ptr = *ptr` is, in general, (likely) a provenance-stripping operation. Specifically, consider `ptr: *mut usize` (or any integer type), and imagine the data at `*ptr` is actually a pointer (i.e., we are type-punning between pointers and integers). Then `*ptr` on the right-hand side evaluates to the data in memory *without* any provenance (because [integers do not have provenance](https://rust-lang.github.io/rfcs/3559-rust-has-provenance.html#integers-do-not-have-provenance)). Storing that back to `*ptr` means that the abstract bytes `ptr` points to are the same as before, except their provenance is now gone. This makes `*ptr = *ptr` a provenance-stripping operation (Here we assume `*ptr` is fully initialized. If it is not initialized, evaluating `*ptr` to a value is UB, so removing `*ptr = *ptr` is trivially correct.)
### What does `offset_from` have to do with provenance monotonicity?
With `ptr = without_provenance(N)`, `ptr.offset_from(ptr)` is always well-defined and returns 0. By provenance monotonicity, I can now add provenance to the two arguments of `offset_from` and it must still be well-defined. Crucially, I can add *different* provenance to the two arguments, and it must still be well-defined. In other words, this must always be allowed: `ptr1.with_addr(N).offset_from(ptr2.with_addr(N))` (and it returns 0). But the current spec for `offset_from` says that the two pointers must either both be derived from an integer or both be derived from the same allocation, which is not in general true for arbitrary `ptr1`, `ptr2`.
To obtain provenance monotonicity, this PR hence changes the spec for offset_from to say that if both pointers have the same address, the function is always well-defined.
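A hedged sketch of what is now allowed (using the strict-provenance APIs, which were still feature-gated at the time; drop the gate where they are stable):
```rust
#![feature(strict_provenance)]

fn main() {
    let a = [0u8; 8];
    let b = [0u8; 8];
    let p1 = a.as_ptr();
    // `p2` has the provenance of `b` but the address of `a`: same address as
    // `p1`, different provenance. Under the new spec this is well-defined and
    // returns 0; previously the two pointers had to come from the same
    // allocation.
    let p2 = b.as_ptr().with_addr(p1.addr());
    assert_eq!(unsafe { p2.offset_from(p1) }, 0);
}
```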
### What further consequences does this have?
It means the compiler can no longer transform `end2 = begin.offset(end.offset_from(begin))` into `end2 = end`. However, it can still be transformed into `end2 = begin.with_addr(end.addr())`, which later parts of the backend (when provenance has been erased) can trivially turn into `end2 = end`.
The only alternative I am aware of is a fundamentally different handling of zero-sized accesses, where a "no provenance" pointer is not allowed to do zero-sized accesses and instead we have a special provenance that indicates "may be used for zero-sized accesses (and nothing else)". `offset` and `offset_from` would then always be UB on a "no provenance" pointer, and permit zero-sized offsets on a "zero-sized provenance" pointer. This achieves provenance monotonicity. That is, however, a breaking change as it contradicts what we landed in https://github.com/rust-lang/rust/pull/117329. It's also a whole bunch of extra UB, which doesn't seem worth it just to achieve that transformation.
### What about the backend?
LLVM currently doesn't have an intrinsic for pointer difference, so we anyway cast to integer and subtract there. That's never UB so it is compatible with any relaxation we may want to apply.
If LLVM gets a `ptrsub` in the future, then plausibly it will be consistent with `ptradd` and [consider two equal pointers to be inbounds](https://github.com/rust-lang/rust/pull/124921#issuecomment-2205795829).
Support tail calls in mir via `TerminatorKind::TailCall`
This is one of the interesting bits in tail call implementation — MIR support.
This adds a new `TerminatorKind` which represents a tail call:
```rust
TailCall {
    func: Operand<'tcx>,
    args: Vec<Operand<'tcx>>,
    fn_span: Span,
},
```
*Structurally* this is very similar to a normal `Call` but is missing a few fields:
- `destination` — tail calls don't write to destination, instead they pass caller's destination to the callee (such that eventual `return` will write to the caller of the function that used tail call)
- `target` — similarly to `destination` tail calls pass the caller's return address to the callee, so there is nothing to do
- `unwind` — I _think_ this is applicable too, although it's a bit confusing
- `call_source` — `become` forbids operators and is not created as a lowering of something else; tail calls always come from HIR (at least for now)
It might be helpful to read the interpreter implementation to understand what `TailCall` means exactly, although I've tried documenting it too.
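A hedged sketch of the surface syntax that lowers to this terminator (nightly-only `explicit_tail_calls`):
```rust
#![feature(explicit_tail_calls)]
#![allow(incomplete_features)]

fn gcd(a: u64, b: u64) -> u64 {
    if b == 0 {
        a
    } else {
        // Lowered to TerminatorKind::TailCall: the caller's return place and
        // return address are handed straight to the callee.
        become gcd(b, a % b)
    }
}

fn main() {
    assert_eq!(gcd(48, 18), 6);
}
```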
-----
There are a few `FIXME`-questions still left, ideally we'd be able to answer them during review ':)
-----
r? `@oli-obk`
cc `@scottmcm` `@DrMeepster` `@JakobDegen`
offset_from, offset: clearly separate safety requirements the user needs to prove from corollaries that automatically follow
By landing https://github.com/rust-lang/rust/pull/116675 we decided that objects larger than `isize::MAX` cannot exist in the address space of a Rust program, which lets us simplify these rules.
For `offset_from`, we can even state that the *absolute* distance fits into an `isize`, and therefore exclude `isize::MIN`. This PR also changes Miri to treat an `isize::MIN` difference like the other isize-overflowing cases.
Miri function identity hack: account for possible inlining
Having a non-lifetime generic is not the only reason a function can be duplicated. Another possibility is that the function may be eligible for cross-crate inlining. So also take into account the inlining attribute in this Miri hack for function pointer identity.
That said, `cross_crate_inlinable` will still sometimes return true even for `inline(never)` functions:
- when they are `DefKind::Ctor(..) | DefKind::Closure` -- I assume those cannot be `InlineAttr::Never` anyway?
- when `cross_crate_inline_threshold == InliningThreshold::Always`
so maybe this is still not quite the right criterion to use for function pointer identity.
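A hedged sketch of the user-visible consequence (the `#[inline]` function is just an illustration):
```rust
#[inline]
fn tiny() {}

fn main() {
    let a = tiny as fn();
    let b = tiny as fn();
    // Function pointer identity is not guaranteed: the function may be
    // duplicated across codegen units, and Miri now models that possibility,
    // so this may print either true or false.
    println!("{}", a == b);
}
```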
Re-implement a type-size based limit
r? lcnr
This PR reintroduces the type length limit added in #37789, which was accidentally made practically useless by the caching changes to `Ty::walk` in #72412, which caused the `walk` function to no longer walk over identical elements.
Hitting this length limit is not fatal unless we are in codegen -- so it shouldn't affect passes like the mir inliner which creates potentially very large types (which we observed, for example, when the new trait solver compiles `itertools` in `--release` mode).
This also increases the type length limit from `1048576 == 2 ** 20` to `2 ** 24`, which covers all of the code that can be reached with craterbot-check. Individual crates can increase the length limit further if desired.
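A hedged sketch of the opt-out for crates that need even larger types:
```rust
// Raise the limit above the new default of 2^24 for this crate only.
#![type_length_limit = "33554432"]

fn main() {}
```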
Perf regression is mild and I think we should accept it -- reinstating this limit is important for the new trait solver and to make sure we don't accidentally hit more type-size related regressions in the future.
Fixes #125460
Implement new effects desugaring
cc `@rust-lang/project-const-traits`. Will write down notes once I have finished.
* [x] See if we want `T: Tr` to desugar into `T: Tr, T::Effects: Compat<true>`
* [x] Fix ICEs on `type Assoc: ~const Tr` and `type Assoc<T: ~const Tr>`
* [ ] add types and traits to minicore test
* [ ] update rustc-dev-guide
Fixes #119717, fixes #123664, fixes #124857, fixes #126148
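For context, a hedged sketch of the surface syntax whose desugaring this reworks (nightly-only and still in flux, so the exact requirements may differ between nightlies):
```rust
#![feature(const_trait_impl, effects)]
#![allow(incomplete_features)]

#[const_trait]
trait Double {
    fn double(self) -> Self;
}

impl const Double for u32 {
    fn double(self) -> Self {
        self * 2
    }
}

// The `~const` bound is what gets desugared into an effect parameter behind
// the scenes.
const fn quadruple<T: ~const Double>(x: T) -> T {
    x.double().double()
}

const EIGHT: u32 = quadruple(2);

fn main() {
    assert_eq!(EIGHT, 8);
}
```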
Add a tidy rule to check that fluent messages and attrs don't end in `.`
This adds a new dependency on `fluent-parse` to `tidy` -- we already rely on it in rustc so I feel like it's not that big of a deal.
This PR also adjusts many error messages that currently end in `.`; not all of them, since I added an `ALLOWLIST` and excluded `rustc_codegen_*` ftl files and `.teach_note` attributes.
r? ``@estebank`` ``@oli-obk``
`PtrMetadata` doesn't care about `*const`/`*mut`/`&`/`&mut`, so GVN away those casts in its argument.
This includes updating MIR to allow calling PtrMetadata on references too, not just raw pointers. That means that `[T]::len` can be just `_0 = PtrMetadata(_1)`, for example.
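A hedged sketch of the connection (nightly-only `ptr_metadata`): the length of a slice is its pointer metadata, and the `&[u8]`-to-`*const [u8]` cast is exactly the kind of cast GVN can now look through.
```rust
#![feature(ptr_metadata)]

use std::ptr;

fn len_via_metadata(s: &[u8]) -> usize {
    // In MIR this boils down to a single PtrMetadata operation on `s`.
    ptr::metadata(s as *const [u8])
}

fn main() {
    assert_eq!(len_via_metadata(b"hello"), 5);
}
```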
safe transmute: support non-ZST, variantful, uninhabited enums
Previously, `Tree::from_enum`'s implementation branched into three disjoint cases:
1. enums that are uninhabited
2. enums for which all but one variant is uninhabited
3. enums with multiple variants
This branching (incorrectly) did not differentiate between variantful and variantless uninhabited enums. In both cases, we assumed (and asserted) that uninhabited enums are zero-sized types. This assumption is false for enums like:
`enum Uninhabited { A(!, u128) }`
...which, currently, has the same size as `u128`. This faulty assumption manifested as the ICE reported in #126460.
In this PR, we revise the first case of `Tree::from_enum` to consider only the narrow category of "enums that are uninhabited ZSTs". These enums, whose layouts are described with `Variants::Single { index }`, are special in that their layouts otherwise resemble the `!` type and cannot be descended into like typical enums. This first case captures uninhabited enums like:
`enum Uninhabited { A(!, !), B(!) }`
The second case is revised to consider the broader category of "enums that defer their layout to one of their variants"; i.e., enums whose layouts are described with `Variants::Single { index }` and that do have a variant at `index`. This second case captures uninhabited enums that are not ZSTs, like:
`enum Uninhabited { A(!, u128) }`
...which represent their variants with `Variants::Single`.
Finally, the third case is revised to cover the broader category of "enums with multiple variants", which captures uninhabited enums like:
`enum Uninhabited { A(u8, !), B(!, u32) }`
...which represent their variants with `Variants::Multiple`.
This PR also adds a comment requested by ````@RalfJung```` in his review of #126358 to `compiler/rustc_const_eval/src/interpret/discriminant.rs`.
Fixes#126460
r? ````@compiler-errors````
Rename `InstanceDef` -> `InstanceKind`
Renames `InstanceDef` to `InstanceKind`. The `Def` here is confusing, and makes it hard to distinguish `Instance` and `InstanceDef`. `InstanceKind` makes this more obvious, since it's really just describing what *kind* of instance we have.
Not sure if this is large enough to warrant a types team MCP -- it's only 53 files. I don't personally think it does, but happy to write one if anyone disagrees. cc ``@rust-lang/types``
r? types
Rollup of 9 pull requests
Successful merges:
- #125829 (rustc_span: Add conveniences for working with span formats)
- #126361 (Unify intrinsics body handling in StableMIR)
- #126417 (Add `f16` and `f128` inline ASM support for `x86` and `x86-64`)
- #126424 (Also sort `crt-static` in `--print target-features` output)
- #126428 (Polish `std::path::absolute` documentation.)
- #126429 (Add `f16` and `f128` const eval for binary and unary operations)
- #126448 (End support for Python 3.8 in tidy)
- #126488 (Use `std::path::absolute` in bootstrap)
- #126511 (.mailmap: Associate both my work and my private email with me)
r? `@ghost`
`@rustbot` modify labels: rollup
Add `f16` and `f128` const eval for binary and unary operations
Add const evaluation and Miri support for `f16` and `f128`, including unary and binary operations. Casts are not yet included.
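A hedged sketch of what can now be const-evaluated (nightly-only `f16`/`f128`):
```rust
#![feature(f16, f128)]

// Unary and binary operations are const-evaluable; casts are not yet.
const HALF: f16 = 1.0 / 2.0;
const NEG: f128 = -(2.5 * 4.0);

fn main() {
    assert!(HALF == 0.5);
    assert!(NEG == -10.0);
}
```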
Fixes https://github.com/rust-lang/rust/issues/124583
r? ``@RalfJung``