Add `ConstParamTy` trait
This is a bit of a sketch, but idk.
r? `@BoxyUwU`
Yet to be done:
- [x] ~~Figure out if it's okay to implement `StructuralEq` for primitives / possibly remove their special casing~~ (it should be okay, but maybe not in this PR...)
- [ ] Maybe refactor the code a little bit
- [x] Use a macro to make impls a bit nicer
Future work:
- [ ] Actually™ use the trait when checking if a `const` generic type is allowed
- [ ] _Really_ refactor the surrounding code
- [ ] Refactor `marker.rs` into multiple modules for each "theme" of markers
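For context, a minimal sketch of how the trait is meant to be used, assuming the unstable `adt_const_params` feature and the derive as currently shaped on nightly (details may differ from what this PR itself lands):
```rust
#![feature(adt_const_params)]
#![allow(incomplete_features)]

use std::marker::ConstParamTy;

// Only types implementing `ConstParamTy` may appear as the type of a
// const generic parameter.
#[derive(PartialEq, Eq, ConstParamTy)]
struct Config {
    strict: bool,
}

fn check<const C: Config>() -> bool {
    C.strict
}

fn main() {
    assert!(check::<{ Config { strict: true } }>());
}
```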
Refactor core::char::EscapeDefault and co. structures
Change the core::char::{EscapeUnicode, EscapeDefault, EscapeDebug}
structures from using a state machine to computing the escaped
sequence upfront, with iteration then simply walking through the
characters.
This is arguably simpler, since it's easier to reason about having a
buffer and a start..end range to iterate over than about a state
machine.
This also harmonises the implementation of the aforementioned
iterators with the core::ascii::EscapeDefault struct. This is done by
introducing a new helper EscapeIterInner struct, which holds the
buffer and offers simple methods for iterating over the range.
As a side effect, this probably optimises the Display implementation
for those types, since write_str is invoked once rather than
write_char being called repeatedly. On 64-bit platforms, it also
reduces the size of some of the structs:
| Struct                      | Before (bytes) | After (bytes) |
|-----------------------------|----------------|---------------|
| `core::char::EscapeUnicode` | 16             | 12            |
| `core::char::EscapeDefault` | 16             | 12            |
| `core::char::EscapeDebug`   | 16             | 16            |
My ulterior motive, and the reason I started looking into this, is
the addition of an as_str method to the iterators. With this change
that becomes trivial. It's also going to be trivial to implement
DoubleEndedIterator if that's ever desired.
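For illustration, here is a minimal sketch of the buffer-plus-range idea behind the new helper; the field names, sizes, and methods are assumptions for this example rather than the actual `core` internals:
```rust
use core::ops::Range;

// Sketch: hold the fully-computed escape sequence and iterate over an
// `alive` range, instead of running a per-character state machine.
struct EscapeIterInner<const N: usize> {
    data: [u8; N],    // escaped bytes, computed upfront
    alive: Range<u8>, // indices not yet yielded
}

impl<const N: usize> EscapeIterInner<N> {
    fn next(&mut self) -> Option<char> {
        let i = self.alive.next()?;
        Some(self.data[usize::from(i)] as char)
    }

    // `as_str` becomes trivial: the remaining characters are just a
    // contiguous (all-ASCII, hence valid UTF-8) slice of the buffer.
    fn as_str(&self) -> &str {
        let r = usize::from(self.alive.start)..usize::from(self.alive.end);
        core::str::from_utf8(&self.data[r]).unwrap()
    }
}

fn main() {
    let mut it = EscapeIterInner::<4> { data: *b"\\n\0\0", alive: 0..2 };
    assert_eq!(it.as_str(), "\\n");
    assert_eq!(it.next(), Some('\\'));
    assert_eq!(it.next(), Some('n'));
    assert_eq!(it.next(), None);
}
```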
Some of these relations were already mentioned in the text, but the docs did not mention that `Send` is implemented for `&mut T` where `T: Send`, nor did they list when `&T` is `Sync`.
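These relations can be spot-checked with bound assertions; a small sketch (the helper names are mine):
```rust
// Compile-time checks of the documented relations: `&T` is `Send` and
// `Sync` iff `T: Sync`, while `&mut T` is `Send` iff `T: Send` (and
// `Sync` iff `T: Sync`).
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn demo<T: Send + Sync>() {
    assert_send::<&T>();     // requires T: Sync
    assert_sync::<&T>();     // requires T: Sync
    assert_send::<&mut T>(); // requires T: Send
    assert_sync::<&mut T>(); // requires T: Sync
}

fn main() {
    demo::<u32>();
}
```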
Make `mem::replace` simpler in codegen
Since they'd mentioned more intrinsics for simplifying stuff recently,
r? `@WaffleLapkin`
This is a continuation of me looking at foundational stuff that ends up with more instructions than it really needs. Specifically, I noticed this one because `Range::next` isn't getting MIR-inlined, and one of the largest parts of it is a `replace::<usize>` that's a good dozen instructions instead of the two it could be.
So this means that `ptr::write` with a `Copy` type no longer generates worse IR than manually dereferencing (well, at least in LLVM -- MIR still has bonus pointer casts), and in doing so means that we're finally down to just the two essential `memcpy`s when emitting `mem::replace` for a large type, rather than the bonus-`alloca` and three `memcpy`s we emitted before this ([or the 6 we currently emit in 1.69 stable](https://rust.godbolt.org/z/67W8on6nP)). That said, LLVM does _usually_ manage to optimize the extra code away. But it's still nice for it not to have to do as much, thanks to (for example) not going through an `alloca` when `replace`ing a primitive like a `usize`.
(This is a new intrinsic, but one that's immediately lowered to existing MIR constructs, so not anything that MIRI or the codegen backends or MIR semantics needs to do work to handle.)
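For reference, `mem::replace` is morally just a `ptr::read` of the old value followed by a `ptr::write` of the new one (a simplified rendering, not the exact library source); the goal here is for that pair to compile down to the two essential copies:
```rust
use core::ptr;

// Simplified shape of `mem::replace`: read the old value out, write
// the new one in, return the old.
pub fn replace<T>(dest: &mut T, src: T) -> T {
    unsafe {
        let result = ptr::read(dest);
        ptr::write(dest, src);
        result
    }
}

fn main() {
    let mut x = 1usize;
    let old = replace(&mut x, 2);
    assert_eq!((old, x), (1, 2));
}
```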
Tweak await span to not contain dot
Fixes a discrepancy between method calls and await expressions where the latter are desugared to have a span that *contains* the dot (i.e. `.await`) but method call identifiers don't contain the dot. This leads to weird suggestions in borrowck -- see the linked issue.
Fixes #110761
This mostly touches a bunch of tests to tighten their `await` span.
`inline(always)` for `lt`/`le`/`ge`/`gt` on integers and floats
I happened to notice one of these not getting inlined as part of `Range::next` in <https://rust.godbolt.org/z/4WKWWxj1G>
```rust
bb1: {
StorageLive(_5);
_6 = &mut _4;
StorageLive(_21);
StorageLive(_14);
StorageLive(_15);
_15 = &((*_6).0: usize);
StorageLive(_16);
_16 = &((*_6).1: usize);
_14 = <usize as PartialOrd>::lt(move _15, move _16) -> bb7;
}
```
So since a call for something that's just one instruction is never the right choice, `#[inline(always)]` seems appropriate, as we already have on things like the rotate methods on integers.
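The change itself is just an attribute on the comparison methods; here's a hedged sketch of the pattern on a stand-in type (`MyInt` is illustrative; the real change lives in the macro that implements `PartialOrd` for the primitives):
```rust
use core::cmp::Ordering;

#[derive(PartialEq, Eq)]
struct MyInt(usize);

impl PartialOrd for MyInt {
    #[inline(always)]
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        self.0.partial_cmp(&other.0)
    }

    // With `#[inline(always)]`, a MIR call like
    // `<usize as PartialOrd>::lt` collapses into a single compare.
    #[inline(always)]
    fn lt(&self, other: &Self) -> bool {
        self.0 < other.0
    }
}

fn main() {
    assert!(MyInt(1) < MyInt(2));
}
```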
Improve internal field comments on `slice::Iter(Mut)`
I wrote these in a previous PR that I ended up withdrawing, so might as well submit them separately.
`@bors` rollup=always
Make sure that some stdlib method signatures aren't accidental refinements
In the process of implementing https://rust-lang.github.io/rfcs/3245-refined-impls.html, I found a bunch of stdlib implementations that accidentally "refined" their method signatures by dropping (unnecessary) bounds.
This isn't currently a problem, but it may become one if/when method signature refining is stabilized in the future. It shouldn't hurt to make these signatures a bit more accurate anyway.
NOTE (just to be clear lol): This does not affect behavior at all, since we don't actually take advantage of refined implementations yet!
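To make "refining" concrete, here's a hypothetical example of the pattern being fixed (the trait and impl are made up for illustration): the impl drops a bound the trait declares, which compiles today but would become an observable refinement under the RFC:
```rust
trait Process {
    fn run<F: Fn() + Send>(&self, f: F);
}

struct Runner;

impl Process for Runner {
    // "Refined" signature: the `Send` bound from the trait is dropped.
    // This compiles and is unobservable today, since callers are
    // checked against the trait's signature, not the impl's.
    fn run<F: Fn()>(&self, f: F) {
        f();
    }
}

fn main() {
    Runner.run(|| println!("hi"));
}
```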
Use MIR's `Offset` for pointer `add` too
~~Status: draft while waiting for #110822 to land, since this is built atop that.~~
~~r? `@ghost`~~
Canonical Rust code has mostly moved to `add`/`sub` on pointers, which take `usize`, instead of `offset` which takes `isize`. (And, relatedly, when `sub_ptr` was added it turned out it replaced every single in-tree use of `offset_from`, because `usize` is just so much more useful than `isize` in Rust.)
Unfortunately, `intrinsics::offset` could only accept `*const` and `isize`, so a *huge* number of type conversions back and forth were being done. They're identity conversions in the backend, but they still end up producing quite a lot of unhelpful MIR.
This PR changes `intrinsics::offset` to accept `*const` *and* `*mut` along with `isize` *and* `usize`. Conveniently, the backends and CTFE already handle this, since MIR's `BinOp::Offset` [already supports all four combinations](adaac6b166/compiler/rustc_const_eval/src/transform/validate.rs (L523-L528)).
To demonstrate the difference, I added some `mir-opt/pre-codegen/` tests around slice indexing. Here's the difference to `[T]::get_mut`, since it uses `<*mut _>::add` internally:
```diff
`@@` -79,30 +70,21 `@@` fn slice_get_mut_usize(_1: &mut [u32], _2: usize) -> Option<&mut u32> {
StorageLive(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageLive(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
_9 = _8 as *mut u32 (PtrToPtr); // scope 11 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _13 = _2 as isize (IntToInt); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _15 = _9 as *const u32 (Pointer(MutToConstPointer)); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _14 = Offset(move _15, _13); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _7 = move _14 as *mut u32 (PtrToPtr); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
+ _7 = Offset(_9, _2); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
StorageDead(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_11); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
```
1c1c8e442a (diff-a841b6a4538657add3f39bc895744331453d0625e7aace128b1f604f0b63c8fdR80)
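As a rough illustration of where those conversions came from (the commented "before" expansion is approximate, not the exact library source):
```rust
unsafe fn nth_mut<T>(p: *mut T, i: usize) -> *mut T {
    // Before this PR, `<*mut T>::add` had to funnel through the
    // `*const`/`isize`-only intrinsic, roughly:
    //   p.cast_const().offset(i as isize).cast_mut()
    // Afterwards it lowers directly to MIR's `Offset` on a `*mut`
    // pointer with a `usize` operand, with no identity casts.
    p.add(i)
}

fn main() {
    let mut xs = [1u32, 2, 3];
    let p = xs.as_mut_ptr();
    unsafe { *nth_mut(p, 2) = 30 };
    assert_eq!(xs, [1, 2, 30]);
}
```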
More core::fmt::rt cleanup.
- Removes the `V1` suffix from the `Argument` and `Flag` types.
- Moves more of the format_args lang items into the `core::fmt::rt` module. (The only remaining lang item in `core::fmt` is `Arguments` itself, which is a public type.)
Part of https://github.com/rust-lang/rust/issues/99012
Follow-up to https://github.com/rust-lang/rust/pull/110616
Spelling library
Split per https://github.com/rust-lang/rust/pull/110392
I can squash once people are happy with the changes. It's really uncommon for a large set of changes to be perfectly acceptable without at least some revisions.
I probably won't have time to respond until tomorrow or the next day
black_box doc corrections for clarification - Issue #107957
Made a complete pass through the docs to help resolve https://github.com/rust-lang/rust/issues/107957
No code changes, just documentation
`@rustbot` label +T-libs-api -T-libs
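For readers landing here without context, `black_box` is an identity function that the optimizer treats as opaque; a typical benchmarking use (a usage sketch, not code from the PR):
```rust
use std::hint::black_box;

fn expensive(x: u64) -> u64 {
    x.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

fn main() {
    // Hide the constant from the optimizer so `expensive` isn't
    // const-folded away...
    let input = black_box(42u64);
    let result = expensive(input);
    // ...and keep the result "used" so the call isn't dead-code
    // eliminated.
    black_box(result);
}
```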
Fix no_global_oom_handling build
`provide_sorted_batch` in core is incorrectly marked with `#[cfg(not(no_global_oom_handling))]`, which prevents core from building with that cfg enabled.
Nothing in `core` allocates memory (including this function). The `cfg` gate is incorrect.
cc ``@dpaoliello``
r? ``@wesleywiser``
The cfg was added by #107191
Add shortcut for Grisu3 algorithm.
While Grisu3 is much faster than Dragon4 for most numbers, falling back to the Dragon4 procedure for certain numbers can cause performance regressions compared to using Dragon4 directly. Mitigating the regression caused by the fallback matters for such a widely used core library.
This adds a shortcut to the Grisu3 implementation that bails out early when the fractional or integral parts cannot meet the requested number of digits. This can significantly improve the performance of converting a floating-point number to a string, since the code falls back without even attempting the algorithm.
The original idea is from the [.NET implementation](https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/Number.Grisu3.cs#L602-L615) and the code was originally added in [this PR](https://github.com/dotnet/coreclr/pull/14646#issuecomment-350942050). This shortcut shipped a long time ago and has proven to work.
Fixes #110129
Check the requested digit length against the fractional and integral parts of the number, and fall back early without trying the Grisu algorithm when that condition is met (see the sketch below).
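A hedged sketch of the shortcut's shape (the names, the threshold, and the helper routines are illustrative stand-ins, not the actual `core` internals):
```rust
// Grisu3 can certify at most ~17 significant decimal digits for f64;
// past that it would always report failure, so skip it entirely.
const GRISU_MAX_DIGITS: usize = 17;

fn format_exact(value: f64, requested_digits: usize) -> String {
    if requested_digits > GRISU_MAX_DIGITS {
        // Shortcut: fall back upfront, no wasted Grisu3 attempt.
        return dragon4(value, requested_digits);
    }
    grisu3(value, requested_digits)
        .unwrap_or_else(|| dragon4(value, requested_digits))
}

// Hypothetical stand-ins for the two formatting routines.
fn grisu3(_value: f64, _digits: usize) -> Option<String> {
    None // pretend Grisu3 couldn't certify this one
}
fn dragon4(value: f64, digits: usize) -> String {
    format!("{:.*}", digits, value)
}

fn main() {
    println!("{}", format_exact(0.1, 30)); // takes the shortcut
}
```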