Remove calls to `mem::forget` and `mem::replace` in `Option::get_or_insert_with`.
This removes the unneeded calls to `mem::forget` and `mem::replace` in `Option::get_or_insert_with`.
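For illustration, a minimal sketch (not the exact std source) of how the method can avoid both calls: writing the new value into the `None` slot directly means nothing needs to be forgotten or swapped out.
```rust
fn get_or_insert_with_sketch<T>(opt: &mut Option<T>, f: impl FnOnce() -> T) -> &mut T {
    if opt.is_none() {
        // Plain assignment overwrites the `None`; no `mem::replace`/`mem::forget` needed.
        *opt = Some(f());
    }
    // We just ensured the option is `Some`, so this never panics.
    opt.as_mut().unwrap()
}
```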
clean up `transmute`s in `core`
* Use `transmute_unchecked` instead of `transmute_copy` for `MaybeUninit::transpose`.
* Use manual transmute for `Option<Ordering>` → `i8`.
Add cross-language LLVM CFI support to the Rust compiler
This PR adds cross-language LLVM Control Flow Integrity (CFI) support to the Rust compiler by adding the `-Zsanitizer-cfi-normalize-integers` option to be used with Clang `-fsanitize-cfi-icall-normalize-integers` for normalizing integer types (see https://reviews.llvm.org/D139395).
It provides forward-edge control flow protection for C or C++ and Rust-compiled code "mixed binaries" (i.e., for when C or C++ and Rust-compiled code share the same virtual address space). For more information about LLVM CFI and cross-language LLVM CFI support for the Rust compiler, see the design document in tracking issue #89653.
Cross-language LLVM CFI can be enabled with `-Zsanitizer=cfi` and `-Zsanitizer-cfi-normalize-integers`, and requires proper (i.e., non-rustc) LTO (i.e., `-Clinker-plugin-lto`).
Thank you again, `@bjorn3`, `@nikic`, `@samitolvanen`, and the Rust community for all the help!
Implement tuple<->array conversions via `From`
This PR adds the following impls that convert between homogeneous tuples and arrays of the corresponding lengths:
```rust
impl<T> From<[T; 1]> for (T,) { ... }
impl<T> From<[T; 2]> for (T, T) { ... }
/* ... */
impl<T> From<[T; 12]> for (T, T, T, T, T, T, T, T, T, T, T, T) { ... }
impl<T> From<(T,)> for [T; 1] { ... }
impl<T> From<(T, T)> for [T; 2] { ... }
/* ... */
impl<T> From<(T, T, T, T, T, T, T, T, T, T, T, T)> for [T; 12] { ... }
```
IMO these are quite uncontroversial but note that they are, just like any other trait impls, insta-stable.
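Example usage once the impls are available:
```rust
fn main() {
    let arr: [u8; 3] = [1, 2, 3];
    let tup: (u8, u8, u8) = arr.into(); // [T; 3] -> (T, T, T)
    let back: [u8; 3] = tup.into();     // (T, T, T) -> [T; 3]
    assert_eq!(back, [1, 2, 3]);
}
```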
Add `ConstParamTy` trait
This is a bit sketch, but idk.
r? `@BoxyUwU`
Yet to be done:
- [x] ~~Figure out if it's okay to implement `StructuralEq` for primitives / possibly remove their special casing~~ (it should be okay, but maybe not in this PR...)
- [ ] Maybe refactor the code a little bit
- [x] Use a macro to make impls a bit nicer
Future work:
- [ ] Actually™ use the trait when checking if a `const` generic type is allowed
- [ ] _Really_ refactor the surrounding code
- [ ] Refactor `marker.rs` into multiple modules for each "theme" of markers
Refactor core::char::EscapeDefault and co. structures
Change the core::char::{EscapeUnicode, EscapeDefault, EscapeDebug} structures from using a state machine to computing the escaped sequence upfront and, during iteration, simply going through the characters.
This is arguably simpler since it’s easier to think about having
a buffer and start..end range to iterate over rather than thinking
about a state machine.
This also harmonises the implementation of the aforementioned iterators with the core::ascii::EscapeDefault struct. This is done by introducing a new helper EscapeIterInner struct which holds the buffer and offers simple methods for iterating over the range.
As a side effect, this probably optimises the Display implementation for those types since, rather than calling write_char repeatedly, write_str is invoked once. On 64-bit platforms, it also reduces the size of some of the structs:
| Struct | Before (bytes) | After (bytes) |
|---------------------------|----------------|---------------|
| core::char::EscapeUnicode | 16             | 12            |
| core::char::EscapeDefault | 16             | 12            |
| core::char::EscapeDebug   | 16             | 16            |
My ulterior motive, and the reason I started looking into this, is the addition of an as_str method to the iterators. With this change that becomes trivial. It's also going to be trivial to implement DoubleEndedIterator if that's ever desired.
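A minimal sketch of the buffer-plus-range idea described above (names and layout are illustrative, not the exact std source):
```rust
use core::ops::Range;

struct EscapeIterInner<const N: usize> {
    // Pre-computed, ASCII-only escape sequence.
    data: [u8; N],
    // Indices of the bytes not yet yielded.
    alive: Range<u8>,
}

impl<const N: usize> EscapeIterInner<N> {
    fn next(&mut self) -> Option<char> {
        self.alive.next().map(|i| self.data[usize::from(i)] as char)
    }

    fn as_str(&self) -> &str {
        let slice = &self.data[usize::from(self.alive.start)..usize::from(self.alive.end)];
        // Every byte in the buffer is ASCII, so the slice is valid UTF-8.
        core::str::from_utf8(slice).unwrap()
    }
}
```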
Some of these relations were already mentioned in the text, but the fact that `Send` is implemented for `&mut T` where `T: Send` was not, nor did the docs list when `&T` is `Sync`.
Make `mem::replace` simpler in codegen
Since they'd mentioned more intrinsics for simplifying stuff recently,
r? `@WaffleLapkin`
This is a continuation of me looking at foundational stuff that ends up with more instructions than it really needs. Specifically I noticed this one because `Range::next` isn't MIR-inlining, and one of the largest parts of it is a `replace::<usize>` that's a good dozen instructions instead of the two it could be.
So this means that `ptr::write` with a `Copy` type no longer generates worse IR than manually dereferencing (well, at least in LLVM -- MIR still has bonus pointer casts), and in doing so means that we're finally down to just the two essential `memcpy`s when emitting `mem::replace` for a large type, rather than the bonus-`alloca` and three `memcpy`s we emitted before this ([or the 6 we currently emit in 1.69 stable](https://rust.godbolt.org/z/67W8on6nP)). That said, LLVM does _usually_ manage to optimize the extra code away. But it's still nice for it not to have to do as much, thanks to (for example) not going through an `alloca` when `replace`ing a primitive like a `usize`.
(This is a new intrinsic, but one that's immediately lowered to existing MIR constructs, so not anything that MIRI or the codegen backends or MIR semantics needs to do work to handle.)
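Conceptually, `mem::replace` is just two moves; a minimal sketch (not the library's actual source) of what the codegen should boil down to:
```rust
pub fn replace_sketch<T>(dest: &mut T, src: T) -> T {
    unsafe {
        // First memcpy: move the old value out.
        let old = core::ptr::read(dest);
        // Second memcpy: move the new value in.
        core::ptr::write(dest, src);
        old
    }
}
```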
Tweak await span to not contain dot
Fixes a discrepancy between method calls and await expressions where the latter are desugared to have a span that *contains* the dot (i.e. `.await`) but method call identifiers don't contain the dot. This leads to weird suggestions in borrowck -- see the linked issue.
Fixes #110761
This mostly touches a bunch of tests to tighten their `await` span.
`inline(always)` for `lt`/`le`/`ge`/`gt` on integers and floats
I happened to notice one of these not getting inlined as part of `Range::next` in <https://rust.godbolt.org/z/4WKWWxj1G>
```rust
bb1: {
StorageLive(_5);
_6 = &mut _4;
StorageLive(_21);
StorageLive(_14);
StorageLive(_15);
_15 = &((*_6).0: usize);
StorageLive(_16);
_16 = &((*_6).1: usize);
_14 = <usize as PartialOrd>::lt(move _15, move _16) -> bb7;
}
```
Since a call for something that's just one instruction is never the right choice, `#[inline(always)]` seems appropriate, like we already have on things such as the rotate methods on integers.
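To make the pattern concrete, here's the same idea on a hypothetical wrapper type (`Meters` is not from the PR; the real change is on the primitive impls in `core`):
```rust
use std::cmp::Ordering;

#[derive(PartialEq, Eq)]
struct Meters(usize);

impl Ord for Meters {
    #[inline(always)]
    fn cmp(&self, other: &Self) -> Ordering {
        self.0.cmp(&other.0)
    }
}

impl PartialOrd for Meters {
    #[inline(always)]
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }

    // A one-instruction body: emitting a call for this is never worth it.
    #[inline(always)]
    fn lt(&self, other: &Self) -> bool {
        self.0 < other.0
    }
}
```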
Improve internal field comments on `slice::Iter(Mut)`
I wrote these in a previous PR that I ended up withdrawing, so might as well submit them separately.
`@bors` rollup=always
Make sure that some stdlib method signatures aren't accidental refinements
In the process of implementing https://rust-lang.github.io/rfcs/3245-refined-impls.html, I found a bunch of stdlib implementations that accidentally "refined" their method signatures by dropping (unnecessary) bounds.
This isn't currently a problem, but may become one if/when method signature refining is stabilized in the future. Shouldn't hurt to make these signatures a bit more accurate anyways.
NOTE (just to be clear lol): This does not affect behavior at all, since we don't actually take advantage of refined implementations yet!
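A hypothetical illustration of the kind of accidental refinement being reverted (the trait and types here are made up; the point is the dropped bound on the impl's method):
```rust
trait Collect<T> {
    fn collect_from<I: Iterator<Item = T> + Clone>(iter: I) -> Self;
}

struct Bucket<T>(Vec<T>);

impl<T> Collect<T> for Bucket<T> {
    // Omitting the `Clone` bound compiles today, but it silently "refines"
    // the signature: if refinement is ever stabilized, callers could start
    // relying on the looser bound.
    fn collect_from<I: Iterator<Item = T>>(iter: I) -> Self {
        Bucket(iter.collect())
    }
}
```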
Use MIR's `Offset` for pointer `add` too
~~Status: draft while waiting for #110822 to land, since this is built atop that.~~
~~r? `@ghost`~~
Canonical Rust code has mostly moved to `add`/`sub` on pointers, which take `usize`, instead of `offset` which takes `isize`. (And, relatedly, when `sub_ptr` was added it turned out it replaced every single in-tree use of `offset_from`, because `usize` is just so much more useful than `isize` in Rust.)
Unfortunately, `intrinsics::offset` could only accept `*const` and `isize`, so there's a *huge* amount of type conversions back and forth being done. They're identity conversions in the backend, but still end up producing quite a lot of unhelpful MIR.
This PR changes `intrinsics::offset` to accept `*const` *and* `*mut` along with `isize` *and* `usize`. Conveniently, the backends and CTFE already handle this, since MIR's `BinOp::Offset` [already supports all four combinations](adaac6b166/compiler/rustc_const_eval/src/transform/validate.rs (L523-L528)).
To demonstrate the difference, I added some `mir-opt/pre-codegen/` tests around slice indexing. Here's the difference to `[T]::get_mut`, since it uses `<*mut _>::add` internally:
```diff
@@ -79,30 +70,21 @@ fn slice_get_mut_usize(_1: &mut [u32], _2: usize) -> Option<&mut u32> {
StorageLive(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageLive(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
_9 = _8 as *mut u32 (PtrToPtr); // scope 11 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _13 = _2 as isize (IntToInt); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _15 = _9 as *const u32 (Pointer(MutToConstPointer)); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _14 = Offset(move _15, _13); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _7 = move _14 as *mut u32 (PtrToPtr); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
+ _7 = Offset(_9, _2); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
StorageDead(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_11); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
```
1c1c8e442a (diff-a841b6a4538657add3f39bc895744331453d0625e7aace128b1f604f0b63c8fdR80)
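For reference, the kind of user-facing code this affects; `add` takes a `usize`, so with the widened intrinsic no `usize`→`isize` round-trips or pointer casts are needed in the MIR:
```rust
fn third_element(p: *mut u32) -> *mut u32 {
    // Previously lowered through `offset` on `*const` with an `isize`,
    // requiring conversions on both sides.
    unsafe { p.add(2) }
}
```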
More core::fmt::rt cleanup.
- Removes the `V1` suffix from the `Argument` and `Flag` types.
- Moves more of the format_args lang items into the `core::fmt::rt` module. (The only remaining lang item in `core::fmt` is `Arguments` itself, which is a public type.)
Part of https://github.com/rust-lang/rust/issues/99012
Follow-up to https://github.com/rust-lang/rust/pull/110616
Spelling library
Split per https://github.com/rust-lang/rust/pull/110392
I can squash once people are happy w/ the changes. It's really uncommon for large sets of changes to be perfectly acceptable w/o at least some changes.
I probably won't have time to respond until tomorrow or the next day
black_box doc corrections for clarification - Issue #107957
Made a complete pass through the docs to help resolve https://github.com/rust-lang/rust/issues/107957
No code changes, just documentation
`@rustbot` label +T-libs-api -T-libs
Fix no_global_oom_handling build
`provide_sorted_batch` in core is incorrectly marked with `#[cfg(not(no_global_oom_handling))]` which prevents core from building with the cfg enabled.
Nothing in `core` allocates memory (including this function). The `cfg` gate is incorrect.
cc ``@dpaoliello``
r? ``@wesleywiser``
The cfg was added by #107191
Add shortcut for Grisu3 algorithm.
While Grisu3 is much faster than Dragon4 for most numbers, falling back to the Dragon4 procedure for certain numbers can cause performance regressions compared to using Dragon4 directly. Mitigating the regression caused by falling back is important for a widely used core library.
The Grisu3 algorithm implementation has a shortcut to bail out early when the fractional or integral parts cannot meet the requirement of the requested digits. This can significantly improve the performance of converting a floating-point number to a string, as it falls back without even attempting the algorithm.
The original idea is from the [.NET implementation](https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/Number.Grisu3.cs#L602-L615) and the code was originally added in [this PR](https://github.com/dotnet/coreclr/pull/14646#issuecomment-350942050). This shortcut shipped a long time ago and has been proven to work.
Fix #110129
Check the requested digit length and the fractional or integral parts of the number. Fall back earlier, without trying the Grisu algorithm, if the specific condition is met.
Fix #110129
Add `intrinsics::transmute_unchecked`
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added having exactly these semantics before the lang meeting (which I wasn't in) independently expressed interest.
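A hedged sketch of how it looks at a use site (the intrinsic is internal and unstable; this is illustrative only):
```rust
#![feature(core_intrinsics)]

fn bytes_of(x: u32) -> [u8; 4] {
    // Behaves exactly like `transmute`, minus the up-front size check in
    // `hir_typeck`; here the sizes are trivially equal anyway.
    unsafe { core::intrinsics::transmute_unchecked(x) }
}
```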
Report allocation errors as panics
OOM is now reported as a panic but with a custom payload type (`AllocErrorPanicPayload`) which holds the layout that was passed to `handle_alloc_error`.
This should be reviewed one commit at a time:
- The first commit adds `AllocErrorPanicPayload` and changes allocation errors to always be reported as panics.
- The second commit removes `#[alloc_error_handler]` and the `alloc_error_hook` API.
ACP: https://github.com/rust-lang/libs-team/issues/192
Closes #51540
Closes #51245
More `IS_ZST` in `library`
I noticed that `post_inc_start` and `pre_dec_end` were doing this check in different ways
d19b64fb54/library/core/src/slice/iter/macros.rs (L76-L93)
so started making this PR, then added a few more I found since I was already making changes anyway.
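A minimal sketch of the helper this converges on (the real trait lives privately inside `core`; the code here is illustrative):
```rust
trait SizedTypeProperties: Sized {
    const IS_ZST: bool = std::mem::size_of::<Self>() == 0;
}

impl<T> SizedTypeProperties for T {}

fn skip_if_zst<T: SizedTypeProperties>(items: &[T]) -> usize {
    // Reads as a named property instead of an ad-hoc size comparison.
    if T::IS_ZST { 0 } else { items.len() }
}
```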
Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built in macro `core::mem::offset_of`. Two of the future possibilities are also implemented:
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
cc `@thomcc` (RFC author)
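Example usage under this implementation (gated on the `offset_of` feature), including a nested field access per the future possibility implemented here:
```rust
#![feature(offset_of)]

use std::mem::offset_of;

#[repr(C)]
struct Inner {
    a: u8,
    b: u32,
}

#[repr(C)]
struct Outer {
    x: u16,
    inner: Inner,
}

const X: usize = offset_of!(Outer, x);
const INNER_B: usize = offset_of!(Outer, inner.b); // nested field access
```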
Negating a non-zero integer currently requires unpacking to a
primitive and re-wrapping. Since negation of non-zero signed
integers always produces a non-zero result, it is safe to
implement `Neg` for `NonZeroI{N}`.
The new `impl` is marked as stable because trait implementations
for two stable types can't be marked unstable.
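Example of what the new impl allows (illustrative usage):
```rust
use std::num::NonZeroI32;

fn negate(x: NonZeroI32) -> NonZeroI32 {
    // Before: NonZeroI32::new(-x.get()).unwrap()
    -x
}
```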
Use `Display` in top-level example for `PanicInfo`
Addresses https://github.com/rust-lang/rust/issues/110098.
This confused me as well, when I was writing a `no_std` panic handler for the first time, so here's a better top-level example.
`Display` is stable, prints the `.message()` if available, and falls back to `.payload().downcast_ref::<&str>()` if the message is not available. So this example should provide strictly more information and also work for formatted panics.
The old example still exists on the `payload` method.
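A small sketch of the `Display`-based approach in a `no_std`-friendly form (`report_panic` and the generic writer are illustrative, not from the PR):
```rust
use core::fmt::Write;
use core::panic::PanicInfo;

fn report_panic<W: Write>(out: &mut W, info: &PanicInfo<'_>) {
    // `Display` prints the message when available and falls back to a
    // `&str` payload, so formatted panics are covered too.
    let _ = write!(out, "{info}");
}
```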
Add links from `core::cmp` derives to their traits
Fixes #109946
Adds intra-doc links from the `core::cmp` derives to their respective traits, and a link to their derive behaviour
`@rustbot` label +A-docs
Add `tidy-alphabetical` to features in `core`
So that people have to keep them sorted in future, rather than just sticking them on the end where they conflict more often.
Remove obsolete test case
This test case was supposed to cover issue #31109 at some point.
It never did anything, as the issue was still open at the time of its creation.
When the issue was resolved, the `issue31109` test case was created,
making the existence of this test pointless.
Improve safe transmute error reporting
This patch updates the error reporting when Safe Transmute is not possible between 2 types by including the reason.
Also, fix some small bugs that occur when computing the `Answer` for transmutability.
Added diagnostic for pin! macro in addition to Box::pin if Unpin isn't implemented
I made a PR earlier, but accidentally renamed a branch and that deleted the PR... sorry for the duplicate
Currently, if an operation on `Pin<T>` is performed that requires `T` to implement `Unpin`, the diagnostic suggestion is to use `Box::pin` ("note: consider using `Box::pin`").
This PR suggests `pin!` as well, as that's another valid way of pinning a value which avoids a heap allocation. Appropriate diagnostic suggestions were included to highlight the difference in semantics (local pinning for `pin!` vs non-local for `Box::pin`).
Fixes #109964
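Example of the two options the diagnostic now covers:
```rust
use std::pin::{pin, Pin};

async fn work() {}

fn demo() {
    // Local (stack) pinning: no heap allocation, pinned for this scope only.
    let local: Pin<&mut _> = pin!(work());
    // Heap pinning: allocates, but the pinned value can outlive the scope.
    let boxed: Pin<Box<_>> = Box::pin(work());
    let _ = (local, boxed);
}
```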
Custom MIR: Support `BinOp::Offset`
Since offset doesn't have an infix operator, a new function `Offset` is added, which is lowered to `Rvalue::BinaryOp(BinOp::Offset, ..)`.
r? `@oli-obk` or `@tmiasko` or `@JakobDegen`
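A rough sketch of what this enables in custom MIR (custom-MIR syntax is unstable and the exact spelling may differ):
```rust
#![feature(custom_mir, core_intrinsics)]
extern crate core;
use core::intrinsics::mir::*;

#[custom_mir(dialect = "runtime", phase = "optimized")]
fn offset_example(p: *const i32, d: isize) -> *const i32 {
    mir!({
        // Lowered to Rvalue::BinaryOp(BinOp::Offset, ..).
        RET = Offset(p, d);
        Return()
    })
}
```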
Improve the floating point parser in dec2flt.
Greetings everyone,
I've been studying the Rust floating-point parser recently and made the following tweaks:
* Remove all remaining traces of `unsafe`. The parser is now 100% safe Rust.
* The trick in which eight digits are processed in parallel is now in a loop.
* Parsing of inf/NaN values has been reworked.
On my system, the changes result in performance improvements for some input values.
This is to avoid a link-time dependency between core and compiler-builtins when using an opt-level that implicitly enables `-Zshare-generics`. While compiler-builtins should be compiled with `-Zshare-generics` disabled, `-Zbuild-std` does not ensure this at the moment.
Improve grammar of Iterator.partition_in_place
This is my first PR against Rust, please let me know if there's anything I should be providing here! I didn't find any instructions specific to documentation grammar in the [std-dev guide](https://std-dev-guide.rust-lang.org/documentation/summary.html).
Optimize `LazyCell` size
`LazyCell` can only store either the initializing function or the data it produces, so it does not need to reserve the space for both. Similar to #107329, but uses an `enum` instead of a `union`.
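A minimal sketch of the layout idea (illustrative, not the std source): the cell stores either the init closure or the value, never both.
```rust
use std::cell::UnsafeCell;

enum State<T, F> {
    Uninit(F),
    Init(T),
    Poisoned,
}

struct LazyCellSketch<T, F = fn() -> T> {
    // One slot, three states: the size is max(T, F) plus a discriminant,
    // instead of space for both T and F side by side.
    state: UnsafeCell<State<T, F>>,
}
```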
Move `doc(primitive)` future incompat warning to `invalid_doc_attributes`
Fixes #88070.
It's been a while since this was turned into a "future incompatible lint" so I think we can now turn it into a hard error without problem.
r? `@jyn514`
Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block, using `MirPatch`. (if nobody else backs up this point I'm checking this as "not actually a good idea" because the code looks pretty clean to me after rearranging it a bit)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
r? `@oli-obk`
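Conceptually, each pointer dereference gets a debug-only check like this inserted (shown here as surface Rust; the real change inserts the equivalent MIR):
```rust
fn read_checked(p: *const u32) -> u32 {
    debug_assert!(
        (p as usize) % std::mem::align_of::<u32>() == 0,
        "misaligned pointer dereference"
    );
    unsafe { *p }
}
```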
Rollup of 7 pull requests
Successful merges:
- #106985 (Enhanced documentation of binary search methods for `slice` and `VecDeque` for unsorted instances)
- #109509 (compiletest: Don't allow tests with overlapping prefix names)
- #109719 (RELEASES: Add "Only support Android NDK 25 or newer" to 1.68.0)
- #109748 (Don't ICE on `DiscriminantKind` projection in new solver)
- #109749 (Canonicalize float var as float in new solver)
- #109761 (Drop binutils on powerpc-unknown-freebsd)
- #109766 (Fix title for openharmony.md)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Enhanced documentation of binary search methods for `slice` and `VecDeque` for unsorted instances
Fixes #106746. Issue #106746 raises the concern that the binary search methods for slices and deques aren't explicit enough about the fact that they are only applicable to sorted slices/deques. I changed the explanation for these methods. I took the relatively harsh description of the behaviour of binary search on unsorted collections ("unspecified and meaningless") from the description of the [`partition_point`](https://doc.rust-lang.org/std/primitive.slice.html#method.partition_point) method:
> If this slice is not partitioned, the returned result is unspecified and meaningless, as this method performs a kind of binary search.
Rollup of 8 pull requests
Successful merges:
- #91793 (socket ancillary data implementation for FreeBSD (from 13 and above).)
- #92284 (Change advance(_back)_by to return the remainder instead of the number of processed elements)
- #102472 (stop special-casing `'static` in evaluation)
- #108480 (Use Rayon's TLV directly)
- #109321 (Erase impl regions when checking for impossible to eagerly monomorphize items)
- #109470 (Correctly substitute GAT's type used in `normalize_param_env` in `check_type_bounds`)
- #109562 (Update ar_archive_writer to 0.1.3)
- #109629 (remove obsolete `givens` from regionck)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Add a builtin `FnPtr` trait that is implemented for all function pointers
r? `@ghost`
Rebased version of https://github.com/rust-lang/rust/pull/99531 (plus adjustments mentioned in the PR).
If perf is happy with this version, I would like to land it, even if the diagnostics fix in 9df8e1befb5031a5bf9d8dfe25170620642d3c59 only works for `FnPtr` specifically, and does not generally improve blanket impls.
Change advance(_back)_by to return the remainder instead of the number of processed elements
When advance_by can't advance the iterator by the number of requested elements it now returns the amount by which it couldn't be advanced instead of the amount by which it did.
This simplifies adapters like chain, flatten or cycle because the remainder doesn't have to be calculated as the difference between requested steps and completed steps anymore.
Additionally switching from `Result<(), usize>` to `Result<(), NonZeroUsize>` reduces the size of the result and makes converting from/to a usize representing the number of remaining steps cheap.
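Example of the new return type in use (`advance_by` is still unstable under `iter_advance_by`):
```rust
#![feature(iter_advance_by)]

fn skip_up_to(iter: &mut impl Iterator, n: usize) -> usize {
    match iter.advance_by(n) {
        Ok(()) => 0,                       // advanced by all `n` steps
        Err(remaining) => remaining.get(), // steps that couldn't be taken
    }
}
```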
Add `#[inline]` to CStr trait implementations
Fixes #109674
I noticed other usages of traits on `CStr` weren't being inlined, so I also added hints to the other implementations.
A successful advance is now signalled by returning `0`, and other values now represent the remaining number of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial advance_by.
This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends on whether `advance_by` is mostly used as a building block for other iterator methods and adapters, or whether we also see uses by users where `Result` might be more useful.
Stabilize `nonnull_slice_from_raw_parts`
FCP is done: https://github.com/rust-lang/rust/issues/71941#issuecomment-1100910416
Note that this doesn't const-stabilize `NonNull::slice_from_raw_parts`, as `slice_from_raw_parts_mut` isn't const-stabilized yet. Given #67456 and #57349, it's not likely to be available soon; meanwhile, stabilizing only the feature makes some sense, I think.
Closes #71941
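Example of the newly stabilized method:
```rust
use std::ptr::NonNull;

fn make_slice_ptr(data: NonNull<u8>, len: usize) -> NonNull<[u8]> {
    NonNull::slice_from_raw_parts(data, len)
}
```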
Add #[inline] to as_deref
While working on https://github.com/rust-lang/rust/pull/109247 I found an `as_deref` call in the compiler that should have been inlined. This fixes the missing inlining (but doesn't address the perf issues I was chasing).
r? `@thomcc`
Clarify that copied allocators must behave the same
Currently, the safety documentation for `Allocator` says that a cloned or moved allocator must behave the same as the original. However, it does not specify that a copied allocator must behave the same, and it's possible to construct an allocator that permits being moved or cloned, but sometimes produces a new allocator when copied.
<details>
<summary>Contrived example which results in a Miri error</summary>
```rust
#![feature(allocator_api, once_cell, strict_provenance)]
use std::{
alloc::{AllocError, Allocator, Global, Layout},
collections::HashMap,
hint,
marker::PhantomPinned,
num::NonZeroUsize,
pin::Pin,
ptr::{addr_of, NonNull},
sync::{LazyLock, Mutex},
};
mod source_allocator {
use super::*;
// `SourceAllocator` has 3 states:
// - invalid value: is_cloned == false, source != self.addr()
// - source value: is_cloned == false, source == self.addr()
// - cloned value: is_cloned == true
pub struct SourceAllocator {
is_cloned: bool,
source: usize,
_pin: PhantomPinned,
}
impl SourceAllocator {
// Returns a pinned source value (pointing to itself).
pub fn new_source() -> Pin<Box<Self>> {
let mut b = Box::new(Self {
is_cloned: false,
source: 0,
_pin: PhantomPinned,
});
b.source = b.addr();
Box::into_pin(b)
}
fn addr(&self) -> usize {
addr_of!(*self).addr()
}
// Invalid values point to source 0.
// Source values point to themselves.
// Cloned values point to their corresponding source.
fn source(&self) -> usize {
if self.is_cloned || self.addr() == self.source {
self.source
} else {
0
}
}
}
// Copying an invalid value produces an invalid value.
// Copying a source value produces an invalid value.
// Copying a cloned value produces a cloned value with the same source.
impl Copy for SourceAllocator {}
// Cloning an invalid value produces an invalid value.
// Cloning a source value produces a cloned value with that source.
// Cloning a cloned value produces a cloned value with the same source.
impl Clone for SourceAllocator {
fn clone(&self) -> Self {
if self.is_cloned || self.addr() != self.source {
*self
} else {
Self {
is_cloned: true,
source: self.source,
_pin: PhantomPinned,
}
}
}
}
static SOURCE_MAP: LazyLock<Mutex<HashMap<NonZeroUsize, usize>>> =
LazyLock::new(Default::default);
// SAFETY: Wraps `Global`'s methods with additional tracking.
// All invalid values share blocks with each other.
// Each source value shares blocks with all cloned values pointing to it.
// Cloning an allocator always produces a compatible allocator:
// - Cloning an invalid value produces another invalid value.
// - Cloning a source value produces a cloned value pointing to it.
// - Cloning a cloned value produces another cloned value with the same source.
// Moving an allocator always produces a compatible allocator:
// - Invalid values remain invalid when moved.
// - Source values cannot be moved, since they are always pinned to the heap.
// - Cloned values keep the same source when moved.
unsafe impl Allocator for SourceAllocator {
fn allocate(&self, layout: Layout) -> Result<NonNull<[u8]>, AllocError> {
let mut map = SOURCE_MAP.lock().unwrap();
let block = Global.allocate(layout)?;
let block_addr = block.cast::<u8>().addr();
map.insert(block_addr, self.source());
Ok(block)
}
unsafe fn deallocate(&self, block: NonNull<u8>, layout: Layout) {
let mut map = SOURCE_MAP.lock().unwrap();
let block_addr = block.addr();
// SAFETY: `block` came from an allocator that shares blocks with this allocator.
if map.remove(&block_addr) != Some(self.source()) {
hint::unreachable_unchecked()
}
Global.deallocate(block, layout)
}
}
}
use source_allocator::SourceAllocator;
// SAFETY: `alloc1` and `alloc2` must share blocks.
unsafe fn test_same(alloc1: &SourceAllocator, alloc2: &SourceAllocator) {
let ptr = alloc1.allocate(Layout::new::<i32>()).unwrap();
alloc2.deallocate(ptr.cast(), Layout::new::<i32>());
}
fn main() {
let orig = &*SourceAllocator::new_source();
let orig_cloned1 = &orig.clone();
let orig_cloned2 = &orig.clone();
let copied = &{ *orig };
let copied_cloned1 = &copied.clone();
let copied_cloned2 = &copied.clone();
unsafe {
test_same(orig, orig_cloned1);
test_same(orig_cloned1, orig_cloned2);
test_same(copied, copied_cloned1);
test_same(copied_cloned1, copied_cloned2);
test_same(orig, copied); // error
}
}
```
</details>
This could result in issues in the future for algorithms that specialize on `Copy` types. Right now, nothing in the standard library that depends on `Allocator + Clone` is susceptible to this issue, but I still think it would make sense to specify that copying an allocator is always as valid as cloning it.
Implement Default for some alloc/core iterators
Add `Default` impls to the following collection iterators:
* slice::{Iter, IterMut}
* binary_heap::IntoIter
* btree::map::{Iter, IterMut, Keys, Values, Range, IntoIter, IntoKeys, IntoValues}
* btree::set::{Iter, IntoIter, Range}
* linked_list::IntoIter
* vec::IntoIter
and these adapters:
* adapters::{Chain, Cloned, Copied, Enumerate, Flatten, Fuse, Rev}
For iterators which are generic over allocators it only implements it for the global allocator because we can't conjure an allocator from nothing or would have to turn the allocator field into an `Option` just for this change.
These changes will be insta-stable.
ACP: https://github.com/rust-lang/libs-team/issues/77
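Example of what the new impls allow, e.g. for struct fields or `mem::take`-style code:
```rust
fn main() {
    let empty: std::slice::Iter<'_, u32> = Default::default();
    assert_eq!(empty.len(), 0);

    let mut empty: std::vec::IntoIter<String> = Default::default();
    assert_eq!(empty.next(), None);
}
```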
Add #[inline] to the Into for From impl
I was skimming through the standard library MIR and I noticed a handful of very suspicious `Into::into` calls in `alloc`. ~Since this is a trivial wrapper function, `#[inline(always)]` seems appropriate.~ `#[inline]` works too and is a lot less spooky.
r? `@thomcc`
Shrink unicode case-mapping LUTs by 24k
I was looking into the binary bloat of a small program using `str::to_lowercase` and `str::to_uppercase`, and noticed that the lookup tables used for case mapping had a lot of zero-bytes in them. The reason for this is that since some characters map to up to three other characters when lower or uppercased, the LUTs store a `[char; 3]` for each character. However, the vast majority of cases only map to a single new character, in other words most of the entries are e.g. `(lowerc, [upperc, '\0', '\0'])`.
This PR introduces a new encoding scheme for these tables.
The changes reduces the size of my test binary by about 24K.
I've also done some `#[bench]`marks on unicode-heavy test data, and found that the performance of both `str::to_lowercase` and `str::to_uppercase` improves by up to 20%. These measurements are obviously very dependent on the character distribution of the data.
Someone else will have to decide whether this more complex scheme is worth it or not, I was just goofing around a bit and here's what came out of it 🤷♂️ No hard feelings if this isn't wanted!
panic_immediate_abort requires abort as a panic strategy
Guide `panic_immediate_abort` users away from `-Cpanic=unwind` and towards `-Cpanic=abort` to avoid an accidental use of the feature with the unwind strategy, e.g., on a targets where unwind is the default.
The `-Cpanic=unwind` combination doesn't offer the same benefits, since the code would still be generated under the assumption that functions implemented in Rust can unwind.
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
Custom MIR: Allow optional RET type annotation
This currently doesn't compile because the type of `RET` is inferred, which fails if RET is a composite type and fields are initialised separately.
```rust
#![feature(custom_mir, core_intrinsics)]
extern crate core;
use core::intrinsics::mir::*;
#[custom_mir(dialect = "runtime", phase = "optimized")]
fn fn0() -> (i32, bool) {
mir! ({
RET.0 = 0;
RET.1 = true;
Return()
})
}
```
```
error[E0282]: type annotations needed
--> src/lib.rs:8:9
|
8 | RET.0 = 0;
| ^^^ cannot infer type
For more information about this error, try `rustc --explain E0282`.
```
This PR allows the user to manually specify the return type with `type RET = ...;` if required:
```rust
#[custom_mir(dialect = "runtime", phase = "optimized")]
fn fn0() -> (i32, bool) {
mir! (
type RET = (i32, bool);
{
RET.0 = 0;
RET.1 = true;
Return()
}
)
}
```
The syntax is not optimal, I'm happy to see other suggestions. Ideally I wanted it to be a normal type annotation like `let RET: ...;`, but this runs into the multiple parsing options error during macro expansion, as it can be parsed as a normal `let` declaration as well.
r? `@oli-obk` or `@tmiasko` or `@JakobDegen`
move Option::as_slice to intrinsic
`@scottmcm` suggested on #109095 that I use a direct approach of unpacking the operation in MIR lowering, so here's the implementation.
cc `@nikic` as this should hopefully unblock #107224 (though perhaps other changes to the prior implementation, which I left for bootstrapping, are needed).
Document `Iterator::sum/product` for Option/Result
Closes #105266
We already document the similar behavior for `collect()` so I believe it makes sense to add this too. The Option/Result implementations *are* documented on their respective pages and the page for `Sum`, but buried amongst many other trait impls which doesn't make it very discoverable.
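Example of the behavior these docs now call out: summing `Option`s or `Result`s short-circuits on the first `None`/`Err`:
```rust
fn main() {
    let nums = [Some(1), Some(2), None, Some(4)];
    let total: Option<i32> = nums.into_iter().sum();
    assert_eq!(total, None); // short-circuits at the `None`

    let parsed = ["1", "2", "x"].iter().map(|s| s.parse::<i32>());
    let total: Result<i32, _> = parsed.sum();
    assert!(total.is_err()); // short-circuits at the parse error
}
```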
`@rustbot` label +A-docs
Add inlining annotations in `dec2flt`.
Currently, the combination of `dec2flt` being generic and the `FromStr` implementations containing inline attributes causes massive amounts of assembly to be generated whenever these implementations are used. In addition, the assembly has calls to functions which ought to be inlined, but they are not (even when using LTO).
This PR fixes this.
Improve `Iterator::collect_into` documentation
This improves the examples in the documentation of `Iterator::collect_into`, replacing the usages of `println!` with `assert_eq!` as suggested on [IRLO](https://internals.rust-lang.org/t/18534/9).
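For illustration, the shape of the improved example (`collect_into` is unstable under `iter_collect_into`):
```rust
#![feature(iter_collect_into)]

fn main() {
    let a = [1, 2, 3];
    let mut vec = vec![0];
    a.iter().map(|&x| x * 2).collect_into(&mut vec);
    assert_eq!(vec, [0, 2, 4, 6]);
}
```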
Beautify pin! docs
This makes pin docs a little bit less jargon-y and easier to read, by
* splitting up the sentences
* making them less interrupted by punctuation
* turning the footnotes into paragraphs, as they contain useful information that shouldn't be hidden in footnotes. Footnotes also interrupt the read flow.
Use `size_of_val` instead of manual calculation
Very minor thing that I happened to notice in passing, but it's both shorter and [means it gets `mul nsw`](https://rust.godbolt.org/z/Y9KxYETv5), so why not.
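The shape of the change, for reference:
```rust
use std::mem::size_of_val;

fn byte_len(v: &[u32]) -> usize {
    // Before: v.len() * std::mem::size_of::<u32>()
    size_of_val(v)
}
```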