Commit Graph

649 Commits

Author SHA1 Message Date
Yuki Okushi
50527690e2
Rollup merge of #99306 - JohnTitor:stabilize-future-poll-fn, r=joshtriplett
Stabilize `future_poll_fn`

FCP is done: https://github.com/rust-lang/rust/issues/72302#issuecomment-1179620512
Closes #72302

r? `@joshtriplett` as you started FCP

Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-17 13:08:52 +09:00
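For reference, a minimal sketch of the now-stable `std::future::poll_fn` (the trivially-ready closure and the `answer` name are illustrative, not from the PR):

```rust
use std::future::poll_fn;
use std::task::Poll;

async fn answer() -> u8 {
    // A future built directly from a closure; this one resolves immediately.
    poll_fn(|_cx| Poll::Ready(42)).await
}

fn main() {
    // Drive `answer()` with any executor, e.g. futures::executor::block_on(answer()).
    let _ = answer();
}
```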
bors
db41351753 Auto merge of #98866 - nagisa:nagisa/align-offset-wroom, r=Mark-Simulacrum
Add a special case for align_offset w/ stride != 1

This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-16 23:28:28 +00:00
Simonas Kazlauskas
62a182cf7f Add a special case for align_offset w/ stride != 1
This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-17 01:27:37 +03:00
Yuki Okushi
084ad59622
Stabilize future_poll_fn
Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-16 10:04:14 +09:00
bors
24699bcbad Auto merge of #95956 - yaahc:stable-in-unstable, r=cjgillot
Support unstable moves via stable in unstable items

part of https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/moving.20items.20to.20core.20unstably and a blocker of https://github.com/rust-lang/rust/pull/90328.

The libs-api team needs the ability to move an already stable item to a new location unstably, in this case for Error in core. Otherwise these changes are insta-stable making them much harder to merge.

This PR attempts to solve the problem by checking the stability of path segments as well as the last item in the path itself, which is currently the only thing checked.
2022-07-14 13:42:09 +00:00
Dylan DPC
103b8602b7
Rollup merge of #98315 - joshtriplett:stabilize-core-ffi-c, r=Mark-Simulacrum
Stabilize `core::ffi::c_*` and re-export in `std::ffi`

This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
2022-07-14 14:14:20 +05:30
Josh Triplett
d431338b25 Stabilize core::ffi::c_* and re-export in std::ffi
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
2022-07-13 19:28:20 -07:00
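A small sketch of the stabilized aliases as imported from `std::ffi`; printing their sizes is just an illustration, since their widths follow the platform C ABI:

```rust
use std::ffi::{c_char, c_int, c_long};

fn main() {
    // These aliases mirror the platform's C types, so sizes differ across targets.
    println!("c_char: {} byte(s)", std::mem::size_of::<c_char>());
    println!("c_int:  {} byte(s)", std::mem::size_of::<c_int>());
    println!("c_long: {} byte(s)", std::mem::size_of::<c_long>());
}
```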
Scott McMurray
a32305a80f Re-optimize Layout::array
This way it's one check instead of two, so hopefully it'll be better

Nightly:
```
layout_array_i32:
	movq	%rdi, %rax
	movl	$4, %ecx
	mulq	%rcx
	jo	.LBB1_2
	movabsq	$9223372036854775805, %rcx
	cmpq	%rcx, %rax
	jae	.LBB1_2
	movl	$4, %edx
	retq
.LBB1_2:
	…
```

This PR:
```
	movq	%rcx, %rax
	shrq	$61, %rax
	jne	.LBB2_1
	shlq	$2, %rcx
	movl	$4, %edx
	movq	%rcx, %rax
	retq
.LBB2_1:
	…
```
2022-07-13 17:07:41 -07:00
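For context, the function being re-optimized is the stable `Layout::array`; a minimal usage sketch (element type and count are arbitrary):

```rust
use std::alloc::Layout;

fn main() {
    // Size and alignment for `[i32; 8]`; returns Err if the total size would overflow.
    let layout = Layout::array::<i32>(8).unwrap();
    assert_eq!(layout.size(), 32);
    assert_eq!(layout.align(), 4);
}
```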
Konrad Borowski
0753fd117b Partially stabilize const_slice_from_raw_parts
This doesn't stabilize methods working on mutable pointers.
2022-07-09 23:20:02 +02:00
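A rough sketch of what the stabilized shared-pointer half allows in const context; the constant names are made up, and this is only an illustration of the intent:

```rust
const DATA: [u8; 3] = [1, 2, 3];

// `slice::from_raw_parts` on a *const pointer is const-callable after this
// partial stabilization; the *mut counterpart is not covered.
const SLICE: &[u8] = unsafe {
    core::slice::from_raw_parts(&DATA as *const [u8; 3] as *const u8, 3)
};

fn main() {
    assert_eq!(SLICE, &[1, 2, 3]);
}
```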
Jane Losare-Lusby
d68cb1f9a3 revert changes to unicode stability 2022-07-08 21:18:15 +00:00
Guillaume Gomez
4755173cf6
Rollup merge of #96935 - thomcc:atomicptr-strict-prov, r=dtolnay
Allow arithmetic and certain bitwise ops on AtomicPtr

This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.

This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.

I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.

API change proposal: https://github.com/rust-lang/libs-team/issues/60

Fixes #95492
2022-07-06 20:43:23 +02:00
Nick Cameron
0c72be3e1a core::any: replace some unstable generic types with impl Trait
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-07-05 15:06:31 +01:00
Dylan DPC
8fa1ed8f12
Rollup merge of #97712 - RalfJung:untyped, r=scottmcm
ptr::copy and ptr::swap are doing untyped copies

The consensus in https://github.com/rust-lang/rust/issues/63159 seemed to be that these operations should be "untyped", i.e., they should treat the data as raw bytes, should work when these bytes violate the validity invariant of `T`, and should exactly preserve the initialization state of the bytes that are being copied. This is already somewhat implied by the description of "copying/swapping size*N bytes" (rather than "N instances of `T`").

The implementations mostly already work that way (well, for LLVM's intrinsics the documentation is not precise enough to say what exactly happens to poison, but if this ever gets clarified to something that would *not* perfectly preserve poison, then I strongly assume there will be some way to make a copy that *does* perfectly preserve poison). However, I had to adjust `swap_nonoverlapping`; after ``@scottmcm's`` [recent changes](https://github.com/rust-lang/rust/pull/94212), that one (sometimes) made a typed copy. (Note that `mem::swap`, which works on mutable references, is unchanged. It is documented as "swapping the values at two mutable locations", which to me strongly indicates that it is indeed typed. It is also safe and can rely on `&mut T` pointing to a valid `T` as part of its safety invariant.)

On top of adding a test (that will be run by Miri), this PR then also adjusts the documentation to indeed stably promise the untyped semantics. I assume this means the PR has to go through t-libs (and maybe t-lang?) FCP.

Fixes https://github.com/rust-lang/rust/issues/63159
2022-07-05 16:04:31 +05:30
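A small sketch of the now-documented behavior on plain bytes (array contents are arbitrary):

```rust
use std::ptr;

fn main() {
    let mut a = [1u8, 2, 3, 4];
    let mut b = [9u8, 9, 9, 9];
    // The swap operates on the underlying bytes ("untyped"), regardless of
    // whether those bytes form valid values of some higher-level type.
    unsafe { ptr::swap_nonoverlapping(a.as_mut_ptr(), b.as_mut_ptr(), a.len()) };
    assert_eq!(a, [9, 9, 9, 9]);
    assert_eq!(b, [1, 2, 3, 4]);
}
```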
5225225
5f5ca88958 Add size assert in transmute_copy 2022-07-03 10:46:20 +01:00
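For context, a minimal `transmute_copy` call of the kind the new assert guards (the assert rejects a destination type larger than the source; the byte values here are arbitrary):

```rust
use std::mem::transmute_copy;

fn main() {
    let bytes = [0x12u8, 0x34, 0x56, 0x78];
    // Reinterpret the four bytes as a u32 without consuming `bytes`.
    let n: u32 = unsafe { transmute_copy(&bytes) };
    assert_eq!(n.to_ne_bytes(), bytes);
}
```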
Ben Kimock
7919e4208b Fix slice::ChunksMut aliasing 2022-07-03 00:15:15 -04:00
Pietro Albini
6b2d3d5f3c
update cfg(bootstrap)s 2022-07-01 15:48:23 +02:00
Thom Chiovoloni
e65ecee90e
Rename AtomicPtr::fetch_{add,sub}{,_bytes} 2022-07-01 06:21:19 -07:00
Thom Chiovoloni
2f872afdb5
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from AtomicUsize, for the strict
provenance experiment.

Fixes #95492
2022-07-01 06:21:18 -07:00
Ralf Jung
8c977cfda8 libcore tests: avoid int2ptr casts 2022-06-27 13:30:44 -04:00
Ross MacArthur
bbdff1fff4
Add Iterator::next_chunk 2022-06-21 08:57:02 +02:00
Chase Wilson
59be3e856f
Stabilized Option::unzip() 2022-06-17 11:54:55 -05:00
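A minimal usage sketch of the stabilized method (values are arbitrary):

```rust
fn main() {
    let pair = Some((1, "one"));
    // unzip splits an Option of a pair into a pair of Options.
    let (number, name) = pair.unzip();
    assert_eq!(number, Some(1));
    assert_eq!(name, Some("one"));

    let nothing: Option<(i32, &str)> = None;
    assert_eq!(nothing.unzip(), (None, None));
}
```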
Maybe Waffle
7c360dc117 Move/rename lazy::{OnceCell, Lazy} to cell::{OnceCell, LazyCell} 2022-06-16 19:53:59 +04:00
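A minimal sketch of the renamed type as exposed on toolchains where `std::cell::OnceCell` is available (it was still unstable at the time of this commit):

```rust
use std::cell::OnceCell;

fn main() {
    let cell = OnceCell::new();
    assert!(cell.get().is_none());
    // The first set wins; a later set returns Err with the rejected value.
    assert_eq!(cell.set(92), Ok(()));
    assert_eq!(cell.set(100), Err(100));
    assert_eq!(cell.get(), Some(&92));
}
```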
bors
e9aff9c42c Auto merge of #91970 - nrc:provide-any, r=scottmcm
Add the Provider api to core::any

This is an implementation of [RFC 3192](https://github.com/rust-lang/rfcs/pull/3192) ~~(which is yet to be merged, thus why this is a draft PR)~~. It adds an API for type-driven requests and provision of data from trait objects. A primary use case is for the `Error` trait, though that is not implemented in this PR. The only major difference to the RFC is that the functionality is added to the `any` module, rather than being in a sibling `provide_any` module (as discussed in the RFC thread).

~~Still todo: improve documentation on items, including adding examples.~~

cc `@yaahc`
2022-06-10 01:10:59 +00:00
Yuki Okushi
2b58e6314a
Stabilize const_intrinsic_copy 2022-06-08 20:17:28 +09:00
Ben Kimock
5dd5244423 Use repr(C) when depending on struct layout in ptr tests 2022-06-07 19:24:09 -04:00
Nick Cameron
843f90cbb7 Add some more tests
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-06-06 12:10:14 +01:00
Nick Cameron
57d027d23a Modify the signature of the request_* methods so that trait_upcasting is not required
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-06-06 12:10:13 +01:00
Nick Cameron
bb5db85f74 Add the Provider api to core::any
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-06-06 12:10:13 +01:00
Ralf Jung
b96d1e45f1 change ptr::swap methods to do untyped copies 2022-06-05 10:09:41 -04:00
Ralf Jung
4990021082 test const_copy to make sure bytewise pointer copies are working 2022-06-03 09:20:42 -04:00
Stovent
1266099742 Implement carrying_add and borrowing_sub on signed numbers 2022-05-30 18:32:27 -04:00
est31
cdb8e64bc7 Use Box::new() instead of box syntax in core tests 2022-05-29 01:44:11 +02:00
bors
9fed13030c Auto merge of #94954 - SimonSapin:null-thin3, r=yaahc
Extend ptr::null and null_mut to all thin (including extern) types

Fixes https://github.com/rust-lang/rust/issues/93959

This change was accepted in https://rust-lang.github.io/rfcs/2580-ptr-meta.html

Note that this changes the signature of **stable** functions. The change should be backward-compatible, but it is **insta-stable** since it cannot (easily, at all?) be made available only through a `#![feature(…)]` opt-in.

The RFC also proposed the same change for `NonNull::dangling`, which makes sense in terms of its signature but not in terms of its implementation. `dangling` uses `align_of()` as an address. But what `align_of()` should be for extern types or whether it should be allowed at all remains an open question.

This commit depends on https://github.com/rust-lang/rust/pull/93977, which is not yet part of the bootstrap compiler. So `#[cfg]` is used to only apply the change in stage 1+. As far as I know, bounds cannot be made conditional with `#[cfg]`, so the entire functions are duplicated. This is unfortunate but temporary.

Since this duplication makes it less obvious in the diff, the new definitions differ in:

* More permissive bounds (`Thin` instead of implied `Sized`)
* Different implementation
* Having `rustc_allow_const_fn_unstable(const_fn_trait_bound)`
* Having `rustc_allow_const_fn_unstable(ptr_metadata)`
2022-05-25 13:58:51 +00:00
Caio
664e8a9ce5 [RFC 2011] Library code 2022-05-22 07:18:32 -03:00
bors
bb5e6c984d Auto merge of #97265 - JohnTitor:rollup-kgthnjt, r=JohnTitor
Rollup of 6 pull requests

Successful merges:

 - #97144 (Fix rusty grammar in `std::error::Reporter` docs)
 - #97225 (Fix `Display` for `cell::{Ref,RefMut}`)
 - #97228 (Omit stdarch workspace from rust-src)
 - #97236 (Recover when resolution did not resolve lifetimes.)
 - #97245 (Fix typo in futex RwLock::write_contended.)
 - #97259 (Fix typo in Mir phase docs)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-05-22 04:27:10 +00:00
Josh Stone
83abb7c18f Fix Display for cell::{Ref,RefMut}
These guards changed to pointers in #97027, but their `Display` was
formatting that field directly, which made it show the raw pointer
value. Now we go through `Deref` to display the real value again.
2022-05-20 11:16:30 -07:00
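A small sketch of the behavior being fixed: Display on the guard should show the inner value, not the guard's internal pointer:

```rust
use std::cell::RefCell;

fn main() {
    let c = RefCell::new(5);
    // Display on the Ref guard forwards through Deref to the inner value.
    assert_eq!(format!("{}", c.borrow()), "5");
}
```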
Caio
d917112606 Stabilize core::array::from_fn 2022-05-20 11:04:13 -03:00
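A minimal usage sketch of the stabilized constructor (the closure is arbitrary):

```rust
fn main() {
    // Build a fixed-size array by calling the closure with each index.
    let squares: [usize; 5] = std::array::from_fn(|i| i * i);
    assert_eq!(squares, [0, 1, 4, 9, 16]);
}
```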
Mark Rousskov
32fdc6b207 Stage-step cfgs 2022-05-18 12:29:35 -04:00
bors
9fbbe75fd7 Auto merge of #95602 - scottmcm:faster-array-intoiter-fold, r=the8472
Fix `array::IntoIter::fold` to use the optimized `Range::fold`

It was using `Iterator::by_ref` in the implementation, which ended up pessimizing it enough that, for example, it didn't vectorize when we tried it in the <https://rust-lang.zulipchat.com/#narrow/stream/257879-project-portable-simd/topic/Reducing.20sum.20into.20wider.20types> conversation.

Demonstration that the codegen test doesn't pass on the current nightly: <https://rust.godbolt.org/z/Taxev5eMn>
2022-05-14 03:12:53 +00:00
Simon Sapin
7ccc09b210 Extend ptr::null and null_mut to all thin (including extern) types
Fixes https://github.com/rust-lang/rust/issues/93959

This change was accepted in https://rust-lang.github.io/rfcs/2580-ptr-meta.html

Note that this changes the signature of **stable** functions.
The change should be backward-compatible, but it is **insta-stable**
since it cannot (easily, at all?) be made available only
through a `#![feature(…)]` opt-in.

The RFC also proposed the same change for `NonNull::dangling`,
which makes sense in terms of its signature but not in terms of its implementation.
`dangling` uses `align_of()` as an address. But what `align_of()` should be for
extern types or whether it should be allowed at all remains an open question.

This commit depends on https://github.com/rust-lang/rust/pull/93977, which is not yet
part of the bootstrap compiler. So `#[cfg]` is used to only apply the change in
stage 1+. As far as I know, bounds cannot be made conditional with `#[cfg]`, so the
entire functions are duplicated. This is unfortunate but temporary.

Since this duplication makes it less obvious in the diff,
the new definitions differ in:

* More permissive bounds (`Thin` instead of implied `Sized`)
* Different implementation
* Having `rustc_allow_const_fn_unstable(ptr_metadata)`
2022-05-13 18:03:06 +02:00
bors
8c4fc9d9a4 Auto merge of #94598 - scottmcm:prefix-free-hasher-methods, r=Amanieu
Add a dedicated length-prefixing method to `Hasher`

This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths

This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.

Fixes #94026

r? rust-lang/libs

---

The core of this change is the following two new methods on `Hasher`:

```rust
pub trait Hasher {
    /// Writes a length prefix into this hasher, as part of being prefix-free.
    ///
    /// If you're implementing [`Hash`] for a custom collection, call this before
    /// writing its contents to this `Hasher`.  That way
    /// `(collection![1, 2, 3], collection![4, 5])` and
    /// `(collection![1, 2], collection![3, 4, 5])` will provide different
    /// sequences of values to the `Hasher`
    ///
    /// The `impl<T> Hash for [T]` includes a call to this method, so if you're
    /// hashing a slice (or array or vector) via its `Hash::hash` method,
    /// you should **not** call this yourself.
    ///
    /// This method is only for providing domain separation.  If you want to
    /// hash a `usize` that represents part of the *data*, then it's important
    /// that you pass it to [`Hasher::write_usize`] instead of to this method.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(hasher_prefixfree_extras)]
    /// # // Stubs to make the `impl` below pass the compiler
    /// # struct MyCollection<T>(Option<T>);
    /// # impl<T> MyCollection<T> {
    /// #     fn len(&self) -> usize { todo!() }
    /// # }
    /// # impl<'a, T> IntoIterator for &'a MyCollection<T> {
    /// #     type Item = T;
    /// #     type IntoIter = std::iter::Empty<T>;
    /// #     fn into_iter(self) -> Self::IntoIter { todo!() }
    /// # }
    ///
    /// use std::hash::{Hash, Hasher};
    /// impl<T: Hash> Hash for MyCollection<T> {
    ///     fn hash<H: Hasher>(&self, state: &mut H) {
    ///         state.write_length_prefix(self.len());
    ///         for elt in self {
    ///             elt.hash(state);
    ///         }
    ///     }
    /// }
    /// ```
    ///
    /// # Note to Implementers
    ///
    /// If you've decided that your `Hasher` is willing to be susceptible to
    /// Hash-DoS attacks, then you might consider skipping hashing some or all
    /// of the `len` provided in the name of increased performance.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_length_prefix(&mut self, len: usize) {
        self.write_usize(len);
    }

    /// Writes a single `str` into this hasher.
    ///
    /// If you're implementing [`Hash`], you generally do not need to call this,
    /// as the `impl Hash for str` does, so you can just use that.
    ///
    /// This includes the domain separator for prefix-freedom, so you should
    /// **not** call `Self::write_length_prefix` before calling this.
    ///
    /// # Note to Implementers
    ///
    /// The default implementation of this method includes a call to
    /// [`Self::write_length_prefix`], so if your implementation of `Hasher`
    /// doesn't care about prefix-freedom and you've thus overridden
    /// that method to do nothing, there's no need to override this one.
    ///
    /// This method is available to be overridden separately from the others
    /// as `str` being UTF-8 means that it never contains `0xFF` bytes, which
    /// can be used to provide prefix-freedom cheaper than hashing a length.
    ///
    /// For example, if your `Hasher` works byte-by-byte (perhaps by accumulating
    /// them into a buffer), then you can hash the bytes of the `str` followed
    /// by a single `0xFF` byte.
    ///
    /// If your `Hasher` works in chunks, you can also do this by being careful
    /// about how you pad partial chunks.  If the chunks are padded with `0x00`
    /// bytes then just hashing an extra `0xFF` byte doesn't necessarily
    /// provide prefix-freedom, as `"ab"` and `"ab\u{0}"` would likely hash
    /// the same sequence of chunks.  But if you pad with `0xFF` bytes instead,
    /// ensuring at least one padding byte, then it can often provide
    /// prefix-freedom cheaper than hashing the length would.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_str(&mut self, s: &str) {
        self.write_length_prefix(s.len());
        self.write(s.as_bytes());
    }
}
```

With updates to the `Hash` implementations for slices and containers to call `write_length_prefix` instead of `write_usize`.

`write_str` defaults to using `write_length_prefix` since, as was pointed out in the issue, the `write_u8(0xFF)` approach is insufficient for hashers that work in chunks, as those would hash `"a\u{0}"` and `"a"` to the same thing.  But since `SipHash` works byte-wise (there's an internal buffer to accumulate bytes until a full chunk is available) it overrides `write_str` to continue to use the add-non-UTF-8-byte approach.

---

Compatibility:

Because the default implementation of `write_length_prefix` calls `write_usize`, the changed hash implementation for slices will do the same thing the old one did on existing `Hasher`s.
2022-05-06 09:43:57 +00:00
Scott McMurray
98054377ee Add a dedicated length-prefixing method to Hasher
This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths

This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.
2022-05-06 00:03:38 -07:00
Matthias Krüger
47801413d9
Rollup merge of #95359 - jhpratt:int_roundings, r=joshtriplett
Update `int_roundings` methods from feedback

This updates `#![feature(int_roundings)]` (#88581) from feedback. All methods now take `NonZeroX`. The documentation makes clear that they panic in debug mode and wrap in release mode.

r? `@joshtriplett`

`@rustbot` label +T-libs +T-libs-api +S-waiting-on-review
2022-05-05 15:43:00 +02:00
Jacob Pratt
dde590d180
Update int_roundings methods from feedback 2022-05-04 23:20:29 -04:00
Josh Triplett
0fc5c524f5 Stabilize bool::then_some 2022-05-04 13:22:08 +02:00
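A minimal usage sketch of the stabilized method (values are arbitrary):

```rust
fn main() {
    let threshold = 10;
    let x = 42;
    // then_some wraps an eagerly evaluated value; use `then` with a closure
    // when the value is expensive to construct.
    assert_eq!((x > threshold).then_some(x), Some(42));
    assert_eq!((x < threshold).then_some(x), None);
}
```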
Dylan DPC
93db30aa7f
Rollup merge of #96149 - est31:remove_unused_macro_matchers, r=petrochenkov
Remove unused macro rules

Removes rules of internal macros that weren't triggered.
2022-04-26 01:21:20 +02:00
bors
d5ae66c12c Auto merge of #92287 - JulianKnodt:slice_remainder, r=yaahc
Add slice::remainder

This adds a remainder function to the Slice iterator, so that a caller can access unused
elements if iteration stops.

Addresses #91733
2022-04-18 23:34:24 +00:00
est31
3c1e1661e7 Remove unused macro rules 2022-04-18 23:28:06 +02:00
est31
9e7a319f01 Replace u8to64_le macro with u64::from_le_bytes
The macro was a reimplementation of the function.
2022-04-17 22:55:33 +02:00
kadmin
494901ced6 Add slice::remainder
This adds a remainder function to the Slice iterator, so that a caller can access unused
elements if iteration stops.
2022-04-17 17:19:45 +00:00
bors
4e1927db3c Auto merge of #95399 - gilescope:plan_b, r=scottmcm
Faster parsing for lower numbers for radix up to 16 (cont.)

( Continuation of https://github.com/rust-lang/rust/pull/83371 )

With LingMan's change I think this is potentially ready.
2022-04-12 05:54:50 +00:00
Giles Cope
3ee7bb19c6
Better definition of `is_signed` in tests. 2022-04-11 07:37:53 +01:00
liangyongrui
03b2588837 fix Layout struct member naming style 2022-04-11 13:35:18 +08:00
Giles Cope
515906a669
Use Add, Sub, Mul traits instead of unsafe 2022-04-10 18:13:48 +01:00
Dylan DPC
e4b4bf1535
Rollup merge of #95361 - scottmcm:valid-align, r=Mark-Simulacrum
Make non-power-of-two alignments a validity error in `Layout`

Inspired by the zulip conversation about how `Layout` should better enforce `size <= isize::MAX as usize`, this uses an N-variant enum on N-bit platforms to require at the validity level that the existing invariant of "must be a power of two" is upheld.

This way Miri can catch it, and it means there's a more-specific type for `Layout` to store than just `NonZeroUsize`.

It's left as `pub(crate)` here; a future PR could consider giving it a tracking issue for non-internal usage.
2022-04-09 18:26:25 +02:00
Scott McMurray
fe0c08a4f2 Make non-power-of-two alignments a validity error in Layout
Inspired by the zulip conversation about how `Layout` should better enforce `size < isize::MAX as usize`, this uses an N-variant enum on N-bit platforms to require at the validity level that the existing invariant of "must be a power of two" is upheld.

This way Miri can catch it, and it means there's a more-specific type for `Layout` to store than just `NonZeroUsize`.
2022-04-08 20:17:38 -07:00
Dylan DPC
d5232c6b93
Rollup merge of #95579 - Cyborus04:slice_flatten, r=scottmcm
Add `<[[T; N]]>::flatten{_mut}`

Adds `flatten` to convert `&[[T; N]]` to `&[T]` (and `flatten_mut` for `&mut [[T; N]]` to `&mut [T]`)
2022-04-08 11:48:21 +02:00
Cyborus04
06788fd7a4 add <[[T; N]]>::flatten, <[[T; N]]>::flatten_mut, and Vec::<[T; N]>::into_flattened 2022-04-08 00:54:39 -04:00
Giles Cope
82e9d9ebac
from_u32(0) can just be default() 2022-04-04 15:53:53 +01:00
Scott McMurray
83595f9242 Fix array::IntoIter::fold to use the optimized Range::fold
It was using `Iterator::by_ref` in the implementation, which ended up pessimizing it enough that, for example, it didn't vectorize when we tried it in the <https://rust-lang.zulipchat.com/#narrow/stream/257879-project-portable-simd/topic/Reducing.20sum.20into.20wider.20types> conversation.

Demonstration that the codegen test doesn't pass on the current nightly: <https://rust.godbolt.org/z/Taxev5eMn>
2022-04-02 14:29:41 -07:00
Dylan DPC
d6f6084b24
Rollup merge of #95556 - declanvk:nonnull-provenance, r=dtolnay
Implement provenance preserving methods on NonNull

### Description
 Add the `addr`, `with_addr`, `map_addr` methods to the `NonNull` type, and map the address type to `NonZeroUsize`.

 ### Motivation
 The `NonNull` type is useful for implementing pointer types which have the 0-niche. It is currently possible to implement these provenance preserving functions by calling `NonNull::as_ptr` and `new_unchecked`. Adding these methods makes it more ergonomic.

 ### Testing
 Added a unit test of a non-null tagged pointer type. This is based on some real code I have elsewhere, that currently routes the pointer through a `NonZeroUsize` and back out to produce a usable pointer. I wanted to produce an ideal version of the same tagged pointer struct that preserved pointer provenance.

### Related

Extension of APIs proposed in #95228 . I can also split this out into a separate tracking issue if that is better (though I may need some pointers on how to do that).
2022-04-02 03:34:24 +02:00
Dylan DPC
d7a24003d8
Rollup merge of #95354 - dtolnay:rustc_const_stable, r=lcnr
Handle rustc_const_stable attribute in library feature collector

The library feature collector in [compiler/rustc_passes/src/lib_features.rs](551b4fa395/compiler/rustc_passes/src/lib_features.rs) has only been looking at `#[stable(…)]`, `#[unstable(…)]`, and `#[rustc_const_unstable(…)]` attributes, while ignoring `#[rustc_const_stable(…)]`. The consequences of this were:

- When any const feature got stabilized (changing one or more `rustc_const_unstable` to `rustc_const_stable`), users who had previously enabled that unstable feature using `#![feature(…)]` would get told "unknown feature", rather than rustc's nicer "the feature … has been stable since … and no longer requires an attribute to enable".

    This can be seen in the way that https://github.com/rust-lang/rust/pull/93957#issuecomment-1079794660 failed after rebase:

    ```console
    error[E0635]: unknown feature `const_ptr_offset`
      --> $DIR/offset_from_ub.rs:1:35
       |
    LL | #![feature(const_ptr_offset_from, const_ptr_offset)]
       |                                   ^^^^^^^^^^^^^^^^
    ```

- We weren't enforcing that a particular feature is either stable everywhere or unstable everywhere, and that a feature that has been stabilized has the same stabilization version everywhere, both of which we enforce for the other stability attributes.

This PR updates the library feature collector to handle `rustc_const_stable`, and fixes places in the standard library and test suite where `rustc_const_stable` was being used in a way that does not meet the rules for a stability attribute.
2022-04-02 03:34:21 +02:00
Matthias Krüger
c37aeb0299
Rollup merge of #95528 - RalfJung:miri-is-too-slow, r=scottmcm
skip slow int_log tests in Miri

Iterating over i16::MAX many things takes a long time in Miri, let's not do that.
I added https://github.com/rust-lang/miri/pull/2044 on the Miri side to still give us some test coverage.
2022-04-01 12:07:03 +02:00
Declan Kelly
2a827635ba Implement provenance preserving method on NonNull
**Description**
 Add the `addr`, `with_addr`, `map_addr` methods to the `NonNull` type,
 and map the address type to `NonZeroUsize`.

 **Motivation**
 The `NonNull` type is useful for implementing pointer types which have
 the 0-niche. It is currently possible to implement these provenance
 preserving functions by calling `NonNull::as_ptr` and `new_unchecked`.
 The addition of these methods simply makes them more ergonomic to use.

 **Testing**
 Added a unit test of a nonnull tagged pointer type. This is based on
 some real code I have elsewhere, that currently routes the pointer
 through a `NonZeroUsize` and back out to produce a usable pointer.
2022-04-01 00:23:09 -07:00
David Tolnay
4246916619
Adjust feature names that disagree on const stabilization version 2022-03-31 12:34:48 -07:00
Ralf Jung
487bd8184f skip slow int_log tests in Miri 2022-03-31 11:48:51 -04:00
Ralf Jung
907ba11490 ptr_metadata test: avoid ptr-to-int transmutes 2022-03-31 09:32:30 -04:00
David Tolnay
2ac9efbe95
Debug print char 0 as '\0' rather than '\u{0}' 2022-03-27 04:49:10 -07:00
Jendrik
5f88c23c39 add #[must_use] to functions of slice and its iterators. 2022-03-26 10:24:25 +01:00
bjorn3
4af755baf5 Limit test_variadic_fnptr to unix 2022-03-22 22:27:13 +01:00
bjorn3
56939ffe7d Don't declare test_variadic_fnptr with two conflicting signatures
It is UB for LLVM and results in a compile error for Cranelift
2022-03-20 21:09:35 +01:00
Matthias Krüger
4ead6d9dc7
Rollup merge of #95017 - zachs18:cmp_ordering_derive_eq, r=Dylan-DPC
Derive Eq for std::cmp::Ordering, instead of using manual impl.

This allows consts of type Ordering to be used in patterns, and with feature(adt_const_params) allows using `Ordering` as a const generic parameter.

Currently, `std::cmp::Ordering` implements `Eq` using a manually written `impl Eq for Ordering {}`, instead of `derive(Eq)`. This means that it does not implement `StructuralEq`.

This commit removes the manually written impl, and adds `derive(Eq)` to `Ordering`, so that it will implement `StructuralEq`.
2022-03-18 21:50:48 +01:00
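A small sketch of what the derived `Eq` (and hence structural match) enables; the const and function names are made up:

```rust
use std::cmp::Ordering;

// With Ordering deriving Eq, consts of this type can appear in patterns.
const LESS: Ordering = Ordering::Less;

fn describe(o: Ordering) -> &'static str {
    match o {
        LESS => "smaller",
        _ => "not smaller",
    }
}

fn main() {
    assert_eq!(describe(1.cmp(&2)), "smaller");
    assert_eq!(describe(2.cmp(&2)), "not smaller");
}
```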
Matthias Krüger
c183d4a510
Rollup merge of #94115 - scottmcm:iter-process-by-ref, r=yaahc
Let `try_collect` take advantage of `try_fold` overrides

No public API changes.

With this change, `try_collect` (#94047) is no longer going through the `impl Iterator for &mut impl Iterator`, and thus will be able to use `try_fold` overrides instead of being forced through `next` for every element.

Here's the test added, to see that it fails before this PR (once a new enough nightly is out): https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=462f2896f2fed2c238ee63ca1a7e7c56

This might as well go to the same person as my last `try_process` PR  (#93572), so
r? ``@yaahc``
2022-03-18 21:50:44 +01:00
Zachary S
b13b495b91 Add test for StructuralEq for std::cmp::Ordering.
Added test in library/core/tests/cmp.rs that ensures that `const`s of type `Ordering`s can be used in patterns.
2022-03-16 14:01:48 -05:00
bors
21b0325c68 Auto merge of #94738 - Urgau:rustbuild-check-cfg-values, r=Mark-Simulacrum
Enable conditional checking of values in the Rust codebase

This pull-request enable conditional checking of (well known) values in the Rust codebase.

Well known values were added in https://github.com/rust-lang/rust/pull/94362. All the `target_*` values are taken from all the built-in targets which is why some extra values were needed do be added as they are not (yet ?) defined in any built-in targets.

r? `@Mark-Simulacrum`
2022-03-13 18:34:00 +00:00
T-O-R-U-S
72a25d05bf Use implicit capture syntax in format_args
This updates the standard library's documentation to use the new syntax. The
documentation is worthwhile to update as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it'll
hurt git blame, and generally updates can be done by folks updating the code if
(and when) that makes things more readable with the new format.

A few places in the compiler and library code are updated (mostly just due to
already having been done when this commit was first authored).
2022-03-10 10:23:40 -05:00
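For reference, the syntax the documentation was updated to use (a made-up example):

```rust
fn main() {
    let name = "world";
    let count = 3;
    // Identifiers are captured implicitly from the surrounding scope.
    println!("Hello, {name}! You have {count} new messages.");
}
```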
Scott McMurray
7ef74bc8b9 Let try_collect take advantage of try_fold overrides
Without this change it's going through `&mut impl Iterator`, which handles `?Sized` and thus currently can't forward generics.

Here's the test added, to see that it fails before this PR (once a new enough nightly is out): https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=462f2896f2fed2c238ee63ca1a7e7c56
2022-03-10 00:16:06 -08:00
Matthias Krüger
d5c05fcc8a
Rollup merge of #93057 - frengor:iter_collect_into, r=m-ou-se
Add Iterator::collect_into

This PR adds `Iterator::collect_into` as proposed by ``@cormacrelf`` in #48597 (see https://github.com/rust-lang/rust/pull/48597#issuecomment-842083688).
Followup of #92982.

This adds the following method to the Iterator trait:

```rust
fn collect_into<E: Extend<Self::Item>>(self, collection: &mut E) -> &mut E
```
2022-03-09 23:14:11 +01:00
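A minimal nightly-only sketch of the added method (feature name per the PR; the iterator and collection are arbitrary):

```rust
#![feature(iter_collect_into)]

fn main() {
    let mut evens = Vec::new();
    // collect_into extends the supplied collection instead of building a new one.
    (0..10).filter(|n| n % 2 == 0).collect_into(&mut evens);
    assert_eq!(evens, [0, 2, 4, 6, 8]);
}
```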
Loïc BRANSTETT
e3ea59ada5 Remove unexpected #[cfg(target_pointer_width = "8")] in tests 2022-03-09 00:30:17 +01:00
Ralf Jung
e9149b6773 Miri can run this test now 2022-03-03 14:54:18 -05:00
Ralf Jung
d233570fab fix a warning when building core tests with cfg(miri) 2022-03-03 14:54:18 -05:00
Ralf Jung
6739299d18 Miri/CTFE: properly treat overflow in (signed) division/rem as UB 2022-03-01 20:39:51 -05:00
Matthias Krüger
770ee32b34
Rollup merge of #89793 - ibraheemdev:from_ptr_range, r=m-ou-se
Add `slice::{from_ptr_range, from_mut_ptr_range} `

Adds `slice::{from_ptr_range, from_mut_ptr_range}` as counterparts to `slice::{as_ptr_range, as_mut_ptr_range}`.
2022-02-28 12:57:44 +01:00
Ibraheem Ahmed
aac0281d30 add slice::{from_ptr_range, from_mut_ptr_range} 2022-02-27 16:53:26 -05:00
Mark Rousskov
22c3a71de1 Switch bootstrap cfgs 2022-02-25 08:00:52 -05:00
fren_gor
04b3162764
Add collect_into 2022-02-20 01:57:32 +01:00
Matthias Krüger
f19adc7acc
Rollup merge of #93658 - cchiw:issue-77443-fix, r=joshtriplett
Stabilize `#[cfg(panic = "...")]`

[Stabilization PR](https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr) for #77443
2022-02-19 06:45:29 +01:00
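A small sketch of the stabilized predicate (the function bodies are made up):

```rust
// Select an implementation based on the panic strategy the crate is built with.
#[cfg(panic = "unwind")]
fn strategy() -> &'static str {
    "unwind"
}

#[cfg(panic = "abort")]
fn strategy() -> &'static str {
    "abort"
}

fn main() {
    println!("panic strategy: {}", strategy());
}
```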
Arthur Lafrance
47d5196a00 Add a try_collect() helper method to Iterator
Tweaked `try_collect()` to accept more `Try` types

Updated feature attribute for tracking issue
2022-02-16 14:26:39 -08:00
Daniel Henry-Mantilla
42d69e2793 Write {ui,} tests for pin_macro and pin! 2022-02-14 16:56:37 +01:00
SaltyKitkat
3c142b0ffe
stabilize const_ptr_offset 2022-02-13 15:26:14 +08:00
Charisee
dbeab9c532 added space 2022-02-10 22:30:51 +00:00
Charisee
a889079b29 add cfg_panic bootstrap 2022-02-10 22:10:08 +00:00
Charisee
d018a8b624 remove mention of cfg_panic from library tests 2022-02-10 22:09:11 +00:00
Charisee
5e6be7df94 replace feature expression (cfg_panic) in lib and remove expression from tests
Rebase commit
2022-02-10 22:06:47 +00:00
Amanieu d'Antras
49d4823112 Stabilize cfg_target_has_atomic
Closes #32976
2022-02-09 18:45:44 +00:00
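A minimal sketch of the stabilized predicate (the counter is illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Only compile the 64-bit atomic counter on targets that support it.
#[cfg(target_has_atomic = "64")]
static COUNTER: AtomicU64 = AtomicU64::new(0);

fn main() {
    #[cfg(target_has_atomic = "64")]
    COUNTER.fetch_add(1, Ordering::Relaxed);
}
```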
tamaron
83242897fb add tests 2022-02-02 23:07:02 +09:00
Matthias Krüger
a643e59800
Rollup merge of #91828 - oxalica:feat/waker-getters, r=dtolnay
Implement `RawWaker` and `Waker` getters for underlying pointers

implement #87021

New APIs:
- `RawWaker::data(&self) -> *const ()`
- `RawWaker::vtable(&self) -> &'static RawWakerVTable`
- ~`Waker::as_raw_waker(&self) -> &RawWaker`~ `Waker::as_raw(&self) -> &RawWaker`

This third one is an auxiliary function to make the two APIs above more useful. Since we can only get `&Waker` in `Future::poll`, without this, we need to `transmute` it into `&RawWaker` (relying on `repr(transparent)`) in order to access its data/vtable pointers.

~Not sure if it should be named `as_raw` or `as_raw_waker`. Seems we always use `as_<something-raw>` instead of just `as_raw`. But `as_raw_waker` seems not quite consistent with `Waker::from_raw`.~ As suggested in https://github.com/rust-lang/rust/pull/91828#discussion_r770729837, use `as_raw`.
2022-02-01 16:08:02 +01:00
bors
547f2ba06b Auto merge of #86988 - thomcc:chunky-splitz-says-no-checking, r=the8472
Carefully remove bounds checks from some chunk iterator functions

So, I was writing code that requires the equivalent of `rchunks(N).rev()` (which isn't the same as forward `chunks(N)` — in particular, if the buffer length is not a multiple of `N`, I must handle the "remainder" first).

I happened to look at the codegen output of the function (I was actually interested in whether or not a nested loop was being unrolled — it was), and noticed that in the outer `rchunks(n).rev()` loop, LLVM seemed to be unable to remove the bounds checks from the iteration: https://rust.godbolt.org/z/Tnz4MYY8f (this panic was from the split_at in `RChunks::next_back`).

After doing some experimentation, it seems all of the `next_back` in the non-exact chunk iterators have the issue: (`Chunks::next_back`, `RChunks::next_back`, `ChunksMut::next_back`, and `RChunksMut::next_back`)...

Even worse, the forward `rchunks` iterators sometimes have the issue as well (... but only sometimes). For example https://rust.godbolt.org/z/oGhbqv53r has bounds checks, but if I uncomment the loop body, it manages to remove the check (which is bizarre, since I'd expect the opposite...). I suspect it's highly dependent on the surrounding code, so I decided to remove the bounds checks from them anyway. Overall, this change includes:
- All `next_back` functions on the non-`Exact` iterators (e.g. `R?Chunks(Mut)?`).
- All `next` functions on the non-exact rchunks iterators (e.g. `RChunks(Mut)?`).

I wasn't able to catch any of the other chunk iterators failing to remove the bounds checks (I checked iterations over `r?chunks(_exact)?(_mut)?` with constant chunk sizes under `-O3`, `-Os`, and `-Oz`), which makes sense, since these were the cases where it was harder to prove the bounds check correct to remove...

In fact, it took quite a bit of thinking to convince myself that using unchecked_ here was valid — so I'm not really surprised that LLVM had trouble (although compilers are slightly better at this sort of reasoning than humans). A consequence of that is the fact that the `// SAFETY` comment for these are... kinda long...

---

I didn't do this for, or even think about it for, any of the other iteration methods; just `next` and `next_back` (where it mattered). If this PR is accepted, I'll file a follow up for someone (possibly me) to look at the others later (in particular, `nth`/`nth_back` looked like they had similar logic), but I wanted to do this now, as IMO `next`/`next_back` are the most important here, since they're what gets used by the iteration protocol.

---

Note: While I don't expect this to impact performance directly, the panic is a side effect, which would otherwise not exist in these loops. That is, this could prevent the compiler from being able to move/remove/otherwise rework a loop over these iterators (as an example, it could not delete the code for a loop whose body computes a value which doesn't get used).

Also, some like to be able to have confidence this code has no panicking branches in the optimized code, and "no bounds checks" is kinda part of the selling point of Rust's iterators anyway.
2022-02-01 10:11:59 +00:00
Thom Chiovoloni
9c62455e2f
Improve test coverage of {Chunks,RChunks,RChunksMut}::{next,next_back} 2022-01-31 17:35:19 -08:00
bors
86f5e177bc Auto merge of #93498 - matthiaskrgr:rollup-k5shwrc, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #90277 (Improve terminology around "after typeck")
 - #92918 (Allow eliding GATs in expression position)
 - #93039 (Don't suggest inaccessible fields)
 - #93155 (Switch pretty printer to block-based indentation)
 - #93214 (Respect doc(hidden) when suggesting available fields)
 - #93347 (Make `char::DecodeUtf16::size_hint` more precise)
 - #93392 (Clarify documentation on char::MAX)
 - #93444 (Fix some CSS warnings and errors from VS Code)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-01-31 11:24:03 +00:00
Matthias Krüger
b0cdf7e995
Rollup merge of #93480 - est31:remove_unstable_deprecated, r=Mark-Simulacrum
Remove deprecated and unstable slice_partition_at_index functions

They have been deprecated since commit 01ac5a97c9
which was part of the 1.49.0 release, so from the point of nightly,
11 releases ago.
2022-01-31 07:00:45 +01:00
Matthias Krüger
76857fb3fb
Rollup merge of #93347 - WaffleLapkin:better_char_decode_utf16_size_hint, r=dtolnay
Make `char::DecodeUtf16::size_hint` more precise

The new implementation takes into account the contents of `self.buf` and rounds the lower bound up instead of down.

Fixes #88762
Revival of #88763
2022-01-31 06:58:31 +01:00
Eric Huss
0610d4fa66
Rollup merge of #92887 - pietroalbini:pa-bootstrap-update, r=Mark-Simulacrum
Bootstrap compiler update

r? ``@Mark-Simulacrum``
2022-01-30 08:37:46 -08:00
est31
105a7461b9 Remove deprecated and unstable slice_partition_at_index functions
They have been deprecated since commit 01ac5a97c9
which was part of the 1.49.0 release, so from the point of nightly,
11 releases ago.
2022-01-30 16:19:03 +01:00
Maybe Waffle
17cd2cd592 Fix an edge case in char::DecodeUtf16::size_hint
There are cases when data in the buf might or might not be an error.
2022-01-30 15:32:21 +03:00
Matthias Krüger
37e9cb34e5
Rollup merge of #93236 - woppopo:const_nonnull_new, r=oli-obk
Make `NonNull::new` `const`

Tracking issue: #93235
2022-01-29 14:46:31 +01:00
Matthias Krüger
9e86a434a7
Rollup merge of #92274 - woppopo:const_deallocate, r=oli-obk
Add `intrinsics::const_deallocate`

Tracking issue: #79597
Related: #91884

This allows deallocation of a memory allocated by `intrinsics::const_allocate`. At the moment, this can be only used to reduce memory usage, but in the future this may be useful to detect memory leaks (If an allocated memory remains after evaluation, raise an error...?).
2022-01-29 14:46:30 +01:00
Pietro Albini
5b3462c556
update cfg(bootstrap)s 2022-01-28 15:01:07 +01:00
woppopo
cdd0873db6 Add a test case for using NonNull::new in const context 2022-01-28 18:41:35 +09:00
Maybe Waffle
2c97d1012e Fix wrong assumption in DecodeUtf16::size_hint
`self.buf` can contain a surrogate, but only a leading one.
2022-01-28 12:40:59 +03:00
woppopo
7a7144f413 test_const_allocate_at_runtime 2022-01-28 17:27:33 +09:00
Maybe Waffle
9c8cd1ff37 Add a test for char::DecodeUtf16::size_hint 2022-01-27 00:50:34 +03:00
woppopo
29932db09b const_deallocate: Don't deallocate memory allocated in another const. Does nothing at runtime.
`const_allocate`: Returns a null pointer at runtime.
2022-01-26 13:06:09 +09:00
Matthias Krüger
55a1f8b955
Rollup merge of #91122 - dtolnay:not, r=m-ou-se
impl Not for !

The lack of this impl caused trouble for me in some degenerate cases of macro-generated code of the form `if !$cond {...}`, even without `feature(never_type)` on a stable compiler. Namely if `$cond` contains a `return` or `break` or similar diverging expression, which would otherwise be perfectly legal in boolean position, the code previously failed to compile with:

```console
error[E0600]: cannot apply unary operator `!` to type `!`
   --> library/core/tests/ops.rs:239:8
    |
239 |     if !return () {}
    |        ^^^^^^^^^^ cannot apply unary operator `!`
```
2022-01-23 01:09:41 +01:00
Matthias Krüger
f511360fd2
Rollup merge of #92747 - swenson:bignum-bit-length-optimization, r=scottmcm
Simplification of BigNum::bit_length

As indicated in the comment, the BigNum::bit_length function could be
optimized by using CLZ, which is often a single instruction instead of a
loop.

I think the code is also simpler now without the loop.

I added some additional tests for Big8x3 and Big32x40 to ensure that
there were no regressions.
2022-01-15 11:28:22 +01:00
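A sketch of the underlying idea (not the libcore BigNum code itself): computing a bit length via a count-leading-zeros instruction instead of a shift loop:

```rust
fn bit_length(x: u32) -> u32 {
    // leading_zeros lowers to a single CLZ-style instruction on most targets.
    u32::BITS - x.leading_zeros()
}

fn main() {
    assert_eq!(bit_length(0), 0);
    assert_eq!(bit_length(0b1011), 4);
    assert_eq!(bit_length(u32::MAX), 32);
}
```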
Matthias Krüger
558da934c1
Rollup merge of #92768 - ojeda:stabilize-maybe_uninit_extra, r=Mark-Simulacrum
Partially stabilize `maybe_uninit_extra`

This covers:

```rust
impl<T> MaybeUninit<T> {
    pub unsafe fn assume_init_read(&self) -> T { ... }
    pub unsafe fn assume_init_drop(&mut self) { ... }
}
```

It does not cover the const-ness of `write` under `const_maybe_uninit_write` nor the const-ness of `assume_init_read` (this commit adds `const_maybe_uninit_assume_init_read` for that).

FCP: https://github.com/rust-lang/rust/issues/63567#issuecomment-958590287.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2022-01-14 07:47:33 +01:00
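A minimal sketch of the stabilized `assume_init_drop` (the String value is arbitrary):

```rust
use std::mem::MaybeUninit;

fn main() {
    let mut slot: MaybeUninit<String> = MaybeUninit::uninit();
    slot.write(String::from("hello"));
    // Run the destructor of the contained value in place; afterwards `slot`
    // counts as uninitialized again and must not be read.
    unsafe { slot.assume_init_drop() };
}
```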
Miguel Ojeda
8680a44c0f Partially stabilize maybe_uninit_extra
This covers:

    impl<T> MaybeUninit<T> {
        pub unsafe fn assume_init_read(&self) -> T { ... }
        pub unsafe fn assume_init_drop(&mut self) { ... }
    }

It does not cover the const-ness of `write` under
`const_maybe_uninit_write` nor the const-ness of
`assume_init_read` (this commit adds
`const_maybe_uninit_assume_init_read` for that).

FCP: https://github.com/rust-lang/rust/issues/63567#issuecomment-958590287.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2022-01-11 17:01:13 +01:00
Christopher Swenson
424f38f211 Simplification of BigNum::bit_length
As indicated in the comment, the BigNum::bit_length function could be
optimized by using CLZ, which is often a single instruction instead of a
loop.

I think the code is also simpler now without the loop.

I added some additional tests for Big8x3 and Big32x40 to ensure that
there were no regressions.
2022-01-10 14:18:28 -08:00
Lucas Kent
08829853d3 Replace usages of vec![].into_iter with [].into_iter 2022-01-09 14:09:25 +11:00
bors
fca4b155a7 Auto merge of #92226 - woppopo:const_black_box, r=joshtriplett
Constify `core::intrinsics::black_box` and `core::hint::black_box`.

`core::intrinsics::black_box` is already constified, but it wasn't marked as const (see: https://github.com/rust-lang/rust/blob/master/compiler/rustc_const_eval/src/interpret/intrinsics.rs#L471).

Tracking issue: None
2021-12-24 10:02:54 +00:00
Matthias Krüger
94b9b5f35f
Rollup merge of #92121 - RalfJung:miri-core-test, r=kennytm
disable test with self-referential generator on Miri

Running the libcore test suite in Miri currently fails due to the known incompatibility of self-referential generators with Miri's aliasing checks (https://github.com/rust-lang/unsafe-code-guidelines/issues/148). So let's disable that test in Miri for now.
2021-12-23 17:48:30 +01:00
woppopo
72f15ea22a Constify core::intrinsics::black_box 2021-12-23 20:07:41 +09:00
Matthias Krüger
60625a6ef0
Rollup merge of #88858 - spektom:to_lower_upper_rev, r=dtolnay
Allow reverse iteration of lowercase'd/uppercase'd chars

The PR implements `DoubleEndedIterator` trait for `ToLowercase` and `ToUppercase`.

This enables reverse iteration of lowercase/uppercase variants of character sequences.
One use case: determining whether a char sequence is a suffix of another one.

Example:

```rust
fn endswith_ignore_case(s1: &str, s2: &str) -> bool {
    for eob in s1
        .chars()
        .flat_map(|c| c.to_lowercase())
        .rev()
        .zip_longest(s2.chars().flat_map(|c| c.to_lowercase()).rev())
    {
        match eob {
            EitherOrBoth::Both(c1, c2) => {
                if c1 != c2 {
                    return false;
                }
            }
            EitherOrBoth::Left(_) => return true,
            EitherOrBoth::Right(_) => return false,
        }
    }
    true
}
```
2021-12-23 00:28:51 +01:00
Ralf Jung
5994990088 disable test with self-referential generator on Miri 2021-12-20 12:33:55 +01:00
Matthias Krüger
6d2689526b
Rollup merge of #91141 - jhpratt:int_roundings, r=joshtriplett
Revert "Temporarily rename int_roundings functions to avoid conflicts"

This reverts commit 3ece63b64e.

This should be okay because #90329 has been merged.

r? `@joshtriplett`
2021-12-19 10:45:50 +01:00
bors
d3f300477b Auto merge of #92062 - matthiaskrgr:rollup-en3p4sb, r=matthiaskrgr
Rollup of 7 pull requests

Successful merges:

 - #91439 (Mark defaulted `PartialEq`/`PartialOrd` methods as const)
 - #91516 (Improve suggestion to change struct field to &mut)
 - #91896 (Remove `in_band_lifetimes` for `rustc_passes`)
 - #91909 (⬆️ rust-analyzer)
 - #91922 (Remove `in_band_lifetimes` from `rustc_mir_dataflow`)
 - #92025 (Revert "socket ancillary data implementation for dragonflybsd.")
 - #92030 (Update stdlib to the 2021 edition)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2021-12-18 10:20:24 +00:00
Matthias Krüger
359c88e426
Rollup merge of #91439 - ecstatic-morse:const-cmp-trait-default-methods, r=oli-obk
Mark defaulted `PartialEq`/`PartialOrd` methods as const

Without it, `const` impls of these traits are unpleasant to write. I think this kind of change is allowed now, although it looks like it might require some Miri tweaks. Let's find out.

r? ```@fee1-dead```
2021-12-18 10:26:35 +01:00
Matthias Krüger
fcc59794a7
Rollup merge of #91928 - fee1-dead:constification1, r=oli-obk
Constify (most) `Option` methods

r? ``@oli-obk``
2021-12-18 08:16:29 +01:00
bors
7abab1efb2 Auto merge of #91838 - scottmcm:array-slice-eq-via-arrays-not-slices, r=dtolnay
Do array-slice equality via array equality, rather than always via slices

~~Draft because it needs a rebase after #91766 eventually gets through bors.~~

This enables the optimizations from #85828 to be used for array-to-slice comparisons too, not just array-to-array.

For example, <https://play.rust-lang.org/?version=nightly&mode=release&edition=2021&gist=5f9ba69b3d5825a782f897c830d3a6aa>
```rust
pub fn demo(x: &[u8], y: [u8; 4]) -> bool {
    *x == y
}
```
Currently writes the array to stack for no reason:
```nasm
	sub	rsp, 4
	mov	dword ptr [rsp], edx
	cmp	rsi, 4
	jne	.LBB0_1
	mov	eax, dword ptr [rdi]
	cmp	eax, dword ptr [rsp]
	sete	al
	add	rsp, 4
	ret

.LBB0_1:
	xor	eax, eax
	add	rsp, 4
	ret
```
Whereas with the change in this PR it just compares it directly:
```nasm
	cmp	rsi, 4
	jne	.LBB1_1
	cmp	dword ptr [rdi], edx
	sete	al
	ret

.LBB1_1:
	xor	eax, eax
	ret
```
2021-12-17 19:17:29 +00:00
Deadbeef
6b5f63c3fc
Constify (most) Option methods 2021-12-17 20:46:47 +08:00
Dylan MacKenzie
2049287030 Disable test on bootstrap compiler 2021-12-16 22:11:17 -08:00
Dylan MacKenzie
1606335a93 Test const impl of cmp traits 2021-12-16 21:35:25 -08:00
oxalica
bae0da8361
Implement data and vtable getters for RawWaker 2021-12-17 04:30:13 +08:00
Matthias Krüger
13fb051074
Rollup merge of #91918 - fee1-dead:constification0-the-great-constification-begins, r=oli-obk
Constify `bool::then{,_some}`

Note on `~const Drop`: it has no effect when called from runtime functions; when called from const contexts, the trait system ensures that the type can be dropped in const contexts.
2021-12-15 10:57:03 +01:00
Scott McMurray
a0b96902e4 Do array-slice equality via arrays, rather than always via slices
This'll still go via slices eventually for large arrays, but this way slice comparisons to short arrays can use the same memcmp-avoidance tricks.

Added some tests for all the combinations to make sure I didn't accidentally infinitely-recurse something.
2021-12-14 13:15:15 -08:00
Deadbeef
4f4b2c7271
Constify bool::then{,_some} 2021-12-15 00:11:23 +08:00
Frank Steffahn
a957cefda6 Fix a bunch of typos 2021-12-14 16:40:43 +01:00
bors
f7fd79ac1d Auto merge of #91841 - matthiaskrgr:rollup-zlhsg5a, r=matthiaskrgr
Rollup of 5 pull requests

Successful merges:

 - #91086 (Implement `TryFrom<&'_ mut [T]>` for `[T; N]`)
 - #91091 (Stabilize `ControlFlow::{is_break, is_continue}`)
 - #91749 (BTree: improve public descriptions and comments)
 - #91819 (rustbot: Add autolabeling for `T-compiler`)
 - #91824 (Make `(*mut T)::write_bytes` `const`)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2021-12-13 00:56:18 +00:00
Matthias Krüger
9e662d0c03
Rollup merge of #91824 - woppopo:const_ptr_write_bytes, r=oli-obk
Make `(*mut T)::write_bytes` `const`

Tracking issue: #86302
2021-12-13 00:20:10 +01:00
Matthias Krüger
42f8d4833f
Rollup merge of #91086 - rhysd:issue-91085, r=m-ou-se
Implement `TryFrom<&'_ mut [T]>` for `[T; N]`

Fixes #91085.
2021-12-13 00:20:06 +01:00
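A minimal usage sketch of the new impl (values are arbitrary; the element type must be `Copy`):

```rust
use std::convert::TryFrom;

fn main() {
    let mut data = [10u8, 20, 30, 40, 50];
    // Copy three elements out of a mutable slice into a fixed-size array.
    let head = <[u8; 3]>::try_from(&mut data[..3]).unwrap();
    assert_eq!(head, [10, 20, 30]);
    // A length mismatch yields an Err rather than a panic.
    assert!(<[u8; 3]>::try_from(&mut data[..2]).is_err());
}
```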
woppopo
7f5dc0f609 Make (*mut T)::write_bytes const 2021-12-12 14:02:53 +09:00
Deadbeef
e22fe4008c
Revert "Auto merge of #89450 - usbalbin:const_try_revert, r=oli-obk"
This reverts commit a8387aef8c, reversing
changes made to 6e12110812.
2021-12-12 12:34:59 +08:00
Matthias Krüger
9383a49cd4
Rollup merge of #90081 - woppopo:const_write_bytes, r=oli-obk
Make `intrinsics::write_bytes` const

This is required to constify `MaybeUninit::zeroed` and `(*mut T)::write_bytes`.

Tracking issue: #86302
2021-12-11 23:31:48 +01:00
Matthias Krüger
ed81098fcc
Rollup merge of #91721 - danielhenrymantilla:patch-1, r=joshtriplett
Minor improvements to `future::join!`'s implementation

This is a follow-up from #91645, regarding [some remarks I made](https://rust-lang.zulipchat.com/#narrow/stream/187312-wg-async-foundations/topic/join!/near/264293660).

Mainly:
  - it hides the recursive munching through a private `macro`, to avoid leaking such details (a corollary is getting rid of the need to use ``@`` to disambiguate);
  - it uses a `match` binding, _outside_ the `async move` block, to better match the semantics from function-like syntax;
  - it pre-pins the future before calling into `poll_fn`, since `poll_fn`, alone, cannot guarantee that its capture does not move (to clarify: I believe the previous code was sound, thanks to the outer layer of `async`. But I find it clearer / more robust to refactorings this way 🙂).
  - it uses `@ibraheemdev's` very neat `.ready()?`;
  - it renames `Took` to `Taken` for consistency with `Done` (tiny nit 😄).

~~TODO~~Done:

  - [x] Add unit tests to enforce the function-like `:value` semantics are respected.

r? `@nrc`
2021-12-11 17:35:27 +01:00
Jethro Beekman
203cf2d366 Add rsplit_array variants to slices and arrays 2021-12-10 21:34:19 +01:00
Josh Triplett
67ab53daee
Update library/core/tests/future.rs
Co-authored-by: Daniel Henry-Mantilla <daniel.henry.mantilla@gmail.com>
2021-12-10 05:07:52 -08:00
Daniel Henry-Mantilla
f8dc13db43 Add tests asserting the function-like semantics of join!() 2021-12-09 22:57:30 +01:00
Matthias Krüger
90c3e9a2c2
Rollup merge of #91645 - ibraheemdev:future-join, r=joshtriplett
Implement `core::future::join!`

`join!` polls multiple futures concurrently and returns their outputs.

```rust
async fn run() {
    let (a, b) = join!(async { 0 }, async { 1 });
}
```

cc `@rust-lang/wg-async-foundations`
2021-12-09 05:02:22 +01:00
Ibraheem Ahmed
a8c9314100 remove implicit .await from core::future::join 2021-12-08 16:44:48 -05:00
bors
ce0f7baf56 Auto merge of #91512 - scottmcm:array-intoiter-advance, r=Mark-Simulacrum
Override `Iterator::advance(_back)_by` for `array::IntoIter`

Because I happened to notice that `nth` is currently getting codegen'd as a loop even for `Copy` types: <https://rust.godbolt.org/z/fPqv7Gvs7>

<details>
<summary>LLVM before and after</summary>

Rust:

```rust
#[no_mangle]
pub fn array_intoiter_nth(it: &mut std::array::IntoIter<i32, 100>, n: usize) -> Option<i32> {
    it.nth(n)
}
```

Current nightly:
```llvmir
define { i32, i32 } `@array_intoiter_nth(%"core::array::iter::IntoIter<i32,` 100_usize>"* noalias nocapture align 8 dereferenceable(416) %it, i64 %n) unnamed_addr #0 personality i32 (i32, i32, i64, %"unwind::libunwind::_Unwind_Exception"*, %"unwind::libunwind::_Unwind_Context"*)* `@rust_eh_personality` !dbg !6 {
start:
  %_3.i.i.i4.i.i = getelementptr inbounds %"core::array::iter::IntoIter<i32, 100_usize>", %"core::array::iter::IntoIter<i32, 100_usize>"* %it, i64 0, i32 0, i32 0
  %_4.i.i.i5.i.i = getelementptr inbounds %"core::array::iter::IntoIter<i32, 100_usize>", %"core::array::iter::IntoIter<i32, 100_usize>"* %it, i64 0, i32 0, i32 1
  %_4.i.i.i.i.i.i = load i64, i64* %_4.i.i.i5.i.i, align 8, !alias.scope !10
  %.not.i.i = icmp eq i64 %n, 0, !dbg !15
  %_3.i.i.i.i.pre.i = load i64, i64* %_3.i.i.i4.i.i, align 8, !dbg !40, !alias.scope !41
  br i1 %.not.i.i, label %bb4.i, label %bb4.preheader.i.i, !dbg !42

bb4.preheader.i.i:                                ; preds = %start
  %umax.i = tail call i64 `@llvm.umax.i64(i64` %_3.i.i.i.i.pre.i, i64 %_4.i.i.i.i.i.i) #3, !dbg !43
  %0 = sub i64 %umax.i, %_3.i.i.i.i.pre.i, !dbg !43
  br label %bb4.i.i, !dbg !43

bb4.i.i:                                          ; preds = %bb3.i.i.i.i, %bb4.preheader.i.i
  %_3.i.i.i.i.i.i = phi i64 [ %2, %bb3.i.i.i.i ], [ %_3.i.i.i.i.pre.i, %bb4.preheader.i.i ], !dbg !52
  %iter.sroa.0.016.i.i = phi i64 [ %1, %bb3.i.i.i.i ], [ 0, %bb4.preheader.i.i ]
  %1 = add nuw i64 %iter.sroa.0.016.i.i, 1, !dbg !54
  %exitcond.not.i = icmp eq i64 %iter.sroa.0.016.i.i, %0, !dbg !52
  br i1 %exitcond.not.i, label %core::iter::traits::iterator::Iterator::nth.exit, label %bb3.i.i.i.i, !dbg !43

bb3.i.i.i.i:                                      ; preds = %bb4.i.i
  %2 = add nuw i64 %_3.i.i.i.i.i.i, 1, !dbg !63
  store i64 %2, i64* %_3.i.i.i4.i.i, align 8, !dbg !66, !alias.scope !75
  %exitcond.not.i.i = icmp eq i64 %1, %n, !dbg !15
  br i1 %exitcond.not.i.i, label %bb4.i, label %bb4.i.i, !dbg !42

bb4.i:                                            ; preds = %bb3.i.i.i.i, %start
  %_3.i.i.i.i.i = phi i64 [ %_3.i.i.i.i.pre.i, %start ], [ %2, %bb3.i.i.i.i ], !dbg !84
  %3 = icmp ult i64 %_3.i.i.i.i.i, %_4.i.i.i.i.i.i, !dbg !84
  br i1 %3, label %bb3.i.i.i, label %core::iter::traits::iterator::Iterator::nth.exit, !dbg !89

bb3.i.i.i:                                        ; preds = %bb4.i
  %4 = add nuw i64 %_3.i.i.i.i.i, 1, !dbg !90
  store i64 %4, i64* %_3.i.i.i4.i.i, align 8, !dbg !93, !alias.scope !96
  %5 = getelementptr inbounds %"core::array::iter::IntoIter<i32, 100_usize>", %"core::array::iter::IntoIter<i32, 100_usize>"* %it, i64 0, i32 1, i64 %_3.i.i.i.i.i, !dbg !105
  %6 = load i32, i32* %5, align 4, !dbg !131, !alias.scope !141, !noalias !144
  br label %core::iter::traits::iterator::Iterator::nth.exit, !dbg !149

core::iter::traits::iterator::Iterator::nth.exit: ; preds = %bb4.i.i, %bb4.i, %bb3.i.i.i
  %.sroa.3.0.i = phi i32 [ %6, %bb3.i.i.i ], [ undef, %bb4.i ], [ undef, %bb4.i.i ], !dbg !40
  %.sroa.0.0.i = phi i32 [ 1, %bb3.i.i.i ], [ 0, %bb4.i ], [ 0, %bb4.i.i ], !dbg !40
  %7 = insertvalue { i32, i32 } undef, i32 %.sroa.0.0.i, 0, !dbg !150
  %8 = insertvalue { i32, i32 } %7, i32 %.sroa.3.0.i, 1, !dbg !150
  ret { i32, i32 } %8, !dbg !151
}
```

With this PR:
```llvmir
define { i32, i32 } `@array_intoiter_nth(%"core::array::iter::IntoIter<i32,` 100_usize>"* noalias nocapture align 8 dereferenceable(416) %it, i64 %n) unnamed_addr #0 personality i32 (...)* `@__CxxFrameHandler3` {
start:
  %0 = getelementptr inbounds %"core::array::iter::IntoIter<i32, 100_usize>", %"core::array::iter::IntoIter<i32, 100_usize>"* %it, i64 0, i32 0, i32 1
  %_2.i.i.i.i = load i64, i64* %0, align 8, !alias.scope !6, !noalias !13
  %1 = getelementptr inbounds %"core::array::iter::IntoIter<i32, 100_usize>", %"core::array::iter::IntoIter<i32, 100_usize>"* %it, i64 0, i32 0, i32 0
  %_3.i.i.i.i = load i64, i64* %1, align 8, !alias.scope !16
  %2 = sub i64 %_2.i.i.i.i, %_3.i.i.i.i
  %3 = icmp ult i64 %2, %n
  %.0.sroa.speculated.i.i.i.i.i = select i1 %3, i64 %2, i64 %n
  %_10.i.i = add i64 %.0.sroa.speculated.i.i.i.i.i, %_3.i.i.i.i
  store i64 %_10.i.i, i64* %1, align 8, !alias.scope !16
  %.not.i = xor i1 %3, true
  %4 = icmp ult i64 %_10.i.i, %_2.i.i.i.i
  %or.cond.i = select i1 %.not.i, i1 %4, i1 false
  br i1 %or.cond.i, label %bb3.i.i.i, label %_ZN4core4iter6traits8iterator8Iterator3nth17hcbc727011e9e2a3bE.exit

bb3.i.i.i:                                        ; preds = %start
  %5 = add nuw i64 %_10.i.i, 1
  store i64 %5, i64* %1, align 8, !alias.scope !17
  %6 = getelementptr inbounds %"core::array::iter::IntoIter<i32, 100_usize>", %"core::array::iter::IntoIter<i32, 100_usize>"* %it, i64 0, i32 1, i64 %_10.i.i
  %7 = load i32, i32* %6, align 4, !alias.scope !26, !noalias !29
  br label %_ZN4core4iter6traits8iterator8Iterator3nth17hcbc727011e9e2a3bE.exit

_ZN4core4iter6traits8iterator8Iterator3nth17hcbc727011e9e2a3bE.exit: ; preds = %start, %bb3.i.i.i
  %.sroa.3.0.i = phi i32 [ undef, %start ], [ %7, %bb3.i.i.i ]
  %.sroa.0.0.i = phi i32 [ 0, %start ], [ 1, %bb3.i.i.i ]
  %8 = insertvalue { i32, i32 } undef, i32 %.sroa.0.0.i, 0
  %9 = insertvalue { i32, i32 } %8, i32 %.sroa.3.0.i, 1
  ret { i32, i32 } %9
}
```
</details>
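
Conceptually (a hedged sketch, not the library's actual code — `alive` stands in for the iterator's internal index range), the override advances in O(1) instead of looping:

```rust
use std::ops::Range;

// Bump the front index by up to `n`; report how far we got if the iterator
// ran out first. Dropping skipped non-Copy elements is omitted here.
fn advance_by_sketch(alive: &mut Range<usize>, n: usize) -> Result<(), usize> {
    let remaining = alive.end - alive.start;
    let step = remaining.min(n);
    alive.start += step;
    if step == n { Ok(()) } else { Err(step) }
}
```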
2021-12-08 07:54:30 +00:00
Ibraheem Ahmed
d07cef22b0 add tests for core::future::join 2021-12-07 21:20:58 -05:00
bors
11fb21fd0e Auto merge of #91484 - workingjubilee:simd-remove-autosplats, r=Mark-Simulacrum
Sync portable-simd to remove autosplats

This PR syncs portable-simd in, up to a8385522ad, in order to address the type inference breakages documented on nightly in https://github.com/rust-lang/rust/issues/90904 by removing the vector + scalar binary operations (called "autosplats", "broadcasting", or "rank promotion", depending on who you ask), which allowed `{scalar} + &'_ {scalar}` to fail in some cases because it became possible the programmer meant `{scalar} + &'_ {vector}`.
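
For illustration (not from the PR), the explicit form that remains after the removal, assuming the nightly `portable_simd` feature:

```rust
#![feature(portable_simd)]
use core::simd::f32x4;

// With the autosplat impls gone, a scalar must be broadcast explicitly before
// mixing it with a vector.
fn scale(v: f32x4, s: f32) -> f32x4 {
    v * f32x4::splat(s)
}
```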

A few quality-of-life improvements make their way in as well:
- Lane counts can now go to 64, as LLVM seems to have fixed their miscompilation for those.
- `{i,u}8x64` to `__m512i` is now available.
- a bunch of `#[must_use]` notes appear throughout the module.
- Some implementations, mostly instances of `impl core::ops::{Op}<Simd> for Simd` that aren't `{vector} + {vector}` (e.g. `{vector} + &'_ {vector}`), leverage some generics and `where` bounds now to make them easier to understand by reducing a dozen implementations into one (and make it possible for people to open the docs on less burly devices).
- And some internal-only improvements.

None of these changes should affect a beta backport, only actual users of `core::simd` (and most aren't even visible in the programmatic sense), though I can extract an even more minimal changeset for beta if necessary. It seemed simpler to just keep moving forward.
2021-12-08 01:37:59 +00:00
Mara Bos
1acb44f03c Use IntoIterator for array impl everywhere. 2021-12-04 19:40:33 +01:00
Scott McMurray
eb846dbaca Override Iterator::advance(_back)_by for array::IntoIter
Because I happened to notice that `nth` is currently getting codegen'd as a loop even for `Copy` types: <https://rust.godbolt.org/z/fPqv7Gvs7>
2021-12-03 21:36:51 -08:00
kit
aef59e4fb8 Add a try_reduce method to the Iterator trait 2021-12-04 15:17:14 +11:00
bors
d47a6cc3f2 Auto merge of #91286 - scottmcm:residual-trait, r=joshtriplett
Make `array::{try_from_fn, try_map}` and `Iterator::try_find` generic over `Try`

Fixes #85115

This only updates unstable functions.

`array::try_map` didn't actually exist before; this adds it under the still-open tracking issue #79711 from the old PR #79713.

Tracking issue for the new trait: #91285

This would also solve the return type question for the proposed `Iterator::try_reduce` in #87054
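
As a hedged illustration (assuming the unstable `array_try_map` gate), the `Try`-generic signature lets `try_map` short-circuit with `Option` as well as `Result`:

```rust
#![feature(array_try_map)]

// Hypothetical usage: returns None as soon as any element fails to parse.
fn parse_all(strs: [&str; 3]) -> Option<[i32; 3]> {
    strs.try_map(|s| s.parse::<i32>().ok())
}
```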
2021-12-03 10:15:11 +00:00
Jubilee Young
eef4371a98 Force splatting in SIMD test 2021-12-02 19:22:00 -08:00
Matthias Krüger
fbfa003016
Rollup merge of #91444 - RalfJung:miri-tests, r=dtolnay
disable tests in Miri that take too long

Comparing slices of length `usize::MAX` diverges in Miri. In fact these tests even diverge in rustc unless `-O` is passed. I tried this code to check that:
```rust
#![feature(slice_take)]

const EMPTY_MAX: &'static [()] = &[(); usize::MAX];

fn main() {
    let mut slice: &[_] = &[(); usize::MAX];
    println!("1");
    assert_eq!(Some(&[] as _), slice.take(usize::MAX..));
    println!("2");
    let remaining: &[_] = EMPTY_MAX;
    println!("3");
    assert_eq!(remaining, slice);
    println!("4");
}
```
So, disable these tests in Miri for now.
2021-12-02 22:16:14 +01:00
Scott McMurray
b96b9b4093 Make array::{try_from_fn, try_map} and Iterator::try_find generic over Try
Fixes 85115

This only updates unstable functions.

`array::try_map` didn't actually exist before, despite the tracking issue 79711 still being open from the old PR 79713.
2021-12-02 11:23:50 -08:00
Matthias Krüger
d96ce3ea8e
Rollup merge of #91394 - Mark-Simulacrum:bump-stage0, r=pietroalbini
Bump stage0 compiler

r? `@pietroalbini` (or anyone else)
2021-12-02 15:52:03 +01:00
Ralf Jung
b11d88006c disable tests in Miri that take too long 2021-12-01 22:48:59 -05:00
Matthias Krüger
9f1f42897d
Rollup merge of #88502 - ibraheemdev:slice-take, r=dtolnay
Add slice take methods

Revival of #62282

This PR adds the following slice methods:

- `take`
- `take_mut`
- `take_first`
- `take_first_mut`
- `take_last`
- `take_last_mut`

r? `@LukasKalbertodt`
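
A brief usage sketch, assuming the unstable `slice_take` feature:

```rust
#![feature(slice_take)]

fn demo() {
    let mut slice: &[_] = &[1, 2, 3, 4];
    // `take` splits off the requested one-sided range and shrinks `slice`.
    let front = slice.take(..2);
    assert_eq!(front, Some(&[1, 2][..]));
    assert_eq!(slice, &[3, 4]);
}
```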
2021-12-01 20:57:42 +01:00
Mark Rousskov
b221c877e8 Apply cfg-bootstrap switch 2021-11-30 10:51:42 -05:00
Jacob Pratt
44b5b838d2
Add test for const MaybeUninit 2021-11-28 01:31:25 -05:00
bors
5fd3a5c7c1 Auto merge of #89916 - the8472:advance_by-avoid-err-0, r=dtolnay
Fix Iterator::advance_by contract inconsistency

The `advance_by(n)` docs state that in the error case `Err(k)`, `k` is always less than `n`.
They also state that `advance_by(0)` may return `Err(0)` to indicate an exhausted iterator.
These statements are inconsistent.
Since only one implementation (`Skip`) actually made use of that, I changed it to return `Ok(())` in that case too.

While adding some tests I also found a bug in `Take::advance_back_by`.
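
For reference, the contract as it reads after this change, shown with the `Result<(), usize>` signature the unstable `iter_advance_by` API had at the time:

```rust
#![feature(iter_advance_by)]

fn demo() {
    let mut it = [1, 2, 3].into_iter();
    assert_eq!(it.advance_by(2), Ok(()));   // two elements skipped
    assert_eq!(it.advance_by(0), Ok(()));   // never Err(0)
    assert_eq!(it.advance_by(100), Err(1)); // only one element remained
}
```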
2021-11-27 11:31:26 +00:00
woppopo
89b2e0c9d5 Make intrinsics::write_bytes const 2021-11-24 13:05:26 +09:00
Jacob Pratt
41f70f3491
Revert "Temporarily rename int_roundings functions to avoid conflicts"
This reverts commit 3ece63b64e.
2021-11-22 15:49:04 -05:00
Jacob Pratt
88b0d7cfc5
Partially stabilize duration_consts_2
Methods that were only blocked on `const_panic` have been stabilized.
The remaining methods of `duration_consts_2` are all related to floats,
and as such have been placed behind the `duration_consts_float` feature
gate.
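
For illustration (a sketch, assuming `saturating_sub` is among the newly const-stable methods), the kind of call this now allows in a const context:

```rust
use std::time::Duration;

// Non-float Duration arithmetic evaluated entirely at compile time.
const TIMEOUT: Duration = Duration::from_secs(3);
const REMAINING: Duration = TIMEOUT.saturating_sub(Duration::from_secs(1));
```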
2021-11-22 13:09:08 -05:00
David Tolnay
a4bff741d0
Test not never
Currently fails to build:

    error[E0600]: cannot apply unary operator `!` to type `!`
       --> library/core/tests/ops.rs:239:8
        |
    239 |     if !return () {}
        |        ^^^^^^^^^^ cannot apply unary operator `!`
2021-11-21 19:10:39 -08:00
rhysd
72b411fd89 Implement TryFrom<&'_ mut [T]> for [T; N] 2021-11-20 23:05:08 +09:00
Loïc BRANSTETT
a8ee0e9c2c Implement IEEE 754-2019 minimum and maximum functions for f32/f64 2021-11-20 10:14:03 +01:00
The8472
3f9b26dc64 Fix Iterator::advance_by contract inconsistency
The `advance_by(n)` docs state that in the error case `Err(k)`, `k` is always less than `n`.
They also state that `advance_by(0)` may return `Err(0)` to indicate an exhausted iterator.
These statements are inconsistent.
Since only one implementation (`Skip`) actually made use of that, I changed it to return `Ok(())` in that case too.

While adding some tests I also found a bug in `Take::advance_back_by`.
2021-11-19 13:00:23 +01:00
Yuki Okushi
35dd1f65e9
Rollup merge of #90909 - RalfJung:miri-no-portable-simd, r=workingjubilee
disable portable SIMD tests in Miri

Until https://github.com/rust-lang/miri/issues/1912 is resolved, we'll have to skip these tests in Miri.
2021-11-16 09:14:23 +09:00
Ralf Jung
60595f7bde disable portable SIMD tests in Miri 2021-11-14 12:26:35 -05:00
bors
d212d902ae Auto merge of #89551 - jhpratt:stabilize-const_raw_ptr_deref, r=oli-obk
Stabilize `const_raw_ptr_deref` for `*const T`

This stabilizes dereferencing immutable raw pointers in const contexts.
It does not stabilize `*mut T` dereferencing, which remains behind the
same feature gate as mutable references.

closes https://github.com/rust-lang/rust/issues/51911
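
A minimal illustration of what this allows (dereferencing a `*const T` in a const context):

```rust
const X: i32 = 42;
const PTR: *const i32 = &X;
// OK after this stabilization; dereferencing *mut T stays gated.
const VALUE: i32 = unsafe { *PTR };
```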
2021-11-13 17:10:15 +00:00
Ibraheem Ahmed
8db85a3c78 add slice take methods 2021-11-12 23:08:27 -05:00
bors
032dfe4360 Auto merge of #89167 - workingjubilee:use-simd, r=MarkSimulacrum
pub use core::simd;

A portable abstraction over SIMD has been a major pursuit in recent years for several programming languages. In Rust, `std::arch` offers explicit SIMD acceleration via compiler intrinsics, but it does so at the cost of having to individually maintain each and every single such API, and is almost completely `unsafe` to use.  `core::simd` offers safe abstractions that are resolved to the appropriate SIMD instructions by LLVM during compilation, including scalar instructions if that is all that is available.

`core::simd` is enabled by the `#![portable_simd]` nightly feature tracked in https://github.com/rust-lang/rust/issues/86656 and is introduced here by pulling in the https://github.com/rust-lang/portable-simd repository as a subtree. We built the repository out-of-tree to allow faster compilation and a stochastic test suite backed by the proptest crate to verify that different targets, features, and optimizations produce the same result, so that using this library does not introduce any surprises. As these tests are technically non-deterministic, and thus can introduce overly interesting Heisenbugs if included in the rustc CI, they are visible in the commit history of the subtree but do nothing here. Some tests **are** introduced via the documentation, but these use deterministic asserts.

There are multiple unsolved problems with the library at the current moment, including a want for better documentation, technical issues with LLVM scalarizing and lowering to libm, room for improvement for the APIs, and so far I have not added the necessary plumbing for allowing the more experimental or libm-dependent APIs to be used. However, I thought it would be prudent to open this for review in its current condition, as it is both usable and it is likely I am going to learn something else needs to be fixed when bors tries this out.

The major types are
- `core::simd::Simd<T, N>`
- `core::simd::Mask<T, N>`

There is also the `LaneCount` struct, which, together with the SimdElement and SupportedLaneCount traits, limit the implementation's maximum support to vectors we know will actually compile and provide supporting logic for bitmasks. I'm hoping to simplify at least some of these out of the way as the compiler and library evolve.
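
A tiny sketch of the safe API surface described above (nightly `portable_simd` feature; not taken from the PR):

```rust
#![feature(portable_simd)]
use core::simd::Simd;

// Element-wise addition; LLVM lowers this to whatever SIMD (or scalar)
// instructions the target supports.
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    (Simd::from_array(a) + Simd::from_array(b)).to_array()
}
```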
2021-11-13 02:17:20 +00:00
Jubilee Young
7c3d72d069 Test core::simd works
These tests just verify some basic APIs of core::simd function, and
guarantees that attempting to access the wrong things doesn't work.
The majority of tests are stochastic, and so remain upstream, but
a few deterministic tests arrive in the subtree as doc tests.
2021-11-12 16:58:47 -08:00
Jacob Pratt
0cdbeaa2a3
Stabilize const_raw_ptr_deref for *const T
This stabilizes dereferencing immutable raw pointers in const contexts.
It does not stabilize `*mut T` dereferencing. This is placed behind the
`const_raw_mut_ptr_deref` feature gate.
2021-11-06 17:05:15 -04:00
Matthias Krüger
d4bdcdb1ec
Rollup merge of #89951 - ojeda:stable-unwrap_unchecked, r=dtolnay
Stabilize `option_result_unwrap_unchecked`

Closes https://github.com/rust-lang/rust/issues/81383.

Stabilization report: https://github.com/rust-lang/rust/issues/81383#issuecomment-944498212.
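
A short usage sketch of the now-stable method; the caller must uphold the `Some` invariant:

```rust
fn first_byte(s: &str) -> u8 {
    assert!(!s.is_empty());
    // SAFETY: the assert above guarantees the option is Some.
    unsafe { s.bytes().next().unwrap_unchecked() }
}
```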

`@rustbot` label +A-option-result +T-libs-api
2021-10-31 09:20:27 +01:00
Matthias Krüger
a26b1d2259
Rollup merge of #89835 - jkugelman:must-use-expensive-computations, r=joshtriplett
Add #[must_use] to expensive computations

The unifying theme for this commit is weak, admittedly. I put together a list of "expensive" functions when I originally proposed this whole effort, but nobody's cared about that criterion. Still, it's a decent way to bite off a not-too-big chunk of work.

Given the grab bag nature of this commit, the messages I used vary quite a bit. I'm open to wording changes.

For some reason clippy flagged four `BTreeSet` methods but didn't say boo about equivalent ones on `HashSet`. I stared at them for a while but I can't figure out the difference so I added the `HashSet` ones in.

```rust
// Flagged by clippy.
alloc::collections::btree_set::BTreeSet<T>   fn difference<'a>(&'a self, other: &'a BTreeSet<T>) -> Difference<'a, T>;
alloc::collections::btree_set::BTreeSet<T>   fn symmetric_difference<'a>(&'a self, other: &'a BTreeSet<T>) -> SymmetricDifference<'a, T>
alloc::collections::btree_set::BTreeSet<T>   fn intersection<'a>(&'a self, other: &'a BTreeSet<T>) -> Intersection<'a, T>;
alloc::collections::btree_set::BTreeSet<T>   fn union<'a>(&'a self, other: &'a BTreeSet<T>) -> Union<'a, T>;

// Ignored by clippy, but not by me.
std::collections::HashSet<T, S>              fn difference<'a>(&'a self, other: &'a HashSet<T, S>) -> Difference<'a, T, S>;
std::collections::HashSet<T, S>              fn symmetric_difference<'a>(&'a self, other: &'a HashSet<T, S>) -> SymmetricDifference<'a, T, S>
std::collections::HashSet<T, S>              fn intersection<'a>(&'a self, other: &'a HashSet<T, S>) -> Intersection<'a, T, S>;
std::collections::HashSet<T, S>              fn union<'a>(&'a self, other: &'a HashSet<T, S>) -> Union<'a, T, S>;
```
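
For context (an illustration, not from the PR), the effect of the attribute on one of the listed methods:

```rust
use std::collections::HashSet;

fn demo(a: &HashSet<i32>, b: &HashSet<i32>) {
    // With #[must_use] added, discarding the lazy difference iterator warns,
    // since the call neither computes anything eagerly nor modifies the sets.
    a.difference(b); // warning: unused return value
    let shared = a.intersection(b).count(); // consuming the iterator is fine
    println!("{shared}");
}
```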

Parent issue: #89692

r? `@joshtriplett`
2021-10-31 09:20:24 +01:00
bors
235d9853d8 Auto merge of #90042 - pietroalbini:1.56-master, r=Mark-Simulacrum
Bump bootstrap compiler to 1.57

Fixes https://github.com/rust-lang/rust/issues/90152

r? `@Mark-Simulacrum`
2021-10-25 11:31:47 +00:00
Matthias Krüger
c16ee19dd4
Rollup merge of #90162 - WaffleLapkin:const_array_slice_from_ref_mut, r=oli-obk
Mark `{array, slice}::{from_ref, from_mut}` as const fn

This PR marks the following APIs as `const`:
```rust
// core::array
pub const fn from_ref<T>(s: &T) -> &[T; 1];
pub const fn from_mut<T>(s: &mut T) -> &mut [T; 1];

// core::slice
pub const fn from_ref<T>(s: &T) -> &[T];
pub const fn from_mut<T>(s: &mut T) -> &mut [T];
```

Note that `from_ref` methods require `const_raw_ptr_deref` feature (which seems totally fine, since it's being stabilized, see #89551), `from_mut` methods require `const_mut_refs` (which seems fine too since this PR marks `from_mut` functions as const unstable).
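
A small sketch of what the const-ness enables, written with the since-stabilized form for brevity:

```rust
const ONE: i32 = 1;
// Building a one-element slice entirely at compile time.
const SLICE: &[i32] = core::slice::from_ref(&ONE);
```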

r? `@oli-obk`
2021-10-24 15:48:44 +02:00
Pietro Albini
b63ab8005a update cfg(bootstrap) 2021-10-23 21:55:57 -04:00
Maybe Waffle
5f390cfb72 Add tests for const_slice_from_ref and const_array_from_ref 2021-10-23 22:51:22 +03:00
Matthias Krüger
5ea0274563
Rollup merge of #83233 - jethrogb:split_array, r=yaahc
Implement split_array and split_array_mut

This implements `[T]::split_array::<const N>() -> (&[T; N], &[T])` and `[T; N]::split_array::<const M>() -> (&[T; M], &[T])` and their mutable equivalents. These are another few “missing” array implementations now that const generics are a thing, similar to #74373, #75026, etc. Fixes #74674.

This implements `[T; N]::split_array` returning an array and a slice. Ultimately, this is probably not what we want; we would want the second return value to be an array of length N-M, which will likely be possible with future const generics enhancements. We need to implement the array method now, though, to immediately shadow the slice method. This way, when the slice methods get stabilized, calling them on an array will not be automatic through coercion, so we won't have trouble stabilizing the array methods later (cf. the `into_iter` debacle).

An unchecked version of `[T]::split_array` could also be added, as in #76014. This would not be needed for `[T; N]::split_array`, as that can be compile-time checked. Edit: actually, since `split_at_unchecked` is internal-only, it could be changed to be `split_array`-only.
2021-10-23 05:28:19 +02:00
Jethro Beekman
4a439769ec Implement split_array and split_array_mut 2021-10-22 09:58:24 +02:00
woppopo
2fc780638e Make From impls of NonZero integer const.
I also changed the feature gate added to `From` impls of Atomic integer to `const_num_from_num` from `const_convert`.
2021-10-20 12:04:58 +09:00
Miguel Ojeda
63d7882575 Stabilize option_result_unwrap_unchecked
Closes https://github.com/rust-lang/rust/issues/81383.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2021-10-20 04:03:43 +02:00
Yuki Okushi
9f2ad0a061
Rollup merge of #90009 - woppopo:const_from_more, r=dtolnay
Make more `From` impls `const` (libcore)

Adding `const` to `From` implementations in the core. `rustc_const_unstable` attribute is not added to unstable implementations.

Tracking issue: #88674

<details>
<summary>Done</summary><div>

- `T` from `T`
- `T` from `!`
- `Option<T>` from `T`
- `Option<&T>` from `&Option<T>`
- `Option<&mut T>` from `&mut Option<T>`
- `Cell<T>` from `T`
- `RefCell<T>` from `T`
- `UnsafeCell<T>` from `T`
- `OnceCell<T>` from `T`
- `Poll<T>` from `T`
- `u32` from `char`
- `u64` from `char`
- `u128` from `char`
- `char` from `u8`
- `AtomicBool` from `bool`
- `AtomicPtr<T>` from `*mut T`
- `AtomicI(bits)` from `i(bits)`
- `AtomicU(bits)` from `u(bits)`
- `i(bits)` from `NonZeroI(bits)`
- `u(bits)` from `NonZeroU(bits)`
- `NonNull<T>` from `Unique<T>`
- `NonNull<T>` from `&T`
- `NonNull<T>` from `&mut T`
- `Unique<T>` from `&mut T`
- `Infallible` from `!`
- `TryFromIntError` from `!`
- `TryFromIntError` from `Infallible`
- `TryFromSliceError` from `Infallible`
- `FromResidual for Option<T>`
</div></details>

<details>
<summary>Remaining</summary><div>

- `NonZero` from `NonZero`
These can't be made const at this time because they use `Into::into`.
https://github.com/rust-lang/rust/blob/master/library/core/src/convert/num.rs#L393

- `std`, `alloc`
There may still be many implementations that can be made `const`.
</div></details>
2021-10-20 04:35:15 +09:00
Yuki Okushi
ca6798ab07
Rollup merge of #86479 - exphp-forks:float-debug-exponential, r=yaahc
Automatic exponential formatting in Debug

Context: See [this comment from the libs team](https://github.com/rust-lang/rfcs/pull/2729#issuecomment-853454204)

---

Makes `"{:?}"` switch to exponential for floats based on magnitude. The libs team suggested exploring this idea in the discussion thread for RFC rust-lang/rfcs#2729. (**note:** this is **not** an implementation of the RFC; it is an implementation of one of the alternatives)

Thresholds chosen were 1e-4 and 1e16.  Justification described [here](https://github.com/rust-lang/rfcs/pull/2729#issuecomment-864482954).

**This will require a crater run.**

---

As mentioned in the commit message of 8731d4dfb4, this behavior will not apply when a precision is supplied, because I wanted to preserve the following existing and useful behavior of `{:.PREC?}` (which recursively applies `{:.PREC}` to floats in a struct):

```rust
assert_eq!(
    format!("{:.2?}", [100.0, 0.000004]),
    "[100.00, 0.00]",
)
```
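
A sketch of the resulting behaviour; the outputs in the comments are my reading of the thresholds above, not copied from the PR's tests:

```rust
fn demo() {
    println!("{:?}", 0.00001_f64);   // magnitude below 1e-4: exponential, e.g. 1e-5
    println!("{:?}", 1e17_f64);      // magnitude at/above 1e16: e.g. 1e17
    println!("{:.2?}", 0.00001_f64); // explicit precision opts out: 0.00
}
```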

I looked around and am not sure whether there are any tests in the test suite that actually exercise this, though.

All things considered, I'm surprised that this change did not seem to break even a single existing test in `x.py test --stage 2` (even when I tried a smaller threshold of 1e6).
2021-10-20 04:35:10 +09:00
woppopo
7936ecff48 Make more From impls const 2021-10-18 19:19:28 +09:00
woppopo
ea28abee28 Make Result::as_mut const 2021-10-17 18:39:54 +09:00
woppopo
d1f7608699 Add #![cfg_attr(bootstrap, feature(const_panic))] to library/core/tests/lib.rs 2021-10-17 00:32:01 +09:00
woppopo
00dba3a693 Make Option::as_mut const 2021-10-17 00:02:42 +09:00
bors
1dafe6d1c3 Auto merge of #88540 - ibraheemdev:swap-unchecked, r=kennytm
add `slice::swap_unchecked`

An unsafe version of `slice::swap` that does not do bounds checking.
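
A minimal sketch, assuming the unstable `slice_swap_unchecked` feature; the caller must guarantee both indices are in bounds:

```rust
#![feature(slice_swap_unchecked)]

fn demo() {
    let mut v = [1, 2, 3, 4];
    // SAFETY: 0 and 3 are both within bounds.
    unsafe { v.swap_unchecked(0, 3) };
    assert_eq!(v, [4, 2, 3, 1]);
}
```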
2021-10-15 09:35:45 +00:00
John Kugelman
21f4677744 Add #[must_use] to expensive computations
The unifying theme for this commit is weak, admittedly. I put together a
list of "expensive" functions when I originally proposed this whole
effort, but nobody's cared about that criterion. Still, it's a decent
way to bite off a not-too-big chunk of work.

Given the grab bag nature of this commit, the messages I used vary quite
a bit.
2021-10-12 23:27:17 -04:00
bors
ffdf18d144 Auto merge of #88788 - falk-hueffner:speedup-int-log10-branchless, r=joshtriplett
Speedup int log10 branchless

This is achieved with a branchless bit-twiddling implementation of the case x < 100_000, and using this as building block.
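
Not the PR's actual bit-twiddling, but a hedged illustration of the branchless idea for the `x < 100_000` building block: replace branches with summed comparisons.

```rust
// Each comparison compiles to a flag-setting instruction; no branches needed.
fn log10_lt_100_000(x: u32) -> u32 {
    debug_assert!((1..100_000).contains(&x));
    (x >= 10) as u32 + (x >= 100) as u32 + (x >= 1_000) as u32 + (x >= 10_000) as u32
}
```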

Benchmark on an Intel i7-8700K (Coffee Lake):

```
name                                   old ns/iter  new ns/iter  diff ns/iter   diff %  speedup
num::int_log::u8_log10_predictable     165          169                     4    2.42%   x 0.98
num::int_log::u8_log10_random          438          423                   -15   -3.42%   x 1.04
num::int_log::u8_log10_random_small    438          423                   -15   -3.42%   x 1.04
num::int_log::u16_log10_predictable    633          417                  -216  -34.12%   x 1.52
num::int_log::u16_log10_random         908          471                  -437  -48.13%   x 1.93
num::int_log::u16_log10_random_small   945          471                  -474  -50.16%   x 2.01
num::int_log::u32_log10_predictable    1,496        1,340                -156  -10.43%   x 1.12
num::int_log::u32_log10_random         1,076        873                  -203  -18.87%   x 1.23
num::int_log::u32_log10_random_small   1,145        874                  -271  -23.67%   x 1.31
num::int_log::u64_log10_predictable    4,005        3,171                -834  -20.82%   x 1.26
num::int_log::u64_log10_random         1,247        1,021                -226  -18.12%   x 1.22
num::int_log::u64_log10_random_small   1,265        921                  -344  -27.19%   x 1.37
num::int_log::u128_log10_predictable   39,667       39,579                -88   -0.22%   x 1.00
num::int_log::u128_log10_random        6,456        6,696                 240    3.72%   x 0.96
num::int_log::u128_log10_random_small  4,108        3,903                -205   -4.99%   x 1.05
```

Benchmark on an M1 Mac Mini:

```
name                                   old ns/iter  new ns/iter  diff ns/iter   diff %  speedup
num::int_log::u8_log10_predictable     143          130                   -13   -9.09%   x 1.10
num::int_log::u8_log10_random          375          325                   -50  -13.33%   x 1.15
num::int_log::u8_log10_random_small    376          325                   -51  -13.56%   x 1.16
num::int_log::u16_log10_predictable    500          322                  -178  -35.60%   x 1.55
num::int_log::u16_log10_random         794          405                  -389  -48.99%   x 1.96
num::int_log::u16_log10_random_small   1,035        405                  -630  -60.87%   x 2.56
num::int_log::u32_log10_predictable    1,144        894                  -250  -21.85%   x 1.28
num::int_log::u32_log10_random         832          786                   -46   -5.53%   x 1.06
num::int_log::u32_log10_random_small   832          787                   -45   -5.41%   x 1.06
num::int_log::u64_log10_predictable    2,681        2,057                -624  -23.27%   x 1.30
num::int_log::u64_log10_random         1,015        806                  -209  -20.59%   x 1.26
num::int_log::u64_log10_random_small   1,004        795                  -209  -20.82%   x 1.26
num::int_log::u128_log10_predictable   56,825       56,526               -299   -0.53%   x 1.01
num::int_log::u128_log10_random        9,056        8,861                -195   -2.15%   x 1.02
num::int_log::u128_log10_random_small  1,528        1,527                  -1   -0.07%   x 1.00
```

The 128-bit case remains ridiculously slow because LLVM fails to optimize division by a constant 128-bit value into multiplications. This could be worked around, but it seems preferable to fix this in LLVM.

From `u32` up, table lookup (as suggested [here](https://github.com/rust-lang/rust/issues/70887#issuecomment-881099813)) is still faster, but requires a hardware `leading_zeros` to be viable, and might clog up the cache.
2021-10-12 03:18:54 +00:00
Ibraheem Ahmed
c517a0de3e add slice::swap tests 2021-10-11 16:16:20 -04:00
Guillaume Gomez
86bf3ce859
Rollup merge of #75644 - c410-f3r:array, r=yaahc
Add 'core::array::from_fn' and 'core::array::try_from_fn'

These auxiliary methods fill uninitialized arrays in a safe way and are particularly useful for elements that don't implement `Default`.

```rust
// Foo doesn't implement Default
struct Foo(usize);

let _array = core::array::from_fn::<_, _, 2>(|idx| Foo(idx));
```

Unlike `FromIterator`, it is guaranteed that the array will be fully filled and no error regarding uninitialized state will be thrown. In certain scenarios, however, the creation of an **element** can fail, and that is why the `try_from_fn` function is also provided.

```rust
#[derive(Debug, PartialEq)]
enum SomeError {
    Foo,
}

let array = core::array::try_from_fn(|i| Ok::<_, SomeError>(i));
assert_eq!(array, Ok([0, 1, 2, 3, 4]));

let another_array = core::array::try_from_fn(|_| Err(SomeError::Foo));
assert_eq!(another_array, Err(SomeError::Foo));
 ```
2021-10-09 17:08:38 +02:00