Commit Graph

505 Commits

Author SHA1 Message Date
Ben Kimock
22dfbdd707 Add back Send and Sync impls on ChunksMut iterators
These were accidentally removed in #94247 because the representation was
changed from &mut [T] to *mut T, which is !Send + !Sync.
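
A minimal sketch of the pattern (illustrative only, not the actual std source): a struct holding a raw pointer loses the auto impls, so they have to be reasserted manually with the bounds that `&mut [T]` would have implied.

```rust
use std::marker::PhantomData;

// Sketch: switching the field from `&mut [T]` to `*mut T` silently drops
// the Send/Sync auto impls, which must then be restored by hand.
struct ChunksMutSketch<'a, T> {
    ptr: *mut T,                     // raw representation: !Send + !Sync
    len: usize,
    chunk_size: usize,
    _marker: PhantomData<&'a mut T>, // ties the original borrow's lifetime in
}

// SAFETY (sketch): the iterator only hands out disjoint `&mut [T]` chunks,
// so it is exactly as thread-safe as `&mut [T]` itself.
unsafe impl<T: Send> Send for ChunksMutSketch<'_, T> {}
unsafe impl<T: Sync> Sync for ChunksMutSketch<'_, T> {}
```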
2022-08-01 10:32:45 -04:00
Guillaume Gomez
ef81fca760
Rollup merge of #94247 - saethlin:chunksmut-aliasing, r=the8472
Fix slice::ChunksMut aliasing

Fixes https://github.com/rust-lang/rust/issues/94231, details in that issue.
cc `@RalfJung`

This isn't done just yet; all the safety comments are placeholders. But otherwise, it seems to work.

I don't really like this approach though. There's a lot of unsafe code where there wasn't before, but as far as I can tell the only other way to uphold the aliasing requirement imposed by `__iterator_get_unchecked` is to use raw slices, which I think require the same amount of unsafe code. All that would do is tie the `len` and `ptr` fields together.

Oh I just looked and I'm pretty sure that `ChunksExactMut`, `RChunksMut`, and `RChunksExactMut` also need to be patched. Even more reason to put up a draft.
2022-07-27 17:55:01 +02:00
bors
41419e7036 Auto merge of #99491 - workingjubilee:sync-psimd, r=workingjubilee
Sync in portable-simd subtree

r? `@ghost`
2022-07-22 09:48:00 +00:00
Jubilee Young
f8aa494c69 Introduce core::simd trait imports in tests 2022-07-20 18:08:20 -07:00
bors
8bd12e8cca Auto merge of #98912 - nrc:provider-it, r=yaahc
core::any: replace some generic types with impl Trait

This gives a cleaner API, since in the common case the caller only has to specify the concrete type they want.

r? `@yaahc`
2022-07-19 11:28:20 +00:00
Dylan DPC
e301cd39ad
Rollup merge of #99434 - timvermeulen:skip_next_non_fused, r=scottmcm
Fix `Skip::next` for non-fused inner iterators

`iter.skip(n).next()` will currently call `nth` and `next` in succession on `iter`, without checking whether `nth` exhausts the iterator. Using `?` to propagate a `None` value returned by `nth` avoids this.
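
An illustrative reimplementation of the fixed logic (field names mirror core's `Skip`, but this is a sketch, not the real source):

```rust
struct SkipSketch<I> { iter: I, n: usize }

impl<I: Iterator> Iterator for SkipSketch<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        if self.n > 0 {
            let n = std::mem::take(&mut self.n);
            // `nth(n - 1)` consumes the n skipped elements; `?` bails out
            // if that already exhausted the iterator, so a non-fused inner
            // iterator is never polled again after returning `None`.
            self.iter.nth(n - 1)?;
        }
        self.iter.next()
    }
}
```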
2022-07-19 11:38:58 +05:30
Tim Vermeulen
e52837c362 Add note to test about Unfuse 2022-07-18 21:53:35 +02:00
Tim Vermeulen
50c612faef Fix Skip::next for non-fused inner iterators 2022-07-18 21:10:47 +02:00
Dylan DPC
5ccdf1f6f7
Rollup merge of #98839 - 5225225:assert_transmute_copy_size, r=thomcc
Add assertion that `transmute_copy`'s U is not larger than T

This is called out as a safety requirement in the docs, but because the check can be done at compile time and constant-folded (just as the `align_of` branch is removed), we can simply panic here.

I've looked at the asm (using `cargo-asm`) for both a correct and an incorrect caller: the panic is either completely removed or made unconditional, without needing build-std.

I don't expect this to cause much breakage in the wild. I scanned through https://miri.saethlin.dev/ub for issues that would look like this (error: Undefined Behavior: memory access failed: alloc1768 has size 1, so pointer to 8 bytes starting at offset 0 is out-of-bounds), but couldn't find any.

That doesn't rule out it happening in crates whose tests fail earlier for some other reason, but it does indicate that doing this is rare, if it happens at all. A crater run for this would need to be build-and-test, since this is a runtime check.

Also added a few more transmute_copy tests.
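
A hedged sketch of the added check (the real code lives in `core::mem::transmute_copy`); since `size_of` and `align_of` are const, the panic branch folds away at compile time:

```rust
use std::mem::{align_of, size_of};

pub unsafe fn transmute_copy_sketch<T, U>(src: &T) -> U {
    assert!(
        size_of::<U>() <= size_of::<T>(),
        "cannot transmute_copy if U is larger than T"
    );
    if align_of::<U>() > align_of::<T>() {
        // `src` may be insufficiently aligned for `U`, so read unaligned.
        std::ptr::read_unaligned(src as *const T as *const U)
    } else {
        std::ptr::read(src as *const T as *const U)
    }
}
```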
2022-07-18 21:14:42 +05:30
Yuki Okushi
50527690e2
Rollup merge of #99306 - JohnTitor:stabilize-future-poll-fn, r=joshtriplett
Stabilize `future_poll_fn`

FCP is done: https://github.com/rust-lang/rust/issues/72302#issuecomment-1179620512
Closes #72302

r? `@joshtriplett` as you started FCP

Signed-off-by: Yuki Okushi <jtitor@2k36.org>
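
For reference, a small usage example of the now-stable API (`std::future::poll_fn`):

```rust
use std::future::poll_fn;
use std::task::{Context, Poll};

async fn demo() -> i32 {
    // Build a future directly from a closure over the task context.
    poll_fn(|_cx: &mut Context<'_>| Poll::Ready(42)).await
}
```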
2022-07-17 13:08:52 +09:00
bors
db41351753 Auto merge of #98866 - nagisa:nagisa/align-offset-wroom, r=Mark-Simulacrum
Add a special case for align_offset /w stride != 1

This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-16 23:28:28 +00:00
Simonas Kazlauskas
62a182cf7f Add a special case for align_offset /w stride != 1
This generalizes the previous `stride == 1` special case to apply to any
situation where the requested alignment is divisible by the stride. This
in turn allows the test case from #98809 to produce ideal assembly, along
the lines of:

    leaq 15(%rdi), %rax
    andq $-16, %rax

This also produces pretty high quality code for situations where the
alignment of the input pointer isn’t known:

    pub unsafe fn ptr_u32(slice: *const u32) -> *const u32 {
        slice.offset(slice.align_offset(16) as isize)
    }

    // =>

    movl %edi, %eax
    andl $3, %eax
    leaq 15(%rdi), %rcx
    andq $-16, %rcx
    subq %rdi, %rcx
    shrq $2, %rcx
    negq %rax
    sbbq %rax, %rax
    orq  %rcx, %rax
    leaq (%rdi,%rax,4), %rax

Here LLVM is smart enough to replace the `usize::MAX` special case with
a branch-less bitwise-OR approach, where the mask is constructed using
the neg and sbb instructions. This appears to work across various
architectures I’ve tried.

This change ends up introducing more branches and code in situations
where there is less knowledge of the arguments. For example when the
requested alignment is entirely unknown. This use-case was never really
a focus of this function, so I’m not particularly worried, especially
since llvm-mca is saying that the new code is still appreciably faster,
despite all the new branching.

Fixes #98809.
Sadly, this does not help with #72356.
2022-07-17 01:27:37 +03:00
Yuki Okushi
084ad59622
Stabilize future_poll_fn
Signed-off-by: Yuki Okushi <jtitor@2k36.org>
2022-07-16 10:04:14 +09:00
bors
24699bcbad Auto merge of #95956 - yaahc:stable-in-unstable, r=cjgillot
Support unstable moves via stable in unstable items

part of https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/moving.20items.20to.20core.20unstably and a blocker of https://github.com/rust-lang/rust/pull/90328.

The libs-api team needs the ability to move an already stable item to a new location unstably, in this case for Error in core. Otherwise these changes are insta-stable, making them much harder to merge.

This PR attempts to solve the problem by checking the stability of path segments as well as the last item in the path itself, which is currently the only thing checked.
2022-07-14 13:42:09 +00:00
Dylan DPC
103b8602b7
Rollup merge of #98315 - joshtriplett:stabilize-core-ffi-c, r=Mark-Simulacrum
Stabilize `core::ffi::c_*` and re-export in `std::ffi`

This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
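
For illustration, the stabilized aliases can now be named from `core` directly (and via the `std::ffi` re-export):

```rust
use core::ffi::{c_char, c_int};

extern "C" {
    // Declaration only; `c_char`/`c_int` match the platform's C ABI types.
    fn puts(s: *const c_char) -> c_int;
}
```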
2022-07-14 14:14:20 +05:30
Josh Triplett
d431338b25 Stabilize core::ffi::c_* and re-export in std::ffi
This only stabilizes the base types, not the non-zero variants, since
those have their own separate tracking issue and have not gone through
FCP to stabilize.
2022-07-13 19:28:20 -07:00
Konrad Borowski
0753fd117b Partially stabilize const_slice_from_raw_parts
This doesn't stabilize methods working on mutable pointers.
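
An illustration of what this enables (the `*const` flavor of `slice::from_raw_parts` in const context), assuming a toolchain with this change:

```rust
const FIRST_TWO: &[u8] = unsafe {
    // `b"hi!"` is a `&'static [u8; 3]`, so the pointer remains valid.
    std::slice::from_raw_parts(b"hi!".as_ptr(), 2)
};

fn main() {
    assert_eq!(FIRST_TWO, b"hi");
}
```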
2022-07-09 23:20:02 +02:00
Jane Losare-Lusby
d68cb1f9a3 revert changes to unicode stability 2022-07-08 21:18:15 +00:00
Guillaume Gomez
4755173cf6
Rollup merge of #96935 - thomcc:atomicptr-strict-prov, r=dtolnay
Allow arithmetic and certain bitwise ops on AtomicPtr

This is mainly to support migrating from `AtomicUsize`, for the strict provenance experiment.

This is a pretty dubious set of APIs, but it should be sufficient to allow code that's using `AtomicUsize` to manipulate a tagged pointer atomically. It's under a new feature gate, `#![feature(strict_provenance_atomic_ptr)]`, but I'm not sure if it needs its own tracking issue. I'm happy to make one, but it's not clear that it's needed.

I'm unsure if it needs changes in the various non-LLVM backends. Because we just cast things to integers anyway (and were already doing so), I doubt it.

API change proposal: https://github.com/rust-lang/libs-team/issues/60

Fixes #95492
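
A hedged usage sketch on nightly (under the feature gates named in this PR): tagging the low bit of a pointer atomically instead of round-tripping through `AtomicUsize`.

```rust
#![feature(strict_provenance_atomic_ptr, strict_provenance)]
use std::sync::atomic::{AtomicPtr, Ordering};

fn main() {
    let data = Box::into_raw(Box::new(0u32)); // 4-aligned: low bits are free
    let p = AtomicPtr::new(data);
    p.fetch_or(0b1, Ordering::Relaxed); // set a tag bit in place
    let tagged = p.load(Ordering::Relaxed);
    assert_eq!(tagged.addr() & 0b1, 1);
    // Untag before freeing.
    unsafe { drop(Box::from_raw(tagged.map_addr(|a| a & !0b1))) }
}
```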
2022-07-06 20:43:23 +02:00
Nick Cameron
0c72be3e1a core::any: replace some unstable generic types with impl Trait
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-07-05 15:06:31 +01:00
Dylan DPC
8fa1ed8f12
Rollup merge of #97712 - RalfJung:untyped, r=scottmcm
ptr::copy and ptr::swap are doing untyped copies

The consensus in https://github.com/rust-lang/rust/issues/63159 seemed to be that these operations should be "untyped", i.e., they should treat the data as raw bytes, should work when these bytes violate the validity invariant of `T`, and should exactly preserve the initialization state of the bytes that are being copied. This is already somewhat implied by the description of "copying/swapping size*N bytes" (rather than "N instances of `T`").

The implementations mostly already work that way (well, for LLVM's intrinsics the documentation is not precise enough to say what exactly happens to poison, but if this ever gets clarified to something that would *not* perfectly preserve poison, then I strongly assume there will be some way to make a copy that *does* perfectly preserve poison). However, I had to adjust `swap_nonoverlapping`; after ``@scottmcm's`` [recent changes](https://github.com/rust-lang/rust/pull/94212), that one (sometimes) made a typed copy. (Note that `mem::swap`, which works on mutable references, is unchanged. It is documented as "swapping the values at two mutable locations", which to me strongly indicates that it is indeed typed. It is also safe and can rely on `&mut T` pointing to a valid `T` as part of its safety invariant.)

On top of adding a test (that will be run by Miri), this PR then also adjusts the documentation to indeed stably promise the untyped semantics. I assume this means the PR has to go through t-libs (and maybe t-lang?) FCP.

Fixes https://github.com/rust-lang/rust/issues/63159
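
A small illustration of the untyped semantics being promised here: the copied bytes need not be a valid value of the pointee type.

```rust
use std::ptr;

fn main() {
    // `3u8` is not a valid `bool`, yet copying it through `*mut bool` is
    // fine, because `ptr::copy_nonoverlapping` moves raw bytes untyped.
    let src: u8 = 3;
    let mut dst: u8 = 0;
    unsafe {
        ptr::copy_nonoverlapping(
            &src as *const u8 as *const bool,
            &mut dst as *mut u8 as *mut bool,
            1,
        );
    }
    assert_eq!(dst, 3);
}
```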
2022-07-05 16:04:31 +05:30
5225225
5f5ca88958 Add size assert in transmute_copy 2022-07-03 10:46:20 +01:00
Ben Kimock
7919e4208b Fix slice::ChunksMut aliasing 2022-07-03 00:15:15 -04:00
Pietro Albini
6b2d3d5f3c
update cfg(bootstrap)s 2022-07-01 15:48:23 +02:00
Thom Chiovoloni
e65ecee90e
Rename AtomicPtr::fetch_{add,sub}{,_bytes} 2022-07-01 06:21:19 -07:00
Thom Chiovoloni
2f872afdb5
Allow arithmetic and certain bitwise ops on AtomicPtr
This is mainly to support migrating from AtomicUsize, for the strict
provenance experiment.

Fixes #95492
2022-07-01 06:21:18 -07:00
Ralf Jung
8c977cfda8 libcore tests: avoid int2ptr casts 2022-06-27 13:30:44 -04:00
Ross MacArthur
bbdff1fff4
Add Iterator::next_chunk 2022-06-21 08:57:02 +02:00
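
A usage sketch of the new API (nightly-only at this point, under `iter_next_chunk`):

```rust
#![feature(iter_next_chunk)]

fn main() {
    let mut letters = "lorem".chars();
    // The chunk size N is inferred from the pattern (here N = 2).
    let [a, b] = letters.next_chunk().unwrap();
    assert_eq!((a, b), ('l', 'o'));
    assert_eq!(letters.next(), Some('r'));
}
```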
Maybe Waffle
7c360dc117 Move/rename lazy::{OnceCell, Lazy} to cell::{OnceCell, LazyCell} 2022-06-16 19:53:59 +04:00
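
A usage sketch at the new path (nightly-only at this point, under the `once_cell` feature gate):

```rust
#![feature(once_cell)]
use std::cell::OnceCell;

fn main() {
    let cell = OnceCell::new();
    assert!(cell.get().is_none());
    cell.set(92).unwrap();          // first write succeeds
    assert!(cell.set(93).is_err()); // later writes are rejected
    assert_eq!(cell.get(), Some(&92));
}
```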
bors
e9aff9c42c Auto merge of #91970 - nrc:provide-any, r=scottmcm
Add the Provider api to core::any

This is an implementation of [RFC 3192](https://github.com/rust-lang/rfcs/pull/3192) ~~(which is yet to be merged, thus why this is a draft PR)~~. It adds an API for type-driven requests and provision of data from trait objects. A primary use case is for the `Error` trait, though that is not implemented in this PR. The only major difference to the RFC is that the functionality is added to the `any` module, rather than being in a sibling `provide_any` module (as discussed in the RFC thread).

~~Still todo: improve documentation on items, including adding examples.~~

cc `@yaahc`
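
A sketch of the API as merged (nightly, under `provide_any`; the surface may evolve):

```rust
#![feature(provide_any)]
use std::any::{request_ref, Demand, Provider};

struct MyType { name: String }

impl Provider for MyType {
    fn provide<'a>(&'a self, demand: &mut Demand<'a>) {
        // Offer a reference, keyed purely by its type.
        demand.provide_ref::<String>(&self.name);
    }
}

fn main() {
    let obj = MyType { name: "hello".to_owned() };
    let provider: &dyn Provider = &obj;
    // Type-driven request through the trait object:
    assert_eq!(request_ref::<String>(provider).unwrap(), "hello");
}
```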
2022-06-10 01:10:59 +00:00
Yuki Okushi
2b58e6314a
Stabilize const_intrinsic_copy 2022-06-08 20:17:28 +09:00
Ben Kimock
5dd5244423 Use repr(C) when depending on struct layout in ptr tests 2022-06-07 19:24:09 -04:00
Nick Cameron
843f90cbb7 Add some more tests
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-06-06 12:10:14 +01:00
Nick Cameron
57d027d23a Modify the signature of the request_* methods so that trait_upcasting is not required
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-06-06 12:10:13 +01:00
Nick Cameron
bb5db85f74 Add the Provider api to core::any
Signed-off-by: Nick Cameron <nrc@ncameron.org>
2022-06-06 12:10:13 +01:00
Ralf Jung
b96d1e45f1 change ptr::swap methods to do untyped copies 2022-06-05 10:09:41 -04:00
Ralf Jung
4990021082 test const_copy to make sure bytewise pointer copies are working 2022-06-03 09:20:42 -04:00
est31
cdb8e64bc7 Use Box::new() instead of box syntax in core tests 2022-05-29 01:44:11 +02:00
bors
9fed13030c Auto merge of #94954 - SimonSapin:null-thin3, r=yaahc
Extend ptr::null and null_mut to all thin (including extern) types

Fixes https://github.com/rust-lang/rust/issues/93959

This change was accepted in https://rust-lang.github.io/rfcs/2580-ptr-meta.html

Note that this changes the signature of **stable** functions. The change should be backward-compatible, but it is **insta-stable** since it cannot (easily, at all?) be made available only through a `#![feature(…)]` opt-in.

The RFC also proposed the same change for `NonNull::dangling`, which makes sense in terms of its signature but not in terms of its implementation. `dangling` uses `align_of()` as an address. But what `align_of()` should be for extern types or whether it should be allowed at all remains an open question.

This commit depends on https://github.com/rust-lang/rust/pull/93977, which is not yet part of the bootstrap compiler. So `#[cfg]` is used to only apply the change in stage 1+. As far as I know bounds cannot be made conditional with `#[cfg]`, so the entire functions are duplicated. This is unfortunate but temporary.

Since this duplication makes it less obvious in the diff, the new definitions differ in:

* More permissive bounds (`Thin` instead of implied `Sized`)
* Different implementation
* Having `rustc_allow_const_fn_unstable(const_fn_trait_bound)`
* Having `rustc_allow_const_fn_unstable(ptr_metadata)`
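
An illustration on nightly (`extern_types`) of what the widened bound permits:

```rust
#![feature(extern_types)]
use std::ptr;

extern "C" {
    type Opaque; // a thin but unsized (extern) type
}

fn main() {
    // Previously `ptr::null` required `T: Sized`; now any thin pointee works.
    let p: *const Opaque = ptr::null();
    assert!(p.is_null());
}
```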
2022-05-25 13:58:51 +00:00
Caio
664e8a9ce5 [RFC 2011] Library code 2022-05-22 07:18:32 -03:00
bors
bb5e6c984d Auto merge of #97265 - JohnTitor:rollup-kgthnjt, r=JohnTitor
Rollup of 6 pull requests

Successful merges:

 - #97144 (Fix rusty grammar in `std::error::Reporter` docs)
 - #97225 (Fix `Display` for `cell::{Ref,RefMut}`)
 - #97228 (Omit stdarch workspace from rust-src)
 - #97236 (Recover when resolution did not resolve lifetimes.)
 - #97245 (Fix typo in futex RwLock::write_contended.)
 - #97259 (Fix typo in Mir phase docs)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-05-22 04:27:10 +00:00
Josh Stone
83abb7c18f Fix Display for cell::{Ref,RefMut}
These guards changed to pointers in #97027, but their `Display` was
formatting that field directly, which made it show the raw pointer
value. Now we go through `Deref` to display the real value again.
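
The restored behavior, as a quick check:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);
    // `Ref`'s Display goes through Deref, printing the value, not a pointer.
    assert_eq!(format!("{}", cell.borrow()), "5");
}
```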
2022-05-20 11:16:30 -07:00
Caio
d917112606 Stabilize core::array::from_fn 2022-05-20 11:04:13 -03:00
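
A usage example of the stabilized API:

```rust
fn main() {
    // The closure receives each index 0..N and produces the element.
    let squares: [usize; 4] = std::array::from_fn(|i| i * i);
    assert_eq!(squares, [0, 1, 4, 9]);
}
```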
Mark Rousskov
32fdc6b207 Stage-step cfgs 2022-05-18 12:29:35 -04:00
bors
9fbbe75fd7 Auto merge of #95602 - scottmcm:faster-array-intoiter-fold, r=the8472
Fix `array::IntoIter::fold` to use the optimized `Range::fold`

It was using `Iterator::by_ref` in the implementation, which ended up pessimizing it enough that, for example, it didn't vectorize when we tried it in the <https://rust-lang.zulipchat.com/#narrow/stream/257879-project-portable-simd/topic/Reducing.20sum.20into.20wider.20types> conversation.

Demonstration that the codegen test doesn't pass on the current nightly: <https://rust.godbolt.org/z/Taxev5eMn>
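
The shape of the fix, illustratively (not the real std source): forward `fold` to the inner iterator instead of draining it through `by_ref`, which would fall back to a generic `next` loop and bypass the inner optimized `fold` (e.g. `Range::fold`).

```rust
struct Wrapper<I>(I);

impl<I: Iterator> Iterator for Wrapper<I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<I::Item> {
        self.0.next()
    }

    fn fold<B, F: FnMut(B, Self::Item) -> B>(self, init: B, f: F) -> B {
        // Delegation preserves the inner iterator's specialized `fold`.
        self.0.fold(init, f)
    }
}
```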
2022-05-14 03:12:53 +00:00
Simon Sapin
7ccc09b210 Extend ptr::null and null_mut to all thin (including extern) types
Fixes https://github.com/rust-lang/rust/issues/93959

This change was accepted in https://rust-lang.github.io/rfcs/2580-ptr-meta.html

Note that this changes the signature of **stable** functions.
The change should be backward-compatible, but it is **insta-stable**
since it cannot (easily, at all?) be made available only
through a `#![feature(…)]` opt-in.

The RFC also proposed the same change for `NonNull::dangling`,
which makes sense in terms of its signature but not in terms of its implementation.
`dangling` uses `align_of()` as an address. But what `align_of()` should be for
extern types or whether it should be allowed at all remains an open question.

This commit depends on https://github.com/rust-lang/rust/pull/93977, which is not yet
part of the bootstrap compiler. So `#[cfg]` is used to only apply the change in
stage 1+. As far as I know bounds cannot be made conditional with `#[cfg]`, so the
entire functions are duplicated. This is unfortunate but temporary.

Since this duplication makes it less obvious in the diff,
the new definitions differ in:

* More permissive bounds (`Thin` instead of implied `Sized`)
* Different implementation
* Having `rustc_allow_const_fn_unstable(ptr_metadata)`
2022-05-13 18:03:06 +02:00
bors
8c4fc9d9a4 Auto merge of #94598 - scottmcm:prefix-free-hasher-methods, r=Amanieu
Add a dedicated length-prefixing method to `Hasher`

This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths

This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.

Fixes #94026

r? rust-lang/libs

---

The core of this change is the following two new methods on `Hasher`:

```rust
pub trait Hasher {
    /// Writes a length prefix into this hasher, as part of being prefix-free.
    ///
    /// If you're implementing [`Hash`] for a custom collection, call this before
    /// writing its contents to this `Hasher`.  That way
    /// `(collection![1, 2, 3], collection![4, 5])` and
    /// `(collection![1, 2], collection![3, 4, 5])` will provide different
    /// sequences of values to the `Hasher`.
    ///
    /// The `impl<T> Hash for [T]` includes a call to this method, so if you're
    /// hashing a slice (or array or vector) via its `Hash::hash` method,
    /// you should **not** call this yourself.
    ///
    /// This method is only for providing domain separation.  If you want to
    /// hash a `usize` that represents part of the *data*, then it's important
    /// that you pass it to [`Hasher::write_usize`] instead of to this method.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(hasher_prefixfree_extras)]
    /// # // Stubs to make the `impl` below pass the compiler
    /// # struct MyCollection<T>(Option<T>);
    /// # impl<T> MyCollection<T> {
    /// #     fn len(&self) -> usize { todo!() }
    /// # }
    /// # impl<'a, T> IntoIterator for &'a MyCollection<T> {
    /// #     type Item = T;
    /// #     type IntoIter = std::iter::Empty<T>;
    /// #     fn into_iter(self) -> Self::IntoIter { todo!() }
    /// # }
    ///
    /// use std::hash::{Hash, Hasher};
    /// impl<T: Hash> Hash for MyCollection<T> {
    ///     fn hash<H: Hasher>(&self, state: &mut H) {
    ///         state.write_length_prefix(self.len());
    ///         for elt in self {
    ///             elt.hash(state);
    ///         }
    ///     }
    /// }
    /// ```
    ///
    /// # Note to Implementers
    ///
    /// If you've decided that your `Hasher` is willing to be susceptible to
    /// Hash-DoS attacks, then you might consider skipping hashing some or all
    /// of the `len` provided in the name of increased performance.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_length_prefix(&mut self, len: usize) {
        self.write_usize(len);
    }

    /// Writes a single `str` into this hasher.
    ///
    /// If you're implementing [`Hash`], you generally do not need to call this,
    /// as the `impl Hash for str` does, so you can just use that.
    ///
    /// This includes the domain separator for prefix-freedom, so you should
    /// **not** call `Self::write_length_prefix` before calling this.
    ///
    /// # Note to Implementers
    ///
    /// The default implementation of this method includes a call to
    /// [`Self::write_length_prefix`], so if your implementation of `Hasher`
    /// doesn't care about prefix-freedom and you've thus overridden
    /// that method to do nothing, there's no need to override this one.
    ///
    /// This method is available to be overridden separately from the others
    /// as `str` being UTF-8 means that it never contains `0xFF` bytes, which
    /// can be used to provide prefix-freedom cheaper than hashing a length.
    ///
    /// For example, if your `Hasher` works byte-by-byte (perhaps by accumulating
    /// them into a buffer), then you can hash the bytes of the `str` followed
    /// by a single `0xFF` byte.
    ///
    /// If your `Hasher` works in chunks, you can also do this by being careful
    /// about how you pad partial chunks.  If the chunks are padded with `0x00`
    /// bytes then just hashing an extra `0xFF` byte doesn't necessarily
    /// provide prefix-freedom, as `"ab"` and `"ab\u{0}"` would likely hash
    /// the same sequence of chunks.  But if you pad with `0xFF` bytes instead,
    /// ensuring at least one padding byte, then it can often provide
    /// prefix-freedom cheaper than hashing the length would.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_str(&mut self, s: &str) {
        self.write_length_prefix(s.len());
        self.write(s.as_bytes());
    }
}
```

With updates to the `Hash` implementations for slices and containers to call `write_length_prefix` instead of `write_usize`.

`write_str` defaults to using `write_length_prefix` since, as was pointed out in the issue, the `write_u8(0xFF)` approach is insufficient for hashers that work in chunks, as those would hash `"a\u{0}"` and `"a"` to the same thing.  But since `SipHash` works byte-wise (there's an internal buffer to accumulate bytes until a full chunk is available) it overrides `write_str` to continue to use the add-non-UTF-8-byte approach.

---

Compatibility:

Because the default implementation of `write_length_prefix` calls `write_usize`, the changed hash implementation for slices will do the same thing the old one did on existing `Hasher`s.
2022-05-06 09:43:57 +00:00
Scott McMurray
98054377ee Add a dedicated length-prefixing method to Hasher
This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths

This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.
2022-05-06 00:03:38 -07:00
Matthias Krüger
47801413d9
Rollup merge of #95359 - jhpratt:int_roundings, r=joshtriplett
Update `int_roundings` methods from feedback

This updates `#![feature(int_roundings)]` (#88581) from feedback. All methods now take `NonZeroX`. The documentation makes clear that they panic in debug mode and wrap in release mode.

r? `@joshtriplett`

`@rustbot` label +T-libs +T-libs-api +S-waiting-on-review
2022-05-05 15:43:00 +02:00
Jacob Pratt
dde590d180
Update int_roundings methods from feedback 2022-05-04 23:20:29 -04:00