Derive `Ord`, `PartialOrd` and `Hash` for `SocketAddr*`
Fixes #116711
The main pain point of this PR is fixing the buggy impl of `Ord` for `SocketAddrV6`, which ignored half of the fields (while the derived `PartialEq` compares all of them):
https://github.com/rust-lang/rust/blob/4603f0b8af/library/core/src/net/socket_addr.rs#L99-L106
https://github.com/rust-lang/rust/blob/4603f0b8af/library/core/src/net/socket_addr.rs#L676
For me it looks like a simple copy-paste error made in https://github.com/rust-lang/rust/pull/72239 (copied from the v4 impl) (cc `@hch12907`), as I don't see this behavior mentioned anywhere in that PR, and it also does not respect the `cmp` trait "rules". I also do not see any reason for those impls to _not_ be derived.
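For reference, the shape of the bug, paraphrased as a free function rather than the exact impl at the permalinks above: only the IP and port take part in the comparison, while the derived `PartialEq` also looks at `flowinfo` and `scope_id`.

```rust
use std::cmp::Ordering;
use std::net::SocketAddrV6;

// Roughly what the old manual `Ord` impl did (paraphrased): flowinfo and
// scope_id never participate, so two addresses that differ only in those
// fields compare as Ordering::Equal even though they are not ==.
fn old_cmp(a: &SocketAddrV6, b: &SocketAddrV6) -> Ordering {
    a.ip().cmp(b.ip()).then(a.port().cmp(&b.port()))
}
```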
It's a shame we did not notice this for 28 versions/3 years. I guess this is a bug fix, but I'm not sure what the process here should be.
r? libs
optimize zipping over array iterators
Fixes #115339 (somewhat)
The new assembly:
```asm
zip_arrays:
.cfi_startproc
vmovups (%rdx), %ymm0
leaq 32(%rsi), %rcx
vxorps %xmm1, %xmm1, %xmm1
vmovups %xmm1, -24(%rsp)
movq $0, -8(%rsp)
movq %rsi, -88(%rsp)
movq %rdi, %rax
movq %rcx, -80(%rsp)
vmovups %ymm0, -72(%rsp)
movq $0, -40(%rsp)
movq $32, -32(%rsp)
movq -24(%rsp), %rcx
vmovups (%rsi,%rcx), %ymm0
vorps -72(%rsp,%rcx), %ymm0, %ymm0
vmovups %ymm0, (%rsi,%rcx)
vmovups (%rsi), %ymm0
vmovups %ymm0, (%rdi)
vzeroupper
retq
```
This is still longer than the slice version given in the issue, but at least it eliminates the terrible `vpextrb`/`orb` chain. I guess this is due to excessive memcpys again (I haven't looked at the LLVM IR)?
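For context, a hypothetical reproducer in the spirit of #115339 (the benchmarked function isn't shown here, so the element type and length are guesses):

```rust
// A plausible shape for the `zip_arrays` function above: OR two byte arrays
// together by zipping their iterators and return the result.
pub fn zip_arrays(mut a: [u8; 32], b: [u8; 32]) -> [u8; 32] {
    a.iter_mut().zip(b).for_each(|(a, b)| *a |= b);
    a
}
```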
The `TrustedLen` specialization is a drive-by change since I had to do something for the default impl anyway to be able to specialize the `TrustedRandomAccessNoCoerce` impl.
Implement `slice::split_once` and `slice::rsplit_once`
Feature gate is `slice_split_once` and tracking issue is #112811. These are equivalents to the existing `str::split_once` and `str::rsplit_once` methods.
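A quick usage sketch on nightly with the feature gate enabled (unlike the `str` versions, these take a predicate rather than a pattern):

```rust
#![feature(slice_split_once)]

fn main() {
    let bytes = [1, 2, 3, 0, 4, 0, 5];
    // Split around the first (or last) element matching the predicate; the
    // matching element itself is not included in either half.
    assert_eq!(
        bytes.split_once(|&b| b == 0),
        Some((&[1, 2, 3][..], &[4, 0, 5][..]))
    );
    assert_eq!(
        bytes.rsplit_once(|&b| b == 0),
        Some((&[1, 2, 3, 0, 4][..], &[5][..]))
    );
}
```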
Add "integer square root" method to integer primitive types
For every suffix `N` among `8`, `16`, `32`, `64`, `128` and `size`, this PR adds the methods
```rust
const fn uN::isqrt() -> uN;
const fn iN::isqrt() -> iN;
const fn iN::checked_isqrt() -> Option<iN>;
```
to compute the [integer square root](https://en.wikipedia.org/wiki/Integer_square_root), addressing issue #89273.
The implementation is based on the [base 2 digit-by-digit algorithm](https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Binary_numeral_system_(base_2)) on Wikipedia, which after some benchmarking has proved to be faster than both binary search and Heron's/Newton's method. I haven't had the time to understand and port [this code](http://atoms.alife.co.uk/sqrt/SquareRoot.java) based on lookup tables instead, but I'm not sure whether it's worth complicating such a function this much for relatively little benefit.
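For illustration, a minimal sketch of the base-2 digit-by-digit idea for a single width (this is not the code added by the PR, just the algorithm it is based on):

```rust
// Compute floor(sqrt(n)) by deciding one result bit per iteration.
fn isqrt_u64(mut n: u64) -> u64 {
    let mut result = 0u64;
    // Start with the highest power of four that fits in the type,
    // then shift it down until it is <= n.
    let mut bit = 1u64 << (u64::BITS - 2);
    while bit > n {
        bit >>= 2;
    }
    // Work from the most significant candidate bit to the least.
    while bit != 0 {
        if n >= result + bit {
            n -= result + bit;
            result = (result >> 1) + bit;
        } else {
            result >>= 1;
        }
        bit >>= 2;
    }
    result
}

fn main() {
    assert_eq!(isqrt_u64(0), 0);
    assert_eq!(isqrt_u64(99), 9);
    assert_eq!(isqrt_u64(100), 10);
}
```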
Implement Step for ascii::Char
This allows iterating over ranges of `ascii::Char`, similarly to ranges of `char`.
Note that `ascii::Char` is still unstable, tracked in #110998.
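A rough usage sketch (nightly-only; `Char::from_u8` and the `From<ascii::Char> for char` conversion are part of the unstable `ascii_char` API as I understand it):

```rust
#![feature(ascii_char)]

use std::ascii::Char;

fn main() {
    let a = Char::from_u8(b'a').unwrap();
    let z = Char::from_u8(b'z').unwrap();
    // With `Step` implemented, ranges of `ascii::Char` can be iterated
    // just like ranges of `char`.
    let alphabet: String = (a..=z).map(char::from).collect();
    assert_eq!(alphabet, "abcdefghijklmnopqrstuvwxyz");
}
```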
Fix implementation of `Duration::checked_div`
I ran across this while running some sanity checks on `time`. Quickcheck immediately found a bug, and as I'd modified the code from `std` I knew there was a bug here as well.
tl;dr this code fails ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=1189a3efcdfc192c27d6d87815359353))
```rust
use std::time::Duration;
fn main() {
    assert_eq!(
        Duration::new(1, 1).checked_div(7),
        Some(Duration::new(0, 142_857_143)),
    );
}
```
The existing code determines that 1 / 7 = 0 (seconds), 1 / 7 = 0 (nanoseconds), and 1 billion / 7 = 142,857,142 (extra nanoseconds); the billion comes from multiplying the remainder of the seconds division (1) by the number of nanoseconds in a second. However, **this wrongly ignores any remaining nanoseconds**, because the leftover of each division is dropped separately instead of being carried into a single division. This PR takes that into consideration, adds a test, and also changes the roundabout way of calculating the remainder into computing it directly.
Note: This is _not_ a rounding error. This result divides evenly.
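For illustration, a standalone sketch of the corrected arithmetic operating on raw seconds/nanoseconds pairs (not the actual `std` code):

```rust
// Divide the seconds, then fold the leftover seconds *and* the original
// nanoseconds into one nanosecond division instead of dropping them.
fn checked_div_sketch(secs: u64, nanos: u32, rhs: u32) -> Option<(u64, u32)> {
    if rhs == 0 {
        return None;
    }
    const NANOS_PER_SEC: u64 = 1_000_000_000;
    let (rhs64, nanos64) = (u64::from(rhs), u64::from(nanos));
    let out_secs = secs / rhs64;
    let extra_nanos = (secs % rhs64) * NANOS_PER_SEC + nanos64;
    Some((out_secs, (extra_nanos / rhs64) as u32))
}

fn main() {
    // 1 s + 1 ns is exactly 1_000_000_001 ns, which divides evenly by 7.
    assert_eq!(checked_div_sketch(1, 1, 7), Some((0, 142_857_143)));
}
```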
`@rustbot` label +A-time +C-bug +S-waiting-on-reviewer +T-libs
core/any: remove Provider trait, rename Demand to Request
This touches on two WIP features:
* `error_generic_member_access`
* tracking issue: https://github.com/rust-lang/rust/issues/99301
* RFC (WIP): https://github.com/rust-lang/rfcs/pull/2895
* `provide_any`
* tracking issue: https://github.com/rust-lang/rust/issues/96024
* RFC: https://github.com/rust-lang/rfcs/pull/3192
The changes in this PR are intended to address libs meeting feedback summarized by `@Amanieu` in https://github.com/rust-lang/rust/issues/96024#issuecomment-1554773172
The specific items this PR addresses so far are:
> We feel that the names "demand" and "request" are somewhat synonymous and would like only one of those to be used for better consistency.
I went with `Request` here since it sounds nicer, but I'm mildly concerned that at first glance it could be confused with the use of the word in a networking context.
> The Provider trait should be deleted and its functionality should be merged into Error. We are happy to only provide an API that is only usable with Error. If there is demand for other uses then this can be provided through an external crate.
The net impact this PR has is that examples which previously looked like
```rust
core::any::request_ref::<String>(&err).unwrap()
```
now look like
```rust
(&err as &dyn core::error::Error).request_value::<String>().unwrap()
```
These are methods that, based on the type hint given when they are called, return an `Option<T>` of that type. I'll admit I don't fully understand how that's done, but it involves `core::any::tags::Type` and `core::any::TaggedOption`, neither of which are exposed in the public API, to construct a `Request` which is then passed to the `Error::provide` method.
Something that I'm curious about is whether they are essential to the use of `Request` types (referred to as `Demand` prior to this PR), and if so, whether the fact that they are kept private implies that `Request`s are only meant to be constructed privately within the standard library. That's what it looks like to me.
These methods ultimately call into code that looks like:
```rust
/// Request a specific value by tag from the `Error`.
fn request_by_type_tag<'a, I>(err: &'a (impl Error + ?Sized)) -> Option<I::Reified>
where
    I: tags::Type<'a>,
{
    let mut tagged = core::any::TaggedOption::<'a, I>(None);
    err.provide(tagged.as_request());
    tagged.0
}
```
As far as the `Request` API is concerned, one suggestion I would like to make is that the previous example should look more like this:
```rust
/// Request a specific value by tag from the `Error`.
fn request_by_type_tag<'a, I>(err: &'a (impl Error + ?Sized)) -> Option<I::Reified>
where
    I: tags::Type<'a>,
{
    let mut tagged_request = core::any::Request::<I>::new_tagged();
    err.provide(&mut tagged_request);
    tagged_request.0
}
```
This makes it possible for anyone to construct a `Request` for use in their own projects without exposing an implementation detail like `TaggedOption` in the API surface.
Otherwise noteworthy is that I had to add `pub(crate)` on both `core::any::TaggedOption` and `core::any::tags` since `Request`s now need to be constructed in the `core::error` module. I considered moving `TaggedOption` into the `core::error` module but again I figured it's an implementation detail of `Request` and belongs closer to that.
At the time I am opening this PR, I have not yet looked into the following bit of feedback:
> We took a look at the generated code and found that LLVM is unable to optimize multiple .provide_* calls into a switch table because each call fetches the type id from Erased::type_id separately each time and the compiler doesn't know that these calls all return the same value. This should be fixed.
This is what I'll focus on next while waiting for feedback on the progress so far. I suspect that learning more about the type IDs will help me understand the need for `TaggedOption` a little better.
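To recap the new surface described above, here is a rough end-to-end sketch under the renamed API. The error type, its field, and the exact paths are assumptions for illustration only; the unstable interface may differ from this.

```rust
#![feature(error_generic_member_access)]

use std::error::{request_ref, Error, Request};
use std::fmt;

#[derive(Debug)]
struct MyError {
    context: String,
}

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "something went wrong")
    }
}

impl Error for MyError {
    // The provider functionality now lives on `Error` itself: the error
    // offers members to whoever asks for them via a `Request`.
    fn provide<'a>(&'a self, request: &mut Request<'a>) {
        request.provide_ref::<String>(&self.context);
    }
}

fn main() {
    let err = MyError { context: "extra context".to_owned() };
    let by_ref: Option<&String> = request_ref(&err as &dyn Error);
    assert_eq!(by_ref.map(String::as_str), Some("extra context"));
}
```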
* remove `impl Provider for Error`
* rename `Demand` to `Request`
* update docstrings to focus on the conceptual API provided by `Request`
* move `core::any::{request_ref, request_value}` functions into `core::error`
* move `core::any::tags`, `core::any::Request`, and `core::any::TaggedOption` into `core::error`
* replace `provide_any` feature name w/ `error_generic_member_access`
* move `core::error::request_{ref,value}` tests into the `core::tests::error` module
* update unit and doc tests
This is inherited from the old PR.
This method returns an iterator over mapped windows of the starting
iterator. Adding the more straight-forward `Iterator::windows` is not
easily possible right now as the items are stored in the iterator type,
meaning the `next` call would return references to `self`. This is not
allowed by the current `Iterator` trait design. This might change once
GATs have landed.
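Assuming this is the unstable `Iterator::map_windows` (feature gate `iter_map_windows`), usage looks roughly like this:

```rust
#![feature(iter_map_windows)]

fn main() {
    // Each closure call sees a reference to an array holding the current
    // window; here we compute differences of adjacent elements.
    let diffs: Vec<i32> = [1, 3, 6, 10]
        .into_iter()
        .map_windows(|&[a, b]| b - a)
        .collect();
    assert_eq!(diffs, [2, 3, 4]);
}
```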
The idea has been brought up by @m-ou-se here:
https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Iterator.3A.3A.7Bpairwise.2C.20windows.7D/near/224587771
Co-authored-by: Lukas Kalbertodt <lukas.kalbertodt@gmail.com>
Specialize `StepBy<Range<{integer}>>`
OLD
iter::bench_range_step_by_fold_u16 700.00ns/iter +/- 10.00ns
iter::bench_range_step_by_fold_usize 519.00ns/iter +/- 6.00ns
iter::bench_range_step_by_loop_u32 555.00ns/iter +/- 7.00ns
iter::bench_range_step_by_sum_reducible 37.00ns/iter +/- 0.00ns
NEW
iter::bench_range_step_by_fold_u16 49.00ns/iter +/- 0.00ns
iter::bench_range_step_by_fold_usize 194.00ns/iter +/- 1.00ns
iter::bench_range_step_by_loop_u32 98.00ns/iter +/- 0.00ns
iter::bench_range_step_by_sum_reducible 1.00ns/iter +/- 0.00ns
NEW + `-Ctarget-cpu=x86-64-v3`
iter::bench_range_step_by_fold_u16 22.00ns/iter +/- 0.00ns
iter::bench_range_step_by_fold_usize 80.00ns/iter +/- 1.00ns
iter::bench_range_step_by_loop_u32 41.00ns/iter +/- 0.00ns
iter::bench_range_step_by_sum_reducible 1.00ns/iter +/- 0.00ns
I have only optimized for the wall time of those methods; I haven't tested whether it eliminates bounds checks when indexing into slices via things like `(0..slice.len()).step_by(16)`.
For ranges over item types smaller than `usize`, we determine the number of items `StepBy` would yield and then store that in `range.end` instead of the actual end. This significantly simplifies calculation of the loop induction variable, especially in cases where `StepBy::step` (a `usize`) could overflow the range's item type.
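A hedged sketch of that transformation for a `u8` range (illustrative only; the real change specializes the internal iterator plumbing rather than returning a new iterator):

```rust
// Instead of stepping a u8 and worrying about overflow, precompute how many
// items will be yielded and drive the loop with a plain usize count.
fn step_by_u8(start: u8, end: u8, step: usize) -> impl Iterator<Item = u8> {
    assert!(step != 0);
    let n = if start < end {
        (end - start - 1) as usize / step + 1
    } else {
        0
    };
    // Each yielded value is reconstructed from the induction variable, so the
    // intermediate math happens in usize and cannot wrap the item type.
    (0..n).map(move |i| (start as usize + i * step) as u8)
}

fn main() {
    assert!(step_by_u8(0, 10, 3).eq([0, 3, 6, 9]));
    // Same items as the unspecialized form:
    assert!(step_by_u8(0, 10, 3).eq((0u8..10).step_by(3)));
}
```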
Ignore `core`, `alloc` and `test` tests that require unwinding on `-C panic=abort`
Some of the tests for `core` and `alloc` require unwinding through their use of `catch_unwind`. These tests fail when testing using `-C panic=abort` (in my case through a target without unwinding support, and `-Z panic-abort-tests`), while they should be ignored as they don't indicate a failure.
This PR marks all of these tests with this attribute:
```rust
#[cfg_attr(not(panic = "unwind"), ignore = "test requires unwinding support")]
```
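Applied to a hypothetical test that relies on `catch_unwind`, it looks like this:

```rust
#[test]
#[cfg_attr(not(panic = "unwind"), ignore = "test requires unwinding support")]
fn catches_panic() {
    // This test can only pass when panics unwind; with -C panic=abort the
    // harness skips it instead of aborting the whole test run.
    let result = std::panic::catch_unwind(|| panic!("boom"));
    assert!(result.is_err());
}
```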
I'm not aware of a way to test this on rust-lang/rust's CI, as we don't test any target with `-C panic=abort`, but I tested this locally on a Ferrocene target and it does indeed make the test suite pass.
* ensuring that `offset_of!(Self, ...)` works iff inside an impl block (see the sketch after this list)
* ensuring that the output type is `usize` and doesn't coerce; this can be changed in the future, but if it is done, it should be a conscious decision
* improving the privacy checking test
* ensuring that generics don't let you escape the unsized check
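A hedged sketch of the first two bullets (the struct and constant names are made up; uses the then-unstable `offset_of` feature gate):

```rust
#![feature(offset_of)]

use std::mem::offset_of;

struct Pair {
    a: u8,
    b: u32,
}

impl Pair {
    // `Self` resolves here because we are inside an impl block.
    const B_OFFSET: usize = offset_of!(Self, b);
}

fn main() {
    // The macro's output type is `usize` and does not coerce to anything else.
    let off: usize = offset_of!(Pair, b);
    assert_eq!(off, Pair::B_OFFSET);
}
```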
Add midpoint function for all integers and floating-point numbers
This pull request adds the `midpoint` function to `{u,i}{8,16,32,64,128,size}`, `NonZeroU{8,16,32,64,size}` and `f{32,64}`.
This new function is analogous to the [C++ midpoint](https://en.cppreference.com/w/cpp/numeric/midpoint) function and basically computes `(a + b) / 2`, with rounding towards ~~`a`~~ negative infinity in the case of integers. Or simply said: `midpoint(a, b)` is `(a + b) >> 1` as if it were performed in a sufficiently-large signed integral type.
Note that unlike the C++ function this pull request does not implement this function on pointers (`*const T` or `*mut T`). This could be implemented in a future pull request if desired.
### Implementation
For `f32` and `f64` the implementation is based on the `libcxx` [one](https://github.com/llvm/llvm-project/blob/18ab892ff7/libcxx/include/__numeric/midpoint.h#L65-L77). I originally tried many different approaches, but all of them either failed or left me with a poorer version of the `libcxx` one. Note that `libstdc++` has a very similar implementation, and the Microsoft STL implementation is also basically the same as `libcxx`'s. Unfortunately, it doesn't seem like a better way exists.
For unsigned integers I created the macro `midpoint_impl!`, which has two branches (sketched after this list):
- The first one takes only `$SelfT` and is used when there is no unsigned integer with at least double the bits. The code simply uses the formula `a + (b - a) / 2`, with the arguments in the correct order and signs to get the right rounding.
- The second branch is used when a `$WideT` (at least double the bits of `$SelfT`) is provided; using a wider type means that no overflow can occur, which greatly improves the codegen (no branch and fewer instructions).
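A hedged sketch of those two unsigned strategies (illustrative functions, not the actual macro expansion):

```rust
// No wider type available: a + (b - a) / 2, ordered so the subtraction
// cannot underflow and the result rounds toward negative infinity.
fn midpoint_no_widening(a: u64, b: u64) -> u64 {
    if a <= b { a + (b - a) / 2 } else { b + (a - b) / 2 }
}

// A wider type is available: widen, add, halve; no overflow, no branch.
fn midpoint_widening(a: u32, b: u32) -> u32 {
    ((a as u64 + b as u64) / 2) as u32
}

fn main() {
    assert_eq!(midpoint_no_widening(1, 4), 2);
    assert_eq!(midpoint_widening(u32::MAX, 1), 1 << 31);
}
```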
For signed integers the code basically forwards to the unsigned version of `midpoint` by mapping the signed numbers to their unsigned counterparts (e.g. `i8`: `[-128, 127]` to `[0, 255]`) and back.
I originally created a version that worked directly on the signed numbers, but the code was "ugly" and hard to understand. Despite this mapping "overhead", the codegen is better than my most optimized version on signed integers.
~~Note that in the case of unsigned numbers I tried to be smart and used `#[cfg(target_pointer_width = "64")]` to determine if using the wide version was better or not by looking at the assembly on godbolt. This was applied to `u32`, `u64` and `usize` and doesn't change the behavior only the assembly code generated.~~
Spelling library
Split per https://github.com/rust-lang/rust/pull/110392
I can squash once people are happy w/ the changes. It's really uncommon for large sets of changes to be perfectly acceptable w/o at least some changes.
I probably won't have time to respond until tomorrow or the next day