Commit Graph

649 Commits

Author SHA1 Message Date
bors
a5e2eca40e Auto merge of #112699 - bluebear94:mf/more-is-sorted-tests, r=cuviper
Add more comprehensive tests for is_sorted and friends

See #53485 and #55045.
2023-07-21 23:25:04 +00:00
KaDiWa
c3462a0676
remove the unstable core::sync::atomic::ATOMIC_*_INIT constants 2023-07-18 09:45:52 +02:00
Mark Rousskov
67b0cfc761 Flip cfg's for bootstrap bump 2023-07-12 21:38:55 -04:00
Ralf Jung
e1338cc254 enable test_join test in Miri 2023-07-03 14:05:55 +02:00
bors
ae8ffa663c Auto merge of #111850 - the8472:external-step-by, r=scottmcm
Specialize `StepBy<Range<{integer}>>`

OLD

    iter::bench_range_step_by_fold_u16      700.00ns/iter +/- 10.00ns
    iter::bench_range_step_by_fold_usize    519.00ns/iter  +/- 6.00ns
    iter::bench_range_step_by_loop_u32      555.00ns/iter  +/- 7.00ns
    iter::bench_range_step_by_sum_reducible  37.00ns/iter  +/- 0.00ns

NEW

    iter::bench_range_step_by_fold_u16       49.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_fold_usize    194.00ns/iter +/- 1.00ns
    iter::bench_range_step_by_loop_u32       98.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_sum_reducible   1.00ns/iter +/- 0.00ns

NEW + `-Ctarget-cpu=x86-64-v3`

    iter::bench_range_step_by_fold_u16      22.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_fold_usize    80.00ns/iter +/- 1.00ns
    iter::bench_range_step_by_loop_u32      41.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_sum_reducible  1.00ns/iter +/- 0.00ns

I have only optimized for the walltime of those methods; I haven't tested whether it eliminates bounds checks when indexing into slices via things like `(0..slice.len()).step_by(16)`.
2023-06-26 00:28:30 +00:00
The 8472
070ce235f2 Specialize StepBy<Range<{integer}>>
For ranges smaller than usize we determine the number of items
StepBy would yield and then store that in the range.end
instead of the actual end. This significantly
simplifies calculation of the loop induction variable,
especially in cases where StepBy::step (a usize)
could overflow the Range's item type.
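A hedged sketch of the idea for a `u8` range (illustrative only, not the actual specialization code): precompute how many items will be yielded and drive the loop with a `usize` induction variable instead.

```rust
// Sketch: conceptually what the specialized `(start..end).step_by(step)`
// lowers to for a u8 range. Iterating over `0..n` (a usize) means the
// induction variable can't overflow the narrow item type.
fn step_by_u8(start: u8, end: u8, step: usize) -> impl Iterator<Item = u8> {
    assert!(step > 0); // Iterator::step_by panics on a zero step too
    let len = (end as usize).saturating_sub(start as usize);
    let n = if len == 0 { 0 } else { (len - 1) / step + 1 };
    (0..n).map(move |i| start + (i * step) as u8)
}

fn main() {
    let v: Vec<u8> = step_by_u8(250, 255, 2).collect();
    assert_eq!(v, [250, 252, 254]);
}
```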
2023-06-23 00:17:34 +02:00
Michael Goulet
e24fe97bd9
Rollup merge of #112606 - clarfonthey:ip-display, r=thomcc
Alter `Display` for `Ipv6Addr` for IPv4-compatible addresses

ACP: rust-lang/libs-team#239
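For context, a hedged sketch of the embedded-IPv4 `Display` forms in play (the dotted-quad rendering for IPv4-mapped addresses is long-standing; the IPv4-compatible case is what this change alters):

```rust
use std::net::Ipv6Addr;

fn main() {
    // IPv4-mapped (::ffff:a.b.c.d): displayed with the dotted quad.
    let mapped = Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc000, 0x0280);
    assert_eq!(mapped.to_string(), "::ffff:192.0.2.128");

    // IPv4-compatible (::a.b.c.d): the case this PR alters; on toolchains
    // with this change it should print as plain hex groups (`::c000:280`).
    let compat = Ipv6Addr::new(0, 0, 0, 0, 0, 0, 0xc000, 0x0280);
    println!("{compat}");
}
```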
2023-06-19 17:53:35 -07:00
+merlan #flirora
c2e4e981b3 Add more comprehensive tests for is_sorted and friends
See #53485 and #55045.
2023-06-16 03:04:34 -04:00
许杰友 Jieyou Xu (Joe)
72b3b58efc
Extend unused_must_use to cover block exprs 2023-06-15 17:59:13 +08:00
ltdk
2dce58d0f6 Fix SocketAddrV6: Display tests 2023-06-14 15:21:15 -04:00
ltdk
2f2c3f55a9 Fix Ipv6Addr: Display tests 2023-06-14 14:25:25 -04:00
bors
371994e0d8 Auto merge of #112314 - ferrocene:pa-core-alloc-abort, r=bjorn3
Ignore `core`, `alloc` and `test` tests that require unwinding on `-C panic=abort`

Some of the tests for `core` and `alloc` require unwinding through their use of `catch_unwind`. These tests fail when run with `-C panic=abort` (in my case, through a target without unwinding support and `-Z panic-abort-tests`), while they should be ignored, as they don't indicate a failure.

This PR marks all of these tests with this attribute:

```rust
#[cfg_attr(not(panic = "unwind"), ignore = "test requires unwinding support")]
```

I'm not aware of a way to test this on rust-lang/rust's CI, as we don't test any target with `-C panic=abort`, but I tested this locally on a Ferrocene target and it does indeed make the test suite pass.
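For illustration, a self-contained test gated this way (a hypothetical example, not a test from the PR itself):

```rust
// Hypothetical example showing the attribute in context.
#[test]
#[cfg_attr(not(panic = "unwind"), ignore = "test requires unwinding support")]
fn catches_panic() {
    // This requires unwinding support: under -C panic=abort the panic
    // would abort the process instead of being caught.
    let result = std::panic::catch_unwind(|| panic!("boom"));
    assert!(result.is_err());
}
```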
2023-06-13 19:03:27 +00:00
Pietro Albini
44556eed36
ignore core, alloc and test tests that require unwinding on panic=abort 2023-06-13 15:53:24 +02:00
Urgau
d9d1c76ded Allow undropped_manually_drops for some tests 2023-06-08 11:41:34 +02:00
Mark Rousskov
42e757192d Bump to latest beta compiler 2023-05-30 08:00:10 -04:00
Lukas Markeffsky
7cdb23b98a don't skip inference for type in offset_of! 2023-05-20 15:20:27 +02:00
est31
30c0e4e72b Add more tests for the offset_of!() macro
* ensuring that offset_of!(Self, ...) works iff inside an impl block
* ensuring that the output type is usize and doesn't coerce. This can be
  changed in the future, but if it is done, it should be a conscious decision
* improving the privacy checking test
* ensuring that generics don't let you escape the unsized check
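A quick illustration of the points above (offset_of! was gated behind `#![feature(offset_of)]` at the time of this commit; it has since been stabilized):

```rust
use core::mem::offset_of;

#[repr(C)]
struct Foo {
    a: u8,
    b: u32,
}

fn main() {
    // The output type is usize and does not coerce to another type.
    let off: usize = offset_of!(Foo, b);
    assert_eq!(off, 4); // repr(C): u8 at 0, 3 padding bytes, u32 at 4
}
```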
2023-05-18 13:16:17 +02:00
bors
18bfe5d8a9 Auto merge of #92048 - Urgau:num-midpoint, r=scottmcm
Add midpoint function for all integers and floating numbers

This pull-request adds the `midpoint` function to `{u,i}{8,16,32,64,128,size}`, `NonZeroU{8,16,32,64,size}` and `f{32,64}`.

This new function is analogous to the [C++ midpoint](https://en.cppreference.com/w/cpp/numeric/midpoint) function, and basically computes `(a + b) / 2` with rounding towards ~~`a`~~ negative infinity in the case of integers. Or, simply said: `midpoint(a, b)` is `(a + b) >> 1` as if it were performed in a sufficiently-large signed integral type.

Note that unlike the C++ function, this pull-request does not implement this function on pointers (`*const T` or `*mut T`). This could be implemented in a future pull-request if desired.

### Implementation

For `f32` and `f64` the implementation is based on the `libcxx` [one](18ab892ff7/libcxx/include/__numeric/midpoint.h (L65-L77)). I originally tried many different approaches, but all of them either failed or left me with a poorer version of the `libcxx` one. Note that `libstdc++` has a very similar implementation; the Microsoft STL implementation is also basically the same as `libcxx`'s. Unfortunately, it doesn't seem like a better way exists.

For unsigned integers I created the macro `midpoint_impl!`; this macro has two branches:
 - The first one takes `$SelfT` and is used when there is no unsigned integer with at least double the bits. The code simply uses the formula `a + (b - a) / 2`, with the arguments in the correct order and signs to get the right rounding.
 - The second branch is used when a `$WideT` (with at least double the bits of `$SelfT`) is provided; using a wider type means that no overflow can occur, which greatly improves the codegen (no branches and fewer instructions).

For signed integers the code basically forwards the signed numbers to the unsigned version of midpoint by mapping the signed numbers to their unsigned counterparts (e.g. `i8` `[-128, 127]` to `[0, 255]`) and vice versa.
I originally created a version that worked directly on the signed numbers, but the code was "ugly" and hard to understand. Despite this mapping "overhead", the codegen is better than my most optimized version on signed integers.

~~Note that in the case of unsigned numbers I tried to be smart and used `#[cfg(target_pointer_width = "64")]` to determine whether using the wide version was better or not by looking at the assembly on godbolt. This was applied to `u32`, `u64` and `usize`, and doesn't change the behavior, only the assembly code generated.~~
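A hedged sketch of the two unsigned strategies described above, for `u8` (illustrative, not the actual `midpoint_impl!` expansion):

```rust
// Narrow strategy: no wider unsigned type assumed. Ordering the operands
// makes `a + (b - a) / 2` overflow-free and rounds towards negative infinity.
fn midpoint_narrow(a: u8, b: u8) -> u8 {
    if a <= b { a + (b - a) / 2 } else { b + (a - b) / 2 }
}

// Wide strategy: compute in a type with double the bits, so `a + b`
// cannot overflow and the codegen is branch-free.
fn midpoint_wide(a: u8, b: u8) -> u8 {
    ((a as u16 + b as u16) / 2) as u8
}

fn main() {
    assert_eq!(midpoint_narrow(1, 4), 2);
    assert_eq!(midpoint_wide(255, 0), 127);
}
```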
2023-05-14 19:33:02 +00:00
bors
81c2459af6 Stabilize const_ptr_read 2023-05-05 20:36:21 +02:00
Deadbeef
e92806704b fix rustdoc and core test 2023-04-29 08:50:56 +00:00
Matthias Krüger
9babe98562
Rollup merge of #110419 - jsoref:spelling-library, r=jyn514
Spelling library

Split per https://github.com/rust-lang/rust/pull/110392

I can squash once people are happy w/ the changes. It's really uncommon for large sets of changes to be perfectly acceptable w/o at least some changes.

I probably won't have time to respond until tomorrow or the next day
2023-04-26 18:51:41 +02:00
Loïc BRANSTETT
bf73234d92 Implement midpoint for all floating point f32 and f64 2023-04-26 10:18:53 +02:00
Loïc BRANSTETT
1a72d7c7c4 Implement midpoint for all signed and unsigned integers 2023-04-26 10:18:53 +02:00
Josh Soref
9cb9346005 Spelling library/
* advance
* aligned
* borrowed
* calculate
* debugable
* debuggable
* declarations
* desugaring
* documentation
* enclave
* ignorable
* initialized
* iterator
* kaboom
* monomorphization
* nonexistent
* optimizer
* panicking
* process
* reentrant
* rustonomicon
* the
* uninitialized

Signed-off-by: Josh Soref <2119212+jsoref@users.noreply.github.com>
2023-04-26 02:10:22 -04:00
DrMeepster
a642563d49 major test improvements 2023-04-21 02:45:48 -07:00
DrMeepster
2bcb018253 fmt 2023-04-21 02:14:03 -07:00
DrMeepster
b92c2f792c fix incorrect param env in dead code lint 2023-04-21 02:14:03 -07:00
DrMeepster
b95852b93c test improvements 2023-04-21 02:14:03 -07:00
DrMeepster
511e457c4b offset_of 2023-04-21 02:14:02 -07:00
John Millikin
4e2797dd76 Implement Neg for signed non-zero integers.
Negating a non-zero integer currently requires unpacking to a
primitive and re-wrapping. Since negation of non-zero signed
integers always produces a non-zero result, it is safe to
implement `Neg` for `NonZeroI{N}`.

The new `impl` is marked as stable because trait implementations
for two stable types can't be marked unstable.
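With this impl in place, negation works directly on the wrapper; a quick usage sketch:

```rust
use std::num::NonZeroI32;

fn main() {
    let x = NonZeroI32::new(5).unwrap();
    // Previously this required unpacking and re-wrapping:
    // NonZeroI32::new(-x.get()).unwrap()
    let y = -x;
    assert_eq!(y.get(), -5);
}
```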
2023-04-20 14:27:29 +09:00
bors
3a5c8e91f0 Auto merge of #110393 - fee1-dead-contrib:rm-const-traits, r=oli-obk
Rm const traits in libcore

See [zulip thread](https://rust-lang.zulipchat.com/#narrow/stream/146212-t-compiler.2Fconst-eval/topic/.60const.20Trait.60.20removal.20or.20rework)

* [x] Bless ui tests
* [ ] Re constify some unstable functions with workarounds if they are needed
2023-04-19 13:03:40 +00:00
Deadbeef
4c6ddc036b fix library and rustdoc tests 2023-04-16 11:38:52 +00:00
Deadbeef
76dbe29104 rm const traits in libcore 2023-04-16 06:49:27 +00:00
est31
77821b2eb9 Remove unused unused_macros
The macro is always used
2023-04-16 08:35:39 +02:00
Tobias Decking
65c9c79d3f
remove obsolete test 2023-04-10 21:57:45 +02:00
Tobias Decking
0f96c71792 Improve the floating point parser in dec2flt.
* Remove all remaining traces of unsafe.
* Put `parse_8digits` inside a loop.
* Rework parsing of inf/NaN values.
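For context, a sketch of the 8-digits-at-a-time SWAR trick that `parse_8digits` uses (reconstructed here from the well-known technique, not copied from this commit):

```rust
/// Parse 8 ASCII digits, packed little-endian into a u64, using a few
/// arithmetic ops instead of a per-digit loop (SWAR technique).
fn parse_8digits(mut v: u64) -> u64 {
    const MASK: u64 = 0x0000_00FF_0000_00FF;
    const MUL1: u64 = 0x000F_4240_0000_0064; // 1_000_000 << 32 | 100
    const MUL2: u64 = 0x0000_2710_0000_0001; // 10_000 << 32 | 1
    v -= 0x3030_3030_3030_3030; // strip the ASCII '0' bias from every byte
    v = (v * 10) + (v >> 8); // combine adjacent digits into pairs
    let v1 = (v & MASK).wrapping_mul(MUL1);
    let v2 = ((v >> 16) & MASK).wrapping_mul(MUL2);
    ((v1.wrapping_add(v2) >> 32) as u32) as u64
}

fn main() {
    let v = u64::from_le_bytes(*b"12345678");
    assert_eq!(parse_8digits(v), 12_345_678);
}
```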
2023-04-10 00:47:08 +02:00
Deadbeef
04a5d61161 Revert "Mark DoubleEndedIterator as #[const_trait] using rustc_do_not_const_check, implement const Iterator and DoubleEndedIterator for Range."
This reverts commit 8a9d6bf4fd.
2023-04-08 08:18:29 +00:00
Trevor Gross
dc4ba57566 Stabilize a portion of 'once_cell'
Move items not part of this stabilization to 'lazy_cell' or 'once_cell_try'
2023-03-29 18:04:44 -04:00
bors
cdbbce0a9e Auto merge of #108095 - soc:drop-contains, r=Amanieu
Drop unstable `Option::contains`, `Result::contains`, `Result::contains_err`

This is a proposal to drop the three functions `Option::contains`, `Result::contains` and `Result::contains_err`.

The discovery of `Option::is_some_with`/`Result::is_ok_with`/`Result::is_err_with` in https://github.com/rust-lang/rust/pull/93051 obviates the need for these methods (non-stabilization tracked in https://github.com/rust-lang/rust/issues/62358).

An additional benefit of this change is that it avoids spurious error messages in IDEs when `contains` is supplied by a third-party library:
![option-result-unstable](https://user-images.githubusercontent.com/42493/219127961-13cb559e-6ee8-4449-8dc9-d28d07270ad5.png)
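A hedged migration sketch for code that used these methods (`is_some_with`/`is_ok_with` were later renamed and stabilized as `is_some_and`/`is_ok_and`):

```rust
fn main() {
    let opt = Some(4);
    // Before (unstable, now removed): opt.contains(&4)
    assert!(opt.is_some_and(|x| x == 4));

    let res: Result<i32, &str> = Ok(4);
    // Before (unstable, now removed): res.contains(&4)
    assert!(res.is_ok_and(|x| x == 4));
}
```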
2023-03-28 23:20:30 +00:00
The 8472
e29b27b4a4 replace advance_by returning usize with Result<(), NonZeroUsize> 2023-03-27 16:03:14 +02:00
The 8472
69db91b8b2 Change advance(_back)_by to return usize instead of Result<(), usize>
A successful advance is now signalled by returning `0`, and other values now represent the remaining number
of steps that couldn't be advanced, as opposed to the number of steps that were advanced during a partial advance_by.

This simplifies adapters a bit, replacing some `match`/`if` with arithmetic. Whether this is beneficial overall depends
on whether `advance_by` is mostly used as a building-block for other iterator methods and adapters or whether
we also see uses by users where `Result` might be more useful.
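A hedged illustration of the end state (this commit plus the follow-up above that wraps the remainder in `Result<(), NonZeroUsize>`); the API is unstable behind `iter_advance_by`:

```rust
#![feature(iter_advance_by)]

use std::num::NonZeroUsize;

fn main() {
    let mut it = 0..10;
    assert_eq!(it.advance_by(3), Ok(())); // fully advanced
    assert_eq!(it.next(), Some(3));
    // 6 items remain, so 100 - 6 = 94 steps could not be taken.
    assert_eq!(it.advance_by(100), Err(NonZeroUsize::new(94).unwrap()));
}
```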
2023-03-27 14:11:49 +02:00
onestacked
8a9d6bf4fd Mark DoubleEndedIterator as #[const_trait] using rustc_do_not_const_check, implement const Iterator and DoubleEndedIterator for Range. 2023-03-18 09:17:37 +01:00
Mara Bos
96d252160e Update format_args!() test to account for inlining. 2023-03-16 11:21:50 +01:00
est31
999405059c Match unmatched backticks in library/ 2023-03-03 03:03:29 +01:00
Ralf Jung
229aef1f7d add missing feature in core/tests 2023-02-28 10:07:57 +01:00
Linus Färnstrand
6cb34492a6 Move IpAddr and SocketAddr to core 2023-02-26 13:50:08 +01:00
soc
3aa9f76a3a
Remove #![feature(option_result_contains)] from library/core/tests/lib.rs 2023-02-15 19:30:02 +00:00
bors
2d91939bb7 Auto merge of #107634 - scottmcm:array-drain, r=thomcc
Improve the `array::map` codegen

The `map` method on arrays [is documented as sometimes performing poorly](https://doc.rust-lang.org/std/primitive.array.html#note-on-performance-and-stack-usage), and after [a question on URLO](https://users.rust-lang.org/t/try-trait-residual-o-trait-and-try-collect-into-array/88510?u=scottmcm) prompted me to take another look at the core [`try_collect_into_array`](7c46fb2111/library/core/src/array/mod.rs (L865-L912)) function, I had some ideas that ended up working better than I'd expected.

There are three main ideas in here, split over three commits:
1. Don't use `array::IntoIter` when we can avoid it, since that seems to not get SRoA'd, meaning that every step writes things like loop counters into the stack unnecessarily
2. Don't return arrays in `Result`s unnecessarily, as that doesn't seem to optimize away even with `unwrap_unchecked` (perhaps because it needs to get moved into a new LLVM type to account for the discriminant)
3. Don't distract LLVM with all the `Option` dances when we know for sure we have enough items (like in `map` and `zip`). This one's a larger commit, as to do it I ended up adding a new `pub(crate)` trait, but hopefully those changes are still straightforward.

(No libs-api changes; everything should be completely implementation-detail-internal.)

It's still not completely fixed -- I think it needs pcwalton's `memcpy` optimizations still (#103830) to get further -- but this seems to go much better than before.  And the remaining `memcpy`s are just `transmute`-equivalent (`[T; N] -> ManuallyDrop<[T; N]>` and `[MaybeUninit<T>; N] -> [T; N]`), so hopefully those will be easier to remove with LLVM16 than the previous subobject copies 🤞

r? `@thomcc`

As a simple example, this test
```rust
pub fn long_integer_map(x: [u32; 64]) -> [u32; 64] {
    x.map(|x| 13 * x + 7)
}
```
On nightly <https://rust.godbolt.org/z/xK7548TGj> takes `sub rsp, 808`
```llvm
start:
  %array.i.i.i.i = alloca [64 x i32], align 4
  %_3.sroa.5.i.i.i = alloca [65 x i32], align 4
  %_5.i = alloca %"core::iter::adapters::map::Map<core::array::iter::IntoIter<u32, 64>, [closure@/app/example.rs:2:11: 2:14]>", align 8
```
(and yes, that's a 6**5**-element array `alloca` despite 6**4**-element input and output)

But with this PR it's only `sub rsp, 520`
```llvm
start:
  %array.i.i.i.i.i.i = alloca [64 x i32], align 4
  %array1.i.i.i = alloca %"core::mem::manually_drop::ManuallyDrop<[u32; 64]>", align 4
```

Similarly, the loop it emits on nightly is scalar-only and horrifying
```nasm
.LBB0_1:
        mov     esi, 64
        mov     edi, 0
        cmp     rdx, 64
        je      .LBB0_3
        lea     rsi, [rdx + 1]
        mov     qword ptr [rsp + 784], rsi
        mov     r8d, dword ptr [rsp + 4*rdx + 528]
        mov     edi, 1
        lea     edx, [r8 + 2*r8]
        lea     r8d, [r8 + 4*rdx]
        add     r8d, 7
.LBB0_3:
        test    edi, edi
        je      .LBB0_11
        mov     dword ptr [rsp + 4*rcx + 272], r8d
        cmp     rsi, 64
        jne     .LBB0_6
        xor     r8d, r8d
        mov     edx, 64
        test    r8d, r8d
        jne     .LBB0_8
        jmp     .LBB0_11
.LBB0_6:
        lea     rdx, [rsi + 1]
        mov     qword ptr [rsp + 784], rdx
        mov     edi, dword ptr [rsp + 4*rsi + 528]
        mov     r8d, 1
        lea     esi, [rdi + 2*rdi]
        lea     edi, [rdi + 4*rsi]
        add     edi, 7
        test    r8d, r8d
        je      .LBB0_11
.LBB0_8:
        mov     dword ptr [rsp + 4*rcx + 276], edi
        add     rcx, 2
        cmp     rcx, 64
        jne     .LBB0_1
```

whereas with this PR it's unrolled and vectorized
```nasm
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 64]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 328], ymm1
	vpmulld	ymm1, ymm0, ymmword ptr [rsp + 96]
	vpaddd	ymm1, ymm1, ymm2
	vmovdqu	ymmword ptr [rsp + 360], ymm1
```
(though sadly still stack-to-stack)
2023-02-13 10:18:48 +00:00
Matthias Krüger
454ae9fb8b
Rollup merge of #107954 - RalfJung:tree-borrows-fix, r=m-ou-se
avoid mixing accesses of ptrs derived from a mutable ref and parent ptrs

``@Vanille-N`` is working on a successor for Stacked Borrows. It will mostly accept strictly more code than Stacked Borrows did, with one exception: the following pattern no longer works.
```rust
let mut root = 6u8;
let mref = &mut root;
let ptr = mref as *mut u8;
unsafe { *ptr = 0 }; // Write
assert_eq!(root, 0); // Parent Read
unsafe { *ptr = 0 }; // Attempted Write
```
This worked in Stacked Borrows kind of by accident: when doing the "parent read", under SB we Disable `mref`, but the raw ptrs derived from it remain usable. The fact that we can still use the "children" of a reference that is no longer usable is quite nasty and leads to some undesirable effects (in particular it is the major blocker for resolving https://github.com/rust-lang/unsafe-code-guidelines/issues/257). So in Tree Borrows we no longer do that; instead, reading from `root` makes `mref` and all its children read-only.

Due to other improvements in Tree Borrows, the entire Miri test suite still passes with this new behavior, and even the entire libcore and liballoc test suite, except for these 2 cases this PR fixes. Both of these involve code where the programmer wrote `&mut` but then used pointers derived from that reference in ways that alias with the parent pointer, which arguably is violating uniqueness. They are fixed by properly using raw pointers throughout.
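A hedged sketch of the "raw pointers throughout" fix: derive the pointer with `addr_of_mut!` instead of going through `&mut`, so no uniqueness claim is made and the parent read no longer invalidates the pointer (illustrative, not the PR's actual diff):

```rust
use std::ptr::addr_of_mut;

fn main() {
    let mut root = 6u8;
    let ptr = addr_of_mut!(root); // no intermediate &mut is created
    unsafe { *ptr = 0 }; // Write
    assert_eq!(root, 0); // Parent Read: ptr stays usable
    unsafe { *ptr = 0 }; // Write still allowed under Tree Borrows
}
```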
2023-02-12 22:29:49 +01:00
Ralf Jung
c3a2e7a809 avoid mixing accesses of ptrs derived from a mutable ref and parent ptrs 2023-02-12 15:16:27 +01:00