Commit Graph

11629 Commits

Author SHA1 Message Date
Matthias Krüger
c4ba0a6912
Rollup merge of #113202 - guilliamxavier:patch-1, r=workingjubilee
std docs: factorize literal in Barrier example

Motivated by https://www.reddit.com/r/rust/comments/rnh5hu/barrier_question_barrier_does_not_sync_many/ (but maybe not worth it?)
2023-07-02 10:27:20 +02:00
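For context, the std `Barrier` example being touched ends up looking roughly like this after factoring out the literal (a sketch, not the exact doc text; `n` is the factored-out value):

```rust
use std::sync::{Arc, Barrier};
use std::thread;

fn main() {
    // The count appears once, so Barrier::new and the spawn loop cannot drift apart.
    let n = 10;
    let barrier = Arc::new(Barrier::new(n));
    let mut handles = Vec::with_capacity(n);
    for _ in 0..n {
        let c = Arc::clone(&barrier);
        handles.push(thread::spawn(move || {
            println!("before wait");
            c.wait();
            println!("after wait");
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
}
```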
Matthias Krüger
2a3766e7fe
Rollup merge of #113147 - lizhanhui:fix_vec_from_raw_parts_doc_example, r=Mark-Simulacrum
Fix document examples of Vec::from_raw_parts and Vec::from_raw_parts_in

These two examples are misplaced.
2023-07-02 10:27:20 +02:00
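As a reminder of the shape a correct `Vec::from_raw_parts` example has to take, here is a minimal sketch (not the doc text itself), assuming the parts come from a `Vec` that is never used again:

```rust
use std::mem::ManuallyDrop;

fn main() {
    // Keep the original Vec from freeing its buffer; ownership is taken manually below.
    let mut v = ManuallyDrop::new(vec![1u32, 2, 3]);
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());

    // SAFETY: ptr/len/cap come from a Vec with the same element type and allocator,
    // and the original Vec is never touched again.
    let rebuilt = unsafe { Vec::from_raw_parts(ptr, len, cap) };
    assert_eq!(rebuilt, [1, 2, 3]);
}
```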
bors
e5bb341f0e Auto merge of #111992 - ferrocene:pa-panic-abort-tests-bench, r=m-ou-se
Test benchmarks with `-Z panic-abort-tests`

During test execution, when a `#[bench]` benchmark is encountered it's executed once to check whether it works. Unfortunately that was not compatible with `-Z panic-abort-tests`: the feature works by spawning a subprocess for each test, which prevents the use of dynamic tests as we cannot pass closures to child processes. Before this PR the conversion from benchmark to test was done by turning benchmarks into dynamic tests whose closures execute the benchmark once.

The approach this PR took was to add two new kinds of `TestFn`s: `StaticBenchAsTestFn` and `DynBenchAsTestFn` (⚠️ **this is a breaking change** ⚠️). With that change, a `StaticBenchFn` can be converted into a `StaticBenchAsTestFn` without creating dynamic tests, making it possible to test `#[bench]` functions with `-Z panic-abort-tests`. The subprocess test runner also had to be updated to perform the conversion from benchmark to test when appropriate.

Along with the bug fix, in the first commit I refactored how tests are executed: rather than executing the test function in multiple places across `libtest`, there is now a private `TestFn::into_runnable()` method, which returns either a `RunnableTest` or `RunnableBench`, on which you can call the `run()` method. This simplified the rest of the changes in the PR.

This PR is best reviewed commit-by-commit.
Fixes https://github.com/rust-lang/rust/issues/73509
2023-07-01 07:07:50 +00:00
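The user-facing shape this enables, as an illustrative sketch (the benchmark body is arbitrary): a `#[bench]` like the one below can now be exercised once as a test in its own subprocess when `-Z panic-abort-tests` is active.

```rust
#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn bench_sum(b: &mut Bencher) {
    // Runs once as a check under `cargo test`, repeatedly under `cargo bench`.
    b.iter(|| (0..1_000u64).sum::<u64>());
}
```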
bors
6b06fdfcd4 Auto merge of #113194 - lu-zero:intrinsics-inline, r=thomcc
Mark wrapped intrinsics as inline(always)

This should mitigate the inliner deciding not to inline when the architecture lacks an implementation of
TargetTransformInfo::areInlineCompatible that is aware of the target features (as is the case for PowerPC today).

See https://github.com/rust-lang/stdarch/pull/1443#issuecomment-1613788080
2023-07-01 04:24:26 +00:00
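A minimal sketch of the pattern (the function and its body are hypothetical stand-ins; the real wrappers live in stdarch and forward to LLVM intrinsics):

```rust
// Forcing inlining of the thin wrapper avoids the inliner bailing out on
// targets whose TargetTransformInfo::areInlineCompatible ignores target features.
#[inline(always)]
pub fn wrapped_intrinsic_sketch(a: i32, b: i32) -> i32 {
    // stand-in for the call to the underlying intrinsic
    a.wrapping_add(b)
}
```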
Matthias Krüger
709f184593
Rollup merge of #113153 - tshepang:patch-6, r=cuviper
make HashMap::or_insert_with example simpler
2023-07-01 00:35:05 +02:00
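The example in question demonstrates lazy insertion along these lines (a sketch, not the exact doc text):

```rust
use std::collections::HashMap;

fn main() {
    let mut map: HashMap<&str, String> = HashMap::new();
    // The closure runs only if the key is vacant.
    map.entry("poneyland").or_insert_with(|| "hoho".to_string());
    assert_eq!(map["poneyland"], "hoho");
}
```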
Matthias Krüger
4e8f1357b8
Rollup merge of #113072 - tshepang:patch-1, r=cuviper
str docs: remove "Basic usage" text where not useful

Not "useful" in that there is only one example given
2023-07-01 00:35:04 +02:00
Guilliam Xavier
e34ff93c6b
std docs: factorize literal in Barrier example 2023-06-30 16:11:30 +02:00
Luca Barbato
528f11c24b Mark wrapped intrinsics as inline(always)
This should mitigate the inliner deciding not to inline when
the architecture lacks an implementation of
TargetTransformInfo::areInlineCompatible that is aware of the
target features (as is the case for PowerPC today).
2023-06-30 12:07:21 +02:00
Matthias Krüger
016c306ce6
Rollup merge of #107624 - tgross35:const-cstr-methods, r=dtolnay
Stabilize `const_cstr_methods`

This PR seeks to stabilize `const_cstr_methods`. Fixes most of #101719

## New const stable API

```rust
impl CStr {
    // depends: memchr
    pub const fn from_bytes_with_nul(bytes: &[u8]) -> Result<&Self, FromBytesWithNulError> {...}
    // depends: const_slice_index
    pub const fn to_bytes(&self) -> &[u8] {}
    // depends: pointer casts
    pub const fn to_bytes_with_nul(&self) -> &[u8] {}
    // depends: str::from_utf8
    pub const fn to_str(&self) -> Result<&str, str::Utf8Error> {}
}
```

I don't think any of these methods will have any issue when `CStr` becomes a thin pointer, as long as `memchr` is const (which also allows for const `strlen`).

## Notes

- `from_bytes_until_nul` relies on `const_slice_index`, which relies on `const_trait_impls`, and generally this should be avoided. After talking with Oli, it should be OK in this case because we could replace the ranges with pointer tricks if needed (worst case being those feature gates disappear). https://github.com/rust-lang/rust/pull/107624#discussion_r1101468480
- Making `from_ptr` const is deferred because it depends on `const_eval_select`. I have moved this under the new flag `const_cstr_from_ptr` https://github.com/rust-lang/rust/pull/107624#discussion_r1101555239

cc ``@oli-obk`` I think you're the const expert

``@rustbot`` modify labels: +T-libs-api +needs-fcp
2023-06-30 08:01:12 +02:00
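With the stabilization above, a `&CStr` and its contents can be produced in const context; a minimal sketch (error handling reduced to a panic):

```rust
use std::ffi::CStr;

const HELLO: &CStr = match CStr::from_bytes_with_nul(b"hello\0") {
    Ok(s) => s,
    Err(_) => panic!("invalid C string literal"),
};

fn main() {
    assert_eq!(HELLO.to_bytes(), b"hello");
    assert_eq!(HELLO.to_str(), Ok("hello"));
}
```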
Tshepang Mbambo
5b46aa1122
make HashMap::or_insert_with example simpler 2023-06-29 09:33:15 +02:00
Li Zhanhui
9a67df290c
Fix document examples of Vec::from_raw_parts and Vec::from_raw_parts_in
Signed-off-by: Li Zhanhui <lizhanhui@gmail.com>
2023-06-29 04:21:00 +00:00
Matthias Krüger
f35f213d27
Rollup merge of #113054 - Rageking8:make-rustc_on_unimplemented-std-agnostic, r=WaffleLapkin
Make `rustc_on_unimplemented` std-agnostic

See #112923

r? `@WaffleLapkin`
2023-06-29 05:48:40 +02:00
Matthias Krüger
42a495da7e
Rollup merge of #112670 - petrochenkov:typriv, r=eholk
privacy: Type privacy lints fixes and cleanups

See individual commits.
Follow up to https://github.com/rust-lang/rust/pull/111801.
2023-06-29 05:48:39 +02:00
Dylan DPC
fa56e01b35
Rollup merge of #111571 - jhpratt:proc-macro-span, r=m-ou-se
Implement proposed API for `proc_macro_span`

As proposed in [#54725 (comment)](https://github.com/rust-lang/rust/issues/54725#issuecomment-1546918161). I have omitted the byte-level API as it's already available as [`Span::byte_range`](https://doc.rust-lang.org/nightly/proc_macro/struct.Span.html#method.byte_range).

`@rustbot` label +A-proc-macros

r? `@m-ou-se`
2023-06-28 18:28:46 +05:30
Matthias Krüger
448d2a8417
Rollup merge of #112628 - gootorov:box_alloc_partialeq, r=joshtriplett
Allow comparing `Box`es with different allocators

Currently, comparing `Box`es over different allocators is not allowed:
```Rust
error[E0308]: mismatched types
  --> library/alloc/tests/boxed.rs:22:20
   |
22 |     assert_eq!(b1, b2);
   |                    ^^ expected `Box<{integer}, ConstAllocator>`, found `Box<{integer}, AnotherAllocator>`
   |
   = note: expected struct `Box<{integer}, ConstAllocator>`
              found struct `Box<{integer}, AnotherAllocator>`

For more information about this error, try `rustc --explain E0308`.
error: could not compile `alloc` (test "collectionstests") due to previous error
```
This PR lifts that limitation.
2023-06-27 22:10:13 +02:00
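A nightly-only sketch of what now compiles, assuming the unstable `allocator_api` feature for `Box::new_in` and the `Allocator` impls:

```rust
#![feature(allocator_api)]

use std::alloc::{Global, System};

fn main() {
    // Same value, different allocator type parameters.
    let b1: Box<i32, Global> = Box::new_in(1, Global);
    let b2: Box<i32, System> = Box::new_in(1, System);
    // PartialEq no longer requires the allocator types to match.
    assert_eq!(b1, b2);
}
```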
Rageking8
48544c1b12 Make rustc_on_unimplemented std-agnostic 2023-06-27 18:13:24 +08:00
Tshepang Mbambo
6bab85a456
str docs: remove "Basic usage" text where not useful
Not "useful" in that there is only one example given
2023-06-26 22:11:13 +02:00
Takayuki Maeda
c6a4d44977
Rollup merge of #112677 - the8472:remove-unusued-field, r=JohnTitor
remove unused field

Follow-up to #104455. The field is no longer needed since ExtractIf (previously DrainFilter) doesn't keep draining in its drop impl.
2023-06-26 23:16:16 +09:00
bors
25b5af1b3a Auto merge of #113024 - Jerrody:master, r=thomcc
`Default`: Always inline primitive data types.
2023-06-26 06:45:04 +00:00
bors
ae8ffa663c Auto merge of #111850 - the8472:external-step-by, r=scottmcm
Specialize `StepBy<Range<{integer}>>`

OLD

    iter::bench_range_step_by_fold_u16      700.00ns/iter +/- 10.00ns
    iter::bench_range_step_by_fold_usize    519.00ns/iter  +/- 6.00ns
    iter::bench_range_step_by_loop_u32      555.00ns/iter  +/- 7.00ns
    iter::bench_range_step_by_sum_reducible  37.00ns/iter  +/- 0.00ns

NEW

    iter::bench_range_step_by_fold_u16       49.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_fold_usize    194.00ns/iter +/- 1.00ns
    iter::bench_range_step_by_loop_u32       98.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_sum_reducible   1.00ns/iter +/- 0.00ns

NEW + `-Ctarget-cpu=x86-64-v3`

    iter::bench_range_step_by_fold_u16      22.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_fold_usize    80.00ns/iter +/- 1.00ns
    iter::bench_range_step_by_loop_u32      41.00ns/iter +/- 0.00ns
    iter::bench_range_step_by_sum_reducible  1.00ns/iter +/- 0.00ns

I have only optimized for the walltime of those methods; I haven't tested whether this also eliminates bounds checks when indexing into slices via patterns like `(0..slice.len()).step_by(16)` (written out in the sketch after this entry).
2023-06-26 00:28:30 +00:00
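The untested indexing pattern mentioned above, written out as a sketch (the function name is illustrative):

```rust
fn sum_every_16th(slice: &[u64]) -> u64 {
    let mut total = 0;
    // Whether the specialization lets the optimizer elide these bounds checks
    // is explicitly left unverified in the PR.
    for i in (0..slice.len()).step_by(16) {
        total += slice[i];
    }
    total
}
```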
The 8472
f174547124 Mark the StepBy specialization as unsafe 2023-06-25 18:11:51 +02:00
The 8472
8a72f35234 StepBy<Range<{int <= usize}>> can be TrustedLen 2023-06-25 18:11:51 +02:00
The 8472
f70a4b9dd3 doccomments for StepBy specializations 2023-06-25 18:02:11 +02:00
bors
c51fbb3dd3 Auto merge of #113001 - ChrisDenton:win-arm32-shim, r=thomcc
Move windows-sys arm32 shim to c.rs

This moves the arm32 shim into c.rs instead of appending it to the generated file itself.

This makes it simpler to change these workarounds if/when needed. The downside is we need to exclude a couple of functions from being generated (see the comment). A metadata solution could help here but they'll be easy enough to add back if that happens.
2023-06-25 11:27:19 +00:00
George
bb33730361
Always inline primitive data types. 2023-06-25 12:36:21 +03:00
Matthias Krüger
48247884c9
Rollup merge of #113009 - ChrisDenton:remove-path, r=workingjubilee
Remove unnecessary `path` attribute

Follow-up to #111401. I missed this at the time, but it should now be totally unnecessary since the other include was removed.

r? `@workingjubilee`
2023-06-25 02:04:21 +02:00
Matthias Krüger
2ed4368d2f
Rollup merge of #112956 - Amanieu:weak-intrinsics, r=Mark-Simulacrum
Expose `compiler-builtins-weak-intrinsics` feature for `-Zbuild-std`

This was added in rust-lang/compiler-builtins#526 to force all compiler-builtins intrinsics to use weak linkage.
2023-06-25 02:04:20 +02:00
Matthias Krüger
8630b1b3f4
Rollup merge of #112950 - tshepang:patch-4, r=Mark-Simulacrum
DirEntry::file_name: improve explanation
2023-06-25 02:04:20 +02:00
Chris Denton
e2eff0d4ab
Remove unnecessary path attribute 2023-06-24 19:56:29 +01:00
Chris Denton
8a7399cd45
Move arm32 shim to c.rs 2023-06-24 17:30:27 +01:00
Michael Goulet
ee8b035fab
Rollup merge of #112763 - Patryk27:bump-compiler-builtins, r=Amanieu
Bump compiler_builtins

Actually closes https://github.com/rust-lang/rust/issues/108489.

Note that the example code given [in compiler_builtins](https://github.com/rust-lang/compiler-builtins/pull/527) doesn't compile on current rustc since we're still waiting for https://reviews.llvm.org/D153197 (aka `LLVM ERROR: Expected a constant shift amount!`), but it's a step forward anyway.
2023-06-23 19:47:20 -07:00
Michael Goulet
4a01a38466
Rollup merge of #111087 - ibraheemdev:patch-15, r=dtolnay
Implement `Sync` for `mpsc::Sender`

`mpsc::Sender` is currently `!Sync` because the previous implementation contained an optimization where the channel started out as single-producer and was dynamically upgraded on the first clone, which relied on a unique reference to the sender. This optimization is one of the main reasons the old implementation was so complex and was removed in #93563. `mpsc::Sender` can now soundly implement `Sync`.

To avoid any potential confusion: this change does *not* add MPMC behavior. It only affects the already `Send + Clone` *sender*, not the *receiver*.

It's technically possible to rely on the `!Sync` behavior in the same way as a `PhantomData<*mut T>`, but that seems very unlikely in practice. Either way, this change is insta-stable and needs an FCP.

`@rustbot` label +T-libs-api -T-libs
2023-06-23 19:47:19 -07:00
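A sketch of what `Sender: Sync` permits: sharing one sender by reference across scoped threads instead of cloning it.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();
    thread::scope(|s| {
        for i in 0..4 {
            let tx = &tx; // &Sender<i32> is Send only because Sender is now Sync
            s.spawn(move || tx.send(i).unwrap());
        }
    });
    drop(tx);
    let mut got: Vec<i32> = rx.iter().collect();
    got.sort();
    assert_eq!(got, [0, 1, 2, 3]);
}
```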
Matthias Krüger
8168915639
Rollup merge of #112704 - RalfJung:dont-wrap-slices, r=ChrisDenton
slice::from_raw_parts: mention no-wrap-around condition

Cc https://github.com/rust-lang/rust/issues/83996. This probably needs to be mentioned in more places, so I am not closing that issue, but this should help at least.
2023-06-23 13:18:13 +02:00
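A sketch of a call that satisfies the newly documented condition (the data is a local array, so the computed range cannot wrap):

```rust
fn main() {
    let data = [1u8, 2, 3, 4];
    let ptr = data.as_ptr();
    // SAFETY: ptr is valid for reads of `data.len()` bytes, properly aligned, and
    // the span from ptr to ptr + len stays inside one allocation, so it cannot
    // wrap around the address space (the condition the docs now spell out).
    let s = unsafe { std::slice::from_raw_parts(ptr, data.len()) };
    assert_eq!(s, &[1, 2, 3, 4]);
}
```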
Amanieu d'Antras
4a9f292e50 Expose compiler-builtins-weak-intrinsics feature for -Zbuild-std
This was added in rust-lang/compiler-builtins#526 to force all
compiler-builtins intrinsics to use weak linkage.
2023-06-23 11:15:34 +01:00
Tshepang Mbambo
6f61f6ba11
DirEntry::file_name: improve explanation 2023-06-23 04:47:30 +02:00
The 8472
1bc095cd80 add inline annotation to concrete impls
otherwise they wouldn't be eligible for cross-crate inlining
2023-06-23 00:17:34 +02:00
The 8472
070ce235f2 Specialize StepBy<Range<{integer}>>
For ranges with an item type narrower than usize we determine
the number of items StepBy would yield and then store that in
range.end instead of the actual end. This significantly
simplifies calculation of the loop induction variable,
especially in cases where StepBy::step (a usize) could
overflow the Range's item type.
2023-06-23 00:17:34 +02:00
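A conceptual sketch of the trick described above (not the actual library code; `step_by_items_u16` is a hypothetical helper and assumes `step > 0`, as `step_by` itself does):

```rust
fn step_by_items_u16(start: u16, end: u16, step: usize) -> impl Iterator<Item = u16> {
    // Precompute how many items would be yielded so the loop can run on a plain
    // counter instead of repeatedly checking whether `current + step` overflows u16.
    let span = (end as usize).saturating_sub(start as usize);
    let items = if span == 0 { 0 } else { 1 + (span - 1) / step };
    (0..items).map(move |i| start + (i * step) as u16)
}
```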
Thom Chiovoloni
5ef4d1fb2e
Actually save all the files 2023-06-21 14:59:40 -07:00
Thom Chiovoloni
37854aab76
Update tvOS support elsewhere in the stdlib 2023-06-21 14:59:40 -07:00
Thom Chiovoloni
49da0acb71
Avoid fork/exec spawning on tvOS/watchOS, as those functions are marked as prohibited 2023-06-21 14:59:40 -07:00
Thom Chiovoloni
f978d7ea42
Finish up preliminary tvos support in libstd 2023-06-21 14:59:39 -07:00
Thom Chiovoloni
bdc3db944c
wip: Support Apple tvOS in libstd 2023-06-21 14:59:37 -07:00
bors
006a26c0b5 Auto merge of #111684 - ChayimFriedman2:unused-offset-of, r=WaffleLapkin
Warn on unused `offset_of!()` result

The usage of `core::hint::must_use()` means that we don't get a specialized message. I figured that since there are plenty of other methods that just have `#[must_use]` with no message, it'll be fine, but it is a bit unfortunate that the diagnostic mentions `must_use` and not `offset_of!`.

Fixes #111669.
2023-06-21 16:40:54 +00:00
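What the warning looks like in practice, as a sketch (nightly at the time, since `offset_of!` was still behind the `offset_of` feature gate; the struct is illustrative):

```rust
#![feature(offset_of)]

use std::mem::offset_of;

#[repr(C)]
struct Packet {
    kind: u8,
    len: u32,
}

fn main() {
    // Discarding the result now triggers the unused_must_use lint, although the
    // message talks about `must_use` rather than `offset_of!` itself.
    offset_of!(Packet, len);
    let off = offset_of!(Packet, len);
    assert!(off >= 1);
}
```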
Guillaume Gomez
38916c71cb
Rollup merge of #112863 - clubby789:stderr-typo, r=albertlarsan68
Fix copy-paste typo in `eprint(ln)` docs

Fixes #112862
2023-06-21 15:45:17 +02:00
Guillaume Gomez
476798d4fd
Rollup merge of #99587 - ibraheemdev:park-orderings, r=m-ou-se
Document memory orderings of `thread::{park, unpark}`

Document `thread::park/unpark` as having acquire/release synchronization. Without that guarantee, even the example in the documentation can deadlock:

```rust
let flag = Arc::new(AtomicBool::new(false));

let t2 = thread::spawn(move || {
    while !flag.load(Ordering::Acquire) {
        thread::park();
    }
});

flag.store(true, Ordering::Release);
t2.thread().unpark();

// t1: flag.store(true)
// t1: thread.unpark()
// t2: flag.load() == false

// t2 now parks, is immediately unblocked but never
// acquires the flag, and thus spins forever
```

Multiple calls to `unpark` should also maintain a release sequence to make sure operations released by previous `unpark`s are not lost:

```rust
let a = Arc::new(AtomicBool::new(false));
let b = Arc::new(AtomicBool::new(false));

let t2 = thread::spawn(move || {
    while !a.load(Ordering::Acquire) || !b.load(Ordering::Acquire) {
        thread::park();
    }
});

thread::spawn(move || {
    a.store(true, Ordering::Release);
    t2.thread().unpark();
});

b.store(true, Ordering::Release);
t2.thread().unpark();

// t1: a.store(true)
// t1: t2.unpark()
// t3: b.store(true)
// t3: t2.unpark()

// t2 now parks, is immediately unblocked but never
// acquires the store of `a`, only the store of `b` which
// was released by the most recent unpark, and thus spins forever
```

This is of course a contrived example, but the guarantee is reasonable to rely upon in real code.

Note that all implementations of park/unpark already comply with these rules; it's just undocumented.
2023-06-21 15:45:15 +02:00
Mara Bos
3acb1d2b9b
"Memory Orderings" -> "Memory Ordering"
Co-authored-by: yvt <i@yvt.jp>
2023-06-21 12:43:22 +02:00
Chayim Refael Friedman
592844cf88 Warn on unused offset_of!() result 2023-06-21 11:43:14 +03:00
bors
97bf23d26b Auto merge of #112877 - Nilstrieb:rollup-5g5hegl, r=Nilstrieb
Rollup of 6 pull requests

Successful merges:

 - #112632 (Implement PartialOrd for `Vec`s over different allocators)
 - #112759 (Make closure_saved_names_of_captured_variables a query. )
 - #112772 (Add a fully fledged `Clause` type, rename old `Clause` to `ClauseKind`)
 - #112790 (Syntactically accept `become` expressions (explicit tail calls experiment))
 - #112830 (More codegen cleanups)
 - #112844 (Add retag in MIR transform: `Adt` for `Unique` may contain a reference)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-06-21 08:00:23 +00:00
Nilstrieb
78a90cb4ee
Rollup merge of #112632 - gootorov:vec_alloc_partialeq, r=dtolnay
Implement PartialOrd for `Vec`s over different allocators

It is already possible to `PartialEq` `Vec`s with different allocators, but that is not the case with `PartialOrd`.
2023-06-21 07:37:00 +02:00
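A nightly-only sketch, assuming the unstable `allocator_api` feature for `Vec::new_in`:

```rust
#![feature(allocator_api)]

use std::alloc::System;

fn main() {
    let a = vec![1, 2, 3];
    let mut b = Vec::new_in(System);
    b.extend_from_slice(&[1, 2, 4]);
    // Lexicographic comparison now works across different allocator types.
    assert!(a < b);
}
```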
bors
67da586efe Auto merge of #106450 - albertlarsan68:fix-arc-ptr-eq, r=Amanieu
Make `{Arc,Rc,Weak}::ptr_eq` ignore pointer metadata

FCP completed in https://github.com/rust-lang/rust/issues/103763#issuecomment-1362267967

Closes #103763
2023-06-21 05:13:39 +00:00
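A sketch of the behavior being settled (the trait-object coercion is just one way to end up with fat pointers whose metadata could differ):

```rust
use std::fmt::Debug;
use std::sync::Arc;

fn main() {
    let sized: Arc<i32> = Arc::new(5);
    // Two fat pointers to the same allocation; their vtable metadata may or
    // may not be identical across codegen units.
    let a: Arc<dyn Debug> = sized.clone();
    let b: Arc<dyn Debug> = sized.clone();
    // ptr_eq now compares only the data address, ignoring the metadata.
    assert!(Arc::ptr_eq(&a, &b));
}
```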