Currently it just calls `truncate(0)`. `truncate()` is (a) not marked as
`#[inline]`, and (b) more general than needed for `clear()`.
This commit changes `clear()` to do the work itself. This modest change
was first proposed in rust-lang#74172, where the reviewer rejected it because
there was insufficient evidence that `Vec::clear()`'s performance
mattered enough to justify the change. Recent changes within rustc have
made `Vec::clear()` hot within `macro_parser.rs`, so the change is now
clearly worthwhile.
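For illustration, here is the shape of the change on a toy vector type (a sketch only, not the exact libcore code):
```rust
use std::ptr;

struct MiniVec<T> {
    buf: Vec<T>, // stand-in storage; libcore uses RawVec + a raw `len`
}

impl<T> MiniVec<T> {
    #[inline]
    fn clear(&mut self) {
        let elems: *mut [T] = self.buf.as_mut_slice();
        // SAFETY: the length goes to zero before the drops run, so the
        // vector stays valid even if an element's destructor panics.
        unsafe {
            self.buf.set_len(0);
            ptr::drop_in_place(elems);
        }
    }
}

fn main() {
    let mut v = MiniVec { buf: vec![String::from("a"), String::from("b")] };
    v.clear();
    assert!(v.buf.is_empty());
}
```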
Although it doesn't show wins on CI perf runs, this seems to be because they
use PGO. But not all platforms currently use PGO. Also, local builds don't use
PGO, and `truncate` sometimes shows up in an over-represented fashion in local
profiles. So local profiling will be made easier by this change.
Note that this will also benefit `String::clear()`, because it just
calls `Vec::clear()`.
Finally, the commit removes the `vec-clear.rs` codegen test. It was
added in #52908. From before then until now, `Vec::clear()` just called
`Vec::truncate()` with a zero length. The body of `Vec::truncate()` has
changed a lot since then. Now that `Vec::clear()` is doing actual work
itself, and not just calling `Vec::truncate()`, it's not surprising that
its generated code includes a load and an icmp. I think it's reasonable
to remove this test.
Update `binary_search` example to redirect to `partition_point` instead
Inspired by discussion in the tracking issue for `Result::into_ok_or_err`: https://github.com/rust-lang/rust/issues/82223#issuecomment-1067098167
People are surprised that we don't provide a `Result<T, T> -> T` conversion, and the main culprit for this confusion seems to be the `binary_search` API. We should instead redirect people to the equivalent API that does that `Result<T, T> -> T` conversion implicitly and internally. That should obviate the need for the `into_ok_or_err` function and give us time to work towards a more general solution that applies to all enums rather than just `Result`, such as making or-patterns usable for situations like this via postfix `match`.
I chose to duplicate the example rather than simply moving it from `binary_search` to `partition_point` because most of the confusion seems to arise when people are looking at `binary_search`. It makes sense to me to have the example presented immediately, rather than requiring people to click through to even realize there is an example. If I had to put it in only one place I'd leave it in `binary_search` and remove it from `partition_point`, but it seems so obviously relevant to `partition_point` that duplicating it looked like the best option.
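For reference, a small example of the redirect's target; `partition_point` does the `Result<usize, usize> -> usize` collapse internally:
```rust
fn main() {
    let s = [0, 1, 1, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55];
    // `binary_search` forces the caller to collapse Ok/Err themselves...
    let by_search = s.binary_search(&5).unwrap_or_else(|i| i);
    // ...while `partition_point` just returns the index.
    let by_partition = s.partition_point(|&x| x < 5);
    assert_eq!(by_search, by_partition);
    assert_eq!(by_partition, 7);
}
```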
**This Commit**
Adds some clarity around indexing into Strings and the constraints
driving various decisions there.
**Why?**
The [`String` documentation][0] mentions how `String`s can't be indexed
but `Range` has an implementation for `SliceIndex<str>`. This can be
confusing. There are also several statements to explain the lack of
`String` indexing:
- the inability to index into a `String` is an implication of UTF-8
encoding
- indexing into a `String` could not be constant-time with UTF-8
encoding
- indexing into a `String` does not have an obvious return type
This last statement made sense, but the first two seemed to contradict
the documentation for [`SliceIndex<str>`][1], which mentions that:
- one can index into a `String` with a `Range` (also called substring
slicing; it uses the same syntax, and the underlying method is `index`)
- `Range` indexing into a `String` is constant-time
To resolve this seeming contradiction, the documentation is reworked to
more clearly explain what factors drive the decision to disallow
indexing into a `String` with a single number.
[0]: https://doc.rust-lang.org/stable/std/string/struct.String.html#utf-8
[1]: https://doc.rust-lang.org/stable/std/slice/trait.SliceIndex.html#impl-SliceIndex%3Cstr%3E
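A small illustration of the distinction the reworked docs draw:
```rust
fn main() {
    let s = String::from("héllo");
    // Indexing with a single number does not compile: `s[1]` has no
    // obvious return type and could not be O(1) in general.
    // Range indexing (substring slicing) is allowed and constant-time,
    // but it must land on char boundaries:
    assert_eq!(&s[0..1], "h");
    assert_eq!(&s[1..3], "é"); // 'é' occupies bytes 1..3
    // &s[1..2] would panic at runtime: byte 2 is inside 'é'.
}
```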
Relevant commit messages from squashed history in order:
Add initial version of ThinBox
update test to actually capture failure
swap to middle ptr impl based on matthieu-m's design
Fix stack overflow in debug impl
The previous version would take a `&ThinBox<T>` and deref it once, which
was a no-op yielding the same type; printing that value then caused
endless recursion. I've switched to calling `deref` by name to let
method resolution apply deref the correct number of times.
I've also updated the Drop impl for good measure since it seemed like it
could be falling prey to the same bug, and I'll be adding some tests to
verify that the drop is happening correctly.
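A hypothetical stand-in type showing the bug pattern and the fix (illustrative only, not the actual `ThinBox` source):
```rust
use std::fmt;
use std::ops::Deref;

struct Wrapper<T>(Box<T>); // stand-in for ThinBox<T>

impl<T> Deref for Wrapper<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.0
    }
}

impl<T: fmt::Debug> fmt::Debug for Wrapper<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Buggy version: `fmt::Debug::fmt(&*self, f)` reborrows
        // `&Wrapper<T>` (a no-op deref) and recurses into this impl.
        // Fix: call `deref` by name so we really get a `&T`.
        fmt::Debug::fmt(self.deref(), f)
    }
}

fn main() {
    println!("{:?}", Wrapper(Box::new(5)));
}
```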
add test to verify drop is behaving
add doc examples and remove unnecessary Pointee bounds
ThinBox: use NonNull
ThinBox: tests for size
Apply suggestions from code review
Co-authored-by: Alphyr <47725341+a1phyr@users.noreply.github.com>
use handle_alloc_error and fix drop signature
update niche and size tests
add cfg for allocating APIs
check null before calculating offset
add test for zst and trial usage
prevent optimizer induced ub in drop and cleanup metadata gathering
account for arbitrary size and alignment metadata
Thank you nika and thomcc!
Update library/alloc/src/boxed/thin.rs
Co-authored-by: Josh Triplett <josh@joshtriplett.org>
Update library/alloc/src/boxed/thin.rs
Co-authored-by: Josh Triplett <josh@joshtriplett.org>
Bump bootstrap compiler to 1.61.0 beta
This PR bumps the bootstrap compiler to the 1.61.0 beta. The first commit changes the stage0 compiler, the second commit applies the "mechanical" changes and the third and fourth commits apply changes explained in the relevant comments.
r? `@Mark-Simulacrum`
Add debug assertions to some unsafe functions
As suggested by https://github.com/rust-lang/rust/issues/51713
~~Some similar code calls `abort()` instead of `panic!()` but aborting doesn't work in a `const fn`, and the intrinsic for doing dispatch based on whether execution is in a const is unstable.~~
This picked up some invalid uses of `get_unchecked` in the compiler, and fixes them.
I can confirm that they do in fact pick up invalid uses of `get_unchecked` in the wild, though the user experience is less-than-awesome:
```
Running unittests (target/x86_64-unknown-linux-gnu/debug/deps/rle_decode_fast-04b7918da2001b50)
running 6 tests
error: test failed, to rerun pass '--lib'
Caused by:
process didn't exit successfully: `/home/ben/rle-decode-helper/target/x86_64-unknown-linux-gnu/debug/deps/rle_decode_fast-04b7918da2001b50` (signal: 4, SIGILL: illegal instruction)
```
~~As best I can tell these changes produce a 6% regression in the runtime of `./x.py test` when `[rust] debug = true` is set.~~
Latest commit (6894d559bd) brings the additional overhead from this PR down to 0.5%, while also adding a few more assertions. I think this now covers all the places in `core` where it is reasonable to check safety requirements at runtime.
Thoughts?
Fix double drop of allocator in IntoIter impl of Vec
Fixes #95269
The `drop` impl of `IntoIter` reconstructs a `RawVec` from `buf`, `cap`, and `alloc`; when that `RawVec` is dropped, it also drops the allocator. To avoid dropping the allocator twice, we wrap it in `ManuallyDrop` in the `IntoIter` struct.
Note this is my first contribution to the standard library, so I might be missing some details or a better way to solve this.
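A sketch of the fix pattern (names illustrative, not the real libcore fields):
```rust
use std::mem::ManuallyDrop;

struct RawParts<A> {
    alloc: ManuallyDrop<A>,
}

impl<A> Drop for RawParts<A> {
    fn drop(&mut self) {
        // SAFETY: `self.alloc` is never used again after this take, so the
        // allocator is dropped exactly once.
        let _alloc = unsafe { ManuallyDrop::take(&mut self.alloc) };
        // ...the real code rebuilds the RawVec with this allocator, whose
        // drop then frees the buffer.
    }
}

fn main() {
    let _parts = RawParts { alloc: ManuallyDrop::new(String::from("alloc")) };
}
```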
allow arbitrary inherent impls for builtin types in core
Part of https://github.com/rust-lang/compiler-team/issues/487. Slightly adjusted after some talks with `@m-ou-se` about the requirements of `t-libs-api`.
This adds a crate attribute `#![rustc_coherence_is_core]` which allows arbitrary impls for builtin types in core.
For other library crates impls for builtin types should be avoided if possible. We do have to allow the existing stable impls however. To prevent us from accidentally adding more of these in the future, there is a second attribute `#[rustc_allow_incoherent_impl]` which has to be added to **all impl items**. This only supports impls for builtin types but can easily be extended to additional types in a future PR.
This implementation does not check for overlaps in these impls. Perfectly checking that requires us to check the coherence of these incoherent impls in every crate, as two distinct dependencies may add overlapping methods. It should be easy enough to detect if it goes wrong and the attribute is only intended for use inside of std.
The first two commits are mostly unrelated cleanups.
These debug assertions are all implemented only at runtime using
`const_eval_select`, and in the error path they execute
`intrinsics::abort` instead of a normal debug assertion, to minimize
the impact of these assertions on code size when they are enabled.
Of all these changes, the bounds checks for unchecked indexing are
expected to be most impactful (case in point, they found a problem in
rustc).
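A stable-Rust sketch of the pattern; the real code uses the unstable `const_eval_select` intrinsic and `intrinsics::abort`, for which `cfg(debug_assertions)` and `process::abort` stand in here:
```rust
// Hedged sketch of a runtime-only safety check; names are illustrative.
unsafe fn my_get_unchecked(slice: &[u8], index: usize) -> u8 {
    // Compiled in only when debug assertions are on; aborts instead of
    // panicking to keep the code-size impact small.
    #[cfg(debug_assertions)]
    if index >= slice.len() {
        std::process::abort();
    }
    *slice.as_ptr().add(index)
}

fn main() {
    let data = [1u8, 2, 3];
    // SAFETY: 1 is in bounds.
    assert_eq!(unsafe { my_get_unchecked(&data, 1) }, 2);
}
```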
Refactor set_ptr_value as with_metadata_of
Replaces `set_ptr_value` (#75091) with methods of reversed argument order:
```rust
impl<T: ?Sized> *mut T {
    pub fn with_metadata_of<U: ?Sized>(self, val: *mut U) -> *mut U;
}

impl<T: ?Sized> *const T {
    pub fn with_metadata_of<U: ?Sized>(self, val: *const U) -> *const U;
}
```
By reversing the arguments we achieve several clarifications:
- The function closely resembles `cast` with an argument to
initialize the metadata. This is easier to teach and answers a long
outstanding question that had restricted cast to `Sized` pointee
targets. See multiple reviews of
<https://github.com/rust-lang/rust/pull/47631>
- The 'object identity', in the form of provenance, is now preserved
from the receiver argument to the result. This helps explain the method as
a builder-style, instead of some kind of setter that would modify
something in-place. Ensuring that the result has the identity of the
`self` argument is also beneficial for an intuition of effects.
- An outstanding concern, 'Correct argument type', is avoided by not
committing to any specific argument type. This is consistent with cast
which does not require its receiver to be a 'raw address'.
Hopefully the usage examples in `sync/rc.rs` serve as sufficient examples of the style to convince the reader of the readability improvements of this style, when compared to the previous order of arguments.
I want to take the opportunity to motivate including this method _separately_ from the metadata API, i.e. separately from `feature(ptr_metadata)`. It does _not_ involve the `Pointee` trait in any form. This may be regarded as a very, very light form that does not commit to any details of the pointee trait or its associated metadata. There are several use cases for which this is already sufficient and no further inspection of metadata is necessary.
- Storing the coercion of `*mut T` into `*mut dyn Trait` as a way to dynamically cast an arbitrary instance of the same type to a dyn trait instance. In particular, one can have a field of type `Option<*mut dyn io::Seek>` to memorize whether a particular writer is seekable. Then a method `fn(self: &T) -> Option<&dyn Seek>` can be provided, which does _not_ involve the static trait bound `T: Seek`. This makes it possible to create an API that is capable of utilizing seekable streams and non-seekable streams (instead of a possibly less efficient approach such as extra buffering) through the same entry-point.
- Enabling more generic forms of unsizing for no-`std` smart pointers. Using the stable APIs, only a few concrete cases are available. One can unsize arrays to `[T]` with `ptr::slice_from_raw_parts`, but unsizing a custom smart pointer to, e.g., `dyn Iterator`, `dyn Future`, or `dyn Debug` can't easily be done generically. Exposing `with_metadata_of` would allow smart pointers to offer their own `unsafe` escape hatch with similar parameters, where the caller provides the unsized metadata. This is particularly interesting for embedded, where `dyn`-trait usage can drastically reduce code size.
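A minimal nightly example of the method in action (assuming the feature gate is still named `set_ptr_value`, as it was when this method replaced `set_ptr_value`):
```rust
#![feature(set_ptr_value)]
use std::fmt::Debug;

fn main() {
    let mut x = 5i32;
    let thin: *mut i32 = &mut x;
    // An unsize coercion produces a pointer carrying the vtable we want...
    let template: *mut dyn Debug = thin;
    // ...and `with_metadata_of` pairs `thin`'s address and provenance with
    // `template`'s metadata.
    let fat: *mut dyn Debug = thin.with_metadata_of(template);
    let r: &dyn Debug = unsafe { &*fat };
    assert_eq!(format!("{r:?}"), "5");
}
```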
Allow comparing `Vec`s with different allocators using `==`
See https://stackoverflow.com/q/71021633/7884305.
I did not change the `PartialOrd` impl because it was not generic to begin with (it didn't support `Vec<T> <=> Vec<U> where T: PartialOrd<U>`).
Does it need tests?
I don't think this will hurt type inference much because the default allocator is usually not inferred (`new()` specifies it directly, and even with other allocators, you pass the allocator to `new_in()` so the compiler usually knows the type).
I think this requires FCP since the impls are already stable.
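A nightly sketch of what the widened impl permits (requires `feature(allocator_api)`):
```rust
#![feature(allocator_api)]
use std::alloc::System;

fn main() {
    let a = vec![1, 2, 3]; // Vec<i32, Global>
    let mut b = Vec::new_in(System); // Vec<i32, System>
    b.extend([1, 2, 3]);
    // With this change, `==` compares element-wise across allocator types.
    assert_eq!(a, b);
}
```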
Fix typo in `String::try_reserve_exact` docs
Copying the pattern from `Vec::try_reserve_exact` and `String::try_reserve`,
it looks like this doc comment is intending to refer to the currently-being-documented
function.
Hermit now properly separates kernel from userspace.
Applications for Hermit can now use Rust's default `alloc_error_handler` instead of calling the kernel's `__rg_oom`.
add module-level documentation for vec's in-place iteration
As requested in the last libs team meeting and during previous reviews.
Feel free to point out any gaps you encounter; after all, things that are non-obvious to others may, with hindsight, seem obvious to me.
r? `@yaahc`
CC `@steffahn`
By reversing the arguments we achieve several clarifications:
- The function closely resembles `cast` but with an argument to
initialize the metadata. This is easier to teach and answers a long
outstanding question that had initially restricted cast to `Sized`
targets. See multiple reviews of
<https://github.com/rust-lang/rust/pull/47631>
- The 'object identity', in the form of provenance, is now preserved
from the call receiver to the result. This helps explain the method as
a builder-style, instead of some kind of setter that would modify
something in-place. Ensuring that the result has the identity of the
`self` argument is also beneficial for an intuition of effects.
- An outstanding concern, 'Correct argument type', is avoided by not
committing to any specific argument type. This is consistent with cast
which does not require its receiver to be a raw address.
BTreeMap::entry: Avoid allocating if no insertion
This PR allows the `VacantEntry` to borrow from an empty tree with no root, and to lazily allocate a new root node when the user calls `.insert(value)`.
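Usage is unchanged; only the timing of the allocation moves:
```rust
use std::collections::BTreeMap;

fn main() {
    let mut map: BTreeMap<&str, i32> = BTreeMap::new();
    // With this PR, obtaining a vacant entry on an empty tree allocates
    // nothing; the root node is created only by the insertion below.
    *map.entry("counter").or_insert(0) += 1;
    assert_eq!(map["counter"], 1);
}
```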
Made the fields of VecDeque's `Iter` private by creating an `Iter::new(...)` function to construct new instances, and migrated all usage to `Iter::new(...)`.
Improve doc wording for retain on some collections
I found the documentation wording on the various retain methods on many collections to be unusual.
I tried to invert the relation by replacing `such that` with `for which`.
Use modern formatting for format! macros
This updates the standard library's documentation to use the new format_args syntax.
The documentation is worthwhile to update as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it'll
hurt git blame, and generally updates can be done by folks updating the code if
(and when) that makes things more readable with the new format.
A few places in the compiler and library code are updated (mostly just due to
already having been done when this commit was first authored).
`eprintln!("{}", e)` becomes `eprintln!("{e}")`, but `eprintln!("{}", e.kind())` remains untouched.
This updates the standard library's documentation to use the new syntax. The
documentation is worthwhile to update as it should be more idiomatic
(particularly for features like this, which are nice for users to get acquainted
with). The general codebase is likely more hassle than benefit to update: it'll
hurt git blame, and generally updates can be done by folks updating the code if
(and when) that makes things more readable with the new format.
A few places in the compiler and library code are updated (mostly just due to
already having been done when this commit was first authored).
fix typo in btree/vec doc: Self -> self
This PR fixes #92345.
The documentation refers to the object the method is called on, not the type, so it should use the lowercase `self`.
Fix a possible layout miscalculation in `alloc::RawVec`
A layout miscalculation could happen in `RawVec` when used with a type whose size isn't a multiple of its alignment. I don't know whether such a type can exist in Rust, but the `Layout` API provides ways to manipulate such types. In any case, it is better to calculate memory size in a consistent way.
Add documentation to more `From::from` implementations.
For users looking at documentation through IDE popups, this gives them relevant information rather than the generic trait documentation wording “Performs the conversion”. For users reading the documentation for a specific type for any reason, this informs them when the conversion may allocate or copy significant memory versus when it is always a move or cheap copy.
Notes on specific cases:
* The new documentation for `From<T> for T` explains that it is not a conversion at all.
* Also documented `impl<T, U> Into<U> for T where U: From<T>`, the other central blanket implementation of conversion.
* The new documentation for construction of maps and sets from arrays of keys mentions the handling of duplicates. Future work could be to do this for *all* code paths that convert an iterable to a map or set.
* I did not add documentation to conversions of a specific error type to a more general error type.
* I did not add documentation to unstable code.
This change was prepared by searching for the text "From<... for" and so may have missed some cases that for whatever reason did not match. I also looked for `Into` impls but did not find any worth documenting by the above criteria.
Optimize `core::str::Chars::count`
I wrote this a while ago after seeing this function as a bottleneck in a profile, but never got around to contributing it. I saw it again, and so here it is. The implementation is fairly complex, but I tried to explain what's happening at both a high level (in the header comment for the file), and in line comments in the impl. Hopefully it's clear enough.
This implementation (`case00_cur_libcore` in the benchmarks below) is somewhat consistently around 4x to 5x faster than the old implementation (`case01_old_libcore` in the benchmarks below), for a wide variety of workloads, without regressing performance on any of the workload sizes I've tried.
I also improved the benchmarks for this code, so that they explicitly check text in different languages and of different sizes (err, the cross product of language x size). The results of the benchmarks are here:
<details>
<summary>Benchmark results</summary>
<pre>
test str::char_count::emoji_huge::case00_cur_libcore ... bench: 20,216 ns/iter (+/- 3,673) = 17931 MB/s
test str::char_count::emoji_huge::case01_old_libcore ... bench: 108,851 ns/iter (+/- 12,777) = 3330 MB/s
test str::char_count::emoji_huge::case02_iter_increment ... bench: 329,502 ns/iter (+/- 4,163) = 1100 MB/s
test str::char_count::emoji_huge::case03_manual_char_len ... bench: 223,333 ns/iter (+/- 14,167) = 1623 MB/s
test str::char_count::emoji_large::case00_cur_libcore ... bench: 293 ns/iter (+/- 6) = 19331 MB/s
test str::char_count::emoji_large::case01_old_libcore ... bench: 1,681 ns/iter (+/- 28) = 3369 MB/s
test str::char_count::emoji_large::case02_iter_increment ... bench: 5,166 ns/iter (+/- 85) = 1096 MB/s
test str::char_count::emoji_large::case03_manual_char_len ... bench: 3,476 ns/iter (+/- 62) = 1629 MB/s
test str::char_count::emoji_medium::case00_cur_libcore ... bench: 48 ns/iter (+/- 0) = 14750 MB/s
test str::char_count::emoji_medium::case01_old_libcore ... bench: 217 ns/iter (+/- 4) = 3262 MB/s
test str::char_count::emoji_medium::case02_iter_increment ... bench: 642 ns/iter (+/- 7) = 1102 MB/s
test str::char_count::emoji_medium::case03_manual_char_len ... bench: 445 ns/iter (+/- 3) = 1591 MB/s
test str::char_count::emoji_small::case00_cur_libcore ... bench: 18 ns/iter (+/- 0) = 3777 MB/s
test str::char_count::emoji_small::case01_old_libcore ... bench: 23 ns/iter (+/- 0) = 2956 MB/s
test str::char_count::emoji_small::case02_iter_increment ... bench: 66 ns/iter (+/- 2) = 1030 MB/s
test str::char_count::emoji_small::case03_manual_char_len ... bench: 29 ns/iter (+/- 1) = 2344 MB/s
test str::char_count::en_huge::case00_cur_libcore ... bench: 25,909 ns/iter (+/- 39,260) = 13299 MB/s
test str::char_count::en_huge::case01_old_libcore ... bench: 102,887 ns/iter (+/- 3,257) = 3349 MB/s
test str::char_count::en_huge::case02_iter_increment ... bench: 166,370 ns/iter (+/- 12,439) = 2071 MB/s
test str::char_count::en_huge::case03_manual_char_len ... bench: 166,332 ns/iter (+/- 4,262) = 2071 MB/s
test str::char_count::en_large::case00_cur_libcore ... bench: 281 ns/iter (+/- 6) = 19160 MB/s
test str::char_count::en_large::case01_old_libcore ... bench: 1,598 ns/iter (+/- 19) = 3369 MB/s
test str::char_count::en_large::case02_iter_increment ... bench: 2,598 ns/iter (+/- 167) = 2072 MB/s
test str::char_count::en_large::case03_manual_char_len ... bench: 2,578 ns/iter (+/- 55) = 2088 MB/s
test str::char_count::en_medium::case00_cur_libcore ... bench: 44 ns/iter (+/- 1) = 15295 MB/s
test str::char_count::en_medium::case01_old_libcore ... bench: 201 ns/iter (+/- 51) = 3348 MB/s
test str::char_count::en_medium::case02_iter_increment ... bench: 322 ns/iter (+/- 40) = 2090 MB/s
test str::char_count::en_medium::case03_manual_char_len ... bench: 319 ns/iter (+/- 5) = 2109 MB/s
test str::char_count::en_small::case00_cur_libcore ... bench: 15 ns/iter (+/- 0) = 2333 MB/s
test str::char_count::en_small::case01_old_libcore ... bench: 14 ns/iter (+/- 0) = 2500 MB/s
test str::char_count::en_small::case02_iter_increment ... bench: 30 ns/iter (+/- 1) = 1166 MB/s
test str::char_count::en_small::case03_manual_char_len ... bench: 30 ns/iter (+/- 1) = 1166 MB/s
test str::char_count::ru_huge::case00_cur_libcore ... bench: 16,439 ns/iter (+/- 3,105) = 19777 MB/s
test str::char_count::ru_huge::case01_old_libcore ... bench: 89,480 ns/iter (+/- 2,555) = 3633 MB/s
test str::char_count::ru_huge::case02_iter_increment ... bench: 217,703 ns/iter (+/- 22,185) = 1493 MB/s
test str::char_count::ru_huge::case03_manual_char_len ... bench: 157,330 ns/iter (+/- 19,188) = 2066 MB/s
test str::char_count::ru_large::case00_cur_libcore ... bench: 243 ns/iter (+/- 6) = 20905 MB/s
test str::char_count::ru_large::case01_old_libcore ... bench: 1,384 ns/iter (+/- 51) = 3670 MB/s
test str::char_count::ru_large::case02_iter_increment ... bench: 3,381 ns/iter (+/- 543) = 1502 MB/s
test str::char_count::ru_large::case03_manual_char_len ... bench: 2,423 ns/iter (+/- 429) = 2096 MB/s
test str::char_count::ru_medium::case00_cur_libcore ... bench: 42 ns/iter (+/- 1) = 15119 MB/s
test str::char_count::ru_medium::case01_old_libcore ... bench: 180 ns/iter (+/- 4) = 3527 MB/s
test str::char_count::ru_medium::case02_iter_increment ... bench: 402 ns/iter (+/- 45) = 1579 MB/s
test str::char_count::ru_medium::case03_manual_char_len ... bench: 280 ns/iter (+/- 29) = 2267 MB/s
test str::char_count::ru_small::case00_cur_libcore ... bench: 12 ns/iter (+/- 0) = 2666 MB/s
test str::char_count::ru_small::case01_old_libcore ... bench: 12 ns/iter (+/- 0) = 2666 MB/s
test str::char_count::ru_small::case02_iter_increment ... bench: 19 ns/iter (+/- 0) = 1684 MB/s
test str::char_count::ru_small::case03_manual_char_len ... bench: 14 ns/iter (+/- 1) = 2285 MB/s
test str::char_count::zh_huge::case00_cur_libcore ... bench: 15,053 ns/iter (+/- 2,640) = 20067 MB/s
test str::char_count::zh_huge::case01_old_libcore ... bench: 82,622 ns/iter (+/- 3,602) = 3656 MB/s
test str::char_count::zh_huge::case02_iter_increment ... bench: 230,456 ns/iter (+/- 7,246) = 1310 MB/s
test str::char_count::zh_huge::case03_manual_char_len ... bench: 220,595 ns/iter (+/- 11,624) = 1369 MB/s
test str::char_count::zh_large::case00_cur_libcore ... bench: 227 ns/iter (+/- 65) = 20792 MB/s
test str::char_count::zh_large::case01_old_libcore ... bench: 1,136 ns/iter (+/- 144) = 4154 MB/s
test str::char_count::zh_large::case02_iter_increment ... bench: 3,147 ns/iter (+/- 253) = 1499 MB/s
test str::char_count::zh_large::case03_manual_char_len ... bench: 2,993 ns/iter (+/- 400) = 1577 MB/s
test str::char_count::zh_medium::case00_cur_libcore ... bench: 36 ns/iter (+/- 5) = 16388 MB/s
test str::char_count::zh_medium::case01_old_libcore ... bench: 142 ns/iter (+/- 18) = 4154 MB/s
test str::char_count::zh_medium::case02_iter_increment ... bench: 379 ns/iter (+/- 37) = 1556 MB/s
test str::char_count::zh_medium::case03_manual_char_len ... bench: 364 ns/iter (+/- 51) = 1620 MB/s
test str::char_count::zh_small::case00_cur_libcore ... bench: 11 ns/iter (+/- 1) = 3000 MB/s
test str::char_count::zh_small::case01_old_libcore ... bench: 11 ns/iter (+/- 1) = 3000 MB/s
test str::char_count::zh_small::case02_iter_increment ... bench: 20 ns/iter (+/- 3) = 1650 MB/s
</pre>
</details>
I also added fairly thorough tests for different sizes and alignments. They complete on my machine in 0.02s, which is surprising given how thorough they are, but they do seem to detect bugs in the implementation. (I haven't run the tests on a 32-bit machine since I reworked the code a little, though, so... hopefully I'm not about to embarrass myself.)
This uses similar SWAR-style techniques to the `is_ascii` impl I contributed in https://github.com/rust-lang/rust/pull/74066, so I'm going to request review from the same person who reviewed that one. That said, I am not particularly picky, and might not have the correct syntax for requesting a review from someone (so it goes).
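For readers curious about the flavor of the trick, here is a heavily simplified SWAR-style counter (an illustration of the idea only, far simpler than the actual implementation):
```rust
use std::convert::TryInto;

// A char starts at every non-continuation byte, i.e. every byte not of
// the form 0b10xxxxxx, so counting chars means counting such bytes,
// eight at a time inside a u64.
fn count_chars_swar(s: &str) -> usize {
    let bytes = s.as_bytes();
    let mut count = 0;
    let mut chunks = bytes.chunks_exact(8);
    for chunk in chunks.by_ref() {
        let word = u64::from_le_bytes(chunk.try_into().unwrap());
        // Per byte: bit7 AND NOT bit6, collected into each lane's low bit,
        // marks the continuation bytes.
        let cont = (word >> 7) & !(word >> 6) & 0x0101_0101_0101_0101;
        count += 8 - cont.count_ones() as usize;
    }
    count + chunks.remainder().iter().filter(|&&b| (b & 0xC0) != 0x80).count()
}

fn main() {
    assert_eq!(count_chars_swar("héllo, wörld"), "héllo, wörld".chars().count());
}
```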
r? `@nagisa`
Replace iterator-based construction of collections by `Into<T>`
Just a few quality of life improvements in the doc examples. I also removed some `Vec`s in favor of arrays.
Stabilize arc_new_cyclic
This stabilizes the `arc_new_cyclic` feature, as the implementation has been merged for one year and there are no unresolved questions. The FCP has not been started yet.
Closes #75861.
``@rustbot`` label +T-libs-api
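The stabilized API in brief:
```rust
use std::sync::{Arc, Weak};

// A self-referential node, built without a placeholder value: the closure
// receives the `Weak` before the `Arc` is fully constructed.
struct Node {
    me: Weak<Node>,
}

fn main() {
    let node = Arc::new_cyclic(|me| Node { me: me.clone() });
    assert!(node.me.upgrade().is_some());
}
```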
Improve `Arc` and `Rc` documentation
This makes two changes (I can split the PR if necessary, but the changes are pretty small):
1. A bunch of trait implementations claimed to be zero cost; however, they use the `Arc<T>: From<Box<T>>` impl which is definitely not free, especially for large dynamically sized `T`.
2. The code in deferred initialization examples unnecessarily used excessive amounts of `unsafe`. This has been reduced.
doc: guarantee call order for sort_by_cached_key
`slice::sort_by_cached_key` takes a caching function `f: impl FnMut(&T) -> K`, which means that the order that calls to the caching function are made is user-visible. This adds a clause to the documentation to promise the current behavior, which is that `f` is called on all elements of the slice from left to right, unless the slice has len < 2 in which case `f` is not called.
For example, this can be used to ensure that the following code is a correct way to involve the index of the element in the sort key:
```rust
let mut index = 0;
// The tuple evaluates `my_key` first, then increments `index`; `.0` yields the key.
slice.sort_by_cached_key(|x| (my_key(index, x), index += 1).0);
```
Clarify explicitly that BTree{Map,Set} are ordered.
One of the main reasons one would want to use a BTree{Map,Set} rather than a Hash{Map,Set} is because they maintain their keys in sorted order; but this was never explicitly stated in the top-level docs (it was only indirectly alluded to there, and stated explicitly in the docs for `iter`, `values`, etc.)
This PR states the ordering guarantee more prominently.
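The guarantee being stated prominently:
```rust
use std::collections::BTreeMap;

fn main() {
    // Iteration is in sorted key order regardless of insertion order.
    let map = BTreeMap::from([(3, "c"), (1, "a"), (2, "b")]);
    let keys: Vec<_> = map.keys().copied().collect();
    assert_eq!(keys, [1, 2, 3]);
}
```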
Add diagnostic items for macros
For use in Clippy, this adds diagnostic items to all the stable public macros.
Clippy has lints that look for almost all of these (currently by name or path), but there are a few that aren't currently part of any lint. I could remove those if it's preferred to add them as needed rather than ahead of time.
Yield means something else in the context of generators, which are
sufficiently close to iterators that it's better to avoid the
terminology collision here.
Implement `panic::update_hook`
Add a new function `panic::update_hook` to allow creating panic hooks that forward the call to the previously set panic hook, without race conditions. It works by taking a closure that transforms the old panic hook into a new one, while ensuring that no other thread can modify the panic hook during the execution of the closure. This is a small function, so I hope it can be discussed here without a formal RFC; however, if you prefer, I can write one.
Consider the following example:
```rust
let prev = panic::take_hook();
panic::set_hook(Box::new(move |info| {
    println!("panic handler A");
    prev(info);
}));
```
This is a common pattern in libraries that need to do something in case of panic: log panic to a file, record code coverage, send panic message to a monitoring service, print custom message with link to github to open a new issue, etc. However it is impossible to avoid race conditions with the current API, because two threads can execute in this order:
* Thread A calls `panic::take_hook()`
* Thread B calls `panic::take_hook()`
* Thread A calls `panic::set_hook()`
* Thread B calls `panic::set_hook()`
And the result is that the original panic hook has been lost, as well as the panic hook set by thread A. The resulting panic hook will be the one set by thread B, which forwards to the default panic hook. This is not considered a big issue because the panic handler setup is usually run during initialization code, probably before spawning any other threads.
Using the new `panic::update_hook` function, this race condition is impossible, and the result will be either `A, B, original` or `B, A, original`.
```rust
panic::update_hook(|prev| {
    Box::new(move |info| {
        println!("panic handler A");
        prev(info);
    })
});
```
I found one real-world use case here: 988cf403e7/src/detection.rs (L32); the workaround there is to detect the race condition and panic in that case.
The pattern of `take_hook` + `set_hook` is very common, you can see some examples in this pull request, so I think it's natural to have a function that combines them both. Also using `update_hook` instead of `take_hook` + `set_hook` reduces the number of calls to `HOOK_LOCK.write()` from 2 to 1, but I don't expect this to make any difference in performance.
### Unresolved questions:
* `panic::update_hook` takes a closure, if that closure panics the error message is "panicked while processing panic" which is not nice. This is a consequence of holding the `HOOK_LOCK` while executing the closure. Could be avoided using `catch_unwind`?
* Reimplement `panic::set_hook` as `panic::update_hook(|_prev| hook)`?
Partially stabilize `maybe_uninit_extra`
This covers:
```rust
impl<T> MaybeUninit<T> {
    pub unsafe fn assume_init_read(&self) -> T { ... }
    pub unsafe fn assume_init_drop(&mut self) { ... }
}
```
It does not cover the const-ness of `write` under `const_maybe_uninit_write` nor the const-ness of `assume_init_read` (this commit adds `const_maybe_uninit_assume_init_read` for that).
FCP: https://github.com/rust-lang/rust/issues/63567#issuecomment-958590287.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
This covers:
impl<T> MaybeUninit<T> {
    pub unsafe fn assume_init_read(&self) -> T { ... }
    pub unsafe fn assume_init_drop(&mut self) { ... }
}
It does not cover the const-ness of `write` under
`const_maybe_uninit_write` nor the const-ness of
`assume_init_read` (this commit adds
`const_maybe_uninit_assume_init_read` for that).
FCP: https://github.com/rust-lang/rust/issues/63567#issuecomment-958590287.
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
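A brief example of the two stabilized methods:
```rust
use std::mem::MaybeUninit;

fn main() {
    let mut slot = MaybeUninit::<String>::uninit();
    slot.write(String::from("hi"));
    // SAFETY: `slot` was initialized above. `assume_init_read` is a
    // bitwise read; the value must not be duplicated afterwards.
    let s = unsafe { slot.assume_init_read() };
    assert_eq!(s, "hi");

    let mut slot2 = MaybeUninit::new(vec![1, 2, 3]);
    // SAFETY: `slot2` is initialized and is not used after the drop.
    unsafe { slot2.assume_init_drop() };
}
```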
Replace usages of vec![].into_iter with [].into_iter
`[].into_iter` is idiomatic over `vec![].into_iter` because it's simpler and faster (unless the vec is optimized away, in which case they're the same).
So we should change all the implementation, documentation, and tests to use it.
I skipped:
* `src/tools` - Those are copied in from upstream
* `src/test/ui` - Hard to tell if `vec![].into_iter` was used intentionally or not here and not much benefit to changing it.
* any case where `vec![].into_iter` was used because we specifically needed a `vec::IntoIter<T>`
* any case where it looked like we were intentionally using `vec![].into_iter` to test it.
Make DefId to AccessLevel map in resolve for export
hir_id to accesslevel in resolve and applied in privacy
using local def id
removing tracing probes
making function not recursive and adding comments
Move most of Exported/Public res to rustc_resolve
moving public/export res to resolve
fix missing stability attributes in core, std and alloc
move code to access_levels.rs
return for some kinds instead of going through them
Export correctness, macro changes, comments
add comment for import binding
add comment for import binding
rename to access level visitor, remove comments, move fn as closure, remove new_key
fmt
fix rebase
fix rebase
fmt
fmt
fix: move macro def to rustc_resolve
fix: reachable AccessLevel for enum variants
fmt
fix: missing stability attributes for other architectures
allow unreachable pub in rustfmt
fix: missing impl access level + renaming export to reexport
Missing impl access level was found thanks to a test in clippy
Fix a minor mistake in `String::try_reserve_exact` examples
The examples of `String::try_reserve_exact` didn't actually use `try_reserve_exact`, which was probably a minor mistake, and this PR fixed it.
Drop guards in slice sorting derive src pointers from &mut T, which is invalidated by interior mutation in comparison
I tried to run https://github.com/rust-lang/miri-test-libstd on `alloc` with `-Zmiri-track-raw-pointers`, and got a failure on the test `slice::panic_safe`. The test failure has nothing to do with panic safety, it's from how the test tests for panic safety.
I minimized the test failure into this very silly program:
```rust
use std::cell::Cell;
use std::cmp::Ordering;

#[derive(Clone)]
struct Evil(Cell<usize>);

fn main() {
    let mut input = vec![Evil(Cell::new(0)); 3];
    // Hits the bug pattern via CopyOnDrop in core
    input.sort_unstable_by(|a, _b| {
        a.0.set(0);
        Ordering::Less
    });
    // Hits the bug pattern via InsertionHole in alloc
    input.sort_by(|_a, b| {
        b.0.set(0);
        Ordering::Less
    });
}
```
To fix this, I'm just removing the mutability/uniqueness where it wasn't required.
Implement split_at_spare_mut without Deref to a slice so that the spare slice is valid
~I'm not sure I understand what's going on here correctly. And I'm pretty sure this safety comment needs to be changed. I'm just referring to the same thing that `as_mut_ptr_range` does.~ (Thanks `@RalfJung` for the guidance and clearing things up)
I tried to run https://github.com/rust-lang/miri-test-libstd on alloc with -Zmiri-track-raw-pointers, and got a failure on the test `vec::test_extend_from_within`.
I minimized the test failure into this program:
```rust
#![feature(vec_split_at_spare)]

fn main() {
    Vec::<i32>::with_capacity(1).split_at_spare_mut();
}
```
The problem is that the existing implementation is actually getting a pointer range where both pointers are derived from the initialized region of the Vec's allocation, but we need the second one to be valid for the region between len and capacity. (thanks Ralf for clearing this up)
RawVec: don't recompute capacity after allocating.
Currently it sets the capacity to `ptr.len() / mem::size_of::<T>()`
after any buffer allocation/reallocation. This would be useful if
allocators ever returned a `NonNull<[u8]>` with a size larger than
requested. But this never happens, so it's not useful.
Removing this slightly reduces the size of generated LLVM IR, and
slightly speeds up the hot path of `RawVec` growth.
r? `@ghost`
Allow reverse iteration of lowercase'd/uppercase'd chars
The PR implements `DoubleEndedIterator` trait for `ToLowercase` and `ToUppercase`.
This enables reverse iteration of lowercase/uppercase variants of character sequences.
One of the use cases: determining whether a char sequence is a suffix of another one.
Example:
```rust
// `zip_longest` and `EitherOrBoth` come from the `itertools` crate.
use itertools::{EitherOrBoth, Itertools};

fn endswith_ignore_case(s1: &str, s2: &str) -> bool {
    for eob in s1
        .chars()
        .flat_map(|c| c.to_lowercase())
        .rev()
        .zip_longest(s2.chars().flat_map(|c| c.to_lowercase()).rev())
    {
        match eob {
            EitherOrBoth::Both(c1, c2) => {
                if c1 != c2 {
                    return false;
                }
            }
            EitherOrBoth::Left(_) => return true,
            EitherOrBoth::Right(_) => return false,
        }
    }
    true
}
```
The previous implementation used slice::as_mut_ptr_range to derive the
pointer for the spare capacity slice. This is invalid, because that
pointer is derived from the initialized region, so it does not have
provenance over the uninitialized region.
Update example code for Vec::splice to change the length
The current example for `Vec::splice` illustrates the replacement of a section of length 2 with a new section of length 2. This isn't a particularly interesting case for splice, and makes it look a bit like a shorthand for the kind of manipulations that could be done with a mutable slice.
In order to provide a stronger example, this updates the example to use different lengths for the source and destination regions, and uses a slice from the middle of the vector to illustrate that this does not necessarily have to be at the beginning or the end.
Resolves #92067
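In that spirit, a sketch of what such an example can show (values illustrative):
```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    // Replace two elements in the middle with three new ones, changing
    // the vector's length, which a mutable slice could not do.
    let removed: Vec<_> = v.splice(1..3, [7, 8, 9]).collect();
    assert_eq!(removed, [2, 3]);
    assert_eq!(v, [1, 7, 8, 9, 4, 5]);
}
```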
The src pointers in CopyOnDrop and InsertionHole used to be *mut T, and
were derived via automatic conversion from &mut T. According to Stacked
Borrows 2.1, this means that those pointers become invalidated by
interior mutation in the comparison function.
But there's no need for mutability in this code path. Thus, we can
change the drop guards to use *const and derive those from &T.
Make split_inclusive() on an empty slice yield an empty output
`[].split_inclusive()` currently yields a single, empty slice. That's
different from `"".split_inclusive()`, which yields no output at
all. I think that makes the slice version harder to use.
The case where I ran into this bug was when writing code for
generating a diff between two slices of bytes. I wanted to prefix
removed lines with "-" and added lines with "+". Due to
`split_inclusive()`'s current behavior, that means that my code prints
just a "-" or "+" for empty files. I suspect most existing callers
have similar "bugs" (which would be fixed by this patch).
Closes #89716.
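With this patch, the slice version matches the str version:
```rust
fn main() {
    // str behavior, unchanged: no output for an empty string.
    let s: Vec<&str> = "".split_inclusive('\n').collect();
    assert!(s.is_empty());
    // Slice behavior after this change matches; it used to yield one
    // empty slice.
    let b: Vec<&[u8]> = b"".split_inclusive(|&byte| byte == b'\n').collect();
    assert!(b.is_empty());
}
```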
add BinaryHeap::try_reserve and BinaryHeap::try_reserve_exact
`try_reserve` for many collections was stabilized in https://github.com/rust-lang/rust/pull/87993 in 1.57.0. Adding `try_reserve` to the remaining collections, such as `BinaryHeap`, should not be controversial.
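The added methods mirror the ones already stabilized on other collections:
```rust
use std::collections::BinaryHeap;

fn main() {
    let mut heap = BinaryHeap::<u32>::new();
    // Fallible reservations report errors instead of aborting the process.
    heap.try_reserve(100).expect("out of memory");
    heap.try_reserve_exact(100).expect("out of memory");
    heap.push(1);
}
```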
Use spare_capacity_mut instead of invalid unchecked indexing when joining str
This is a fix for https://github.com/rust-lang/rust/issues/91574
I think in general I'd prefer to see this code implemented with raw pointers or `MaybeUninit::write_slice`, but there's existing code in here based on copying from slice to slice, so converting everything from `&[T]` to `&[MaybeUninit<T>]` is less disruptive.
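For context, a small example of the `spare_capacity_mut` approach this fix moves to:
```rust
fn main() {
    let mut v: Vec<u8> = Vec::with_capacity(4);
    // `spare_capacity_mut` yields `&mut [MaybeUninit<u8>]` over the region
    // between len and capacity, so no invalid indexing is needed.
    let spare = v.spare_capacity_mut();
    spare[0].write(b'h');
    spare[1].write(b'i');
    // SAFETY: the first two bytes were just initialized.
    unsafe { v.set_len(2) };
    assert_eq!(v, b"hi");
}
```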
BTree: improve public descriptions and comments
BTreeSet has always used the term "value" alongside, and meaning the same thing as, "element" (in the mathematical sense, though the same word is also used for the value half of key-value pairs in BTreeMap), while in BTreeMap terms these "values" are known as "keys" and are definitely not "values". Today I had enough of that.
r? `@Mark-Simulacrum`
Btree: assert more API compatibility
Introducing a member such as `BTreeSet::min()` would silently break compatibility if no code calls the existing `BTreeSet::min(set)`. `BTreeSet` is the only btree class that silently brings in stable members, apart from the many occurrences of `#[derive(Debug)]` on iterators.
r? `@Mark-Simulacrum`
Rollup of 11 pull requests
Successful merges:
- #91668 (Remove the match on `ErrorKind::Other`)
- #91678 (Add tests fixed by #90023)
- #91679 (Move core/stream/stream/mod.rs to core/stream/stream.rs)
- #91681 (fix typo in `intrinsics::raw_eq` docs)
- #91686 (Fix `Vec::reserve_exact` documentation)
- #91697 (Delete Utf8Lossy::from_str)
- #91706 (Add unstable book entries for parts of asm that are not being stabilized)
- #91709 (Replace iterator-based set construction by *Set::From<[T; N]>)
- #91716 (Improve x.py logging and defaults a bit more)
- #91747 (Add pierwill to .mailmap)
- #91755 (Fix since attribute for const_linked_list_new feature)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Replace iterator-based set construction by *Set::From<[T; N]>
This uses the array-based construction for `BTreeSet`s and `HashSet`s instead of first creating an iterator. I could also replace the `let mut a = Set::new(); a.insert(...);` fragments if desired.
Fix `Vec::reserve_exact` documentation
The documentation previously said the new capacity cannot overflow `usize`, but in fact it cannot exceed `isize::MAX`.
Update documentation to use `from()` to initialize `HashMap`s and `BTreeMap`s
As of Rust 1.56, `HashMap` and `BTreeMap` both have associated `from()` functions. I think using these in the documentation cleans things up a bit. It allows us to remove some of the `mut`s and avoids the Initialize-Then-Modify anti-pattern.
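An example of the cleanup this enables:
```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // The style the documentation moves to: no `mut`, no insert calls.
    let hash = HashMap::from([(1, "one"), (2, "two")]);
    let btree = BTreeMap::from([(1, "one"), (2, "two")]);
    assert_eq!(hash[&1], "one");
    assert_eq!(btree[&2], "two");
}
```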
replace vec::Drain drop loops with drop_in_place
The `Drain::drop` implementation came up in https://github.com/rust-lang/rust/pull/82185#issuecomment-789584796 as potentially interfering with other optimization work due to its widespread use somewhere in `println!`.
`@rustbot` label T-libs-impl