fix minor spelling error in Poll::ready docs
Fixes minor spelling error in the proposed `Poll::ready` docs. Not that my opinion matters, but +1 on the original PR (#89651), it reads much nicer to me than the `ready!` macro.
Add #[must_use] to is_condition tests
I threw in `std::path::Path::has_root` for funsies.
A continuation of #89718.
Parent issue: #89692
r? ```@joshtriplett```
Add #[must_use] to non-mutating verb methods
These are methods that could be misconstrued to mutate their input, similar to #89694. I gave each one a different custom message.
I wrote that `upgrade` and `downgrade` don't modify the input pointers. Logically they don't, but technically they do...
Parent issue: #89692
r? ```@joshtriplett```
Add #[must_use] to From::from and Into::into
Risk of churn: **High**
Magic 8-Ball says: **Outlook not so good**
I figured I'd put this out there. If we don't do it now maybe we save it for a rainy day.
Parent issue: #89692
r? `@joshtriplett`
Speedup int log10 branchless
This is achieved with a branchless bit-twiddling implementation of the case `x < 100_000`, and using this as a building block.
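For intuition, here is a minimal, hedged sketch of the overall shape (comparison-based, not the bit-twiddling used in this PR): a branchless base case for values below 100_000, reused as a building block for larger inputs.
```rust
// Minimal sketch only; the PR's base case uses bit-twiddling, not comparisons.
fn log10_lt_100_000(x: u32) -> u32 {
    debug_assert!(x < 100_000);
    // Each comparison is 0 or 1, so the sum counts how many powers of ten
    // are <= x, which is floor(log10(x)) for x >= 1. No branches needed.
    (x >= 10) as u32 + (x >= 100) as u32 + (x >= 1_000) as u32 + (x >= 10_000) as u32
}

fn log10_u32(x: u32) -> u32 {
    if x >= 100_000 {
        // Reduce into the base-case range and add back the five digits divided away.
        5 + log10_lt_100_000(x / 100_000)
    } else {
        log10_lt_100_000(x)
    }
}

fn main() {
    assert_eq!(log10_u32(1), 0);
    assert_eq!(log10_u32(99_999), 4);
    assert_eq!(log10_u32(100_000), 5);
    assert_eq!(log10_u32(u32::MAX), 9);
}
```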
Benchmark on an Intel i7-8700K (Coffee Lake):
```
name old ns/iter new ns/iter diff ns/iter diff % speedup
num::int_log::u8_log10_predictable 165 169 4 2.42% x 0.98
num::int_log::u8_log10_random 438 423 -15 -3.42% x 1.04
num::int_log::u8_log10_random_small 438 423 -15 -3.42% x 1.04
num::int_log::u16_log10_predictable 633 417 -216 -34.12% x 1.52
num::int_log::u16_log10_random 908 471 -437 -48.13% x 1.93
num::int_log::u16_log10_random_small 945 471 -474 -50.16% x 2.01
num::int_log::u32_log10_predictable 1,496 1,340 -156 -10.43% x 1.12
num::int_log::u32_log10_random 1,076 873 -203 -18.87% x 1.23
num::int_log::u32_log10_random_small 1,145 874 -271 -23.67% x 1.31
num::int_log::u64_log10_predictable 4,005 3,171 -834 -20.82% x 1.26
num::int_log::u64_log10_random 1,247 1,021 -226 -18.12% x 1.22
num::int_log::u64_log10_random_small 1,265 921 -344 -27.19% x 1.37
num::int_log::u128_log10_predictable 39,667 39,579 -88 -0.22% x 1.00
num::int_log::u128_log10_random 6,456 6,696 240 3.72% x 0.96
num::int_log::u128_log10_random_small 4,108 3,903 -205 -4.99% x 1.05
```
Benchmark on an M1 Mac Mini:
```
name old ns/iter new ns/iter diff ns/iter diff % speedup
num::int_log::u8_log10_predictable 143 130 -13 -9.09% x 1.10
num::int_log::u8_log10_random 375 325 -50 -13.33% x 1.15
num::int_log::u8_log10_random_small 376 325 -51 -13.56% x 1.16
num::int_log::u16_log10_predictable 500 322 -178 -35.60% x 1.55
num::int_log::u16_log10_random 794 405 -389 -48.99% x 1.96
num::int_log::u16_log10_random_small 1,035 405 -630 -60.87% x 2.56
num::int_log::u32_log10_predictable 1,144 894 -250 -21.85% x 1.28
num::int_log::u32_log10_random 832 786 -46 -5.53% x 1.06
num::int_log::u32_log10_random_small 832 787 -45 -5.41% x 1.06
num::int_log::u64_log10_predictable 2,681 2,057 -624 -23.27% x 1.30
num::int_log::u64_log10_random 1,015 806 -209 -20.59% x 1.26
num::int_log::u64_log10_random_small 1,004 795 -209 -20.82% x 1.26
num::int_log::u128_log10_predictable 56,825 56,526 -299 -0.53% x 1.01
num::int_log::u128_log10_random 9,056 8,861 -195 -2.15% x 1.02
num::int_log::u128_log10_random_small 1,528 1,527 -1 -0.07% x 1.00
```
The 128-bit case remains ridiculously slow because LLVM fails to optimize division by a constant 128-bit value into multiplications. This could be worked around, but it seems preferable to fix this in LLVM.
From u32 up, table lookup (like suggested [here](https://github.com/rust-lang/rust/issues/70887#issuecomment-881099813)) is still faster, but requires a hardware `leading_zeros` to be viable, and might clog up the cache.
Fix ICE when compiling nightly std/rustc on beta compiler
Fixes #89775. #89479 renames a lot of diagnostic items, but it happens that the beta compiler assumes that there must be a DefId with `rustc_diagnostic_item = "send_trait"`, causing an ICE when compiling stage 0 std or the stage 1 compiler. So gate it with `cfg(bootstrap)`.
The unwrap is also removed, so that existence of the diagnostic item is not required. I ripgreped the code base and this seems the only place where `unwrap` is called on the return value of `get_diagnostic_item`.
Add `Poll::ready` and revert stabilization of `task::ready!`
This PR adds an inherent `ready` method to `Poll` that can be used with the `?` operator as an alternative to the `task::ready!` macro:
```rust
let val = ready!(fut.poll(cx));
let val = fut.poll(cx).ready()?;
```
I think this form is a nice, non-breaking middle ground between changing the `impl Try for Poll`, and adding a separate macro. It looks better than `ready!` in my opinion, and it composes well:
```rust
let elem = ready!(fut.poll(cx)).pop().unwrap();
let elem = fut.poll(cx).ready()?.pop().unwrap();
```
The planned stabilization of `ready!` in 1.56 has been reverted because I think this alternate approach is worth considering.
r? rust-lang/libs
Add enum_intrinsics_non_enums lint
There is a clippy lint to prevent calling [`mem::discriminant`](https://doc.rust-lang.org/std/mem/fn.discriminant.html) with a non-enum type. I think the lint is worthy of being included in rustc, given that `discriminant::<T>()` where `T` is a non-enum has an unspecified return value, and there are no valid use cases where you'd actually want this.
I've also made the lint check [variant_count](https://doc.rust-lang.org/core/mem/fn.variant_count.html) (#73662).
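For illustration, a small sketch of the pattern the lint targets (the type and values here are made up):
```rust
use std::mem;

#[allow(dead_code)]
struct NotAnEnum(u8);

fn main() {
    let value = NotAnEnum(1);
    // With the lint enabled, this call would be reported: the discriminant of
    // a non-enum type is unspecified, so the result is never meaningful.
    let _d = mem::discriminant(&value);
    // mem::variant_count::<NotAnEnum>() would be flagged for the same reason.
}
```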
closes #83899
Add #[must_use] to from_value conversions
I added two methods to the list myself. Clippy did not flag them because they take `mut` args, but neither modifies their argument.
```rust
core::str const unsafe fn from_utf8_unchecked_mut(v: &mut [u8]) -> &mut str;
std::ffi::CString unsafe fn from_raw(ptr: *mut c_char) -> CString;
```
I put a custom note on `from_raw`:
```rust
#[must_use = "call `drop(from_raw(ptr))` if you intend to drop the `CString`"]
pub unsafe fn from_raw(ptr: *mut c_char) -> CString {
```
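For context, a hedged sketch of the round trip the note refers to (not taken from the PR):
```rust
use std::ffi::CString;

fn main() {
    // Leak the allocation so it can cross an FFI boundary.
    let ptr = CString::new("hi").unwrap().into_raw();
    // ... hand `ptr` to C and get it back ...
    // Retake ownership; dropping the returned CString frees the allocation,
    // which is exactly what the #[must_use] note nudges callers to do on purpose.
    unsafe { drop(CString::from_raw(ptr)) };
}
```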
Parent issue: #89692
r? ``@joshtriplett``
Add #[must_use] to alloc constructors
Added `#[must_use]` to the various forms of `new`, `pin`, and `with_capacity` in the `alloc` crate. No extra explanations given, as I couldn't think of anything useful to add.
I figure this deserves extra scrutiny compared to the other PRs I've done so far. In particular:
* The 4 `pin`/`pin_in` methods I touched. Are there legitimate use cases for pinning and not using the result? Pinning's a difficult concept I'm not very comfortable with.
* `Box`'s constructors. Do people ever create boxes just for the side effects... allocating or zeroing out memory?
Parent issue: #89692
r? ``@joshtriplett``
Cfg hide no_global_oom_handling and no_fp_fmt_parse
These are unstable sysroot customisation cfg options that only projects building their own sysroot will use (e.g. Rust-for-linux). Most users shouldn't care. `no_global_oom_handling` can be especially annoying since it's applied on many commonly used alloc crate methods (e.g. `Box::new`, `Vec::push`).
r? ```@GuillaumeGomez```
docs: `std::hash::Hash` should ensure prefix-free data
Attempt to synthesize the discussion in #89429 into a suggestion regarding `Hash` implementations (not a hard requirement).
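As a hedged illustration of the suggestion (the example type and separator are chosen here, not taken from the discussion): when hashing two variable-length fields back to back, keep the byte streams prefix-free so that ("ab", "c") and ("a", "bc") do not feed the hasher identical data.
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Pair {
    left: String,
    right: String,
}

impl Hash for Pair {
    fn hash<H: Hasher>(&self, state: &mut H) {
        state.write(self.left.as_bytes());
        // 0xFF never appears in valid UTF-8, so it is a safe delimiter here;
        // str's own Hash impl uses the same byte as a terminator.
        state.write_u8(0xff);
        state.write(self.right.as_bytes());
    }
}

fn main() {
    let mut h = DefaultHasher::new();
    Pair { left: "ab".into(), right: "c".into() }.hash(&mut h);
    println!("{:x}", h.finish());
}
```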
Closes #89429.
Improve docs for int_log
* Clarify rounding.
* Avoid "wrapping" wording.
* Omit wrong claim on 0 only being returned in error cases.
* Typo fix for one_less_than_next_power_of_two.
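To make the rounding and zero-handling points above concrete, a small hedged example (shown with the names as later stabilized, `ilog10`/`checked_ilog10`; the PR itself touches the unstable `int_log` docs):
```rust
fn main() {
    assert_eq!(99u32.ilog10(), 1);           // rounds down: floor(log10(99)) = 1
    assert_eq!(100u32.ilog10(), 2);
    assert_eq!(1u32.ilog10(), 0);            // 0 is a legitimate result, not just an error value
    assert_eq!(0u32.checked_ilog10(), None); // the logarithm of 0 is undefined
}
```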
Add new tier-3 target: armv7-unknown-linux-uclibceabihf
This change adds a new tier-3 target: armv7-unknown-linux-uclibceabihf
This target is primarily used in embedded Linux devices where system resources are slim and glibc is deemed too heavyweight. Cross-compilation C toolchains are available [here](https://toolchains.bootlin.com/) or via [buildroot](https://buildroot.org).
The change is based largely on a previous PR #79380 with a few minor modifications. The author of that PR was unable to push the PR forward, and graciously allowed me to take it over.
Per the [target tier 3 policy](https://github.com/rust-lang/rfcs/blob/master/text/2803-target-tier-policy.md), I volunteer to be the "target maintainer".
This is my first PR to Rust itself, so I apologize if I've missed things!
Add documentation to boxed conversions
Among other changes, documents whether allocations are necessary
to complete the type conversion.
Part of #51430, supersedes #89199
Update to Unicode 14.0
The Unicode Standard [announced Version 14.0](https://home.unicode.org/announcing-the-unicode-standard-version-14-0/) on September 14, 2021, and this pull request updates the generated tables in `core` accordingly.
This did require a little prep-work in `unicode-table-generator`. First, #81358 had modified the generated file instead of the tool, so that change is now reflected in the tool as well. Next, I found that the "Alphabetic" property in version 14 was panicking when generating a bitset, "cannot pack 264 into 8 bits". We've been using the skiplist for that anyway, so I changed this to fail gracefully. Finally, I confirmed that the tool still created the exact same tables for 13 before moving to 14.
Add 'core::array::from_fn' and 'core::array::try_from_fn'
These auxiliary methods fill uninitialized arrays in a safe way and are particularly useful for elements that don't implement `Default`.
```rust
// Foo doesn't implement Default
struct Foo(usize);
let _array = core::array::from_fn::<_, _, 2>(|idx| Foo(idx));
```
Unlike `FromIterator`, it is guaranteed that the array will be fully filled and no error regarding uninitialized state will be thrown. In certain scenarios, however, the creation of an **element** can fail, and that is why the `try_from_fn` function is also provided.
```rust
#[derive(Debug, PartialEq)]
enum SomeError {
Foo,
}
let array = core::array::try_from_fn(|i| Ok::<_, SomeError>(i));
assert_eq!(array, Ok([0, 1, 2, 3, 4]));
let another_array = core::array::try_from_fn(|_| Err(SomeError::Foo));
assert_eq!(another_array, Err(SomeError::Foo));
```
Add #[must_use] to string/char transformation methods
These methods could be misconstrued as modifying their arguments instead of returning new values.
Where possible I made the note recommend a method that does mutate in place.
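A hedged sketch of the misuse these attributes catch (example values chosen here):
```rust
fn main() {
    let s = String::from("hello");
    // With #[must_use] this now warns: the returned String is discarded and
    // `s` itself is not changed.
    s.to_uppercase();
    println!("{s}"); // still "hello"

    // The intended usage binds the returned value (or reaches for a method like
    // make_ascii_uppercase when in-place mutation is what you actually want).
    let upper = s.to_uppercase();
    println!("{upper}");
}
```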
Parent issue: #89692
Fix minor std::thread documentation typo
Callers of spawn_unchecked() need to make sure that the thread does not
outlive references in the passed closure, not the other way around.
Optimize File::read_to_end and read_to_string
Reading a file into an empty vector or string buffer can incur unnecessary `read` syscalls and memory re-allocations as the buffer "warms up" and grows to its final size. This is perhaps a necessary evil with generic readers, but files can be read in more intelligently by checking the file size and reserving that much capacity.
`std::fs::read` and `std::fs::read_to_string` already perform this optimization: they open the file, read its metadata, and call `with_capacity` with the file size. This ensures that the buffer does not need to be resized and avoids an initial string of small `read` syscalls.
However, if a user opens the `File` themselves and calls `file.read_to_end` or `file.read_to_string` they do not get this optimization.
```rust
let mut buf = Vec::new();
file.read_to_end(&mut buf)?;
```
I searched through this project's codebase and even here there are a *lot* of examples of this. They're found all over in unit tests, which isn't a big deal, but there are also several real instances in the compiler and in Cargo. I've documented the ones I found in a comment here:
https://github.com/rust-lang/rust/issues/89516#issuecomment-934423999
Most telling, the documentation for both the `Read` trait and the `Read::read_to_end` method shows this exact pattern as an example of how to use readers. What this says to me is that this shouldn't be solved by simply fixing the instances of it in this codebase. If it's here, it's certain to be prevalent in the wider Rust ecosystem.
To that end, this commit adds specializations of `read_to_end` and `read_to_string` directly on `File`. This way it's no longer a minor footgun to start with an empty buffer when reading a file in.
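A minimal sketch of the idea behind the specialization (this is not the actual std code, and the helper name is made up):
```rust
use std::fs::File;
use std::io::{self, Read};

fn read_file_to_end(file: &mut File, buf: &mut Vec<u8>) -> io::Result<usize> {
    // Size the buffer from the file's metadata up front so the generic read
    // loop can issue large reads instead of growing the buffer as it warms up.
    if let Ok(meta) = file.metadata() {
        buf.reserve(meta.len() as usize);
    }
    file.read_to_end(buf)
}

fn main() -> io::Result<()> {
    let mut buf = Vec::new();
    let mut file = File::open("Cargo.toml")?; // any existing file
    let n = read_file_to_end(&mut file, &mut buf)?;
    println!("read {n} bytes");
    Ok(())
}
```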
A nice side effect of this change is that code that accesses a `File` as `impl Read` or `dyn Read` will benefit. For example, this code from `compiler/rustc_serialize/src/json.rs`:
```rust
pub fn from_reader(rdr: &mut dyn Read) -> Result<Json, BuilderError> {
let mut contents = Vec::new();
match rdr.read_to_end(&mut contents) {
```
Related changes:
- I also added specializations to `BufReader` to delegate to `self.inner`'s methods. That way it can call `File`'s optimized implementations if the inner reader is a file.
- The private `std::io::append_to_string` function is now marked `unsafe`.
- `File::read_to_string` being more efficient means that the performance note for `io::read_to_string` can be softened. I've added `@camelid's` suggested wording from https://github.com/rust-lang/rust/issues/80218#issuecomment-936806502.
r? `@joshtriplett`
Implement #85440 (Random test ordering)
This PR adds `--shuffle` and `--shuffle-seed` options to `libtest`. The options are similar to the [`-shuffle` option](https://github.com/golang/go/blob/c894b442d1/src/testing/testing.go#L1482-L1499) that was recently added to Go.
Here are the relevant parts of the help message:
```
--shuffle Run tests in random order
--shuffle-seed SEED
Run tests in random order; seed the random number
generator with SEED
...
By default, the tests are run in alphabetical order. Use --shuffle or set
RUST_TEST_SHUFFLE to run the tests in random order. Pass the generated
"shuffle seed" to --shuffle-seed (or set RUST_TEST_SHUFFLE_SEED) to run the
tests in the same order again. Note that --shuffle and --shuffle-seed do not
affect whether the tests are run in parallel.
```
Is an RFC needed for this?
Make cfg imply doc(cfg)
This is a reopening of #79341, rebased and modified a bit (there has been a lot of refactoring in rustdoc's types, so those changes needed to be reflected in this PR as well):
* `hidden_cfg` is now in the `Cache` instead of `DocContext` because `cfg` information isn't stored anymore on `clean::Attributes` type but instead computed on-demand, so we need this information in later parts of rustdoc.
* I removed the `bool_to_options` feature (which makes the code a bit simpler to read) for the `SingleExt` trait implementation.
* I updated the version for the feature.
There is only one thing I couldn't figure out: [this comment](https://github.com/rust-lang/rust/pull/79341#discussion_r561855624)
> I think I'll likely scrap the whole `SingleExt` extension trait as the diagnostics for 0 and >1 items should be different.
How/why should they differ?
EDIT: this part has been solved, the current code was fine, just needed a little simplification.
cc `@Nemo157`
r? `@jyn514`
Original PR description:
This is only active when the `doc_cfg` feature is active.
The implicit cfg can be overridden via `#[doc(cfg(...))]`, so e.g. to hide a `#[cfg]` you can use something like:
```rust
#[cfg(unix)]
#[doc(cfg(all()))]
pub struct Unix;
```
By adding `#![doc(cfg_hide(foobar))]` to the crate attributes the cfg `#[cfg(foobar)]` (and _only_ that _exact_ cfg) will not be implicitly treated as a `doc(cfg)` to render a message in the documentation.
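A hedged sketch of the crate-level opt-out described above (the `doc(cfg_hide)` attribute is as stated in this PR; the feature-gate names shown are an assumption and require nightly):
```rust
#![feature(doc_cfg, doc_cfg_hide)] // assumed gate names, nightly only
#![doc(cfg_hide(foobar))]

// Rendered by rustdoc without an implicit doc(cfg(foobar)) banner, because
// the crate attribute above hides that exact cfg.
#[cfg(foobar)]
pub struct OnlyWithFoobar;
```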
Rename `std::thread::available_concurrency` to `std::thread::available_parallelism`
_Tracking issue: https://github.com/rust-lang/rust/issues/74479_
This PR renames `std::thread::available_concurrency` to `std::thread::available_parallelism`.
## Rationale
The API was initially named `std::thread::hardware_concurrency`, mirroring the [C++ API of the same name](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency). We eventually decided to omit any reference to the word "hardware" after [this comment](https://github.com/rust-lang/rust/pull/74480#issuecomment-662045841). And so we ended up with `available_concurrency` instead.
---
For a talk I was preparing this week I was reading through ["Understanding and expressing scalable concurrency" (A. Turon, 2013)](http://aturon.github.io/academic/turon-thesis.pdf), and the following passage stood out to me (emphasis mine):
> __Concurrency is a system-structuring mechanism.__ An interactive system that deals with disparate asynchronous events is naturally structured by division into concurrent threads with disparate responsibilities. Doing so creates a better fit between problem and solution, and can also decrease the average latency of the system by preventing long-running computations from obstructing quicker ones.
> __Parallelism is a resource.__ A given machine provides a certain capacity for parallelism, i.e., a bound on the number of computations it can perform simultaneously. The goal is to maximize throughput by intelligently using this resource. For interactive systems, parallelism can decrease latency as well.
_Chapter 2.1: Concurrency is not Parallelism. Page 30._
---
_"Concurrency is a system-structuring mechanism. Parallelism is a resource."_ — It feels like this accurately captures the way we should be thinking about these APIs. What this API returns is not "the amount of concurrency available to the program" which is a property of the program, and thus even with just a single thread is effectively unbounded. But instead it returns "the amount of _parallelism_ available to the program", which is a resource hard-constrained by the machine's capacity (and can be further restricted by e.g. operating systems).
That's why I'd like to propose we rename this API from `available_concurrency` to `available_parallelism`. This still meets the criteria we previously established of not attempting to define what exactly we mean by "hardware", "threads", and other such words. Instead we only talk about "concurrency" as an abstract resource available to our program.
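A minimal usage sketch of the renamed API (still unstable at the time of this PR; it returns `io::Result<NonZeroUsize>`):
```rust
fn main() -> std::io::Result<()> {
    let threads = std::thread::available_parallelism()?;
    println!("this machine offers parallelism for {threads} computations");
    Ok(())
}
```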
r? `@joshtriplett`
refactor: make VecDeque's IterMut fields module-private, not just crate-private
Made the fields of VecDeque's IterMut private by creating an IterMut::new(...) function to create a new instance of IterMut and migrating usage to use IterMut::new(...).
refactor: VecDeques Drain fields to private
Made the fields of VecDeque's Drain private by creating a Drain::new(...) function to create a new instance of Drain and migrating usage to use Drain::new(...).
Expand documentation for `FpCategory`.
I intend these changes to be helpful to readers who are not yet familiar with the quirks of floating-point numbers. Additionally, I felt it was misleading to describe `Nan` as being the result of division by zero, since most divisions by zero (except for 0/0) produce `Infinite` floats, so I moved that remark to the `Infinite` variant with adjustment.
The first sentence of the `Nan` documentation is copied from `f32`; I followed the example of the `f64` documentation by referring to `f32` for general concepts, rather than duplicating the text.
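A hedged illustration of the distinction being documented (example values chosen here):
```rust
use std::num::FpCategory;

fn main() {
    assert_eq!((1.0f64 / 0.0).classify(), FpCategory::Infinite); // x/0 with x != 0
    assert_eq!((0.0f64 / 0.0).classify(), FpCategory::Nan);      // 0/0 is NaN
    assert_eq!(1.0f64.classify(), FpCategory::Normal);
    assert_eq!(1.0e-308_f64.classify(), FpCategory::Subnormal);
    assert_eq!(0.0f64.classify(), FpCategory::Zero);
}
```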
----
I considered making similar changes to the documentation of the `is_*` methods of floats, but decided that that was a much larger and trickier problem; here, each of the variants' descriptions can be expected to be read in context of being mutually exclusive with the others.
Fix Lower/UpperExp formatting for integers and precision zero
Fixes the integer part of #89493 (I daren't touch the floating-point formatting code). The issue is that the "subtracted" precision essentially behaves like extra trailing zeros, but this is not currently reflected in the code properly.
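A hedged illustration of the case being fixed (inputs chosen here; the expected strings reflect the behaviour after the fix):
```rust
fn main() {
    assert_eq!(format!("{:e}", 12345), "1.2345e4");
    assert_eq!(format!("{:.1e}", 12345), "1.2e4"); // "subtracted" precision drops mantissa digits
    assert_eq!(format!("{:.0e}", 12345), "1e4");   // precision zero: no fractional digits at all
}
```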
for signed wrapping remainder, do not compare lhs with MIN
Since the wrapped remainder is going to be 0 for all cases when the rhs is -1, there is no need to divide in this case; comparing the lhs with MIN is only done for the overflow bool. In particular, this results in better code generation for wrapping remainder, which discards the overflow bool completely.
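A hedged sketch of the reasoning (standalone functions here, not the actual core implementation):
```rust
// For rhs == -1 the mathematical remainder is always 0, so wrapping_rem never
// needs to inspect lhs; only the overflow flag cares whether lhs == MIN.
fn wrapping_rem_i32(lhs: i32, rhs: i32) -> i32 {
    if rhs == -1 { 0 } else { lhs % rhs }
}

fn overflowing_rem_i32(lhs: i32, rhs: i32) -> (i32, bool) {
    if rhs == -1 { (0, lhs == i32::MIN) } else { (lhs % rhs, false) }
}

fn main() {
    assert_eq!(wrapping_rem_i32(i32::MIN, -1), 0);
    assert_eq!(overflowing_rem_i32(i32::MIN, -1), (0, true));
    assert_eq!(overflowing_rem_i32(7, 3), (1, false));
}
```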
refactor: VecDeques PairSlices fields to private
Reduced VecDeque's PairSlices fields to private; a `from(...)` method is already used to create PairSlices.
Use the 64b inner::monotonize() implementation, not the 128b one, for aarch64
aarch64 prior to v8.4 (FEAT_LSE2) doesn't have an instruction that guarantees
untorn 128b reads except for completing a 128b load/store exclusive pair
(ldxp/stxp) or compare-and-swap (casp) successfully. The requirement to
complete a 128b read+write atomic is actually more expensive and more unfair
than the previous implementation of monotonize() which used a Mutex on aarch64,
especially at large core counts. For aarch64 switch to the 64b atomic
implementation which is about 13x faster for a benchmark that involves many
calls to Instant::now().
Correctly handle supertraits for min_specialization
Supertraits of specialization markers could circumvent checks for
min_specialization. Elaborating predicates prevents this.
r? ````@nikomatsakis````
path.push() should work as expected on windows verbatim paths
On Windows, std::fs::canonicalize() returns a so-called UNC path. UNC paths differ from regular paths because:
- This type of path can be much longer than a non-UNC path (32k vs 260 characters).
- The prefix for a UNC path is ``Component::Prefix(Prefix::DiskVerbatim(..)))``
- No `/` is allowed
- No `.` is allowed
- No `..` is allowed
Rust has poor handling of such paths. If you join a UNC path with a path containing any of the above, it will not work.
I've implemented a new method `fn join_fold()` which joins paths and also removes any `.` and `..` from it, and replaces `/` with `\` on Windows. Using this function it is possible to use UNC paths without issue. In addition, this function is useful on Linux too; paths can be appended without having to call `canonicalize()` to remove the `.` and `..`.
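To make the proposal concrete, a hedged sketch of the `.`/`..` handling described (the `join_fold` name and exact semantics are the PR's proposal; this helper is illustrative only and does not do the `/` to `\` replacement):
```rust
use std::path::{Component, Path, PathBuf};

fn join_normalized(base: &Path, tail: &Path) -> PathBuf {
    let mut out = base.to_path_buf();
    for comp in tail.components() {
        match comp {
            Component::CurDir => {}                        // drop "."
            Component::ParentDir => { let _ = out.pop(); } // resolve ".." eagerly
            other => out.push(other.as_os_str()),
        }
    }
    out
}

fn main() {
    let joined = join_normalized(Path::new("/tmp/dir"), Path::new("../file.txt"));
    assert_eq!(joined, Path::new("/tmp/file.txt"));
}
```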
This PR needs test cases, which I can add. I hope this will be the start of a discussion.
Stabilize `const_panic`
Closes #51999
FCP completed in #89006
```@rustbot``` label +A-const-eval +A-const-fn +T-lang
cc ```@oli-obk``` for review (not `r?`'ing as not on lang team)
Improve wording of `map_or_else` docs
Changes doc text to refer to the "default" parameter as the "default"
function.
Previously, the doc text referred to the "f" parameter as the "default" function; and the "default" parameter as the "fallback" function.
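For reference, a small example of the two parameters (shown here for `Option::map_or_else`; values chosen for illustration):
```rust
fn main() {
    let some = Some(3);
    let none: Option<i32> = None;
    // The first argument is the `default` function, the second is `f`.
    assert_eq!(some.map_or_else(|| 0, |v| v * 2), 6);
    assert_eq!(none.map_or_else(|| 0, |v| v * 2), 0);
}
```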
implement advance_(back_)_by on more iterators
Add more efficient, non-default implementations for `feature(iter_advance_by)` (#77404) on more iterators and adapters.
This PR only contains implementations where skipping over items doesn't elide any observable side-effects such as user-provided closures or `clone()` functions. I'll put those in a separate PR.
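A hedged, nightly-only usage sketch (the exact return type of `advance_by` has changed over time, so it is discarded here):
```rust
#![feature(iter_advance_by)]

fn main() {
    let mut it = (0..10).step_by(2); // yields 0, 2, 4, 6, 8
    let _ = it.advance_by(2);        // skip two items without materializing them
    assert_eq!(it.next(), Some(4));
}
```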
Add truncate note to Vec::resize
A very minor addition to the `Vec::resize` documentation to point out the `truncate` method.
When I was searching for something matching `truncate` I managed to miss it, along with some colleagues. We later found it by chance. We did find `resize` however, so I was hoping to point it out in the documentation.
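For reference, the two methods side by side (values chosen for illustration):
```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];
    v.truncate(3);  // just drop the tail
    assert_eq!(v, [1, 2, 3]);
    v.resize(5, 0); // grow (or shrink) to an exact length, filling with the given value
    assert_eq!(v, [1, 2, 3, 0, 0]);
}
```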
Partially stabilize `array_methods`
This stabilizes `<[T; N]>::as_slice` and `<[T; N]>::as_mut_slice`, which form part of the `array_methods` feature: #76118.
This also makes `<[T; N]>::as_slice` const due to its trivial nature.
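A small usage example of the stabilized methods (values chosen for illustration):
```rust
fn main() {
    let mut arr = [1, 2, 3];
    let total: i32 = arr.as_slice().iter().sum(); // &[i32] view of the array
    assert_eq!(total, 6);
    arr.as_mut_slice()[0] = 10; // mutable slice view
    assert_eq!(arr, [10, 2, 3]);
}
```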