Fix an intrinsic invocation on threaded wasm
This looks like it was forgotten when things were updated in #74482, and since wasm with
threads isn't built on CI we didn't catch it.
refactor: removing alloc::collections::vec_deque ignore-tidy-filelength
This PR removes the need for `ignore-tidy-filelength` for `alloc::collections::vec_deque`, which is part of issue https://github.com/rust-lang/rust/issues/60302
It is probably easiest to review this PR by looking at it commit by commit rather than looking at the overall diff.
specialize io::copy to use copy_file_range, splice or sendfile
Fixes #74426.
Also covers #60689, but only as an optimization instead of an official API.
The specialization only covers std-owned structs, so it should avoid the problems with #71091.
Currently Linux-only, but it should be generalizable to other Unix systems that have `sendfile`/`sosplice` and similar.
There is a bit of optimization potential around the syscall count. Right now it may end up doing more syscalls than the naive copy loop when doing short (<8KiB) copies between file descriptors.
The test case executes the following:
```
[pid 103776] statx(3, "", AT_STATX_SYNC_AS_STAT|AT_EMPTY_PATH, STATX_ALL, {stx_mask=STATX_ALL|STATX_MNT_ID, stx_attributes=0, stx_mode=S_IFREG|0644, stx_size=17, ...}) = 0
[pid 103776] write(4, "wxyz", 4) = 4
[pid 103776] write(4, "iklmn", 5) = 5
[pid 103776] copy_file_range(3, NULL, 4, NULL, 5, 0) = 5
```
- 0-1 `stat` calls to identify the source file type; 0 if the type can be inferred from the struct from which the FD was extracted.
- M `write` calls to drain the `BufReader`/`BufWriter` wrappers. These only happen when buffers are present, with M ≤ the number of wrappers present. If there is a write buffer it may absorb the read buffer contents first, so a single write may suffice. Vectored writes would also be an option, but that would require more invasive changes to `BufWriter`.
- N `copy_file_range`/`splice`/`sendfile` calls until the file size, EOF, or the byte limit from `Take` is reached. This should generally be *much* more efficient than the read-write loop and also has other benefits such as DMA offload or extent sharing.
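For illustration, below is a minimal, hedged sketch of what the file-descriptor fast path can look like (Linux-only; the `libc` crate dependency, the helper name `copy_fd_to_fd`, and the chunk cap are assumptions of this sketch, not the actual std internals). The real specialization additionally has to probe for errors such as EINVAL/EXDEV/ENOSYS and fall back to the generic read/write loop.
```rust
use std::io;
use std::os::unix::io::AsRawFd;
use std::ptr;

// Sketch only: loop on copy_file_range(2), letting the kernel move the bytes
// (and potentially share extents or offload to DMA) instead of bouncing them
// through a userspace buffer.
fn copy_fd_to_fd(reader: &impl AsRawFd, writer: &impl AsRawFd, mut len: u64) -> io::Result<u64> {
    let mut written = 0u64;
    while len > 0 {
        // Cap each request; see the note further below about older kernels
        // rejecting a length of 0x8000_0000 with EINVAL.
        let chunk = len.min(0x4000_0000) as usize;
        let ret = unsafe {
            libc::copy_file_range(
                reader.as_raw_fd(),
                ptr::null_mut(), // advance the source file offset
                writer.as_raw_fd(),
                ptr::null_mut(), // advance the destination file offset
                chunk,
                0,
            )
        };
        match ret {
            0 => break, // EOF on the source
            n if n > 0 => {
                written += n as u64;
                len -= n as u64;
            }
            _ => return Err(io::Error::last_os_error()),
        }
    }
    Ok(written)
}
```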
## Benchmarks
```
OLD
test io::tests::bench_file_to_file_copy ... bench: 21,002 ns/iter (+/- 750) = 6240 MB/s [ext4]
test io::tests::bench_file_to_file_copy ... bench: 35,704 ns/iter (+/- 1,108) = 3671 MB/s [btrfs]
test io::tests::bench_file_to_socket_copy ... bench: 57,002 ns/iter (+/- 4,205) = 2299 MB/s
test io::tests::bench_socket_pipe_socket_copy ... bench: 142,640 ns/iter (+/- 77,851) = 918 MB/s
NEW
test io::tests::bench_file_to_file_copy ... bench: 14,745 ns/iter (+/- 519) = 8889 MB/s [ext4]
test io::tests::bench_file_to_file_copy ... bench: 6,128 ns/iter (+/- 227) = 21389 MB/s [btrfs]
test io::tests::bench_file_to_socket_copy ... bench: 13,767 ns/iter (+/- 3,767) = 9520 MB/s
test io::tests::bench_socket_pipe_socket_copy ... bench: 26,471 ns/iter (+/- 6,412) = 4951 MB/s
```
Previously, EOVERFLOW handling was only applied to the io::copy specialization
but not to fs::copy, which shares the same code.
Additionally, we lower the chunk size to 1GB since we have a user report
that older kernels may return EINVAL when passing 0x8000_0000,
while smaller values succeed.
Android builds use feature level 14, but the libc wrapper for splice is gated
on feature level 21+, so we have to invoke the syscall directly.
Additionally, the emulator doesn't seem to support it, so we also have to
add ENOSYS checks.
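For reference, a hedged sketch of what such a direct invocation with an ENOSYS fallback signal might look like (the `libc` crate dependency, the helper name `raw_splice`, and its `Option` return value are assumptions of this sketch, not the actual std code):
```rust
use std::io;

// Hypothetical helper: returns Ok(None) when the kernel reports ENOSYS so the
// caller can fall back to the generic read/write copy loop.
fn raw_splice(fd_in: libc::c_int, fd_out: libc::c_int, len: usize) -> io::Result<Option<usize>> {
    let ret = unsafe {
        libc::syscall(
            libc::SYS_splice,
            fd_in,
            std::ptr::null_mut::<libc::loff_t>(),
            fd_out,
            std::ptr::null_mut::<libc::loff_t>(),
            len,
            0usize, // flags
        )
    };
    if ret >= 0 {
        return Ok(Some(ret as usize));
    }
    let err = io::Error::last_os_error();
    if err.raw_os_error() == Some(libc::ENOSYS) {
        return Ok(None); // splice not available (e.g. on the emulator)
    }
    Err(err)
}
```
A caller would treat `Ok(None)` as the cue to fall back to the generic read/write copy loop.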
commit c547d5fabcd756515afa7263ee5304965bb4c497
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 11:22:23 2020 +0000
test: updating ui/hygiene/panic-location.rs expected
commit 2af03769c4ffdbbbad75197a1ad0df8c599186be
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 10:43:30 2020 +0000
fix: documentation unresolved link
commit c4b0df361ce27d7392d8016229f2e0265af32086
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:58:31 2020 +0000
style: compiling with Rust's style guidelines
commit bdd2de5f3c09b49a18e3293f2457fcab25557c96
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:56:31 2020 +0000
refactor: removing ignore-tidy-filelength
commit fcc4b3bc41f57244c65ebb8e4efe4cbc9460b5a9
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:51:35 2020 +0000
refactor: moving trait RingSlices to ring_slices.rs
commit 2f0cc539c06d8841baf7f675168f68ca7c21e68e
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:46:09 2020 +0000
refactor: moving struct PairSlices to pair_slices.rs
commit a55d3ef1dab4c3d85962b3a601ff8d1f7497faf2
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:31:45 2020 +0000
refactor: moving struct Iter to iter.rs
commit 76ab33a12442a03726f36f606b4e0fe70f8f246b
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:24:32 2020 +0000
refactor: moving struct IntoIter into into_iter.rs
commit abe0d9eea2933881858c3b1bc09df67cedc5ada5
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 02:19:07 2020 +0000
refactor: moving struct IterMut into iter_mut.rs
commit 70ebd6420335e1895e2afa2763a0148897963e24
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 01:49:15 2020 +0000
refactor: moved macros into macros.rs
commit b08dd2add994b04ae851aa065800bd8bd6326134
Author: C <DeveloperC@protonmail.com>
Date: Sat Oct 31 01:05:36 2020 +0000
refactor: moving vec_deque.rs to vec_deque/mod.rs
Improve BinaryHeap performance
By changing the condition in the loops from `child < end` to `child < end - 1` we're guaranteed that `right = child + 1 < end`, and since the index of the bigger sibling can be found with an arithmetic operation we can remove a branch from the loop body. The case where there's no right child, i.e. `child == end - 1`, is instead handled outside the loop, after it ends. Note that if the loop ends early we can use `return` instead of `break`, since in that case the post-loop check cannot result in a swap.
I've also removed a call to `<[T]>::swap` that was hiding a bound check that [wasn't being optimized by LLVM](https://godbolt.org/z/zrhdGM).
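To make the shape of the new loop concrete, here is a hedged sketch written as a standalone max-heap helper over a slice; the real `BinaryHeap` code works through a `Hole` abstraction with unchecked indexing (and, per the note above, avoids `swap`), so the names and details here are illustrative only.
```rust
fn sift_down<T: Ord>(v: &mut [T], mut pos: usize, end: usize) {
    debug_assert!(end >= 1 && end <= v.len());
    let mut child = 2 * pos + 1;
    // `child < end - 1` guarantees the right sibling `child + 1` exists, so the
    // larger of the two children can be picked arithmetically, without a branch.
    while child < end - 1 {
        child += (v[child] < v[child + 1]) as usize;
        if v[pos] >= v[child] {
            // Already in order; `return` rather than `break` is fine because the
            // post-loop swap below could not trigger (`v[pos] >= v[child]` here).
            return;
        }
        v.swap(pos, child);
        pos = child;
        child = 2 * pos + 1;
    }
    // The lone-left-child case (`child == end - 1`) is handled once, after the loop.
    if child == end - 1 && v[pos] < v[child] {
        v.swap(pos, child);
    }
}

fn main() {
    let mut v = vec![1, 9, 5, 3, 7, 2, 8];
    let end = v.len();
    sift_down(&mut v, 0, end);
    assert_eq!(v[0], 9);
}
```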
A quick benchmark on my PC shows that the gains are pretty significant:
|name |before ns/iter |after ns/iter |diff ns/iter |diff % |speedup |
|---------------------|----------------|---------------|--------------|----------|--------|
|find_smallest_1000 | 352,565 | 260,098 | -92,467 | -26.23% | x 1.36 |
|from_vec | 676,795 | 473,934 | -202,861 | -29.97% | x 1.43 |
|into_sorted_vec | 469,511 | 304,275 | -165,236 | -35.19% | x 1.54 |
|pop | 483,198 | 373,778 | -109,420 | -22.64% | x 1.29 |
The other 2 benchmarks for `BinaryHeap` (`peek_mut_deref_mut` and `push`) weren't impacted and as such didn't show any significant change.
Update thread and futex APIs to work with Emscripten
This updates the thread and futex APIs in `std` to match the APIs exposed by
Emscripten. This allows threads to run on `wasm32-unknown-emscripten` and the
thread parker to compile without errors related to the missing `futex` module.
To make use of this, Rust code must be compiled with `-C target-feature=atomics`
and Emscripten must link with `-pthread`.
I have confirmed this works well locally when building multithreaded crates.
Attempting to enable `std` thread tests currently fails for seemingly obscure
reasons and Emscripten is currently disabled in CI, so further work is needed to
have proper test coverage here.
BTreeMap: split off most code of append
To complete #78056, move the last single-purpose pieces of code out of map.rs into a separate module. Also tweaked documentation, safety (I don't think this code would be safe if the iterators passed in weren't as sorted as the method says they should be), and the bounds on MergeIterInner.
r? `@Mark-Simulacrum`
Duration::zero() -> Duration::ZERO
In review for #72790, whether a constant or a function should be favored for `#![feature(duration_zero)]` was seen as an open question. In https://github.com/rust-lang/rust/issues/73544#issuecomment-691701670 an invitation was opened to either stabilize the methods or propose a switch to the constant value, supplemented with reasoning. Follow-up comments suggested community preference leans towards the const ZERO, which would be reason enough.
ZERO also "makes sense" beside existing associated consts for Duration. It is ever so slightly awkward to have a series of constants specifying 1 of various units but leave 0 as a method, especially when they are side-by-side in code. It seems unintuitive for the one non-dynamic value (that isn't from Default) to be not-a-const, which could hurt discoverability of the associated constants overall. Elsewhere in `std`, methods for obtaining a constant value were even deprecated, as seen with [std::u32::min_value](https://doc.rust-lang.org/std/primitive.u32.html#method.min_value).
Most importantly, ZERO costs less to use. A match supports a const pattern, but const fn can only be used if evaluated through a const context such as an inline `const { const_fn() }` or a `const NAME: T = const_fn()` declaration elsewhere. Likewise, while https://github.com/rust-lang/rust/issues/73544#issuecomment-691949373 notes `Duration::zero()` can optimize to a constant value, "can" is not "will". Only const contexts have a strong promise of such. Even without that in mind, the comment in question still leans in favor of the constant for simplicity. As it costs less for a developer to use, may cost less to optimize, and seems to have more of a community consensus for it, the associated const seems best.
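As a small, hedged illustration of that ergonomic difference (the constant name below is made up for the example): the associated const slots directly into const contexts with no extra ceremony.
```rust
use std::time::Duration;

// With an associated const, a constant of our own can be built without any
// const-fn evaluation ceremony.
const NO_TIMEOUT: Duration = Duration::ZERO;

fn main() {
    let elapsed = Duration::from_secs(0);
    assert_eq!(elapsed, NO_TIMEOUT);
    assert!(Duration::ZERO <= Duration::from_millis(1));
}
```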
r? `@LukasKalbertodt`
The discussion seems to have resolved that this lint is a bit "noisy", in
that applying it in all places would reduce readability.
A few of the trivial functions (like `Path::new`) are fine to leave
outside of closures.
The general rule seems to be that anything that is obviously an
allocation (`Box`, `Vec`, `vec![]`) should be in a closure, even if it
is a 0-sized allocation.
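A hedged illustration of that rule (the function names and paths below are hypothetical, not taken from the actual cleanup):
```rust
use std::path::Path;

// Allocation: lazily constructed only if `explicit` is None.
fn fallback_paths(explicit: Option<Vec<&'static Path>>) -> Vec<&'static Path> {
    explicit.unwrap_or_else(|| vec![Path::new("/etc/default")])
}

// `Path::new` is a trivial, allocation-free cast, so an eager `unwrap_or` is fine.
fn config_dir(arg: Option<&Path>) -> &Path {
    arg.unwrap_or(Path::new("/etc"))
}

fn main() {
    assert_eq!(config_dir(None), Path::new("/etc"));
    assert_eq!(fallback_paths(None), vec![Path::new("/etc/default")]);
}
```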
Add missing newline to error message of the default OOM hook
Currently the default OOM hook in libstd does not end the error message with a newline:
```
memory allocation of 4 bytes failedtimeout: the monitored command dumped core
/playground/tools/entrypoint.sh: line 11: 7 Aborted timeout --signal=KILL ${timeout} "$@"
```
https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=030d8223eb57dfe47ef157709aa26542
This is because the `fmt::Arguments` passed to `dumb_print()` does not end with a newline. All other calls to `dumb_print()` in libstd pass a `\n`-terminated `fmt::Arguments`. For example:
25f6938da4/library/std/src/sys_common/util.rs (L18)
I think the `\n` was forgotten in #51264.
This PR appends `\n` to the error string.
~~Note that I didn't add a test, because I didn't find tests for functions in `library/std/src/alloc.rs`, or an existing test similar to what a test for this change would look like.~~ *Edit: CI told me there is an existing test. Sorry.*
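As a toy, hedged reproduction of the formatting point (not the libstd internals; `render` is a hypothetical helper): without the trailing `\n`, whatever is printed next runs straight into the message, as in the playground output above.
```rust
use std::fmt::{self, Write};

// Render a fmt::Arguments into a String, the way a print helper would.
fn render(msg: fmt::Arguments<'_>) -> String {
    let mut out = String::new();
    out.write_fmt(msg).unwrap();
    out
}

fn main() {
    let before = render(format_args!("memory allocation of {} bytes failed", 4));
    let after = render(format_args!("memory allocation of {} bytes failed\n", 4));
    assert!(!before.ends_with('\n'));
    assert!(after.ends_with('\n'));
}
```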
Workaround for "could not fully normalize" ICE
Workaround for "could not fully normalize" ICE (#78139) by removing the `needs_drop::<T>()` calls triggering it.
Corresponding beta PR: #78845. Fixes #78139 -- the underlying bug is likely not fixed, but we don't have another test case isolated for now, so closing.
fix some incorrect aliasing in the BTree
This line is wrong:
```
ptr::copy(slice.as_ptr().add(idx), slice.as_mut_ptr().add(idx + 1), slice.len() - idx);
```
When `slice.as_mut_ptr()` is called, that creates a mutable reference to the entire slice, which invalidates the raw pointer previously returned by `slice.as_ptr()`. (Miri currently misses this because raw pointers are not tracked properly.)
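A hedged sketch of the fix pattern (not the actual BTree node code; `shift_right` is a hypothetical helper): derive both raw pointers from a single `as_mut_ptr()` call, so no later `&mut`-creating call invalidates a previously obtained pointer.
```rust
use std::ptr;

unsafe fn shift_right<T>(slice: &mut [T], idx: usize) {
    // One `as_mut_ptr()` call; both source and destination pointers derive from it.
    let p = slice.as_mut_ptr();
    // Move `slice[idx..len - 1]` one slot to the right; `slice[len - 1]` is overwritten.
    ptr::copy(p.add(idx), p.add(idx + 1), slice.len() - idx - 1);
}

fn main() {
    let mut v = [1, 2, 3, 4, 5];
    unsafe { shift_right(&mut v, 1) };
    assert_eq!(v, [1, 2, 2, 3, 4]);
}
```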
Cc `@ssomers`
BTreeMap: stop mistaking node for an orderly place
A second mistake in #77612 was to ignore the node module's rightful comment "this module doesn't care whether the entries are sorted". And there's a much simpler way to visit the keys in order, if you check ordering separately instead of in a single pass that checks everything.
r? `@Mark-Simulacrum`
Define `fs::hard_link` to not follow symlinks.
POSIX leaves it [implementation-defined] whether `link` follows symlinks.
In practice, for example, on Linux it does not and on FreeBSD it does.
So, switch to `linkat`, so that we can pick a behavior rather than
depending on OS defaults.
Pick the option to not follow symlinks. This is somewhat arbitrary, but
seems the less surprising choice because hard linking is a very
low-level feature which requires the source and destination to be on
the same mounted filesystem, and following a symbolic link could end
up in a different mounted filesystem.
[implementation-defined]: https://pubs.opengroup.org/onlinepubs/9699919799/functions/link.html
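A hedged sketch of the chosen behavior (Unix-only; the `libc` crate dependency and the helper name `hard_link_nofollow` are assumptions of this example, not the std implementation):
```rust
use std::ffi::CString;
use std::io;

fn hard_link_nofollow(original: &str, link: &str) -> io::Result<()> {
    let original = CString::new(original)?;
    let link = CString::new(link)?;
    let ret = unsafe {
        libc::linkat(
            libc::AT_FDCWD,
            original.as_ptr(),
            libc::AT_FDCWD,
            link.as_ptr(),
            0, // no AT_SYMLINK_FOLLOW: do not resolve a symlink source
        )
    };
    if ret == 0 { Ok(()) } else { Err(io::Error::last_os_error()) }
}
```
Passing `0` instead of `AT_SYMLINK_FOLLOW` is the whole behavioral choice: a symlink given as the source is linked itself rather than resolved to its target.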
Refactor IntErrorKind to avoid "underflow" terminology
This PR is a continuation of #76455
# Changes
- `Overflow` renamed to `PosOverflow` and `Underflow` renamed to `NegOverflow` after discussion in #76455
- Changed some of the parsing code to return `InvalidDigit` rather than `Empty` for strings "+" and "-". https://users.rust-lang.org/t/misleading-error-in-str-parse-for-int-types/49178
- Carry the problem `char` with the `InvalidDigit` variant.
- Necessary changes were made to the compiler as it depends on `int_error_matching`.
- Redid tests to match on specific errors.
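Below is a hedged sketch of what matching on the renamed variants can look like with today's `std` (`describe` is a made-up helper; the `char` payload on `InvalidDigit` is not destructured here, and the catch-all arm is required because `IntErrorKind` is `#[non_exhaustive]`):
```rust
use std::num::IntErrorKind;

fn describe(s: &str) -> &'static str {
    match s.parse::<i32>() {
        Ok(_) => "parsed fine",
        Err(e) => match e.kind() {
            IntErrorKind::Empty => "empty input",
            IntErrorKind::InvalidDigit => "invalid digit (now also a bare \"+\" or \"-\")",
            IntErrorKind::PosOverflow => "too large for i32",
            IntErrorKind::NegOverflow => "too small for i32",
            _ => "other parse error",
        },
    }
}

fn main() {
    assert_eq!(describe("+"), "invalid digit (now also a bare \"+\" or \"-\")");
    assert_eq!(describe("99999999999"), "too large for i32");
    assert_eq!(describe("-99999999999"), "too small for i32");
}
```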
r? `@KodrAus`