Manually tested via:

```sh
# spawn a new cgroup scope for the current user
$ sudo systemd-run -p CPUQuota="300%" --uid=$(id -u) -tdS
```

```rust
// quota.rs
#![feature(available_parallelism)]
fn main() {
    println!("{:?}", std::thread::available_parallelism()); // prints Ok(3)
}
```
Caveats
* cgroup v1 is ignored
* funky mountpoints (containing spaces, newlines or control chars) for cgroupfs will not be handled correctly, since that would require unescaping `/proc/self/mountinfo`.
  The escaping behavior of procfs seems to be undocumented. systemd and docker default to `/sys/fs/cgroup`, so it should be fine for most systems.
* the quota will be ignored when `sched_getaffinity` doesn't work
* assumes procfs is mounted under `/proc` and that cgroupfs is mounted and readable somewhere in the directory tree
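For illustration, here is a minimal sketch (not the std implementation) of turning a cgroup v2 `cpu.max` file into a thread count, assuming the default `/sys/fs/cgroup` mountpoint; the real code discovers the process's cgroup via `/proc/self/cgroup` and mountinfo:

```rust
use std::fs;

/// Parse a cgroup v2 `cpu.max` file ("<max> <period>", e.g. "300000 100000"
/// for a 300% quota, or "max 100000" when unlimited) into a thread count.
fn quota_from_cpu_max(path: &str) -> Option<usize> {
    let contents = fs::read_to_string(path).ok()?;
    let mut parts = contents.split_whitespace();
    let max = parts.next()?;
    let period: u64 = parts.next()?.parse().ok()?;
    if max == "max" || period == 0 {
        return None; // no quota configured
    }
    let max: u64 = max.parse().ok()?;
    // Round up so a fractional quota (e.g. 2.5 CPUs) still yields 3.
    Some(((max + period - 1) / period) as usize)
}

fn main() {
    // Assumes the process sits in a scope whose cpu.max is readable here.
    println!("{:?}", quota_from_cpu_max("/sys/fs/cgroup/cpu.max"));
}
```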
When CStr moves to core with an alias in std, this can link to
`crate::ffi::CStr`. However, linking in the reverse direction (from core
to std) requires a relative path, and that path can't work from both
core::ffi and std::os::raw (different number of `../` traversals
required).
The ability to interoperate with C code via FFI is not limited to crates
using std; this allows using these types without std.
The existing types in `std::os::raw` become type aliases for the ones in
`core::ffi`. This uses type aliases rather than re-exports, to allow the
std types to remain stable while the core types are unstable.
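For illustration, a self-contained sketch of that pattern with hypothetical stand-in modules (the real items live in `core::ffi` and `std::os::raw`); the point is that stability attributes attach to each item, so an alias can be stable while the item it names is not:

```rust
// Hypothetical stand-ins for core::ffi and std::os::raw.
mod core_ffi {
    #[allow(non_camel_case_types)]
    pub type c_int = i32; // platform-dependent in the real definition
}

mod std_os_raw {
    // The stable name is just an alias for the (unstable) core type.
    #[allow(non_camel_case_types)]
    pub type c_int = crate::core_ffi::c_int;
}

fn main() {
    let x: std_os_raw::c_int = 42;
    let _: core_ffi::c_int = x; // one and the same type
}
```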
This also moves the currently unstable `NonZero_` variants and
`c_size_t`/`c_ssize_t`/`c_ptrdiff_t` types to `core::ffi`, while leaving
them unstable.
core can't depend on external crates the way std can. Rather than revert
usage of cfg_if, add a copy of it to core. This does not export our
copy, even unstably; such a change could occur in a later commit.
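For context, typical usage of the macro being vendored (shown via the external `cfg-if` crate, since core's copy is private): it dispatches items across mutually exclusive `cfg` branches.

```rust
// Requires the cfg-if crate as a dependency.
cfg_if::cfg_if! {
    if #[cfg(target_os = "linux")] {
        fn platform() -> &'static str { "linux" }
    } else if #[cfg(windows)] {
        fn platform() -> &'static str { "windows" }
    } else {
        fn platform() -> &'static str { "other" }
    }
}

fn main() {
    println!("running on {}", platform());
}
```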
Sync portable-simd for bitmasks &c.
In the ideal case, where everything works easily and nothing has to be rearranged, it is as simple as:
- `git subtree pull -P library/portable-simd https://github.com/rust-lang/portable-simd ${branch}`
- write the commit message
- `python x.py test --stage 1` to make sure it runs
- `git push` to your PR-to-rustc branch
If anything borks up this flow, you can fix it with sufficient git wizardry but you are usually better off going back to the source, fixing it, and starting over, before you open the PR.
r? `@calebzulawski`
Add Atomic*::from_mut_slice
Tracking issue #76314 for `from_mut` has a question about the possibility of `from_mut_slice`, and I found a real case for it. A user in the forum had a parallelism problem that could be solved by open-indexing updates to a vector of atomics, but they didn't want to affect the other code using that vector. Using `from_mut_slice`, they could borrow that data as atomics just long enough for their parallel loop.
ref: https://users.rust-lang.org/t/sharing-vector-with-rayon-par-iter-correctly/72022
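A minimal sketch of that borrow-as-atomics pattern (nightly-only, behind the `atomic_from_mut` feature gate):

```rust
#![feature(atomic_from_mut)]
use std::sync::atomic::{AtomicU32, Ordering};

fn main() {
    let mut counts = vec![0u32; 8];
    {
        // Borrow the plain data as atomics just long enough for the
        // shared-access loop; other code keeps seeing a Vec<u32>.
        let atomics: &mut [AtomicU32] = AtomicU32::from_mut_slice(&mut counts);
        for (i, a) in atomics.iter().enumerate() {
            a.fetch_add(i as u32, Ordering::Relaxed);
        }
    }
    assert_eq!(counts, [0, 1, 2, 3, 4, 5, 6, 7]);
}
```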
use BOOL for TCP_NODELAY setsockopt value on Windows
This issue was found by the Wine project and mitigated there [^1].
Windows' `setsockopt` expects a `BOOL` (a typedef for `int`) for `TCP_NODELAY`
[^2]. Windows itself is forgiving and will accept any positive optlen,
interpreting the first byte of `*optval` as the value, so this bug does not
affect Windows itself, but it does affect systems that implement Windows'
interface more strictly, such as Wine. Wine was previously passing this
through to the host's `setsockopt`, where, e.g., Linux requires that
optlen be correct for the chosen option, and `TCP_NODELAY` expects an `int`.
[^1]: d6ea38f32d
[^2]: https://docs.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-setsockopt
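For illustration, a sketch of the corrected call shape (not the actual std source; the constants are the documented Winsock values):

```rust
// Pass a 4-byte BOOL and the matching optlen instead of a single byte.
use std::os::raw::{c_char, c_int};

type SOCKET = usize; // pointer-sized on Win64

#[link(name = "ws2_32")]
extern "system" {
    fn setsockopt(
        s: SOCKET,
        level: c_int,
        optname: c_int,
        optval: *const c_char,
        optlen: c_int,
    ) -> c_int;
}

const IPPROTO_TCP: c_int = 6;
const TCP_NODELAY: c_int = 0x0001;

unsafe fn set_nodelay(sock: SOCKET, enabled: bool) -> c_int {
    let value: c_int = enabled as c_int; // BOOL is a typedef for int
    setsockopt(
        sock,
        IPPROTO_TCP,
        TCP_NODELAY,
        &value as *const c_int as *const c_char,
        std::mem::size_of::<c_int>() as c_int, // optlen = sizeof(BOOL)
    )
}
```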
fix typo in btree/vec doc: Self -> self
This PR fixes #92345.
The documentation refers to the object the method is called on, not the type, so it should use the lower-case `self`.
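For illustration, a hypothetical doc comment (not the actual btree/vec text) showing the distinction:

```rust
pub struct Stack(Vec<i32>);

impl Stack {
    /// Removes the last element from self and returns it.
    ///
    /// Here `self` (lower-case) is the value the method is called on;
    /// `Self` (capitalized) would instead name the type `Stack`.
    pub fn pop(&mut self) -> Option<i32> {
        self.0.pop()
    }
}
```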
For MIRI, cfg out the swap vectorization logic from 94212
Because of #69488, the swap logic from #94212 doesn't currently work in Miri.
Copying in smaller pieces is probably much worse for Miri's performance anyway, so it would probably rather just use the simple path regardless.
Part of #94371, though another PR will be needed for the CTFE aspect.
r? `@oli-obk`
cc `@RalfJung`
Rollup of 5 pull requests
Successful merges:
- #93603 (Populate liveness facts when calling `get_body_with_borrowck_facts` without `-Z polonius`)
- #93870 (Fix switch on discriminant detection in a presence of coverage counters)
- #94355 (Add one more case to avoid ICE)
- #94363 (Remove needless borrows from core::fmt)
- #94377 (`check_used` should only look at actual `used` attributes)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
This function was updated in a recent PR (#92911) to be called without the caller
information passed in, but the function signature itself was not altered with
`cfg_attr` at the time.
As an example:

```rust
#[test]
#[ignore = "not yet implemented"]
fn test_ignored() {
    ...
}
```

will now render as:

```
running 2 tests
test tests::test_ignored ... ignored, not yet implemented

test result: ok. 1 passed; 0 failed; 1 ignored; 0 measured; 0 filtered out; finished in 0.00s
```
Stop manually SIMDing in `swap_nonoverlapping`
Like I previously did for `reverse` (#90821), this leaves it to LLVM to pick how to vectorize it, since it knows better what chunk size to use, compared to the "32 bytes always" approach we currently have.
A variety of codegen tests are included to confirm that the various cases are still being vectorized.
It does still need logic to type-erase in some cases, though, as while LLVM is now smart enough to vectorize over slices of things like `[u8; 4]`, it fails to do so over slices of `[u8; 3]`.
As a bonus, this change also means one no longer gets the spurious `memcpy`(s?) at the end of swapping a slice of `__m256`s: <https://rust.godbolt.org/z/joofr4v8Y>
<details>
<summary>ASM for this example</summary>
## Before (from godbolt)
Note the `push`/`pop`s and the `memcpy` calls.
```x86
swap_m256_slice:
        push r15
        push r14
        push r13
        push r12
        push rbx
        sub rsp, 32
        cmp rsi, rcx
        jne .LBB0_6
        mov r14, rsi
        shl r14, 5
        je .LBB0_6
        mov r15, rdx
        mov rbx, rdi
        xor eax, eax
.LBB0_3:
        mov rcx, rax
        vmovaps ymm0, ymmword ptr [rbx + rax]
        vmovaps ymm1, ymmword ptr [r15 + rax]
        vmovaps ymmword ptr [rbx + rax], ymm1
        vmovaps ymmword ptr [r15 + rax], ymm0
        add rax, 32
        add rcx, 64
        cmp rcx, r14
        jbe .LBB0_3
        sub r14, rax
        jbe .LBB0_6
        add rbx, rax
        add r15, rax
        mov r12, rsp
        mov r13, qword ptr [rip + memcpy@GOTPCREL]
        mov rdi, r12
        mov rsi, rbx
        mov rdx, r14
        vzeroupper
        call r13
        mov rdi, rbx
        mov rsi, r15
        mov rdx, r14
        call r13
        mov rdi, r15
        mov rsi, r12
        mov rdx, r14
        call r13
.LBB0_6:
        add rsp, 32
        pop rbx
        pop r12
        pop r13
        pop r14
        pop r15
        vzeroupper
        ret
```
## After (from my machine)
Note there's no `rsp` manipulation; sorry for the different ASM syntax (AT&T rather than Intel).
```x86
swap_m256_slice:
        cmpq %r9, %rdx
        jne .LBB1_6
        testq %rdx, %rdx
        je .LBB1_6
        cmpq $1, %rdx
        jne .LBB1_7
        xorl %r10d, %r10d
        jmp .LBB1_4
.LBB1_7:
        movq %rdx, %r9
        andq $-2, %r9
        movl $32, %eax
        xorl %r10d, %r10d
        .p2align 4, 0x90
.LBB1_8:
        vmovaps -32(%rcx,%rax), %ymm0
        vmovaps -32(%r8,%rax), %ymm1
        vmovaps %ymm1, -32(%rcx,%rax)
        vmovaps %ymm0, -32(%r8,%rax)
        vmovaps (%rcx,%rax), %ymm0
        vmovaps (%r8,%rax), %ymm1
        vmovaps %ymm1, (%rcx,%rax)
        vmovaps %ymm0, (%r8,%rax)
        addq $2, %r10
        addq $64, %rax
        cmpq %r10, %r9
        jne .LBB1_8
.LBB1_4:
        testb $1, %dl
        je .LBB1_6
        shlq $5, %r10
        vmovaps (%rcx,%r10), %ymm0
        vmovaps (%r8,%r10), %ymm1
        vmovaps %ymm1, (%rcx,%r10)
        vmovaps %ymm0, (%r8,%r10)
.LBB1_6:
        vzeroupper
        retq
```
</details>
This does all its copying operations as either the original type or as `MaybeUninit`s, so as far as I know there should be no potential abstract machine issues with reading padding bytes as integers.
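For illustration, the type-preserving copy boils down to something like this sketch (simplified to a single value; the real code works on chunks):

```rust
use std::mem::MaybeUninit;
use std::ptr;

// Swap two values through a MaybeUninit temporary: every copy is done
// at type T (or MaybeUninit<T>), so padding is never read as an integer.
unsafe fn swap_one<T>(x: *mut T, y: *mut T) {
    let mut tmp = MaybeUninit::<T>::uninit();
    ptr::copy_nonoverlapping(x, tmp.as_mut_ptr(), 1);
    ptr::copy_nonoverlapping(y, x, 1);
    ptr::copy_nonoverlapping(tmp.as_ptr(), y, 1);
}

fn main() {
    let (mut a, mut b) = (1u32, 2u32);
    unsafe { swap_one(&mut a, &mut b) };
    assert_eq!((a, b), (2, 1));
}
```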
<details>
<summary>Perf is essentially unchanged</summary>
Though perhaps with more target features enabled this would help more, if it could pick bigger chunks.
## Before
```
running 10 tests
test slice::swap_with_slice_4x_usize_30 ... bench: 894 ns/iter (+/- 11)
test slice::swap_with_slice_4x_usize_3000 ... bench: 99,476 ns/iter (+/- 2,784)
test slice::swap_with_slice_5x_usize_30 ... bench: 1,257 ns/iter (+/- 7)
test slice::swap_with_slice_5x_usize_3000 ... bench: 139,922 ns/iter (+/- 959)
test slice::swap_with_slice_rgb_30 ... bench: 328 ns/iter (+/- 27)
test slice::swap_with_slice_rgb_3000 ... bench: 16,215 ns/iter (+/- 176)
test slice::swap_with_slice_u8_30 ... bench: 312 ns/iter (+/- 9)
test slice::swap_with_slice_u8_3000 ... bench: 5,401 ns/iter (+/- 123)
test slice::swap_with_slice_usize_30 ... bench: 368 ns/iter (+/- 3)
test slice::swap_with_slice_usize_3000 ... bench: 28,472 ns/iter (+/- 3,913)
```
## After
```
running 10 tests
test slice::swap_with_slice_4x_usize_30 ... bench: 868 ns/iter (+/- 36)
test slice::swap_with_slice_4x_usize_3000 ... bench: 99,642 ns/iter (+/- 1,507)
test slice::swap_with_slice_5x_usize_30 ... bench: 1,194 ns/iter (+/- 11)
test slice::swap_with_slice_5x_usize_3000 ... bench: 139,761 ns/iter (+/- 5,018)
test slice::swap_with_slice_rgb_30 ... bench: 324 ns/iter (+/- 6)
test slice::swap_with_slice_rgb_3000 ... bench: 15,962 ns/iter (+/- 287)
test slice::swap_with_slice_u8_30 ... bench: 281 ns/iter (+/- 5)
test slice::swap_with_slice_u8_3000 ... bench: 5,324 ns/iter (+/- 40)
test slice::swap_with_slice_usize_30 ... bench: 275 ns/iter (+/- 5)
test slice::swap_with_slice_usize_3000 ... bench: 28,277 ns/iter (+/- 277)
```
</details>
remove feature gate in control_flow examples
Stabilization was done in https://github.com/rust-lang/rust/pull/91091, but the two examples weren't updated accordingly.
Probably too late to put it into stable, but it should be in the next release :)
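For reference, a minimal sketch of the now-stable usage (assuming the examples in question are the `is_break`/`is_continue` ones):

```rust
use std::ops::ControlFlow;

fn main() {
    let flow: ControlFlow<&str> = ControlFlow::Break("stop here");
    // No feature gate needed anymore:
    assert!(flow.is_break());
    assert!(!flow.is_continue());
}
```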
Fix a possible layout miscalculation in `alloc::RawVec`
A layout miscalculation could happen in `RawVec` when it is used with a type whose size isn't a multiple of its alignment. I don't know if such a type can exist in Rust, but the Layout API provides ways to manipulate such layouts. In any case, it is better to calculate the memory size in a consistent way.
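A sketch of the kind of layout the Layout API can describe even though no Rust type has it, and why the size arithmetic must stay consistent:

```rust
use std::alloc::Layout;

fn main() {
    // Size 5 with alignment 4: impossible for a Rust type, but
    // expressible through the Layout API.
    let elem = Layout::from_size_align(5, 4).unwrap();
    // Array elements must be padded out to the alignment, so the real
    // per-element stride is 8; computing the buffer size as
    // `n * elem.size()` (n * 5) would disagree with the allocation.
    let padded = elem.pad_to_align();
    assert_eq!(padded.size(), 8);
    assert_eq!(3 * padded.size(), 24);
}
```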
Some improvements to the async docs
The goal here is to make the docs overall a little more comprehensive and to add more links between related items.
One thing that's not working yet is the links to the keywords. Somehow I couldn't get them to work.
r? `@GuillaumeGomez` do you know how I could get the keyword links to work?