Optimise align_offset for stride=1 further
The `stride == 1` case can be computed more efficiently as `-p (mod a)`. That then translates to a nice and short sequence of LLVM instructions:
```
%address = ptrtoint i8* %p to i64
%negptr = sub i64 0, %address
%offset = and i64 %negptr, %a_minus_one
```
This produces pretty much ideal codegen when the function is used in isolation.
Typical use of this function will, however, involve using the result to offset a pointer, i.e.
```
%aligned = getelementptr inbounds i8, i8* %p, i64 %offset
```
This still looks very good, but LLVM does not really translate it into what would be considered ideal machine code on any target. For example, this is the codegen we obtain for an unknown alignment:
```
; x86_64
dec rsi
mov rax, rdi
neg rax
and rax, rsi
add rax, rdi
```
In particular, negating a pointer is not something that's going to be optimised for in the design of CISC architectures like x86_64; they are much better at offsetting pointers. And so we'd love to utilize this ability and produce code that's more like this:
```
; x86_64
lea rax, [rsi + rdi - 1]
neg rsi
and rax, rsi
```
To achieve this we need to give LLVM an opportunity to apply the various peephole optimisations it performs during DAG selection. In particular, the `and` instruction appears to be a major inhibitor here.
We cannot, sadly, get rid of this load-bearing operation, but we can
reorder operations such that LLVM has more to work with around this
instruction.
One such ordering is proposed in #75579 and results in LLVM IR that
looks broadly like this:
```
; using add enables `lea` and similar CISCisms
%offset_ptr = add i64 %address, %a_minus_one
%mask = sub i64 0, %a
%masked = and i64 %offset_ptr, %mask
; can be folded with `gepi` that may follow
%offset = sub i64 %masked, %address
```
…and generates the intended x86_64 machine code.
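For reference, here is a minimal Rust sketch (not the actual `align_offset` implementation; the helper names are hypothetical) of the two equivalent orderings discussed above, assuming `a` is a power of two:
```rust
// Both compute the stride == 1 alignment offset for address `p` and alignment `a`.
fn align_offset_old(p: usize, a: usize) -> usize {
    // -p (mod a): negate the address and mask with a - 1.
    p.wrapping_neg() & (a - 1)
}

fn align_offset_new(p: usize, a: usize) -> usize {
    // Round the address up to the next multiple of `a`, then subtract the
    // original address; LLVM can fold the trailing `sub` into a following GEP.
    (p.wrapping_add(a - 1) & a.wrapping_neg()).wrapping_sub(p)
}

fn main() {
    for &p in &[0usize, 1, 7, 8, 9, 1023] {
        for &a in &[1usize, 2, 8, 64] {
            assert_eq!(align_offset_old(p, a), align_offset_new(p, a));
        }
    }
    println!("both orderings agree");
}
```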
One might also wonder how the increased amount of code would impact a
RISC target. Turns out not much:
```
; aarch64 previous    ; aarch64 new
sub x8, x1, #1        add x8, x1, x0
neg x9, x0            sub x8, x8, #1
and x8, x9, x8        neg x9, x1
add x0, x0, x8        and x0, x8, x9
```
(and similarly for ppc, sparc, mips, riscv, etc)
The only target that seems to do worse is… wasm32.
On to actual measurements – the best way to evaluate snippets like these is with llvm-mca. As the AArch64 assembly above would suggest, there isn't any performance difference to be found there: both snippets execute in the same number of cycles for the CPUs I tried. On x86_64, however, we get a throughput improvement of >50%!
Fixes #75579
Properly define va_arg and va_list for aarch64-apple-darwin
From [Apple][]:
> Because of these changes, the type `va_list` is an alias for `char*`,
> and not for the struct type in the generic procedure call standard.
With this change `./x.py test --stage 1 src/test/ui/abi/variadic-ffi` passes.
Fixes #78092
[Apple]: https://developer.apple.com/documentation/xcode/writing_arm64_code_for_apple_platforms
transmute_copy: explain that alignment is handled correctly
The doc comment currently is somewhat misleading because if it actually transmuted `&T` to `&U`, a higher-aligned `U` would be problematic.
`#[deny(unsafe_op_in_unsafe_fn)]` in sys/wasm
This is part of #73904.
This encloses unsafe operations in unsafe fn in `libstd/sys/wasm`.
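As a hedged illustration of the pattern (the function below is made up rather than taken from `sys/wasm`; the lint was still feature-gated when this landed but is recognised by current compilers):
```rust
#![deny(unsafe_op_in_unsafe_fn)]

unsafe fn read_first(p: *const u8) -> u8 {
    // Without this inner block the lint above rejects the raw-pointer
    // dereference, even though we are already inside an `unsafe fn`.
    unsafe { *p }
}

fn main() {
    let byte = 42u8;
    let value = unsafe { read_first(&byte) };
    assert_eq!(value, 42);
}
```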
@rustbot modify labels: F-unsafe-block-in-unsafe-fn
BTreeMap: stop mistaking node::MIN_LEN for a node level constraint
Correcting #77612, which fell into the trap of assuming that node::MIN_LEN is an imposed minimum everywhere, and trying to make it much clearer that it is a minimum offered at the node level.
r? @Mark-Simulacrum
Bump backtrace-rs to enable Mach-O support on iOS.
Related to rust-lang/backtrace-rs#378. Fixes backtraces on iOS that went missing in Rust 1.47.0 after the switch to gimli, because Mach-O support was only enabled on macOS.
replace `#[allow_internal_unstable]` with `#[rustc_allow_const_fn_unstable]` for `const fn`s
`#[allow_internal_unstable]` is currently used to side-step feature gate and stability checks.
While it was originally meant to be used only on macros, its use was expanded to `const fn`s.
This PR adds stricter checks for the usage of `#[allow_internal_unstable]` (only on macros) and introduces the `#[rustc_allow_const_fn_unstable]` attribute for usage on `const fn`s.
This PR does not change any of the functionality associated with the use of `#[allow_internal_unstable]` on macros or the usage of `#[rustc_allow_const_fn_unstable]` (instead of `#[allow_internal_unstable]`) on `const fn`s (see https://github.com/rust-lang/rust/issues/69399#issuecomment-712911540).
Note: The check for `#[rustc_allow_const_fn_unstable]` currently only validates that the attribute is used on a function, because I don't know how I would check if the function is a `const fn` at the place of the check. I therefore opened this as a 'draft pull request'.
Closes rust-lang/rust#69399
r? @oli-obk
Throw core::panic!("message") as &str instead of String.
This makes `core::panic!("message")` consistent with `std::panic!("message")`, which throws a `&str` and not a `String`.
This also makes any other panics from `core::panicking::panic` result in a `&str` rather than a `String`, which includes compiler-generated panics such as the panics generated for `mem::zeroed()`.
---
Demonstration:
```rust
use std::panic;
use std::any::Any;
fn main() {
    panic::set_hook(Box::new(|panic_info| check(panic_info.payload())));
    check(&*panic::catch_unwind(|| core::panic!("core")).unwrap_err());
    check(&*panic::catch_unwind(|| std::panic!("std")).unwrap_err());
}

fn check(msg: &(dyn Any + Send)) {
    if let Some(s) = msg.downcast_ref::<String>() {
        println!("Got a String: {:?}", s);
    } else if let Some(s) = msg.downcast_ref::<&str>() {
        println!("Got a &str: {:?}", s);
    }
}
```
Before:
```
Got a String: "core"
Got a String: "core"
Got a &str: "std"
Got a &str: "std"
```
After:
```
Got a &str: "core"
Got a &str: "core"
Got a &str: "std"
Got a &str: "std"
```
Fix const core::panic!(non_literal_str).
Invocations of `core::panic!(x)` where `x` is not a string literal expand to `panic!("{}", x)`, which is not understood by the const panic logic right now. This adds `panic_str` as a lang item, and modifies the const eval implementation to hook into this item as well.
This fixes the issue mentioned here: https://github.com/rust-lang/rust/issues/51999#issuecomment-687604248
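As a rough illustration (a hypothetical example, assuming a toolchain that allows panics in const contexts; they were still gated behind `const_panic` when this landed), this is the kind of code the fix makes work:
```rust
const fn assert_positive(n: i32) -> i32 {
    if n <= 0 {
        let msg = "expected a positive number";
        // `core::panic!(msg)` expanded to this form, which const evaluation
        // now understands via the `panic_str` lang item.
        panic!("{}", msg);
    }
    n
}

const OK: i32 = assert_positive(7);
// const BAD: i32 = assert_positive(-1); // would fail at compile time with the message above

fn main() {
    assert_eq!(OK, 7);
}
```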
r? `@RalfJung`
`@rustbot` modify labels: +A-const-eval
revise Hermit's mutex interface to support the behaviour of StaticMutex
rust-lang/rust#77147 simplifies things by splitting this Mutex type into two types matching the two use cases: StaticMutex and MovableMutex. To support the new behavior of StaticMutex, we move part of the mutex implementation into libstd.
The interface to the OS changed. Consequently, I removed a few functions, which are no longer needed.
change the order of type arguments on ControlFlow
This allows `ControlFlow<BreakType>`, which is much more ergonomic for common iterator combinator use cases.
Addresses one component of #75744
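A small usage sketch (illustrative only, assuming the reordered definition with a defaulted continue type, i.e. `ControlFlow<B, C = ()>`):
```rust
use std::ops::ControlFlow;

// The break-only case no longer needs to spell out the continue type.
fn first_even(items: &[i32]) -> Option<i32> {
    let flow = items.iter().try_for_each(|&x| {
        if x % 2 == 0 {
            ControlFlow::Break(x) // ControlFlow<i32>
        } else {
            ControlFlow::Continue(())
        }
    });
    match flow {
        ControlFlow::Break(x) => Some(x),
        ControlFlow::Continue(()) => None,
    }
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4, 5]), Some(4));
    assert_eq!(first_even(&[1, 3, 5]), None);
}
```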
Check for exhaustion in RangeInclusive::contains and slicing
When a range has finished iteration, `is_empty` returns true, so it
should also be the case that `contains` returns false.
Fixes #77941.
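A minimal illustration of the invariant (hypothetical example, not a test from the PR):
```rust
fn main() {
    let mut range = 1..=3;
    range.by_ref().for_each(drop); // exhaust the iterator
    assert!(range.is_empty());
    assert!(!range.contains(&2)); // an exhausted range now contains nothing
}
```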
add `insert` to `Option`
This removes a cause of `unwrap` and code complexity.
This allows replacing
```
option_value = Some(build());
option_value.as_mut().unwrap()
```
with
```
option_value.insert(build())
```
It's also useful in contexts not requiring the mutability of the reference.
Here's a typical cache example:
```
let checked_cache = cache.as_ref().filter(|e| e.is_valid());
let content = match checked_cache {
    Some(e) => &e.content,
    None => {
        cache = Some(compute_cache_entry());
        // unwrap is OK because we just filled the option
        &cache.as_ref().unwrap().content
    }
};
```
It can be changed into
```
let checked_cache = cache.as_ref().filter(|e| e.is_valid());
let content = match checked_cache {
    Some(e) => &e.content,
    None => &cache.insert(compute_cache_entry()).content,
};
```
*(edited: I removed `insert_with`)*
Doc formatting consistency between slice sort and sort_unstable, and big O notation consistency
Updated documentation for slice sorting methods to be consistent between stable and unstable versions, which just ended up being minor formatting differences.
I also went through and updated any doc comments with big O notation to be consistent with #74010 by italicizing them rather than having them in a code block.
Fixing escaping to ensure generation of well-formed JSON.
Doc tests' JSON names have a filename in them. When JSON test output is requested on Windows, this currently produces invalid JSON.
Tracking issue for json test output: #49359
Implement TryFrom between NonZero types.
This will instantly be stable, as trait implementations for stable types and traits can not be `#[unstable]`.
Closes #77258.
@rustbot modify labels: +T-libs
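A quick usage sketch of the new impls (illustrative, not taken from the PR):
```rust
use std::convert::TryFrom;
use std::num::{NonZeroU16, NonZeroU8};

fn main() {
    let fits = NonZeroU16::new(42).unwrap();
    let too_big = NonZeroU16::new(300).unwrap();
    assert_eq!(NonZeroU8::try_from(fits).unwrap().get(), 42);
    assert!(NonZeroU8::try_from(too_big).is_err()); // 300 does not fit in a u8
}
```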
Improve wording of "cannot multiply" type error
For example, if you had this code:
```
fn foo(x: i32, y: f32) -> f32 {
    x * y
}
```
You would get this error:
```
error[E0277]: cannot multiply `f32` to `i32`
 --> src/lib.rs:2:7
  |
2 |     x * y
  |       ^ no implementation for `i32 * f32`
  |
  = help: the trait `Mul<f32>` is not implemented for `i32`
```
However, that's not usually how people describe multiplication. People
usually describe multiplication like how the division error words it:
```
error[E0277]: cannot divide `i32` by `f32`
 --> src/lib.rs:2:7
  |
2 |     x / y
  |       ^ no implementation for `i32 / f32`
  |
  = help: the trait `Div<f32>` is not implemented for `i32`
```
So that's what this change does. It changes this:
```
error[E0277]: cannot multiply `f32` to `i32`
 --> src/lib.rs:2:7
  |
2 |     x * y
  |       ^ no implementation for `i32 * f32`
  |
  = help: the trait `Mul<f32>` is not implemented for `i32`
```
To this:
```
error[E0277]: cannot multiply `i32` by `f32`
 --> src/lib.rs:2:7
  |
2 |     x * y
  |       ^ no implementation for `i32 * f32`
  |
  = help: the trait `Mul<f32>` is not implemented for `i32`
```
Add Pin::static_ref, static_mut.
This adds `Pin::static_ref` and `Pin::static_mut`, which convert a static reference to a pinned static reference.
Static references are effectively already pinned, as what they refer to has to live forever and can never be moved.
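A usage sketch (hedged: these constructors were unstable behind `pin_static_ref` when this PR landed):
```rust
use std::pin::Pin;

static LIMIT: u8 = 42;

fn main() {
    // A `&'static T` can be pinned safely: the referent lives forever and never moves.
    let pinned: Pin<&'static u8> = Pin::static_ref(&LIMIT);
    assert_eq!(*pinned, 42);
}
```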
---
Context: I want to update the `sys` and `sys_common` mutexes/rwlocks/condvars to use `Pin<&self>` in their functions, instead of only warning in the unsafety comments that they may not be moved. That should make them a little bit less dangerous to use. Putting such an object in a `static` (e.g. through `sys_common::StaticMutex`) fulfills the requirements about never moving it, but right now there's no safe way to get a `Pin<&T>` to a `static`. This solves that.
Wrapping intrinsics doc links update.
The links in the wrapping intrinsics docs now refer to the `wrapping_*` functions, not the `checked_*` functions.
const keyword: brief paragraph on 'const fn'
`const fn` were mentioned in the title, but called "deterministic functions" which is not their main property (though at least currently it is a consequence of being const-evaluable). This adds a brief paragraph discussing them, also in the hopes of clarifying that they do *not* have any effect on run-time uses.
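For illustration, a tiny example of that point (not taken from the documentation being edited): a `const fn` can be evaluated at compile time, but a run-time call behaves like any ordinary function.
```rust
const fn square(n: i32) -> i32 {
    n * n
}

// Evaluated at compile time because it is used in a const context.
const AT_COMPILE_TIME: i32 = square(4);

fn main() {
    // The very same function called at run time: no special behaviour.
    let at_run_time = square(std::env::args().count() as i32);
    assert_eq!(AT_COMPILE_TIME, 16);
    println!("run-time result: {}", at_run_time);
}
```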
Assert that pthread mutex initialization succeeded
If pthread mutex initialization fails, the failure will go unnoticed unless
debug assertions are enabled. Any subsequent use of the mutex will also silently
fail, since return values from lock & unlock operations are similarly checked
only through debug assertions.
In some implementations the mutex initialization requires a memory
allocation and so it does fail in practice.
Assert that initialization succeeds to ensure that the mutex guarantees
mutual exclusion.
Fixes #34966.
Move `slice::check_range` to `RangeBounds`
Since this method doesn't take a slice anymore (#76662), it makes more sense to define it on `RangeBounds`.
Questions:
- Should the new method be `assert_len` or `assert_length`?
Add std::thread::available_concurrency
This PR adds a counterpart to [C++'s `std::thread::hardware_concurrency`](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency) to Rust, tracking issue https://github.com/rust-lang/rust/issues/74479.
cc/ `@rust-lang/libs`
## Motivation
Being able to know how many hardware threads a platform supports is a core part of building multi-threaded code. In C++11 this has become available through the [`std::thread::hardware_concurrency`](https://en.cppreference.com/w/cpp/thread/thread/hardware_concurrency) API. Currently in Rust most of the ecosystem depends on the [`num_cpus` crate](https://docs.rs/num_cpus/1.13.0/num_cpus/) ([no. 35 in top 500 crates](https://docs.google.com/spreadsheets/d/1wwahRMHG3buvnfHjmPQFU4Kyfq15oTwbfsuZpwHUKc4/edit#gid=1253069234)) to provide this functionality. This PR proposes an API to provide access to the number of hardware threads available on a given platform.
__edit (2020-07-24):__ The purpose of this PR is to provide a hint for how many threads to spawn to saturate the processor. There's value in introducing APIs for NUMA and Windows processor groups, but those are intentionally out of scope for this PR. See: https://github.com/rust-lang/rust/pull/74480#issuecomment-662116186.
## Naming
Discussing the naming of the API on Zulip surfaced two options:
- `std::thread::hardware_concurrency`
- `std::thread::hardware_threads`
Both options seemed acceptable, but overall people seem to gravitate the most towards `hardware_threads`. Additionally `@jonas-schievink` pointed out that the "hardware threads" terminology is well-established and is used in, among others, the [RISC-V specification](https://riscv.org/specifications/isa-spec-pdf/) (page 20):
> A component is termed a core if it contains an independent instruction fetch unit. A RISC-V-compatible core might support multiple RISC-V-compatible __hardware threads__, or harts, through multithreading.
It's also worth noting that [the original paper introducing C++'s `std::thread` submodule](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2320.html) unfortunately doesn't feature any discussion on the naming of `hardware_concurrency`, so we can't use that to help inform our decision here.
## Return type
An important consideration `@joshtriplett` brought up is that we don't want to default to `1` for platforms where the number of available threads cannot be retrieved. Instead we want to inform the users of the fact that we don't know and allow them to handle that case. Which is why this PR uses `Option<NonZeroUsize>` as its return type, where `None` is returned on platforms where we don't know the number of hardware threads available.
The reasoning for `NonZeroUsize` vs `usize` is that if the number of threads for a platform are known, they'll always be at least 1. As evidenced by the example the `NonZero*` family of APIs may currently not be the most ergonomic to use, but improving the ergonomics of them is something that I think we can address separately.
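As a usage sketch: the `available_concurrency` API proposed here was later reworked and stabilized as `std::thread::available_parallelism`, which returns `io::Result<NonZeroUsize>` rather than `Option<NonZeroUsize>`, but the call site looks much the same either way:
```rust
use std::num::NonZeroUsize;
use std::thread;

fn main() {
    // The platform may be unable to report a value; handle that case explicitly
    // rather than silently assuming 1.
    let workers = thread::available_parallelism()
        .map(NonZeroUsize::get)
        .unwrap_or(1);
    println!("spawning up to {} worker threads", workers);
}
```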
## Implementation
`@Mark-Simulacrum` pointed out that most of the code we wanted to expose here was already available under `libtest`. So this PR mostly moves the internal code of libtest into a public API.
BTreeMap: refactor Entry out of map.rs into its own file
btree/map.rs is approaching the 3000 line mark; splitting out the entry
code buys about 500 lines of headroom.
I've created this PR because the changes I've made in #77438 will push `map.rs` over the 3000 line limit and cause tidy to complain.
I picked `Entry` to factor out because it feels less tightly coupled to the rest of `BTreeMap` than the various iterator implementations.
Related: #60302
Use posix_spawn() on unix if program is a path
Previously `Command::spawn` would fall back to the non-posix_spawn based
implementation if the `PATH` environment variable was possibly changed.
On systems with a modern (g)libc `posix_spawn()` can be significantly
faster. If the program is a path itself, the `PATH` environment variable is
not used for the lookup and it should be safe to use the
`posix_spawnp()` method. [1]
We found this, because we have a cli application that effectively runs a
lot of subprocesses. It would sometimes noticeably hang while printing
output. Profiling showed that the process was spending the majority of
time in the kernel's `copy_page_range` function while spawning
subprocesses. During this time the process is completely blocked from
running, explaining why users were reporting the cli app hanging.
Through this we discovered that `std::process::Command` has a fast and
slow path for process execution. The fast path is backed by
`posix_spawnp()` and the slow path by fork/exec syscalls being called
explicitly. Using fork for process creation is supposed to be fast, but
it slows down as your process uses more memory. It's not because the
kernel copies the actual memory from the parent, but it does need to
copy the references to it (see `copy_page_range` above!). We ended up
using the slow path, because the command spawn implementation falls
back to the slow path if it suspects the PATH environment variable was
changed.
Here is a smallish program demonstrating the slowdown before this code
change:
```
use std::process::Command;
use std::time::Instant;
fn main() {
    let mut args = std::env::args().skip(1);
    if let Some(size) = args.next() {
        // Allocate some memory
        let _xs: Vec<_> = std::iter::repeat(0)
            .take(size.parse().expect("valid number"))
            .collect();
        let mut command = Command::new("/bin/sh");
        command
            .arg("-c")
            .arg("echo hello");
        if args.next().is_some() {
            println!("Overriding PATH");
            command.env("PATH", std::env::var("PATH").expect("PATH env var"));
        }
        let now = Instant::now();
        let child = command
            .spawn()
            .expect("failed to execute process");
        println!("Spawn took: {:?}", now.elapsed());
        let output = child.wait_with_output().expect("failed to wait on process");
        println!("Output: {:?}", output);
    } else {
        eprintln!("Usage: prog [size]");
        std::process::exit(1);
    }
    ()
}
```
Running it and passing different amounts of elements to use to allocate
memory shows that the time taken for `spawn()` can differ quite
significantly. In the latter case the `posix_spawnp()` implementation is 30x
faster:
```
$ cargo run --release 10000000
...
Spawn took: 324.275µs
hello
$ cargo run --release 10000000 changepath
...
Overriding PATH
Spawn took: 2.346809ms
hello
$ cargo run --release 100000000
...
Spawn took: 387.842µs
hello
$ cargo run --release 100000000 changepath
...
Overriding PATH
Spawn took: 13.434677ms
hello
```
[1]: 5f72f9800b/posix/execvpe.c (L81)
BTreeMap: improve gdb introspection of BTreeMap with ZST keys or values
I accidentally pushed an earlier revision in #77788: it changes the index of tuples for BTreeSet from `"[{}]".format(i)` to `"key{}".format(i)`. Which doesn't seem to make the slightest difference on my Linux box nor on CI. In fact, gdb doesn't make any distinction between `"key{}"` and `"val{}"` for a BTreeMap either, leading to confusing output if you test more. But it is easy to improve.
r? @Mark-Simulacrum
liballoc: VecDeque: Add binary search functions
I am submitting rust-lang/rfcs#2997 as a PR as suggested by @scottmcm
I haven't yet created a tracking issue - if there's favorable feedback I'll create one and update the issue links in the unstable attribs.
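A quick usage sketch (illustrative; written against the signatures that eventually stabilized, which mirror the slice methods):
```rust
use std::collections::VecDeque;

fn main() {
    // Like the slice methods, the deque must already be sorted.
    let deque = VecDeque::from(vec![1, 3, 5, 7, 9]);
    assert_eq!(deque.binary_search(&5), Ok(2));
    assert_eq!(deque.binary_search(&6), Err(3)); // Err holds the insertion point
}
```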
Remove shrink_to_fit from default ToString::to_string implementation.
As suggested by `@scottmcm` on Zulip. `shrink_to_fit()` seems like the wrong thing to do here in most use cases of `to_string()`. Would be interesting to see if it makes any difference in a timer run.
r? `@joshtriplett`
Deny broken intra-doc links in linkchecker
Since rustdoc isn't warning about these links, check for them manually.
This also fixes the broken links that popped up from the lint.