Make `std::thread::available_concurrency` support process-limited number of CPUs
Use `libc::sched_getaffinity` and count the number of CPUs in the returned mask. This handles cases where the process doesn't have access to all CPUs, such as when limited via `taskset` or similar.
This also covers cgroup cpusets.
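For illustration, a simplified sketch of the Linux path (the real implementation also handles errors and other platforms; the function name is just a placeholder and the `libc` crate is assumed):
```rust
use std::mem;
use std::num::NonZeroUsize;

// Simplified sketch: ask the kernel for this process's CPU affinity mask and
// count the set bits, instead of reporting the total number of online CPUs.
fn affinity_based_concurrency() -> Option<NonZeroUsize> {
    unsafe {
        let mut set: libc::cpu_set_t = mem::zeroed();
        if libc::sched_getaffinity(0, mem::size_of::<libc::cpu_set_t>(), &mut set) == 0 {
            // CPU_COUNT reflects restrictions from taskset, cgroup cpusets, etc.
            return NonZeroUsize::new(libc::CPU_COUNT(&set) as usize);
        }
        // On error, fall back to the existing logic (e.g. sysconf).
        None
    }
}
```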
Add feature gates for experimental asm features
This PR splits off parts of `asm!` into separate features because they are not ready for stabilization.
Specifically this adds:
- `asm_const` for `const` operands (see the sketch after this list).
- `asm_sym` for `sym` operands.
- `asm_experimental_arch` for architectures other than x86, x86_64, arm, aarch64 and riscv.
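For illustration, a minimal nightly snippet that now requires the extra gate for a `const` operand (x86_64 assumed; the `std::arch::asm` import path reflects current Rust rather than the state at the time of this PR):
```rust
#![feature(asm_const)]

use std::arch::asm;

fn const_operand() -> u64 {
    let x: u64;
    unsafe {
        // The `const 42` operand is substituted into the template at compile
        // time; using it now requires the separate `asm_const` gate.
        asm!("mov {0}, {1}", out(reg) x, const 42);
    }
    x
}
```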
r? `@nagisa`
Append .0 to unsuffixed float if it would otherwise become int token
Previously the unsuffixed f32/f64 constructors of `proc_macro::Literal` would create literal tokens that are not floats at all:
```rust
Literal::f32_unsuffixed(10.0) // 10
Literal::f32_suffixed(10.0) // 10f32
Literal::f64_unsuffixed(10.0) // 10
Literal::f64_suffixed(10.0) // 10f64
```
Notice that the `10` tokens are actually integer tokens if you were to reparse them, not float tokens.
This diff updates `Literal::f32_unsuffixed` and `Literal::f64_unsuffixed` to produce tokens that unambiguously parse as floats. This matches the longstanding behavior of the proc-macro2 crate's implementation of these APIs, dating back at least 3.5 years, so the change is likely unobjectionable.
```rust
Literal::f32_unsuffixed(10.0) // 10.0
Literal::f32_suffixed(10.0) // 10f32
Literal::f64_unsuffixed(10.0) // 10.0
Literal::f64_suffixed(10.0) // 10f64
```
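The gist of the fix, as a simplified sketch (not the exact `proc_macro` internals):
```rust
// Simplified sketch: format the value, and if nothing in the result marks it
// as a float, append ".0" so reparsing yields a float token rather than an
// integer token. (`Display` for f32/f64 never uses exponent notation.)
fn unsuffixed_float_repr(n: f64) -> String {
    let mut repr = n.to_string();
    if !repr.contains('.') {
        repr.push_str(".0");
    }
    repr
}

#[test]
fn appends_dot_zero() {
    assert_eq!(unsuffixed_float_repr(10.0), "10.0");
    assert_eq!(unsuffixed_float_repr(1.5), "1.5");
}
```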
Fixes https://github.com/dtolnay/syn/issues/1085.
Re-export some iterators from `core` in `std`
These iterators appear to have been overlooked when re-exporting from `core` into `std` (through `alloc`).
These are stable:
`core::slice::{SplitInclusive, SplitInclusiveMut}`
This one is still unstable:
`core::slice::EscapeAscii` (cc #77174)
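Illustrative: once re-exported, code can name the stable iterator type through `std::slice` directly, e.g.:
```rust
// With the re-export in place, `SplitInclusive` can be named via `std::slice`
// rather than only via `core::slice`.
fn first_line(data: &[u8]) -> Option<&[u8]> {
    let mut iter: std::slice::SplitInclusive<'_, u8, _> = data.split_inclusive(|&b| b == b'\n');
    iter.next()
}
```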
Implement `RefUnwindSafe` for `Rc<T>`
This PR implements `RefUnwindSafe` for `Rc<T>`, where `T: RefUnwindSafe`.
This impl was omitted by an apparent oversight. `Rc<T>` already implements `UnwindSafe`. `Arc<T>` implements both `UnwindSafe` and `RefUnwindSafe`. There is no reason why an `&Rc<T>` is any less unwind safe than an `Rc<T>` or an `&Arc<T>`, so this should be safe to add.
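For example, a closure that captures an `&Rc<T>` becomes `UnwindSafe` with this impl, so it can be passed to `catch_unwind` without an `AssertUnwindSafe` wrapper (illustrative):
```rust
use std::panic::catch_unwind;
use std::rc::Rc;

fn main() {
    let rc = Rc::new(42);
    // The closure captures `&Rc<i32>`; `&T: UnwindSafe` requires
    // `T: RefUnwindSafe`, which is exactly what this PR adds for `Rc<T>`.
    let result = catch_unwind(|| *rc + 1);
    assert_eq!(result.unwrap(), 43);
}
```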
Resolves #45924.
Replace `std::os::raw::c_ssize_t` with `std::os::raw::c_ptrdiff_t`
The discussions in #88345 brought up that `ssize_t` is not actually the signed index type defined in stddef.h; that's `ptrdiff_t`. It seems pretty clear that the use of `ssize_t` here was a mistake on my part, and that if we're going to bother having an isize-alike for FFI in `std::os::raw`, it should be `ptrdiff_t` and not `ssize_t`.
Anyway, both this and `c_size_t` are dubious in the face of the discussion in https://internals.rust-lang.org/t/pre-rfc-usize-is-not-size-t/15369, and any RFC/project-group/etc that handles those issues there should contend with these types in some manner, but that doesn't mean we shouldn't fix something wrong like this, even if it is unstable.
All that said, `size_t` is *vastly* more common in function signatures than either `ssize_t` or `ptrdiff_t`, so I'm going to update the tracking issue's list of unresolved questions to note that perhaps we only want `c_size_t` — I mostly added the signed version for symmetry, rather than to meet a need. (Given this, I'm also fine with modifying this patch to instead remove `c_ssize_t` without a replacement)
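For illustration, a hypothetical binding whose C signature uses `ptrdiff_t` would map to the new alias (nightly-only, since these aliases are unstable under the `c_size_t` feature):
```rust
#![feature(c_size_t)]

use std::os::raw::{c_ptrdiff_t, c_void};

extern "C" {
    // Hypothetical C declaration:
    //     ptrdiff_t buffer_offset(const void *base, const void *elem);
    // `c_ptrdiff_t` matches C's pointer-difference type, unlike the
    // POSIX-specific `ssize_t`.
    fn buffer_offset(base: *const c_void, elem: *const c_void) -> c_ptrdiff_t;
}
```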
CC `@magicant` (who brought the issue up)
CC `@chorman0773` (who has a significantly firmer grasp on the minutae of the C standard than I do)
r? `@joshtriplett` (original reviewer, active in the discussions around this)
Windows thread-local keyless drop
`#[thread_local]` allows us to maintain a per-thread list of destructors. This also avoids the need to synchronize global data (which is particularly tricky within the TLS callback function).
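A simplified sketch of the idea (not the actual std code, which also arranges for the list to be drained from the TLS callback on thread exit):
```rust
#![feature(thread_local)]

use std::cell::RefCell;

// Because the list itself is `#[thread_local]`, registering and running
// destructors requires no global synchronization.
#[thread_local]
static DESTRUCTORS: RefCell<Vec<(*mut u8, unsafe extern "C" fn(*mut u8))>> =
    RefCell::new(Vec::new());

pub fn register_dtor(t: *mut u8, dtor: unsafe extern "C" fn(*mut u8)) {
    DESTRUCTORS.borrow_mut().push((t, dtor));
}

/// Run the queued destructors on thread exit (called from the TLS callback).
unsafe fn run_dtors() {
    loop {
        // Pop before running so the `RefCell` borrow is released; a destructor
        // may itself register further destructors.
        let next = DESTRUCTORS.borrow_mut().pop();
        match next {
            Some((ptr, dtor)) => dtor(ptr),
            None => break,
        }
    }
}
```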
r? `@alexcrichton`
Add JoinHandle::is_running.
This adds:
```rust
impl<T> JoinHandle<T> {
    /// Checks if the associated thread is still running its main function.
    ///
    /// This might return `false` for a brief moment after the thread's main
    /// function has returned, but before the thread itself has stopped running.
    pub fn is_running(&self) -> bool;
}
```
The usual way to check if a background thread is still running is to set some atomic flag at the end of its main function. We already do that, in the form of dropping an `Arc`, which decrements its reference count. So we might as well expose that information.
This is useful in applications with a main loop (e.g. a game, gui, control system, ..) where you spawn some background task, and check every frame/iteration whether the background task is finished to .join() it in that frame/iteration while keeping the program responsive.
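A sketch of that usage pattern, with hypothetical `render_frame`/`expensive_computation` standing in for application code and `is_running` as proposed here:
```rust
use std::thread;
use std::time::Duration;

fn expensive_computation() -> u64 {
    // Placeholder for a long-running background task.
    thread::sleep(Duration::from_secs(2));
    42
}

fn render_frame() {
    // Placeholder for the per-iteration work that must stay responsive.
}

fn main() {
    let handle = thread::spawn(expensive_computation);

    // Main loop: keep the application responsive while the task runs.
    loop {
        render_frame();
        if !handle.is_running() {
            break;
        }
        thread::sleep(Duration::from_millis(16));
    }

    // The thread's main function has returned, so this join won't block for long.
    let result = handle.join().unwrap();
    println!("background task produced {}", result);
}
```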
Add #[must_use] to remaining std functions (O-Z)
I've run out of compelling reasons to group functions together across crates so I'm just going to go module-by-module. This is half of the remaining items from the `std` crate, from O-Z.
`panicking::take_hook` has a side effect: it unregisters the current panic hook, returning it. I almost ignored it, but the documentation example shows `let _ = panic::take_hook();`, so following suit I went ahead and added a `#[must_use]`.
```rust
std::panicking fn take_hook() -> Box<dyn Fn(&PanicInfo<'_>) + 'static + Sync + Send>;
```
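The return value matters because taking the hook unregisters it; a typical legitimate use restores it afterwards (illustrative):
```rust
use std::panic;

fn main() {
    // Temporarily silence panic output, then restore the previous hook.
    // Discarding the return value here would silently drop the old hook,
    // which is why `#[must_use]` (or an explicit `let _ =`) is warranted.
    let previous = panic::take_hook();
    panic::set_hook(Box::new(|_| {}));
    let result = panic::catch_unwind(|| panic!("boom"));
    panic::set_hook(previous);
    assert!(result.is_err());
}
```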
I added these functions that clippy did not flag:
```rust
std::path::Path fn starts_with<P: AsRef<Path>>(&self, base: P) -> bool;
std::path::Path fn ends_with<P: AsRef<Path>>(&self, child: P) -> bool;
std::path::Path fn with_file_name<S: AsRef<OsStr>>(&self, file_name: S) -> PathBuf;
std::path::Path fn with_extension<S: AsRef<OsStr>>(&self, extension: S) -> PathBuf;
```
Parent issue: #89692
r? `@joshtriplett`
Add #[must_use] to alloc functions that would leak memory
As [requested](https://github.com/rust-lang/rust/pull/89899#issuecomment-955600779) by `@joshtriplett`.
> Please do go ahead and add the ones whose only legitimate use for ignoring the return value is leaking memory. (In a separate PR please.) I think it's sufficiently error-prone to call something like alloc and ignore the result that it's legitimate to require `let _ =` for that.
I added `realloc` myself. Clippy ignored it because of its `mut` argument.
```rust
alloc/src/alloc.rs:123:1 alloc unsafe fn realloc(ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8;
```
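For illustration, the legitimate usage always does something with the returned pointer; discarding it leaks the (possibly moved) allocation:
```rust
use std::alloc::{alloc, dealloc, handle_alloc_error, realloc, Layout};

fn main() {
    unsafe {
        let old_layout = Layout::array::<u8>(16).unwrap();
        let ptr = alloc(old_layout);
        if ptr.is_null() {
            handle_alloc_error(old_layout);
        }

        // `realloc` may move the allocation, so ignoring the returned pointer
        // leaks it (or worse, leaves a dangling pointer in use).
        let new_layout = Layout::array::<u8>(32).unwrap();
        let new_ptr = realloc(ptr, old_layout, new_layout.size());
        if new_ptr.is_null() {
            handle_alloc_error(new_layout);
        }

        dealloc(new_ptr, new_layout);
    }
}
```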
Parent issue: #89692
r? `@joshtriplett`
Add #[must_use] to remaining core functions
I've run out of compelling reasons to group functions together across crates so I'm just going to go module-by-module. This is everything remaining from the `core` crate.
Ignored by clippy for reasons unknown:
```rust
core::alloc::Layout unsafe fn for_value_raw<T: ?Sized>(t: *const T) -> Self;
core::any const fn type_name_of_val<T: ?Sized>(_val: &T) -> &'static str;
```
Ignored by clippy because of `mut`:
```rust
str fn split_at_mut(&mut self, mid: usize) -> (&mut str, &mut str);
```
<del>
Ignored by clippy presumably because a caller might want `f` called for side effects. That seems like a bad usage of `map` to me.
```rust
core::cell::Ref<'b, T> fn map<U: ?Sized, F>(orig: Ref<'b, T>, f: F) -> Ref<'b, T>;
core::cell::Ref<'b, T> fn map_split<U: ?Sized, V: ?Sized, F>(orig: Ref<'b, T>, f: F) -> (Ref<'b, U>, Ref<'b, V>);
```
</del>
Parent issue: #89692
r? `@joshtriplett`
Add #[must_use] to mem/ptr functions
There's a lot of low-level / unsafe stuff here. Are there legit use cases for ignoring any of these return values?
* No regressions in `./x.py test --stage 1 library/std src/tools/clippy`.
* One regression in `./x.py test --stage 1 src/test/ui`. Fixed.
* I am unable to run `./x.py doc` on my machine so I'll need to wait for the CI to verify doctests pass. I eyeballed all the adjacent tests and they all look okay.
Parent issue: #89692
r? `@joshtriplett`
Add #[must_use] to expensive computations
The unifying theme for this commit is weak, admittedly. I put together a list of "expensive" functions when I originally proposed this whole effort, but nobody's cared about that criterion. Still, it's a decent way to bite off a not-too-big chunk of work.
Given the grab bag nature of this commit, the messages I used vary quite a bit. I'm open to wording changes.
For some reason clippy flagged four `BTreeSet` methods but didn't say boo about equivalent ones on `HashSet`. I stared at them for a while but I can't figure out the difference so I added the `HashSet` ones in.
```rust
// Flagged by clippy.
alloc::collections::btree_set::BTreeSet<T> fn difference<'a>(&'a self, other: &'a BTreeSet<T>) -> Difference<'a, T>;
alloc::collections::btree_set::BTreeSet<T> fn symmetric_difference<'a>(&'a self, other: &'a BTreeSet<T>) -> SymmetricDifference<'a, T>;
alloc::collections::btree_set::BTreeSet<T> fn intersection<'a>(&'a self, other: &'a BTreeSet<T>) -> Intersection<'a, T>;
alloc::collections::btree_set::BTreeSet<T> fn union<'a>(&'a self, other: &'a BTreeSet<T>) -> Union<'a, T>;
// Ignored by clippy, but not by me.
std::collections::HashSet<T, S> fn difference<'a>(&'a self, other: &'a HashSet<T, S>) -> Difference<'a, T, S>;
std::collections::HashSet<T, S> fn symmetric_difference<'a>(&'a self, other: &'a HashSet<T, S>) -> SymmetricDifference<'a, T, S>;
std::collections::HashSet<T, S> fn intersection<'a>(&'a self, other: &'a HashSet<T, S>) -> Intersection<'a, T, S>;
std::collections::HashSet<T, S> fn union<'a>(&'a self, other: &'a HashSet<T, S>) -> Union<'a, T, S>;
```
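All of these return lazy iterators, so calling them and dropping the result does no work at all, which is exactly what `#[must_use]` catches (illustrative):
```rust
use std::collections::BTreeSet;

fn main() {
    let a: BTreeSet<i32> = [1, 2, 3].into_iter().collect();
    let b: BTreeSet<i32> = [2, 3, 4].into_iter().collect();

    // Does nothing: the returned `Intersection` iterator is lazy and is
    // dropped immediately. With `#[must_use]`, this now warns.
    a.intersection(&b);

    // Intended usage consumes the iterator.
    let common: BTreeSet<i32> = a.intersection(&b).copied().collect();
    assert_eq!(common.len(), 2);
}
```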
Parent issue: #89692
r? `@joshtriplett`
Stabilize `is_symlink()` for `Metadata` and `Path`
I'm not fully sure about the `since` version; correct me if I'm wrong.
Needs update after stabilization: [cargo-test-support](8063672238/crates/cargo-test-support/src/paths.rs (L202))
Linked issue: #85748
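Usage after stabilization looks like this (illustrative path; note that `fs::metadata` follows symlinks, so the `Metadata` variant needs `symlink_metadata`):
```rust
use std::fs;
use std::path::Path;

fn main() -> std::io::Result<()> {
    // Illustrative path; substitute something that exists on your system.
    let path = Path::new("/tmp/some_link");

    // `Path::is_symlink` queries the path itself without following the link.
    println!("Path::is_symlink: {}", path.is_symlink());

    // `Metadata::is_symlink` only makes sense on metadata obtained without
    // following the link.
    let meta = fs::symlink_metadata(path)?;
    println!("Metadata::is_symlink: {}", meta.is_symlink());
    Ok(())
}
```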
track_caller for slice length assertions
`clone_from_slice` was missing `#[track_caller]`, and its assert did not report a useful location.
These are small generic methods, so hopefully `#[track_caller]` gets inlined into nothingness, but it may be worth running a benchmark on this.
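For reference, the effect of the attribute, shown on an illustrative stand-in rather than the actual library code: the length-mismatch panic is reported at the caller's location instead of inside the helper.
```rust
// Illustrative stand-in for a slice-length assertion. With `#[track_caller]`,
// the panic location points at the call to `copy_into`, not at this assert.
#[track_caller]
fn copy_into<T: Copy>(dst: &mut [T], src: &[T]) {
    assert_eq!(
        dst.len(),
        src.len(),
        "destination and source slices have different lengths"
    );
    dst.copy_from_slice(src);
}

fn main() {
    let mut dst = [0u8; 3];
    // Panics; the reported location is this line in the caller.
    copy_into(&mut dst, &[1, 2]);
}
```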