Using a pure spin lock for a critical section in a preemptible thread
is always wrong, however short the critical section may be. The thread
might be preempted while holding the lock, which leaves every other
thread busily hammering the core for its whole time slice. Moreover, if
threads have different priorities, this can lead to priority inversion
and deadlock. More generally, a spin lock is no more efficient than a
well-written mutex, which typically does several spin iterations at the
start anyway.
The advice about UP vs SMP is also irrelevant in the context of
preemptible threads.
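For reference, a minimal sketch of the kind of pure spin lock argued against above; note how a preempted holder leaves every waiter burning its entire time slice in the `lock` loop (illustrative code, not taken from any particular implementation):
```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A "pure" spin lock: no yielding, no blocking, no fallback of any kind.
struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    const fn new() -> SpinLock {
        SpinLock { locked: AtomicBool::new(false) }
    }

    fn lock(&self) {
        // If the holder is preempted, this loop burns the whole time slice
        // of every waiting thread.
        while self
            .locked
            .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {}
    }

    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}
```
A well-written mutex starts with a similarly short spin but then blocks the thread, which avoids both the wasted time slices and the priority-inversion hazard.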
Atomic as_mut_ptr
I encountered the following pattern a few times: In Rust we use some atomic type like `AtomicI32`, and an FFI interface exposes this as `*mut i32` (or some similar `libc` type).
It was not obvious to me whether just transmuting a pointer to the atomic was acceptable, or whether the cast should go through an `UnsafeCell`. See https://github.com/rust-lang/rust/issues/66136#issuecomment-557802477
Transmuting the pointer directly:
```rust
let atomic = AtomicI32::new(1);
let ptr = &atomic as *const AtomicI32 as *mut i32;
unsafe {
    ffi(ptr);
}
```
A dance with `UnsafeCell`:
```rust
let atomic = AtomicI32::new(1);
unsafe {
    let ptr = (&*(&atomic as *const AtomicI32 as *const UnsafeCell<i32>)).get();
    ffi(ptr);
}
```
Maybe in the end both ways are valid. But why not have the standard library expose a direct method to get such a pointer?
An `as_mut_ptr` method on atomics can itself be safe, because only the use of the resulting pointer is where things can get unsafe. I documented its intended use for FFI, and noted that "Doing non-atomic reads and writes on the resulting integer can be a data race."
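As a rough illustration, such a method could amount to little more than the cast shown above; it is written here as a free function, since the name, signature, and location are only hypothetical:
```rust
use std::sync::atomic::AtomicI32;

// Hypothetical sketch of the proposed accessor. Calling it is safe; only
// *using* the returned pointer (e.g. handing it across FFI) can be unsafe.
fn as_mut_ptr(atomic: &AtomicI32) -> *mut i32 {
    atomic as *const AtomicI32 as *mut i32
}
```
With that, the FFI call above becomes `unsafe { ffi(as_mut_ptr(&atomic)) }`.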
The standard library could make use of this method in a few places in the WASM module.
cc @RalfJung as you answered my original question.
Split non-CAS atomic support off into target_has_atomic_load_store
This PR implements my proposed changes in https://github.com/rust-lang/rust/issues/32976#issuecomment-518542029 by removing `target_has_atomic = "cas"` and splitting `target_has_atomic` into two separate `cfg`s:
* `target_has_atomic = 8/16/32/64/128`: This indicates the largest width that the target can atomically CAS (which implies support for all atomic operations).
* `target_has_atomic_load_store = 8/16/32/64/128`: This indicates the largest width that the target can atomically load or store (but may not be able to CAS).
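As a rough illustration of the split (assuming both `cfg`s are available where the code in question is compiled), library code could distinguish the two capabilities like this:
```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Needs full 64-bit atomic support, including CAS.
#[cfg(target_has_atomic = "64")]
fn increment(counter: &AtomicU64) -> u64 {
    counter.fetch_add(1, Ordering::SeqCst)
}

// Only needs 64-bit atomic loads/stores; works even if the target cannot CAS.
#[cfg(target_has_atomic_load_store = "64")]
fn read(counter: &AtomicU64) -> u64 {
    counter.load(Ordering::SeqCst)
}
```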
cc #32976
r? @alexcrichton
PR: documentation spin loop hint
The documentation for 'spin loop hint' explains that yield is better if the lock holder is running on the same CPU. I suggest that 'CPU or core' would be clearer.
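For context, the distinction the documentation draws looks roughly like this (an illustrative sketch, not the wording of the docs themselves):
```rust
use std::sync::atomic::{spin_loop_hint, AtomicBool, Ordering};
use std::thread;

// Spin briefly with the hint (useful when the lock holder is running on
// another CPU or core), then fall back to yielding (better when the holder
// may be sharing our CPU or core).
fn wait_until_unlocked(locked: &AtomicBool) {
    let mut spins = 0;
    while locked.load(Ordering::Acquire) {
        if spins < 100 {
            spin_loop_hint();
            spins += 1;
        } else {
            thread::yield_now();
        }
    }
}
```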
This commit stabilizes the `Atomic{I,U}{8,16,32,64}` APIs in the
`std::sync::atomic` and `core::sync::atomic` modules. Proposed in #56753
and tracked in #32976, this feature has been unstable for quite some time
and is hopefully ready to go over the finish line now!
The API is being stabilized as-is. The API of `AtomicU8` and friends
mirrors that of `AtomicUsize`. The changes made here are:
* A portability documentation section has been added to describe the
current state of affairs.
* Emulation of smaller-size atomics with larger-size atomics has been
documented.
* As an added bonus, `ATOMIC_*_INIT` is now scheduled for deprecation
across the board in 1.34.0 now that `const` functions can be invoked
in statics.
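For example, statics can now be initialized through the `const` constructors directly, with the smaller types mirroring the `AtomicUsize` API (a small illustrative snippet):
```rust
use std::sync::atomic::{AtomicU8, AtomicUsize, Ordering};

// No ATOMIC_USIZE_INIT needed: the `const fn` constructors work in statics.
static COUNTER: AtomicUsize = AtomicUsize::new(0);
static FLAGS: AtomicU8 = AtomicU8::new(0);

fn record_event() -> usize {
    FLAGS.fetch_or(0b0000_0001, Ordering::SeqCst);
    COUNTER.fetch_add(1, Ordering::SeqCst)
}
```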
Note that the 128-bit atomic types are omitted from this stabilization
explicitly. They have far less platform support than the other atomic
types, and will likely require further discussion about their best
location.
Closes #32976
Closes #56753
Documentation for impl From for AtomicBool and other Atomic types
As part of issue #51430 (cc @skade).
The impl is very simple, so not sure if we need to go into any details.
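A small usage example, just to show the shape of the impl (it simply wraps the value, equivalent to calling `new`):
```rust
use std::sync::atomic::{AtomicBool, AtomicUsize};

fn main() {
    let flag = AtomicBool::from(true);   // same as AtomicBool::new(true)
    let count = AtomicUsize::from(42);   // same as AtomicUsize::new(42)
    assert!(flag.into_inner());
    assert_eq!(count.into_inner(), 42);
}
```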
* Update bootstrap compiler
* Update version to 1.33.0
* Remove some `#[cfg(stage0)]` annotations
Actually updating the version number is blocked on updating Cargo
This commit is an initial start at implementing the standard library for
wasm32-unknown-unknown with the experimental `atomics` feature enabled. None of
these changes will be visible to users of the wasm32-unknown-unknown target
because they all require recompiling the standard library. The hope with this is
that we can get this support into the standard library and start iterating on it
in-tree to enable experimentation.
Currently there are a few components in this PR:
* Atomic fences are disabled on wasm as there's no corresponding atomic op and
it's not clear yet what the convention should be, but this will change in the
future!
* Implementations of `Mutex`, `Condvar`, and `RwLock` were all added based on
the atomic intrinsics that wasm has (a rough sketch follows this list).
* The `ReentrantMutex` and thread-local-storage implementations panic currently
as there's no great way to get a handle on the current thread's "id" yet.
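As a very rough sketch of what the `Mutex` bullet above means, assuming wait/notify intrinsics along the lines of `memory_atomic_wait32`/`memory_atomic_notify` in `core::arch::wasm32` (the in-tree implementation and exact intrinsic names may differ):
```rust
use core::arch::wasm32;
use core::sync::atomic::{AtomicI32, Ordering};

// 0 = unlocked, 1 = locked.
pub struct Mutex {
    locked: AtomicI32,
}

impl Mutex {
    pub const fn new() -> Mutex {
        Mutex { locked: AtomicI32::new(0) }
    }

    pub fn lock(&self) {
        while self
            .locked
            .compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            // Sleep until notified, but only while the value is still 1 (locked).
            let ptr = &self.locked as *const AtomicI32 as *mut i32;
            unsafe {
                wasm32::memory_atomic_wait32(ptr, 1, -1);
            }
        }
    }

    pub fn unlock(&self) {
        self.locked.store(0, Ordering::Release);
        // Wake at most one waiting thread.
        let ptr = &self.locked as *const AtomicI32 as *mut i32;
        unsafe {
            wasm32::memory_atomic_notify(ptr, 1);
        }
    }
}
```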
Right now the wasm32 target with atomics is unfortunately pretty unusable,
requiring a lot of manual things here and there to actually get it operational.
This will likely continue to evolve as the story for atomics and wasm unfolds,
but we also need more LLVM support for some operations like custom `global`
directives for this to work best.
Added in #52149, the `transparent` attribute may not be something we actually
want on the atomic types, as the discussion in #53514 is showing. While we
continue to debate #53514, this commit reverts the addition of the attribute.
This is the more conservative route: it leaves us the ability to tweak this in
the future, while in the meantime allowing the discussion to continue.