The documentation on `std::sync::mpsc::Iter` and `std::sync::mpsc::TryIter` provides links to the corresponding `Receiver` methods, but the documentation on `std::sync::mpsc::IntoIter` does not.
This was left out in c59b188aae.
Related to #29377
Remove unnecessary condition in Barrier::wait()
This is my first pull request for Rust, so feel free to call me out if anything is amiss.
After some examination, I realized that the second condition of the "spurious-wakeup-handler" loop in ``std::sync::Barrier::wait()`` should always evaluate to ``true``, making it redundant in the ``&&`` expression.
Here is the affected function before the fix:
```rust
#[stable(feature = "rust1", since = "1.0.0")]
pub fn wait(&self) -> BarrierWaitResult {
    let mut lock = self.lock.lock().unwrap();
    let local_gen = lock.generation_id;
    lock.count += 1;
    if lock.count < self.num_threads {
        // We need a while loop to guard against spurious wakeups.
        // https://en.wikipedia.org/wiki/Spurious_wakeup
        while local_gen == lock.generation_id && lock.count < self.num_threads { // fixme
            lock = self.cvar.wait(lock).unwrap();
        }
        BarrierWaitResult(false)
    } else {
        lock.count = 0;
        lock.generation_id = lock.generation_id.wrapping_add(1);
        self.cvar.notify_all();
        BarrierWaitResult(true)
    }
}
```
At first glance, the check that ``lock.count < self.num_threads`` seems necessary in order for a thread A to detect when another thread B has caused the barrier to reach its thread count, making thread B the "leader".
However, the control flow implicitly maintains an invariant that makes it impossible for thread A to ever observe ``!(lock.count < self.num_threads)``, i.e. ``lock.count >= self.num_threads``.
When thread B, which will be the leader, calls ``.wait()`` on this shared instance of the ``Barrier``, it locks the mutex in the first line and saves the ``MutexGuard`` in the ``lock`` variable. It then increments the value of ``lock.count``. However, it then proceeds to check if ``lock.count < self.num_threads``. Since it is the leader, it is the case that (after the increment of ``lock.count``), the lock count is *equal* to the number of threads. Thus, the second branch is immediately taken and ``lock.count`` is zeroed. Additionally, the generation ID is incremented (with wrap). Then, the condition variable is signalled. But, the other threads are waiting at the line ``lock = self.cvar.wait(lock).unwrap();``, so they cannot resume until thread B's call to ``Barrier::wait()`` returns, which drops the ``MutexGuard`` acquired in the first ``let`` statement and unlocks the mutex.
The order of events is thus:
1. A thread A calls `.wait()`
2. `.wait()` acquires the mutex, increments `lock.count`, and takes the first branch
3. Thread A enters the ``while`` loop since the generation ID has not changed and the count is less than the number of threads for the ``Barrier``
4. Spurious wakeups occur, but both conditions hold, so thread A keeps waiting on the condition variable
5. This process repeats N - 2 additional times for the other non-leader threads A', A'', etc.
6. *Meanwhile*, thread B calls ``Barrier::wait()`` on the same barrier that threads A, A', A'', etc. are waiting on. The thread count reaches the number of threads for the ``Barrier``, so all threads should now proceed, with B being the leader. B acquires the mutex and increments ``lock.count`` only to find that it is not less than ``self.num_threads``. Thus, it immediately resets ``lock.count`` back down to 0 and increments the generation. Then, it signals the condvar to tell the waiting A threads that they may continue.
7. The A, A', A''... threads wake up and attempt to re-acquire the ``lock`` as per the internal operation of a condition variable. When each A has exclusive access to the mutex, it finds that ``lock.generation_id`` no longer matches ``local_gen`` **and the ``&&`` expression short-circuits -- and even if it were evaluated, ``lock.count`` is definitely less than ``self.num_threads`` because it was reset to ``0`` by thread B *before* B dropped its ``MutexGuard``**.
Therefore, it is my understanding that it is impossible for the non-leader threads to ever see the second boolean expression evaluate to anything other than ``true``. This PR simply removes that condition.
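With the condition removed, the loop becomes:
```rust
// Guard against spurious wakeups: only the generation check remains.
while local_gen == lock.generation_id {
    lock = self.cvar.wait(lock).unwrap();
}
```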
Any input would be appreciated. Sorry if this is terribly verbose. I'm new to the Rust community and concurrency can be hard to explain in words. Thanks!
There is a known bug in the implementation of mpsc channels in Rust.
This adds a clearer error message when the bug occurs, so that developers don't lose too much time looking for its origin.
See https://github.com/rust-lang/rust/issues/39364
Explain non-dropped sender recv in docs
Original senders that are still hanging around can cause
`Receiver::recv` to block indefinitely. Since this is a potential
footgun for beginners, clarify this in the docs so readers are
aware of it.
Maybe it would be better to show an example of the pattern where `drop(tx)` is used after the sender has been cloned multiple times? I have seen it in quite a few articles, but I am surprised that this part is not clear from the current wording without careful reading.
> If the corresponding Sender has disconnected, or it disconnects while this call is blocking, this call will wake up and return Err to indicate that no more messages can ever be received on this channel. However, since channels are buffered, messages sent before the disconnect will still be properly received.
Some of the words there may seem to cover this if I read carefully and relate them, but a newcomer probably does not know that dropping a `Sender` makes it "disconnected". So I mention the words "drop" and "alive" to relate the behavior to lifetimes.
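For instance, a minimal sketch of the pattern (not the exact example that ended up in the docs):
```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..3 {
        // Each worker thread gets its own clone of the sender.
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(i).unwrap();
        });
    }

    // Drop the original sender. Without this, a sender stays alive in
    // this scope and the loop below blocks forever once the workers
    // are done, because the channel never counts as disconnected.
    drop(tx);

    // Iteration ends once every sender has been dropped and the
    // buffered messages have been drained.
    for received in rx {
        println!("got {}", received);
    }
}
```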
Fix minor tidbits in sender recv doc
Co-authored-by: Dylan DPC <dylan.dpc@gmail.com>
Add example for unbounded receive loops in doc
Show the drop(tx) pattern, based on tokio docs
https://tokio-rs.github.io/tokio/doc/tokio/sync/index.html
Fix example code for drop sender recv
Fix wording in sender docs
Co-authored-by: Josh Triplett <josh@joshtriplett.org>
- Split `sys_common::RWLock` into `StaticRWLock` and `MovableRWLock`
- Unbox `RwLock` on some platforms (Windows, Wasm and unsupported)
- Simplify `RwLock::into_inner`
Clarify error returns from Mutex::try_lock, RwLock::try_read,
RwLock::try_write to make it more obvious that both poisoning
and the lock being already locked are possible errors.
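For example, both cases can be matched on `TryLockError` (a minimal illustration, not text from the change itself):
```rust
use std::sync::{Mutex, TryLockError};

fn main() {
    let lock = Mutex::new(0u32);

    let _guard = lock.lock().unwrap();
    // While the guard above is held, try_lock cannot succeed.
    match lock.try_lock() {
        Ok(_) => unreachable!("the lock is already held"),
        // The lock is currently locked somewhere else.
        Err(TryLockError::WouldBlock) => println!("would block"),
        // A panic while holding the lock poisoned it.
        Err(TryLockError::Poisoned(e)) => println!("poisoned: {}", e),
    }
}
```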
Improve Debug implementations of Mutex and RwLock.
This improves the Debug implementations of Mutex and RwLock.
They now show the poison flag and use debug_non_exhaustive. (See #67364.)
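A sketch of the shape of such an impl, using a wrapper type here since the real change is inside `std`; the stable `finish_non_exhaustive` below corresponds to the `debug_non_exhaustive` feature mentioned above:
```rust
use std::fmt;
use std::sync::{Mutex, TryLockError};

// Wrapper for illustration only; std changes its own impl directly.
struct DebugMutex<T>(Mutex<T>);

impl<T: fmt::Debug> fmt::Debug for DebugMutex<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let mut d = f.debug_struct("Mutex");
        match self.0.try_lock() {
            Ok(guard) => {
                d.field("data", &&*guard);
            }
            // A poisoned lock still lets us reach the data.
            Err(TryLockError::Poisoned(err)) => {
                d.field("data", &&**err.get_ref());
            }
            Err(TryLockError::WouldBlock) => {
                d.field("data", &format_args!("<locked>"));
            }
        }
        d.field("poisoned", &self.0.is_poisoned());
        // Prints a trailing `..` to mark intentionally omitted fields.
        d.finish_non_exhaustive()
    }
}

fn main() {
    let m = DebugMutex(Mutex::new(vec![1, 2]));
    println!("{:?}", m); // Mutex { data: [1, 2], poisoned: false, .. }
}
```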
Fix Debug implementation for RwLock{Read,Write}Guard.
This would attempt to print the Debug representation of the lock that the guard has locked, which will try to lock again, fail, and just print `"<locked>"` unhelpfully.
After this change, this just prints the contents of the mutex, like the other smart pointers (and MutexGuard) do.
MutexGuard had this problem too: https://github.com/rust-lang/rust/issues/57702
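With the fix, printing a guard shows the data it guards:
```rust
use std::sync::RwLock;

fn main() {
    let lock = RwLock::new(vec![1, 2, 3]);
    let guard = lock.read().unwrap();
    // Previously this tried to lock the RwLock again, failed, and
    // printed "<locked>"; now it prints the guarded data itself.
    println!("{:?}", guard); // [1, 2, 3]
}
```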
In particular, the following program works on Linux, but deadlocks on macOS, presumably because the rwlock there blocks new readers while a writer is waiting: r1's second `read()` queues behind `w`'s `write()`, while `w` in turn waits for r1's first read guard to be released:
```rust
use std::{
    sync::{Arc, RwLock},
    thread,
    time::Duration,
};

fn main() {
    let lock = Arc::new(RwLock::new(()));
    let r1 = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            let _rg = lock.read();
            eprintln!("r1/1");
            sleep(1000);
            let _rg = lock.read();
            eprintln!("r1/2");
            sleep(5000);
        }
    });
    sleep(100);
    let w = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            let _wg = lock.write();
            eprintln!("w");
        }
    });
    sleep(100);
    let r2 = thread::spawn({
        let lock = Arc::clone(&lock);
        move || {
            let _rg = lock.read();
            eprintln!("r2");
            sleep(2000);
        }
    });
    r1.join().unwrap();
    r2.join().unwrap();
    w.join().unwrap();
}

fn sleep(ms: u64) {
    std::thread::sleep(Duration::from_millis(ms))
}
```
This stabilizes:
* `OnceState`
* `OnceState::is_poisoned()` (previously named `poisoned()`)
* `Once::call_once_force()`
`poisoned()` was renamed because the new name is clearer; a few
people agreed and nobody objected.
Closes #33577
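A quick illustration of the now-stable API (using nothing beyond what is stabilized here, plus the already-stable `Once::is_completed`):
```rust
use std::sync::Once;
use std::thread;

fn main() {
    static INIT: Once = Once::new();

    // A panic inside call_once poisons the Once.
    let _ = thread::spawn(|| {
        INIT.call_once(|| panic!("initialization failed"));
    })
    .join();

    // call_once_force still runs on a poisoned Once, and the closure
    // can inspect the previous state through OnceState.
    INIT.call_once_force(|state| {
        assert!(state.is_poisoned());
        println!("retrying initialization");
    });

    // The successful forced call clears the poison.
    assert!(INIT.is_completed());
}
```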
The (unsafe) Mutex from sys_common had a rather complicated interface.
You were supposed to call init() manually, unless you could guarantee it
was neither moved nor used reentrantly.
Calling `destroy()` was also optional, although it was unclear 1)
whether resources might be leaked if it was skipped, and 2) whether
`destroy()` should only be called when `init()` had been called.
This allowed for a number of interesting (confusing?) different ways to
use this Mutex, all captured in a single type.
In practice, this type was only ever used in two ways:
1. As a static variable. In this case, neither init() nor destroy() are
called. The variable is never moved, and it is never used
reentrantly. It is only ever locked using the LockGuard, never with
raw_lock.
2. As a Boxed variable. In this case, both init() and destroy() are
called, it will be moved and possibly used reentrantly.
No other combinations are used anywhere in `std`.
This change simplifies things by splitting this Mutex type into
two types matching the two use cases: StaticMutex and MovableMutex.
The interface of both new types is now both safer and simpler. The first
one neither calls nor exposes init/destroy, and the second one calls
those automatically in its new() and Drop functions. Also, the locking
functions of MovableMutex are no longer unsafe.
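A self-contained sketch of the split, with a stand-in `RawMutex` in place of the platform mutex (the names and exact std internals differ; this only mirrors the shape described above):
```rust
// Stand-in for the platform mutex; an assumption for illustration,
// not the real `sys` type.
struct RawMutex;

impl RawMutex {
    const fn new() -> Self { RawMutex }
    unsafe fn init(&mut self) {}
    unsafe fn lock(&self) {}
    unsafe fn unlock(&self) {}
    unsafe fn destroy(&mut self) {}
}

// Use case 1: a `static` variable. Never init()ed or destroy()ed,
// never moved, and only lockable through a guard.
struct StaticMutex(RawMutex);

struct StaticMutexGuard(&'static StaticMutex);

impl StaticMutex {
    const fn new() -> Self { StaticMutex(RawMutex::new()) }

    fn lock(&'static self) -> StaticMutexGuard {
        unsafe { self.0.lock() };
        StaticMutexGuard(self)
    }
}

impl Drop for StaticMutexGuard {
    fn drop(&mut self) {
        let mutex: &'static StaticMutex = self.0;
        unsafe { mutex.0.unlock() };
    }
}

// Use case 2: an owned, movable mutex. Boxed so moving the wrapper
// never moves the raw mutex; init() in new(), destroy() in Drop,
// which is what makes the locking functions safe to expose.
struct MovableMutex(Box<RawMutex>);

impl MovableMutex {
    fn new() -> Self {
        let mut raw = Box::new(RawMutex::new());
        unsafe { raw.init() };
        MovableMutex(raw)
    }

    fn raw_lock(&self) {
        unsafe { self.0.lock() }
    }

    fn raw_unlock(&self) {
        unsafe { self.0.unlock() }
    }
}

impl Drop for MovableMutex {
    fn drop(&mut self) {
        unsafe { self.0.destroy() }
    }
}

fn main() {
    static M: StaticMutex = StaticMutex::new();
    let guard = M.lock();
    drop(guard);

    let m = MovableMutex::new();
    m.raw_lock();
    m.raw_unlock();
}
```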