`wait_while` handles spurious wake-ups in a centralized place,
reducing the chance of mistakes and opening the door to future optimizations
(who knows, maybe in the future there will be no spurious wake-ups? :)
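For example, the standard `Condvar::wait_while` loop re-checks the predicate on every wake-up, so spurious wake-ups never escape to the caller:

```
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        *lock.lock().unwrap() = true;
        cvar.notify_one();
    });

    let (lock, cvar) = &*pair;
    // Blocks while the predicate holds; a spurious wake-up simply
    // re-evaluates the predicate instead of returning early.
    let _guard = cvar
        .wait_while(lock.lock().unwrap(), |started| !*started)
        .unwrap();
}
```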
Make `Debug` representations of `[Lazy, Once]*[Cell, Lock]` consistent with `Mutex` and `RwLock`
`Mutex` prints `<locked>` as a field value when its inner value cannot be accessed, but the lazy types print a fixed string like "`OnceCell(Uninit)`". This could cause confusion if the inner type is a unit type actually named `Uninit`, and the fixed string also does not respect the pretty-printing (`{:#?}`) flag. With this change, the format message is now "`OnceCell(<uninit>)`", consistent with `Mutex`.
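For example, with this change an uninitialized cell debug-prints as follows:

```
use std::cell::OnceCell;

fn main() {
    let cell: OnceCell<i32> = OnceCell::new();
    // Prints `OnceCell(<uninit>)`, mirroring the `<locked>` placeholder
    // that `Mutex` uses when its value cannot be accessed.
    println!("{cell:?}");
}
```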
Use `LazyLock` to lazily resolve backtraces
By using TAIT (type alias impl trait) to name the initializing closure, `LazyLock` can be used to replace the current `LazilyResolvedCapture`.
Spelling library
Split per https://github.com/rust-lang/rust/pull/110392
I can squash once people are happy with the changes. It's really uncommon for large sets of changes to be perfectly acceptable without at least some revisions.
I probably won't have time to respond until tomorrow or the next day
Receiver disconnection relies on the incorrect assumption that
`head.index != tail.index` implies that the channel is initialized (i.e.,
`head.block` and `tail.block` point to allocated blocks). However, it
can happen that `head.index != tail.index` and `head.block == null` at
the same time which leads to a segfault when a channel is dropped in
that state.
This can happen because initialization is performed in two steps. First,
the tail block is allocated and `tail.block` is set. If that is
successful, `head.block` is set to the same pointer. Importantly,
initialization is skipped if `tail.block` is not null.
Therefore we can have the following situation:
1. Thread A starts to send the first value of the channel, observes that
`tail.block` is null and begins initialization. It sets `tail.block`
to point to a newly allocated block and then gets preempted.
`head.block` is still null at this point.
2. Thread B starts to send the second value of the channel, observes
that `tail.block` *is not* null and proceeds with writing its value
in the allocated tail block and sets `tail.index` to 1.
3. Thread B drops the receiver of the channel, which observes that
`head.index != tail.index` (0 and 1 respectively), therefore there
must be messages to drop. It starts traversing the linked list from
`head.block` which is still a null pointer, leading to a segfault.
This PR fixes this problem by waiting for initialization to complete
when `head.index != tail.index` and `head.block` is still null. A
similar check already exists in `start_recv` for the same reason.
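For illustration, a minimal sketch of the waiting pattern, using a hypothetical simplified `Head` type (the real channel internals differ in detail):

```
use std::hint;
use std::sync::atomic::{AtomicPtr, AtomicUsize, Ordering};

// Simplified, hypothetical stand-in for the channel's head state.
struct Head {
    index: AtomicUsize,
    block: AtomicPtr<u8>,
}

// Sketch of the fix: before traversing the linked list from `head.block`,
// wait for the initializing sender to publish the block pointer.
fn wait_for_init(head: &Head, tail_index: usize) {
    while head.block.load(Ordering::Acquire).is_null()
        && head.index.load(Ordering::Acquire) != tail_index
    {
        hint::spin_loop();
    }
}
```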
Fixes #110001
Signed-off-by: Petros Angelatos <petrosagg@gmail.com>
Add block-based mutex unlocking example
This modifies the existing example in the `Mutex` docs to show both `drop()`-based and block-based early unlocking.
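For reference, a minimal standalone sketch of the two unlocking styles the updated example covers:

```
use std::sync::Mutex;

fn main() {
    let lock = Mutex::new(0);

    // Early unlocking with an explicit `drop()`:
    let mut guard = lock.lock().unwrap();
    *guard += 1;
    drop(guard); // lock released here

    // Block-based early unlocking: the guard falls out of scope at `}`.
    {
        let mut guard = lock.lock().unwrap();
        *guard += 1;
    } // lock released here

    assert_eq!(*lock.lock().unwrap(), 2);
}
```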
Alternative to #81872, which is getting closed.
Drop all messages in bounded channel when destroying the last receiver
Fixes #107466 by splitting the `disconnect` function for receivers/transmitters and dropping all messages in `disconnect_receivers`, like the unbounded channel does. Since all receivers must be dropped before the channel is, the messages will already have been discarded at that point, so the `Drop` implementation for the channel can be removed.
``@rustbot`` label +T-libs +A-concurrency
Optimize `LazyLock` size
The initialization function was unnecessarily stored separately from the data to be initialized. Since both cannot exist at the same time, a `union` can be used, with the `Once` acting as discriminant. This unfortunately requires some extra methods on `Once` so that `Drop` can be implemented correctly and efficiently.
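A minimal sketch of the layout idea, with illustrative names rather than the actual std internals:

```
use std::cell::UnsafeCell;
use std::mem::ManuallyDrop;
use std::sync::Once;

// The value and its initializer share storage; the `Once` state records
// which union field is currently live, which is what `Drop` needs in
// order to free the right one.
union Data<T, F> {
    value: ManuallyDrop<T>,
    f: ManuallyDrop<F>,
}

struct LazyLock<T, F = fn() -> T> {
    once: Once, // acts as the discriminant
    data: UnsafeCell<Data<T, F>>,
}
```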
`@rustbot` label +T-libs +A-atomic
Add #[inline] markers to once_cell methods
Added `#[inline]` markers to all simple methods under the `once_cell` feature. Relates to #74465 and #105587.
This should not block #105587
Move `ReentrantMutex` to `std::sync`
If I understand #84187 correctly, `sys_common` should not contain platform-independent code, even if it is private.
Fix backoff doc to match implementation
Commit 8dddb22943 from the crossbeam-channel PR (#93563) changed the backoff strategy to be quadratic instead of exponential, but the doc comment still described the old behavior. This updates the doc to prevent confusion.
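To illustrate the difference, a hedged sketch of the two strategies (spin counts are illustrative only, not the exact implementation):

```
use std::hint;

// Old strategy: iterations double at each backoff step.
fn spin_exponential(step: u32) {
    for _ in 0..(1u32 << step) { // 1, 2, 4, 8, ... iterations
        hint::spin_loop();
    }
}

// New strategy: iterations grow quadratically with the step.
fn spin_quadratic(step: u32) {
    for _ in 0..step.pow(2) { // 0, 1, 4, 9, ... iterations
        hint::spin_loop();
    }
}
```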
More inference-friendly API for lazy
The signature for `new` was
```
fn new<F>(f: F) -> Lazy<T, F>
```
Notably, with `F` unconstrained, `T` could be literally anything, and just `let _ = Lazy::new(|| 92)` would not typecheck.
This historically was a necessity: `new` is a `const` function, and it couldn't have any bounds. Today, though, we can move `new` under the `F: FnOnce() -> T` bound, which gives the compiler enough data to infer the type of `T` from the closure.
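A minimal sketch of the inference-friendly shape (simplified, not the real std type):

```
use std::marker::PhantomData;

struct Lazy<T, F = fn() -> T> {
    init: F,
    _marker: PhantomData<fn() -> T>,
}

// With `new` under the `F: FnOnce() -> T` bound, the compiler can pin
// down `T` from the closure's return type.
impl<T, F: FnOnce() -> T> Lazy<T, F> {
    const fn new(init: F) -> Lazy<T, F> {
        Lazy { init, _marker: PhantomData }
    }
}

fn main() {
    // Previously this failed to typecheck because `T` was unconstrained;
    // now `T = i32` is inferred from the closure.
    let _lazy = Lazy::new(|| 92);
}
```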