Commit Graph

2241 Commits

Author SHA1 Message Date
bors
f4cfd87202 Auto merge of #120676 - Mark-Simulacrum:bootstrap-bump, r=clubby789
Bump bootstrap compiler to just-built 1.77 beta

https://forge.rust-lang.org/release/process.html#master-bootstrap-update-t-2-day-tuesday
2024-02-09 18:09:02 +00:00
Ben Kimock
88d6e9f868 Reduce use of NonNull::new_unchecked in library/ 2024-02-08 11:52:16 -05:00
Mark Rousskov
8043821b3a Bump version placeholders 2024-02-08 07:43:38 -05:00
Michael Goulet
0dd40786b5 Harmonize blanket implementations for AsyncFn* traits 2024-02-06 17:20:40 +00:00
Matthias Krüger
80e8c7e125
Rollup merge of #118960 - tvallotton:local_waker, r=Mark-Simulacrum
Add LocalWaker and ContextBuilder types to core, and LocalWake trait to alloc.

Implementation for  #118959.
2024-02-05 11:07:26 +01:00
Matthias Krüger
13e84f2a3d
Rollup merge of #113833 - WiktorPrzetacznik:master, r=dtolnay
`std::error::Error` -> Trait Implementations: lifetimes consistency improvement

This cleans up a lifetime inconsistency in the `std::error::Error` trait implementations (`'static` -> `'a`)

**Reasoning:**

Trait implementations for `std::error::Error`, like:
`impl From<&str> for Box<dyn Error + 'static, Global>`
`impl<'a> From<&str> for Box<dyn Error + Sync + Send + 'a, Global>`
use different lifetime annotations, misleadingly implying that using different lifetime annotations here is a conscious, non-accidental decision.

[(Related forum discussion here)](https://users.rust-lang.org/t/confusing-std-error-source-code/97011/5?u=wiktor)
2024-02-05 11:07:25 +01:00
Matthias Krüger
9838e943f3
Rollup merge of #120458 - rytheo:cstr-conversion-doc, r=Mark-Simulacrum
Document `&CStr` to `CString` conversion

Related to #51430
2024-02-05 06:37:14 +01:00
Nadrieril
03daaa6f07
Rollup merge of #120355 - the8472:doc-vec-fromiter, r=cuviper
document `FromIterator for Vec` allocation behaviors

[t-libs discussion](https://rust-lang.zulipchat.com/#narrow/stream/259402-t-libs.2Fmeetings/topic/Meeting.202024-01-24/near/417686526) about #120091 didn't reach a strong consensus, but it was agreed that if we keep the current behavior it should at least be documented even though it is an implementation detail.

The language is intentionally non-committal. The previous (non-existent) documentation permits a lot of implementation leeway and we want to retain that. In some cases we even must retain it to be able to rip out some code paths that rely on unstable features.
2024-01-31 12:10:50 +01:00
the8472
39dc3153c5 Apply suggestions from code review
Co-authored-by: Josh Stone <cuviper@gmail.com>
2024-01-30 22:37:07 +01:00
The 8472
c780fe6b27 document FromIterator for Vec allocation behaviors 2024-01-30 22:37:07 +01:00
Amanieu d'Antras
adb7607189 Fix BTreeMap's Cursor::remove_{next,prev}
These would incorrectly leave `current` as `None` after a failed attempt
to remove an element (due to the cursor already being at the start/end).
2024-01-30 17:51:20 +00:00
Guillaume Gomez
a5aa355ab3
Rollup merge of #120445 - Nemo157:arc-plug, r=Mark-Simulacrum
Fix some `Arc` allocator leaks

This doesn't matter for the stable `Global` allocator as it is a ZST singleton, but other allocators may rely on all instances being dropped.
2024-01-30 16:57:50 +01:00
Dylan DPC
4528b37196
Rollup merge of #120266 - steffahn:a_rc_into_inner_docs, r=Mark-Simulacrum
Improve documentation for [A]Rc::into_inner

General improvements; this also aims to better encourage the reader to actually check out Arc::try_unwrap.

This addresses concerns from https://github.com/rust-lang/rust/issues/106894#issuecomment-1905627234.

Rendered:

![Screenshot_20240123_114436](https://github.com/rust-lang/rust/assets/3986214/68896d62-13e0-4f3a-8073-91d8e77c5554)
![Screenshot_20240123_114455](https://github.com/rust-lang/rust/assets/3986214/dc58e4bd-dd7f-40b1-bc50-fd6200dde593)
2024-01-29 12:56:52 +00:00
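For reference, a minimal usage sketch of the method whose docs are improved here (hedged; not part of the PR itself):

```rust
use std::sync::Arc;

fn main() {
    let a = Arc::new(String::from("hello"));
    let b = Arc::clone(&a);
    // `a` is not the last strong reference, so nothing is returned (and `a` is dropped).
    assert_eq!(Arc::into_inner(a), None);
    // `b` is now the only strong reference, so the inner value is moved out.
    assert_eq!(Arc::into_inner(b), Some(String::from("hello")));
}
```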
Ryan Lowe
6aec11afbc Document From<&CStr> for CString 2024-01-28 16:53:57 -05:00
Wim Looman
6837b812e6
Fix some Arc allocator leaks
This doesn't matter for the stable `Global` allocator as it is a ZST
singleton, but other allocators may rely on all instances being dropped.
2024-01-28 18:33:34 +01:00
John-John Tedro
bdbbf04a03 Fix doctest 2024-01-28 18:25:21 +01:00
John-John Tedro
eebc720757 Replicate documentation in {Rc,Arc}::from_raw_in 2024-01-28 17:00:52 +01:00
John-John Tedro
d63384d689 Fix doctest 2024-01-28 16:38:38 +01:00
John-John Tedro
23c83fab2b Tidy up 2024-01-28 16:12:09 +01:00
John-John Tedro
d4adb3af58 Add examples for unsized {Rc,Arc}::from_raw 2024-01-28 16:07:07 +01:00
John-John Tedro
57e0dea178 Document requirements for unsized {Rc,Arc}::from_raw 2024-01-28 15:57:08 +01:00
Matthias Krüger
092ea4ba19
Rollup merge of #113489 - tguichaoua:cow_from_array, r=dtolnay
impl `From<&[T; N]>` for `Cow<[T]>`

Implement `From<&[T; N]>` for `Cow<[T]>` to simplify its usage in the following example.

```rust
fn foo(data: impl Into<Cow<'static, [&'static str]>>) { /* ... */ }

fn main() {
    foo(vec!["hello", "world"]);
    foo(&["hello", "world"]); // Error: the trait `From<&[&str; 2]>` is not implemented for `Cow<'static, [&'static str]>`
    foo(&["hello", "world"] as &[_]); // Explicit conversion into a slice is required
}
```
2024-01-26 23:15:48 +01:00
Matthias Krüger
c9ab37bf4f
Rollup merge of #103522 - Dylan-DPC:76118/array-methods-stab, r=dtolnay
stabilise array methods

Closes #76118

Stabilises the remaining array methods

FCP is yet to be carried out for this

There wasn't a clear consensus on the naming, but all the other alternatives had some flaws, as discussed in the tracking issue, and there has been silence on the issue for a year.
2024-01-26 23:15:47 +01:00
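The message doesn't name the methods; assuming they are `each_ref`/`each_mut` from the #76118 (`array_methods`) tracking issue, a minimal sketch of their use:

```rust
fn main() {
    let mut floats = [3.1, 2.7, -1.0];
    // each_ref borrows every element, producing an array of shared references.
    let refs: [&f64; 3] = floats.each_ref();
    assert_eq!(refs[0], &3.1);
    // each_mut does the same with mutable references.
    for r in floats.each_mut() {
        *r *= 2.0;
    }
    assert_eq!(floats, [6.2, 5.4, -2.0]);
}
```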
Matthias Krüger
d1c3ddee2e
Rollup merge of #120372 - bjorn3:fix_outdated_comment, r=Nilstrieb
Fix outdated comment on Box

Caught by `@vi` in https://github.com/rust-lang/rust/pull/113960#discussion_r1454278520
2024-01-26 14:43:33 +01:00
Matthias Krüger
772e80a650
Rollup merge of #119917 - Zalathar:split-off, r=cuviper
Remove special-case handling of `vec.split_off(0)`

#76682 added special handling to `Vec::split_off` for the case where `at == 0`. Instead of copying the vector's contents into a freshly-allocated vector and returning it, the special-case code steals the old vector's allocation, and replaces it with a new (empty) buffer with the same capacity.

That eliminates the need to copy the existing elements, but comes at a surprising cost, as seen in #119913. The returned vector's capacity is no longer determined by the size of its contents (as would be expected for a freshly-allocated vector), and instead uses the full capacity of the old vector.

In cases where the capacity is large but the size is small, that results in a much larger capacity than would be expected from reading the documentation of `split_off`. This is especially bad when `split_off` is called in a loop (to recycle a buffer), and the returned vectors have a wide variety of lengths.

I believe it's better to remove the special-case code, and treat `at == 0` just like any other value:
- The current documentation states that `split_off` returns a “newly allocated vector”, which is not actually true in the current implementation when `at == 0`.
- If the value of `at` could be non-zero at runtime, then the caller has already agreed to the cost of a full memcpy of the taken elements in the general case. Avoiding that copy would be nice if it were close to free, but the different handling of capacity means that it is not.
- If the caller specifically wants to avoid copying in the case where `at == 0`, they can easily implement that behaviour themselves using `mem::replace`.

Fixes #119913.
2024-01-26 14:43:30 +01:00
Matthias Krüger
a5b60c941e
Rollup merge of #117678 - niklasf:stabilize-slice_group_by, r=dtolnay
Stabilize `slice_group_by`

Renamed "group by" to "chunk by" a per #80552.

Newly stable items:

* `core::slice::ChunkBy`
* `core::slice::ChunkByMut`
* `[T]::chunk_by`
* `[T]::chunk_by_mut`

Closes #80552.
2024-01-26 14:43:29 +01:00
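A minimal usage sketch of the renamed API (hedged; mirrors the std docs for `chunk_by`):

```rust
fn main() {
    let data = [1, 1, 2, 2, 2, 3];
    // chunk_by splits the slice into runs of consecutive elements
    // for which the predicate holds between neighbours.
    let mut chunks = data.chunk_by(|a, b| a == b);
    assert_eq!(chunks.next(), Some(&[1, 1][..]));
    assert_eq!(chunks.next(), Some(&[2, 2, 2][..]));
    assert_eq!(chunks.next(), Some(&[3][..]));
    assert_eq!(chunks.next(), None);
}
```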
bjorn3
1f994c7695 Fix outdated comment on Box 2024-01-26 12:15:46 +00:00
Matthias Krüger
912877d009
Rollup merge of #119466 - Sky9x:str_from_raw_parts, r=dtolnay
Initial implementation of `str::from_raw_parts[_mut]`

ACP (accepted): rust-lang/libs-team#167
Tracking issue: #119206

Thanks to ``@Kixiron`` for previous work on this (#107207)

``@rustbot`` label +T-libs-api -T-libs
r? ``@thomcc``

Closes #107207.
2024-01-26 06:36:37 +01:00
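A hedged sketch of the new nightly-only API, assuming the `str_from_raw_parts` feature gate:

```rust
#![feature(str_from_raw_parts)]

fn main() {
    let s = "hello world";
    let (ptr, len) = (s.as_ptr(), s.len());
    // SAFETY: `ptr` and `len` come from a live &str, so the bytes are valid
    // UTF-8 and outlive the reconstructed reference.
    let rebuilt: &str = unsafe { std::str::from_raw_parts(ptr, len) };
    assert_eq!(rebuilt, "hello world");
}
```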
David Tolnay
f94a94227c
Export core::str::from_raw_parts{,_mut} into alloc::str and std::str 2024-01-25 18:11:54 -08:00
Matthias Krüger
37f02320bc
Rollup merge of #118208 - Amanieu:btree_cursor2, r=dtolnay
Rewrite the BTreeMap cursor API using gaps

Tracking issue: #107540

Currently, a `Cursor` points to a single element in the tree, and allows moving to the next or previous element while mutating the tree. However this was found to be confusing and hard to use.

This PR completely refactors cursors to instead point to a gap between two elements in the tree. This eliminates the need for a "ghost" element that exists after the last element and before the first one. Additionally, `upper_bound` and `lower_bound` now have a much clearer meaning.

The ability to mutate keys is also factored out into a separate `CursorMutKey` type which is unsafe to create. This makes the API easier to use since it avoids duplicated versions of each method with and without key mutation.

API summary:

```rust
impl<K, V> BTreeMap<K, V> {
    fn lower_bound<Q>(&self, bound: Bound<&Q>) -> Cursor<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn lower_bound_mut<Q>(&mut self, bound: Bound<&Q>) -> CursorMut<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn upper_bound<Q>(&self, bound: Bound<&Q>) -> Cursor<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn upper_bound_mut<Q>(&mut self, bound: Bound<&Q>) -> CursorMut<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
}

struct Cursor<'a, K: 'a, V: 'a>;

impl<'a, K, V> Cursor<'a, K, V> {
    fn next(&mut self) -> Option<(&'a K, &'a V)>;
    fn prev(&mut self) -> Option<(&'a K, &'a V)>;
    fn peek_next(&self) -> Option<(&'a K, &'a V)>;
    fn peek_prev(&self) -> Option<(&'a K, &'a V)>;
}

struct CursorMut<'a, K: 'a, V: 'a>;

impl<'a, K, V> CursorMut<'a, K, V> {
    fn next(&mut self) -> Option<(&K, &mut V)>;
    fn prev(&mut self) -> Option<(&K, &mut V)>;
    fn peek_next(&mut self) -> Option<(&K, &mut V)>;
    fn peek_prev(&mut self) -> Option<(&K, &mut V)>;

    unsafe fn insert_after_unchecked(&mut self, key: K, value: V);
    unsafe fn insert_before_unchecked(&mut self, key: K, value: V);
    fn insert_after(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn insert_before(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn remove_next(&mut self) -> Option<(K, V)>;
    fn remove_prev(&mut self) -> Option<(K, V)>;

    fn as_cursor(&self) -> Cursor<'_, K, V>;

    unsafe fn with_mutable_key(self) -> CursorMutKey<'a, K, V, A>;
}

struct CursorMutKey<'a, K: 'a, V: 'a>;

impl<'a, K, V> CursorMutKey<'a, K, V> {
    fn next(&mut self) -> Option<(&mut K, &mut V)>;
    fn prev(&mut self) -> Option<(&mut K, &mut V)>;
    fn peek_next(&mut self) -> Option<(&mut K, &mut V)>;
    fn peek_prev(&mut self) -> Option<(&mut K, &mut V)>;

    unsafe fn insert_after_unchecked(&mut self, key: K, value: V);
    unsafe fn insert_before_unchecked(&mut self, key: K, value: V);
    fn insert_after(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn insert_before(&mut self, key: K, value: V) -> Result<(), UnorderedKeyError>;
    fn remove_next(&mut self) -> Option<(K, V)>;
    fn remove_prev(&mut self) -> Option<(K, V)>;

    fn as_cursor(&self) -> Cursor<'_, K, V>;

    unsafe fn with_mutable_key(self) -> CursorMutKey<'a, K, V, A>;
}

struct UnorderedKeyError;
```
2024-01-25 17:39:26 +01:00
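A hedged usage sketch of the gap-based cursor API summarized above (nightly-only, `btree_cursors` feature):

```rust
#![feature(btree_cursors)]

use std::collections::BTreeMap;
use std::ops::Bound;

fn main() {
    let mut map = BTreeMap::from([(1, "a"), (3, "c"), (5, "e")]);
    // The cursor points at the gap before the first key >= 3.
    let mut cur = map.lower_bound_mut(Bound::Included(&3));
    assert_eq!(cur.peek_prev(), Some((&1, &mut "a")));
    assert_eq!(cur.peek_next(), Some((&3, &mut "c")));
    // Inserting into the gap checks that key ordering is preserved.
    cur.insert_before(2, "b").unwrap();
    assert_eq!(map.len(), 4);
}
```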
bors
dfe53afaeb Auto merge of #119433 - taiki-e:rc-uninit-ref, r=Nilstrieb
rc,sync: Do not create references to uninitialized values

Closes #119241

r? `@RalfJung`
2024-01-23 16:43:45 +00:00
Frank Steffahn
ab938b9acb Improve documentation for [A]Rc::into_inner
General improvements; this also aims to better encourage the reader
to actually check out Arc::try_unwrap.
2024-01-23 11:38:15 +01:00
bors
e35a56d96f Auto merge of #119892 - joboet:libs_use_assert_unchecked, r=Nilstrieb,cuviper
Use `assert_unchecked` instead of `assume` intrinsic in the standard library

Now that a public wrapper for the `assume` intrinsic exists, we can use it in the standard library.

CC #119131
2024-01-23 06:45:58 +00:00
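A hedged sketch of the wrapper being adopted (`std::hint::assert_unchecked`, still unstable at the time of this merge):

```rust
/// # Safety
/// `mid` must be less than or equal to `v.len()`.
unsafe fn first_half(v: &[u8], mid: usize) -> &[u8] {
    // Promote the caller's contract into an optimizer-visible assumption;
    // this is the public replacement for the old `assume` intrinsic.
    unsafe { std::hint::assert_unchecked(mid <= v.len()) };
    &v[..mid]
}

fn main() {
    let data: [u8; 4] = [1, 2, 3, 4];
    // SAFETY: 2 <= 4.
    assert_eq!(unsafe { first_half(&data, 2) }, &[1, 2]);
}
```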
Matthias Krüger
97bcf0da7c
Rollup merge of #119801 - zachs18:zachs18-patch-1, r=steffahn,Nilstrieb
Fix deallocation with wrong allocator in (A)Rc::from_box_in

Deallocate the `Box` with the original allocator (via `&A`), not `Global`.

Fixes #119749

<details> <summary>Example code with error and Miri output</summary>

(Note that this UB is not observable on stable, because the only usable allocator on stable is `Global` anyway.)

Code ([playground link](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=96193c2c6a1912d7f669fbbe39174b09)):

```rs
#![feature(allocator_api)]
use std::alloc::System;

// uncomment one of these
use std::rc::Rc;
//use std::sync::Arc as Rc;

fn main() {
    let x: Box<[u32], System> = Box::new_in([1,2,3], System);
    let _: Rc<[u32], System> = Rc::from(x);
}
```

Miri output:

```rs
error: Undefined Behavior: deallocating alloc904, which is C heap memory, using Rust heap deallocation operation
   --> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:117:14
    |
117 |     unsafe { __rust_dealloc(ptr, layout.size(), layout.align()) }
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ deallocating alloc904, which is C heap memory, using Rust heap deallocation operation
    |
    = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
    = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
    = note: BACKTRACE:
    = note: inside `std::alloc::dealloc` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:117:14: 117:64
    = note: inside `<std::alloc::Global as std::alloc::Allocator>::deallocate` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/alloc.rs:254:22: 254:51
    = note: inside `<std::boxed::Box<std::mem::ManuallyDrop<[u32]>> as std::ops::Drop>::drop` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/boxed.rs:1244:17: 1244:66
    = note: inside `std::ptr::drop_in_place::<std::boxed::Box<std::mem::ManuallyDrop<[u32]>>> - shim(Some(std::boxed::Box<std::mem::ManuallyDrop<[u32]>>))` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ptr/mod.rs:507:1: 507:56
    = note: inside `std::mem::drop::<std::boxed::Box<std::mem::ManuallyDrop<[u32]>>>` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/mem/mod.rs:992:24: 992:25
    = note: inside `std::rc::Rc::<[u32], std::alloc::System>::from_box_in` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/rc.rs:1928:13: 1928:22
    = note: inside `<std::rc::Rc<[u32], std::alloc::System> as std::convert::From<std::boxed::Box<[u32], std::alloc::System>>>::from` at /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/rc.rs:2504:9: 2504:27
note: inside `main`
   --> src/main.rs:10:32
    |
10  |     let _: Rc<[u32], System> = Rc::from(x);
    |                                ^^^^^^^^^^^

note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace

error: aborting due to 1 previous error
```

</details>
2024-01-22 16:54:57 +01:00
Matthias Krüger
3eb7fe32a1
Rollup merge of #120180 - Zalathar:vec-split-off-alternatives, r=dtolnay
Document some alternatives to `Vec::split_off`

One of the discussion points that came up in #119917 is that some people use `Vec::split_off` in cases where they probably shouldn't, because the alternatives (like `mem::take`) are hard to discover.

This PR adds some suggestions to the documentation of `split_off` that should point people towards alternatives that might be more appropriate for their use-case.

I've deliberately tried to keep these changes as simple and uncontroversial as possible, so that they don't depend on how the team decides to handle the concerns raised in #119917. That's why I haven't touched the existing documentation for `split_off`, and haven't added links to `split_off` to the documentation of other methods.
2024-01-21 12:28:55 +01:00
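A hedged sketch of the main alternative mentioned, `mem::take`, in the buffer-recycling pattern where `split_off(0)` is often misused:

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(1024);
    buf.extend_from_slice(b"hello");

    // Instead of `buf.split_off(0)`, take the whole Vec and leave a fresh
    // empty one behind; this is cheap and has unsurprising capacity behaviour.
    let taken = std::mem::take(&mut buf);
    assert_eq!(taken, b"hello");
    assert!(buf.is_empty());
}
```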
Matthias Krüger
4a941d384d
Rollup merge of #120145 - the8472:fix-inplace-dest-drop, r=cuviper
fix: Drop guard was deallocating with the incorrect size

InPlaceDstBufDrop holds onto the allocation before the shrinking happens, which means it must drop the destination elements but deallocate the source allocation.

Thanks `@cuviper` for spotting this.
2024-01-21 12:28:53 +01:00
Zalathar
6f1944d394 Document some alternatives to Vec::split_off 2024-01-21 11:56:55 +11:00
Guillaume Gomez
b917753d79
Rollup merge of #120116 - the8472:only-same-alignments, r=cuviper
Remove alignment-changing in-place collect

This removes the alignment-changing in-place collect optimization introduced in #110353
Currently stable users can't benefit from the optimization because GlobalAlloc doesn't support alignment-changing realloc and neither do most posix allocators. So in practice it has a negative impact on performance.

Explanation from https://github.com/rust-lang/rust/issues/120091#issuecomment-1899071681:

> > You mention that in case of alignment mismatch -- when the new alignment is less than the old -- the implementation calls `mremap`.
>
> I was trying to note that this isn't really the case in practice, due to the semantics of Rust's allocator APIs. The only use of the allocator within the `in_place_collect` implementation itself is [a call to `Allocator::shrink()`](db7125f008/library/alloc/src/vec/in_place_collect.rs (L299-L303)), which per its documentation [allows decreasing the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.Allocator.html). However, in stable Rust, the only available `Allocator` is [`Global`](https://doc.rust-lang.org/1.75.0/alloc/alloc/struct.Global.html), which delegates to the registered `GlobalAlloc`. Since `GlobalAlloc::realloc()` [cannot change the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.GlobalAlloc.html#method.realloc), the implementation of [`<Global as Allocator>::shrink()`](db7125f008/library/alloc/src/alloc.rs (L280-L321)) must fall back to creating a brand-new allocation, `memcpy`ing the data into it, and freeing the old allocation, whenever the alignment doesn't remain exactly the same.
>
> Therefore, the underlying allocator, provided by libc or some other source, has no opportunity to internally `mremap()` the data when the alignment is changed, since it has no way of knowing that the allocation is the same.
2024-01-20 20:06:35 +01:00
Tomás Vallotton
180c68bef5 doc: fix some doctests after rebase 2024-01-20 10:26:25 -03:00
Tomás Vallotton
038c6e046c refactor: make waker mandatory.
This also removes
* impl From<&Context> for ContextBuilder
* Context::try_waker()

The From implementation is removed because, now that
wakers are always supported, there is less incentive
to override the current context. Before, the incentive
was to add Waker support to a reactor that didn't have
any.
2024-01-20 10:16:09 -03:00
tvallotton
c67a446e72 fix: Apply suggestions from code review
Co-authored-by: Mark Rousskov <mark.simulacrum@gmail.com>
2024-01-20 10:14:25 -03:00
Tomás Vallotton
093f80ba7e chore: fix ci failures 2024-01-20 10:14:25 -03:00
Tomás Vallotton
0cb7a0a90e chore: add tracking issue number to local waker feature 2024-01-20 10:14:25 -03:00
Tomás Vallotton
2012d4b703 fix: make LocalWake available in targets that don't support atomics by removing a #[cfg(target_has_atomic = ptr)] 2024-01-20 10:14:25 -03:00
Tomás Vallotton
403718b19d feat: add try_waker and From<&mut Context> for ContextBuilder to allow the extension of contexts by futures 2024-01-20 10:14:21 -03:00
Tomás Vallotton
60a08196b6 feat: add LocalWaker type, ContextBuilder type, and LocalWake trait. 2024-01-20 10:13:08 -03:00
The 8472
5796b3c167 fix: Drop guard was deallocating with the incorrect size
InPlaceDstBufDrop holds onto the allocation before the shrinking happens,
which means it must drop the destination elements but deallocate the source
allocation.
2024-01-19 23:05:30 +01:00
Matthias Krüger
7219bd22ed
Rollup merge of #120110 - invpt:patch-1, r=the8472
Update documentation for Vec::into_boxed_slice to be more clear about excess capacity

Currently, the documentation for Vec::into_boxed_slice says that "if the vector has excess capacity, its items will be moved into a newly-allocated buffer with exactly the right capacity." This is misleading: copies do not necessarily occur, depending on whether the allocator supports in-place shrinking. I copied some of the wording from shrink_to_fit, though it could potentially still be worded better than this.
2024-01-19 08:15:05 +01:00
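A hedged illustration of the excess-capacity behaviour being clarified:

```rust
fn main() {
    let mut v = Vec::with_capacity(10);
    v.extend([1, 2, 3]);
    assert!(v.capacity() >= 10);

    // Excess capacity is shed; whether the data is copied or shrunk in place
    // depends on the allocator.
    let boxed: Box<[i32]> = v.into_boxed_slice();
    assert_eq!(boxed.len(), 3);
}
```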
invpt
35a9fc3472 Clarify docs for Vec::into_boxed_slice, Vec::shrink_to_fit 2024-01-18 18:01:36 -05:00
The 8472
85d1787962 remove alignment-changing in-place collect
Currently stable users can't benefit from this because GlobalAlloc doesn't support
alignment-changing realloc and neither do most posix allocators.
So in practice it always results in an extra memcpy.
2024-01-18 22:50:14 +01:00
The 8472
b28a95391b update internal ASCII art comment for vec specializations 2024-01-18 22:47:20 +01:00
Jake Goulding
fb7762b1c5 Remove no-longer-needed allow(dead_code) from the standard library
`repr(transparent)` now silences the lint.
2024-01-18 13:14:42 -05:00
Robert Grosse
db7125f008
Fix typo in comments (in_place_collect) 2024-01-16 20:48:22 -08:00
joboet
fa9a911a57
libs: use assert_unchecked instead of intrinsic 2024-01-13 20:10:00 +01:00
Zalathar
a655558b38 Remove special-case handling of vec.split_off(0) 2024-01-13 17:21:54 +11:00
hi-rustin
784b50cece chore: remove unnecessary blank line
Signed-off-by: hi-rustin <rustin.liu@gmail.com>
2024-01-11 09:45:21 +08:00
zachs18
bfe04e08c0 Fix deallocation with wrong allocator in (A)Rc::from_box_in 2024-01-10 02:17:47 -06:00
Taiki Endo
4e973b0b63 rc,sync: Do not create references to uninitialized values 2024-01-08 00:36:31 +09:00
The 8472
93b34a5ffa mark vec::IntoIter pointers as !nonnull 2024-01-07 03:44:04 +01:00
The 8472
fd8ba7bc3c typo fix 2024-01-07 03:42:45 +01:00
Matthias Krüger
923578e6f9
Rollup merge of #118781 - RalfJung:core-panic-feature, r=the8472
merge core_panic feature into panic_internals

I don't know why those are two separate features, but it does not seem intentional. This merge is useful because with https://github.com/rust-lang/rust/pull/118123, panic_internals is recognized as an internal feature but core_panic is not, even though core_panic definitely should be internal.
2024-01-06 16:07:46 +01:00
bors
5113ed28ea Auto merge of #118297 - shepmaster:warn-dead-tuple-fields, r=WaffleLapkin
Merge `unused_tuple_struct_fields` into `dead_code`

This implicitly upgrades the lint from `allow` to `warn` and places it into the `unused` lint group.

[Discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Moving.20.60unused_tuple_struct_fields.60.20from.20allow.20to.20warn)
2024-01-05 04:51:55 +00:00
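A hedged sketch of the kind of code the merged lint now flags:

```rust
// Previously this fell under the allow-by-default `unused_tuple_struct_fields`
// lint; after the merge, the never-read field is reported by `dead_code`.
struct Point(u32, u32);

fn main() {
    let p = Point(1, 2);
    // Only field 0 is read; field 1 now triggers a `dead_code` warning.
    println!("{}", p.0);
}
```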
León Orell Valerian Liehr
34ef194859
Rollup merge of #119434 - taiki-e:rc-is-dangling, r=Mark-Simulacrum
rc: Take *const T in is_dangling

It is not important which one is used since `is_dangling` does not access memory, but `*const` removes the need for `*const T` -> `*mut T` casts in `from_raw_in`.
2024-01-03 16:08:25 +01:00
Jake Goulding
5772818dc8 Adjust library tests for unused_tuple_struct_fields -> dead_code 2024-01-02 15:34:37 -05:00
Matthias Krüger
c67ab2e0b4
Rollup merge of #119158 - JohnTheCoolingFan:arc-weak-clone-pretty, r=cuviper
Clean up alloc::sync::Weak Clone implementation

Since both return points (tail and early return) return the same expression and the only difference is whether inner is available, the code that does the atomic operations and checks on inner was moved into the if body and the only return is at the tail. Original comments preserved.
2023-12-30 11:42:02 +01:00
Taiki Endo
2c23c06c32 rc: Take *const T in is_dangling
It is not important which one is used since `is_dangling` does not access
memory, but `*const` removes the need for `*const T` -> `*mut T` casts
in `from_raw_in`.
2023-12-30 16:28:00 +09:00
Gurinder Singh
e3aca01343 Italicise "bytes" in the docs of some Vec methods
because on a cursory read it's easy to miss that the limit is
in terms of bytes, not the number of elements. The italics should
help with that.
2023-12-29 09:53:29 +05:30
Matthias Krüger
89c3236789
Rollup merge of #119205 - mumbleskates:vecdeque-comment-fix, r=Mark-Simulacrum
fix minor mistake in comments describing VecDeque resizing

Avoiding confusion where one of the items in the deque seems to disappear in two of the three cases
2023-12-24 01:08:09 +01:00
Pietro Albini
c00486c9bb
update version placeholders 2023-12-22 11:01:42 +01:00
Kent Ross
f2e711e4c2 fix minor mistake in comments describing VecDeque resizing 2023-12-21 15:20:14 -08:00
JohnTheCoolingFan
0453d5fe6f
Cleaned up alloc::sync::Weak Clone implementation
Since both return points (tail and early return) return the same
expression and the only difference is whether inner is available, the
code that does the atomic operations and checks on inner was moved into
the if body and the only return is at the tail. Original comments
preserved.
2023-12-20 12:13:34 +03:00
bors
51c0db6a91 Auto merge of #106790 - the8472:rawvec-niche, r=scottmcm
add more niches to rawvec

Previously RawVec only had a single niche in its `NonNull` pointer. With this change it now has `isize::MAX` niches, since half the value-space of the capacity field is never needed: a capacity can never exceed `isize::MAX`.
2023-12-20 02:19:10 +00:00
Daniel Huang
f2f9b1d82a
Update c_str.rs 2023-12-14 19:08:36 -05:00
The 8472
6a2f44e9d8 add comment to RawVec::cap field 2023-12-11 23:38:48 +01:00
The 8472
502df1b7d4 add more niches to rawvec 2023-12-11 23:38:48 +01:00
bors
84f6130fe3 Auto merge of #118692 - surechen:remove_unused_imports, r=petrochenkov
remove redundant imports

Detects redundant imports that can be eliminated.

For #117772:

In order to facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.

r? `@petrochenkov`
2023-12-10 11:55:48 +00:00
bors
61afc9c928 Auto merge of #116949 - hamza1311:stablize-arc_unwrap_or_clone, r=dtolnay
Stabilize arc_unwrap_or_clone

Fixes: #93610

This likely needs FCP. I created this PR as its stabilization is trivial and FCP can just be conducted here. Not sure how to ping the libs API team (the last attempt apparently didn't work, according to the GH UI).
2023-12-10 05:01:00 +00:00
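A hedged usage sketch of the stabilized method:

```rust
use std::sync::Arc;

fn main() {
    let a = Arc::new(vec![1, 2, 3]);
    let b = Arc::clone(&a);

    // Another strong reference (`a`) still exists, so the data is cloned...
    let cloned: Vec<i32> = Arc::unwrap_or_clone(b);
    // ...while `a` is now unique, so the data is moved out without cloning.
    let moved: Vec<i32> = Arc::unwrap_or_clone(a);

    assert_eq!(cloned, moved);
}
```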
surechen
40ae34194c remove redundant imports
Detects redundant imports that can be eliminated.

For #117772:

In order to facilitate review and modification, the checking code and
the code that removes redundant imports are split into two PRs.
2023-12-10 10:56:22 +08:00
Ralf Jung
af4913fcf4 merge core_panic feature into panic_internals 2023-12-09 14:49:00 +01:00
bors
1a3aa4ad14 Auto merge of #114136 - TennyZhuang:linked-list-retain, r=thomcc
add LinkedList::{retain,retain_mut}

Implement #114135

The API is consistent with other collections.
2023-12-09 02:38:45 +00:00
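A hedged usage sketch, assuming the feature gate is named `linked_list_retain`:

```rust
#![feature(linked_list_retain)]

use std::collections::LinkedList;

fn main() {
    let mut list: LinkedList<i32> = (0..10).collect();
    // Consistent with Vec and HashMap: keep only elements the predicate accepts.
    list.retain(|&x| x % 2 == 0);
    assert_eq!(list.into_iter().collect::<Vec<_>>(), [0, 2, 4, 6, 8]);
}
```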
bors
c9d85d67c4 Auto merge of #117960 - zhiqiangxu:dry, r=workingjubilee
chore: avoid duplicate code in `Weak::inner`
2023-12-07 02:27:41 +00:00
Matthias Krüger
49d6594278
Rollup merge of #117563 - 0xalpharush:docs/into-raw, r=workingjubilee
docs: clarify explicitly freeing heap allocated memory

The documentation for `Box::into_raw` didn't mention `drop`, and I wondered if I was doing something wrong. Based off [this](https://stackoverflow.com/questions/75441199/rust-how-do-i-correctly-free-heap-allocated-memory), I think it's helpful to include the more concise yet explicit way to free heap-allocated memory. This is my first Rust PR and I went through https://std-dev-guide.rust-lang.org/development/, but let me know if I missed something :)
2023-12-06 17:21:57 +01:00
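A hedged sketch of the explicit-free pattern the docs now spell out:

```rust
fn main() {
    let boxed = Box::new(42_u32);
    let ptr: *mut u32 = Box::into_raw(boxed);

    // SAFETY: `ptr` came from `Box::into_raw` and is not used again afterwards.
    // Rebuilding the Box and dropping it frees the heap allocation explicitly.
    unsafe { drop(Box::from_raw(ptr)) };
}
```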
bors
15bb3e204a Auto merge of #118460 - the8472:fix-vec-realloc, r=saethlin
Fix in-place collect not reallocating when necessary

Regression introduced in https://github.com/rust-lang/rust/pull/110353.
This was [caught by miri](https://rust-lang.zulipchat.com/#narrow/stream/269128-miri/topic/Cron.20Job.20Failure.20.28miri-test-libstd.2C.202023-11.29/near/404764617)

r? `@saethlin`
2023-12-06 08:45:11 +00:00
zhiqiangxu
75d76c8ffe Don't repeat yourself 2023-12-06 09:02:19 +08:00
bors
e9013ac0e4 Auto merge of #118273 - AngelicosPhosphoros:dedup_2_loops_version_77772_2, r=the8472
Split `Vec::dedup_by` into 2 cycles

The first cycle runs until we find 2 equal elements; the second runs afterwards only if any were found in the first. This allows us to avoid any memory writes until we find an item that we want to remove.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.

Results of benchmarks before implementation (including new benchmark where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   __vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns__
 *   __vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns__
 *   __vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs__
 *   __vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs__
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   __vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns__
 *   __vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns__
 *   __vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns__
 *   __vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs__
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772

P.S. Note that this is the same PR as #92104; I just missed review and then forgot about it.
Also, I cannot reopen that pull request, so I am creating a new one.
I responded to the remaining questions directly by adding comments to my code.
2023-12-05 21:40:02 +00:00
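For context, a hedged sketch of the method whose internals were split (its observable behaviour is unchanged):

```rust
fn main() {
    let mut v = vec![1, 1, 2, 3, 3, 3, 4];
    // Removes consecutive duplicates; with all-unique input the new
    // two-loop implementation performs no writes at all.
    v.dedup_by(|a, b| a == b);
    assert_eq!(v, [1, 2, 3, 4]);
}
```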
AngelicosPhosphoros
964df019d2 Split Vec::dedup_by into 2 cycles
The first cycle runs until we find 2 equal elements; the second runs afterwards only if any were found in the first. This allows us to avoid any memory writes until we find an item that we want to remove.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.

Results of benchmarks before implementation (including new benchmark where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   __vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns__
 *   __vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns__
 *   __vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs__
 *   __vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs__
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   __vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns__
 *   __vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns__
 *   __vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns__
 *   __vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs__
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772
2023-12-05 21:01:00 +01:00
The 8472
13a843ebcb Fix in-place collect not reallocating when necessary 2023-12-05 20:09:22 +01:00
Jules Bertholet
1265af0396
Remove useless 'static bounds on Box allocator
#79327 added `'static` bounds to the allocator parameter
for various `Box` + `Pin` APIs to ensure soundness.
But it was a bit overzealous; some of the bounds aren't
actually needed.
2023-12-04 23:54:39 -05:00
bors
ec1f21cb04 Auto merge of #118433 - matthiaskrgr:rollup-fi9lrwg, r=matthiaskrgr
Rollup of 7 pull requests

Successful merges:

 - #116839 (Implement thread parking for xous)
 - #118265 (remove the memcpy-on-equal-ptrs assumption)
 - #118269 (Unify `TraitRefs` and `PolyTraitRefs` in `ValuePairs`)
 - #118394 (Remove HIR opkinds)
 - #118398 (Add proper cfgs in std)
 - #118419 (Eagerly return `ExprKind::Err` on `yield`/`await` in wrong coroutine context)
 - #118422 (Fix coroutine validation for mixed panic strategy)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-11-29 08:51:01 +00:00
Matthias Krüger
f8628a179d
Rollup merge of #118383 - shepmaster:unused-tuple-struct-field-cleanup-stdlib, r=m-ou-se
Address unused tuple struct fields in the standard library
2023-11-29 04:23:28 +01:00
Matthias Krüger
b7016ae205
Rollup merge of #118398 - mu001999:std/add_cfgs, r=thomcc
Add proper cfgs in std

Detected by #118257
2023-11-29 04:23:23 +01:00
Jake Goulding
115eac03bb Address unused tuple struct fields in the standard library 2023-11-28 12:00:54 -05:00
bors
df0295f071 Auto merge of #110353 - the8472:in-place-flatten-chunks, r=cuviper
Expand in-place iteration specialization to Flatten, FlatMap and ArrayChunks

This enables the following cases to collect in-place:

```rust
let v = vec![[0u8; 4]; 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v: Vec<Option<NonZeroUsize>> = vec![NonZeroUsize::new(0); 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v = vec![0u8; 4096];
let v: Vec<_> = v.into_iter().array_chunks::<4>().collect();
```

The nicheful-option flattening in particular should be useful in real code.
2023-11-28 12:22:16 +00:00
r0cky
c751bfa015 Add proper cfgs 2023-11-28 09:02:34 +08:00
Michael Goulet
fcb9fcc28c
Rollup merge of #117968 - Urgau:stabilize-ptr-addr-eq, r=dtolnay
Stabilize `ptr::addr_eq`

This PR stabilizes the `ptr_addr_eq` library feature, representing:

```rust
// core::ptr

pub fn addr_eq<T: ?Sized, U: ?Sized>(p: *const T, q: *const U) -> bool;
```

FCP has already started [on the tracking issue](https://github.com/rust-lang/rust/issues/116324#issuecomment-1813008697) and is waiting on the final comment period.

Note: stabilizing this feature is somewhat of a requirement for a new T-lang lint, cf. https://github.com/rust-lang/rust/pull/117758#issuecomment-1813183686.
2023-11-25 17:23:33 -05:00
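A hedged usage sketch of the stabilized function:

```rust
fn main() {
    let arr = [1u8, 2, 3];
    let thin: *const u8 = &arr[0];
    let fat: *const [u8] = &arr[..];

    // Only the addresses are compared; pointee types and metadata
    // (the slice length) are ignored.
    assert!(std::ptr::addr_eq(thin, fat));
}
```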
Amanieu d'Antras
d085f34a2d Add UnorderedKeyError 2023-11-23 16:05:29 +00:00
Amanieu d'Antras
166e348564
Update library/alloc/src/collections/btree/map.rs
Co-authored-by: Joe ST <joe@fbstj.net>
2023-11-23 12:37:20 +00:00
Amanieu d'Antras
8ee9693177 Rewrite the BTreeMap cursor API using gaps
Tracking issue: #107540

Currently, a `Cursor` points to a single element in the tree, and allows
moving to the next or previous element while mutating the tree. However
this was found to be confusing and hard to use.

This PR completely refactors cursors to instead point to a gap between
two elements in the tree. This eliminates the need for a "ghost" element
that exists after the last element and before the first one.
Additionally, `upper_bound` and `lower_bound` now have a much clearer
meaning.

The ability to mutate keys is also factored out into a separate
`CursorMutKey` type which is unsafe to create. This makes the API easier
to use since it avoids duplicated versions of each method with and
without key mutation.

API summary:

```rust
impl<K, V> BTreeMap<K, V> {
    fn lower_bound<Q>(&self, bound: Bound<&Q>) -> Cursor<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn lower_bound_mut<Q>(&mut self, bound: Bound<&Q>) -> CursorMut<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn upper_bound<Q>(&self, bound: Bound<&Q>) -> Cursor<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
    fn upper_bound_mut<Q>(&mut self, bound: Bound<&Q>) -> CursorMut<'_, K, V>
    where
        K: Borrow<Q> + Ord,
        Q: Ord;
}

struct Cursor<'a, K: 'a, V: 'a>;

impl<'a, K, V> Cursor<'a, K, V> {
    fn next(&mut self) -> Option<(&'a K, &'a V)>;
    fn prev(&mut self) -> Option<(&'a K, &'a V)>;
    fn peek_next(&self) -> Option<(&'a K, &'a V)>;
    fn peek_prev(&self) -> Option<(&'a K, &'a V)>;
}

struct CursorMut<'a, K: 'a, V: 'a>;

impl<'a, K, V> CursorMut<'a, K, V> {
    fn next(&mut self) -> Option<(&K, &mut V)>;
    fn prev(&mut self) -> Option<(&K, &mut V)>;
    fn peek_next(&mut self) -> Option<(&K, &mut V)>;
    fn peek_prev(&mut self) -> Option<(&K, &mut V)>;

    unsafe fn insert_after_unchecked(&mut self, key: K, value: V);
    unsafe fn insert_before_unchecked(&mut self, key: K, value: V);
    fn insert_after(&mut self, key: K, value: V);
    fn insert_before(&mut self, key: K, value: V);
    fn remove_next(&mut self) -> Option<(K, V)>;
    fn remove_prev(&mut self) -> Option<(K, V)>;

    fn as_cursor(&self) -> Cursor<'_, K, V>;

    unsafe fn with_mutable_key(self) -> CursorMutKey<'a, K, V, A>;
}

struct CursorMutKey<'a, K: 'a, V: 'a>;

impl<'a, K, V> CursorMutKey<'a, K, V> {
    fn next(&mut self) -> Option<(&mut K, &mut V)>;
    fn prev(&mut self) -> Option<(&mut K, &mut V)>;
    fn peek_next(&mut self) -> Option<(&mut K, &mut V)>;
    fn peek_prev(&mut self) -> Option<(&mut K, &mut V)>;

    unsafe fn insert_after_unchecked(&mut self, key: K, value: V);
    unsafe fn insert_before_unchecked(&mut self, key: K, value: V);
    fn insert_after(&mut self, key: K, value: V);
    fn insert_before(&mut self, key: K, value: V);
    fn remove_next(&mut self) -> Option<(K, V)>;
    fn remove_prev(&mut self) -> Option<(K, V)>;

    fn as_cursor(&self) -> Cursor<'_, K, V>;

    unsafe fn with_mutable_key(self) -> CursorMutKey<'a, K, V, A>;
}
```
2023-11-23 11:42:55 +00:00
Petr Portnov
72a8633ee8
docs(GH-118094): make docs a bit more explicit
Signed-off-by: Petr Portnov <me@progrm-jarvis.ru>
2023-11-20 18:35:04 +03:00
Petr Portnov
91fcdde51b
chore(GH-118094): explicitly mark _elem as unused
Signed-off-by: Petr Portnov <me@progrm-jarvis.ru>
2023-11-20 18:33:55 +03:00