Commit Graph

2366 Commits

Author SHA1 Message Date
bors
fa404339c9 Auto merge of #85528 - the8472:iter-markers, r=dtolnay
Implement iterator specialization traits on more adapters

This adds

* `TrustedLen` to `Skip` and `StepBy`
* `TrustedRandomAccess` to `Skip`
* `InPlaceIterable` and `SourceIter` to `Copied` and `Cloned`

The first two might improve performance in the compiler itself, since `skip` is used in several places. Constellations that would exercise the last point are probably rare, since they would require an owning iterator that has references as items somewhere in its iterator pipeline.
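A hypothetical constellation that would exercise the `Copied` impls (an owning iterator whose items are references), assuming a 64-bit target where `&u64` and `u64` share size and alignment:

```rust
fn main() {
    let x = 42u64;
    // an owning iterator (vec::IntoIter) whose items are references
    let refs: Vec<&u64> = vec![&x; 1024];
    // with `SourceIter`/`InPlaceIterable` forwarded through `Copied`,
    // this collect may reuse the original allocation in place
    let owned: Vec<u64> = refs.into_iter().copied().collect();
    assert_eq!(owned.len(), 1024);
}
```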

Improvements for `Skip`:

```
# old
test iter::bench_skip_trusted_random_access                     ... bench:       8,335 ns/iter (+/- 90)

# new
test iter::bench_skip_trusted_random_access                     ... bench:       2,753 ns/iter (+/- 27)
```
2024-01-21 11:17:46 +00:00
Guillaume Gomez
b917753d79
Rollup merge of #120116 - the8472:only-same-alignments, r=cuviper
Remove alignment-changing in-place collect

This removes the alignment-changing in-place collect optimization introduced in #110353.
Currently, stable users can't benefit from the optimization because `GlobalAlloc` doesn't support alignment-changing realloc, and neither do most POSIX allocators. So in practice it has a negative impact on performance.

Explanation from https://github.com/rust-lang/rust/issues/120091#issuecomment-1899071681:

> > You mention that in case of alignment mismatch -- when the new alignment is less than the old -- the implementation calls `mremap`.
>
> I was trying to note that this isn't really the case in practice, due to the semantics of Rust's allocator APIs. The only use of the allocator within the `in_place_collect` implementation itself is [a call to `Allocator::shrink()`](db7125f008/library/alloc/src/vec/in_place_collect.rs (L299-L303)), which per its documentation [allows decreasing the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.Allocator.html). However, in stable Rust, the only available `Allocator` is [`Global`](https://doc.rust-lang.org/1.75.0/alloc/alloc/struct.Global.html), which delegates to the registered `GlobalAlloc`. Since `GlobalAlloc::realloc()` [cannot change the required alignment](https://doc.rust-lang.org/1.75.0/core/alloc/trait.GlobalAlloc.html#method.realloc), the implementation of [`<Global as Allocator>::shrink()`](db7125f008/library/alloc/src/alloc.rs (L280-L321)) must fall back to creating a brand-new allocation, `memcpy`ing the data into it, and freeing the old allocation, whenever the alignment doesn't remain exactly the same.
>
> Therefore, the underlying allocator, provided by libc or some other source, has no opportunity to internally `mremap()` the data when the alignment is changed, since it has no way of knowing that the allocation is the same.
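For illustration, a sketch of the kind of alignment-changing pipeline affected (hypothetical values; the source `Vec<u32>` has alignment 4, the destination `Vec<u8>` alignment 1):

```rust
fn main() {
    let src: Vec<u32> = vec![0xAABB_CCDD; 256];
    // with the optimization removed, this alignment-changing case no longer
    // attempts in-place reuse and instead collects into a fresh allocation
    let dst: Vec<u8> = src.into_iter().flat_map(u32::to_le_bytes).collect();
    assert_eq!(dst.len(), 1024);
}
```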
2024-01-20 20:06:35 +01:00
Matthias Krüger
7219bd22ed
Rollup merge of #120110 - invpt:patch-1, r=the8472
Update documentation for Vec::into_boxed_slice to be more clear about excess capacity

Currently, the documentation for Vec::into_boxed_slice says that "if the vector has excess capacity, its items will be moved into a newly-allocated buffer with exactly the right capacity." This is misleading, as copies do not necessarily occur, depending on whether the allocator supports in-place shrinking. I copied some of the wording from shrink_to_fit, though it could potentially still be worded better than this.
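A quick sketch of the behavior in question; whether the shrink copies or happens in place depends on the allocator:

```rust
fn main() {
    let mut v = Vec::with_capacity(10);
    v.extend([1, 2, 3]);
    assert!(v.capacity() >= 10);
    // excess capacity is dropped; the items may move to a new buffer,
    // or the allocation may shrink in place, depending on the allocator
    let b: Box<[i32]> = v.into_boxed_slice();
    assert_eq!(b.len(), 3);
}
```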
2024-01-19 08:15:05 +01:00
invpt
35a9fc3472 Clarify docs for Vec::into_boxed_slice, Vec::shrink_to_fit 2024-01-18 18:01:36 -05:00
The 8472
85d1787962 remove alignment-changing in-place collect
Currently, stable users can't benefit from this because `GlobalAlloc` doesn't support
alignment-changing realloc, and neither do most POSIX allocators.
So in practice it always results in an extra memcpy.
2024-01-18 22:50:14 +01:00
The 8472
b28a95391b update internal ASCII art comment for vec specializations 2024-01-18 22:47:20 +01:00
Jake Goulding
fb7762b1c5 Remove no-longer-needed allow(dead_code) from the standard library
`repr(transparent)` now silences the lint.
2024-01-18 13:14:42 -05:00
Robert Grosse
db7125f008
Fix typo in comments (in_place_collect) 2024-01-16 20:48:22 -08:00
Matthias Krüger
b3d15ebb08
Rollup merge of #119853 - klensy:rustfmt-ignore, r=cuviper
rustfmt.toml: don't ignore just any tests path, only root one

Previously, any `tests` path was ignored; now only `/tests` at the repo root is.

For reference, https://git-scm.com/docs/gitignore#_pattern_format
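A hypothetical before/after of the config entry, following the gitignore pattern semantics linked above:

```toml
# before: matched any directory named `tests` anywhere in the tree
# ignore = ["tests"]

# after: anchored to the repository root, matches only the top-level /tests
ignore = ["/tests"]
```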
2024-01-11 19:42:53 +01:00
klensy
aa696c5a22 apply fmt 2024-01-11 15:04:48 +03:00
hi-rustin
784b50cece chore: remove unnecessary blank line
Signed-off-by: hi-rustin <rustin.liu@gmail.com>
2024-01-11 09:45:21 +08:00
The8472
37d26c719d Implement in-place iteration markers for iter::{Copied, Cloned} 2024-01-10 19:03:57 +01:00
The 8472
93b34a5ffa mark vec::IntoIter pointers as !nonnull 2024-01-07 03:44:04 +01:00
The 8472
fd8ba7bc3c typo fix 2024-01-07 03:42:45 +01:00
Matthias Krüger
923578e6f9
Rollup merge of #118781 - RalfJung:core-panic-feature, r=the8472
merge core_panic feature into panic_internals

I don't know why those are two separate features, but it does not seem intentional. This merge is useful because with https://github.com/rust-lang/rust/pull/118123, panic_internals is recognized as an internal feature, but core_panic is not -- but core_panic definitely should be internal.
2024-01-06 16:07:46 +01:00
bors
5113ed28ea Auto merge of #118297 - shepmaster:warn-dead-tuple-fields, r=WaffleLapkin
Merge `unused_tuple_struct_fields` into `dead_code`

This implicitly upgrades the lint from `allow` to `warn` and places it into the `unused` lint group.

[Discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Moving.20.60unused_tuple_struct_fields.60.20from.20allow.20to.20warn)
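A minimal example of code the merged lint now warns on by default (hypothetical struct):

```rust
struct Point(i32, i32); // constructed below, but its fields are never read

fn main() {
    // `dead_code` now warns: fields `0` and `1` are never read
    let _p = Point(1, 2);
}
```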
2024-01-05 04:51:55 +00:00
León Orell Valerian Liehr
34ef194859
Rollup merge of #119434 - taiki-e:rc-is-dangling, r=Mark-Simulacrum
rc: Take *const T in is_dangling

It is not important which one is used, since `is_dangling` does not access memory, but `*const` removes the need for `*const T` -> `*mut T` casts in `from_raw_in`.
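A rough sketch of the check in question (not the exact stdlib source; the dangling sentinel used by `Weak::new` is the maximum address):

```rust
fn is_dangling<T: ?Sized>(ptr: *const T) -> bool {
    // compare only the address, discarding any wide-pointer metadata
    ptr as *const () as usize == usize::MAX
}

fn main() {
    // a real pointer is not the sentinel
    assert!(!is_dangling("hi" as *const str));
}
```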
2024-01-03 16:08:25 +01:00
Jake Goulding
5772818dc8 Adjust library tests for unused_tuple_struct_fields -> dead_code 2024-01-02 15:34:37 -05:00
Matthias Krüger
c67ab2e0b4
Rollup merge of #119158 - JohnTheCoolingFan:arc-weak-clone-pretty, r=cuviper
Clean up alloc::sync::Weak Clone implementation

Since both return points (the tail and the early return) return the same expression, and the only difference is whether `inner` is available, the code that performs the atomic operations and checks on `inner` was moved into the `if` body, leaving a single return at the tail. Original comments preserved.
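A self-contained analogue of the resulting shape, with hypothetical types standing in for `Weak` and its inner counter:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct Handle {
    count: Option<AtomicUsize>, // stands in for `Weak::inner()`
}

impl Handle {
    fn clone_like(&self) -> &Self {
        if let Some(count) = &self.count {
            // the atomic operations and checks run only when inner is available
            count.fetch_add(1, Ordering::Relaxed);
        }
        // single return point at the tail; the early return is gone
        self
    }
}

fn main() {
    let h = Handle { count: Some(AtomicUsize::new(1)) };
    h.clone_like();
    assert_eq!(h.count.as_ref().unwrap().load(Ordering::Relaxed), 2);
}
```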
2023-12-30 11:42:02 +01:00
Taiki Endo
2c23c06c32 rc: Take *const T in is_dangling
It is not important which one is used, since `is_dangling` does not access
memory, but `*const` removes the need for `*const T` -> `*mut T` casts
in `from_raw_in`.
2023-12-30 16:28:00 +09:00
Gurinder Singh
e3aca01343 Italicise "bytes" in the docs of some Vec methods
because on a cursory read it's easy to miss that the limit is
in terms of bytes, not the number of elements. The italics should
help with that.
2023-12-29 09:53:29 +05:30
Matthias Krüger
89c3236789
Rollup merge of #119205 - mumbleskates:vecdeque-comment-fix, r=Mark-Simulacrum
fix minor mistake in comments describing VecDeque resizing

Avoids confusion where one of the items in the deque seemed to disappear in two of the three cases
2023-12-24 01:08:09 +01:00
Pietro Albini
c00486c9bb
update version placeholders 2023-12-22 11:01:42 +01:00
Kent Ross
f2e711e4c2 fix minor mistake in comments describing VecDeque resizing 2023-12-21 15:20:14 -08:00
JohnTheCoolingFan
0453d5fe6f
Cleaned up alloc::sync::Weak Clone implementation
Since both return points (the tail and the early return) return the same
expression, and the only difference is whether `inner` is available, the
code that performs the atomic operations and checks on `inner` was moved
into the `if` body, leaving a single return at the tail. Original comments
preserved.
2023-12-20 12:13:34 +03:00
bors
51c0db6a91 Auto merge of #106790 - the8472:rawvec-niche, r=scottmcm
add more niches to rawvec

Previously `RawVec` only had a single niche, in its `NonNull` pointer. With this change it now has `isize::MAX` niches, since half the value-space of the capacity field is never needed: a capacity can never exceed `isize::MAX`.
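A quick check of the intended effect on a compiler containing this change (sizes assume a 64-bit target):

```rust
use std::mem::size_of;

fn main() {
    // the pointer niche already made a plain Option free
    assert_eq!(size_of::<Vec<u8>>(), size_of::<Option<Vec<u8>>>());
    // with isize::MAX niches in the capacity field, nested enums can
    // also pack their discriminants into Vec's layout
    assert_eq!(size_of::<Vec<u8>>(), size_of::<Option<Option<Vec<u8>>>>());
}
```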
2023-12-20 02:19:10 +00:00
Daniel Huang
f2f9b1d82a
Update c_str.rs 2023-12-14 19:08:36 -05:00
The 8472
6a2f44e9d8 add comment to RawVec::cap field 2023-12-11 23:38:48 +01:00
The 8472
502df1b7d4 add more niches to rawvec 2023-12-11 23:38:48 +01:00
bors
8a3765582c Auto merge of #117758 - Urgau:lint_pointer_trait_comparisons, r=davidtwco
Add lint against ambiguous wide pointer comparisons

This PR is the resolution of https://github.com/rust-lang/rust/issues/106447 decided in https://github.com/rust-lang/rust/issues/117717 by T-lang.

## `ambiguous_wide_pointer_comparisons`

*warn-by-default*

The `ambiguous_wide_pointer_comparisons` lint checks comparisons that use wide pointers (`*const`/`*mut` to `?Sized` types) as operands.

### Example

```rust
trait T {}
struct A;
struct B;
impl T for A {}
impl T for B {}

let ab = (A, B);
let a = &ab.0 as *const dyn T;
let b = &ab.1 as *const dyn T;

let _ = a == b; // warns: ambiguous wide pointer comparison
```

### Explanation

The comparison includes metadata which may not be expected.
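Where the comparison is intentional, the intent can be spelled out with the `std::ptr` helpers the lint suggests (`a` and `b` as in the example above):

```rust
// compare addresses only, ignoring vtable metadata
let _ = std::ptr::addr_eq(a, b);
// compare both address and metadata explicitly
let _ = std::ptr::eq(a, b);
```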

-------

This PR also drops `clippy::vtable_address_comparisons`, which is superseded by this one.

~~One thing: is the current naming right? `invalid` seems a bit too much.~~

Fixes https://github.com/rust-lang/rust/issues/117717
2023-12-11 14:33:16 +00:00
bors
84f6130fe3 Auto merge of #118692 - surechen:remove_unused_imports, r=petrochenkov
remove redundant imports

Detects redundant imports that can be eliminated.

for #117772 :

To facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.

r? `@petrochenkov`
2023-12-10 11:55:48 +00:00
bors
61afc9c928 Auto merge of #116949 - hamza1311:stablize-arc_unwrap_or_clone, r=dtolnay
Stabilize arc_unwrap_or_clone

Fixes: #93610

This likely needs an FCP. I created this PR since its stabilization is trivial, and the FCP can be conducted right here. Not sure how to ping the libs API team (my last attempt apparently didn't work, according to the GitHub UI).
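For reference, the now-stable API in use:

```rust
use std::sync::Arc;

fn main() {
    let a = Arc::new(String::from("hello"));
    let b = Arc::clone(&a);
    // two strong refs: unwrapping `b` clones the inner String
    let s = Arc::unwrap_or_clone(b);
    assert_eq!(s, "hello");
    // `a` is now the only ref: the String is moved out without cloning
    let s2 = Arc::unwrap_or_clone(a);
    assert_eq!(s2, "hello");
}
```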
2023-12-10 05:01:00 +00:00
surechen
40ae34194c remove redundant imports
Detects redundant imports that can be eliminated.

for #117772 :

To facilitate review and modification, the checking code and the code
that removes redundant imports are split into two PRs.
2023-12-10 10:56:22 +08:00
Ralf Jung
af4913fcf4 merge core_panic feature into panic_internals 2023-12-09 14:49:00 +01:00
bors
1a3aa4ad14 Auto merge of #114136 - TennyZhuang:linked-list-retain, r=thomcc
add LinkedList::{retain,retain_mut}

Implement #114135

The API is consistent with other collections.
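Usage sketch (gated behind the unstable `linked_list_retain` feature when this landed):

```rust
#![feature(linked_list_retain)]

use std::collections::LinkedList;

fn main() {
    let mut list: LinkedList<i32> = (1..=6).collect();
    // consistent with Vec::retain: keep only elements matching the predicate
    list.retain(|&x| x % 2 == 0);
    assert_eq!(list.into_iter().collect::<Vec<_>>(), vec![2, 4, 6]);
}
```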
2023-12-09 02:38:45 +00:00
bors
c9d85d67c4 Auto merge of #117960 - zhiqiangxu:dry, r=workingjubilee
chore: avoid duplicate code in `Weak::inner`
2023-12-07 02:27:41 +00:00
Matthias Krüger
49d6594278
Rollup merge of #117563 - 0xalpharush:docs/into-raw, r=workingjubilee
docs: clarify explicitly freeing heap allocated memory

The documentation for `Box::into_raw` didn't mention `drop`, and I wondered if I was doing something wrong. Based off [this](https://stackoverflow.com/questions/75441199/rust-how-do-i-correctly-free-heap-allocated-memory), I think it's helpful to include the more concise yet explicit way to free heap-allocated memory. This is my first Rust PR and I went through https://std-dev-guide.rust-lang.org/development/, but let me know if I missed something :)
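The pattern in question, per the updated docs:

```rust
fn main() {
    let x = Box::new(String::from("Hello"));
    let ptr = Box::into_raw(x);
    // explicitly free the heap allocation by reconstructing the Box
    // and letting it drop
    unsafe { drop(Box::from_raw(ptr)) };
}
```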
2023-12-06 17:21:57 +01:00
bors
15bb3e204a Auto merge of #118460 - the8472:fix-vec-realloc, r=saethlin
Fix in-place collect not reallocating when necessary

Regression introduced in https://github.com/rust-lang/rust/pull/110353.
This was [caught by miri](https://rust-lang.zulipchat.com/#narrow/stream/269128-miri/topic/Cron.20Job.20Failure.20.28miri-test-libstd.2C.202023-11.29/near/404764617)

r? `@saethlin`
2023-12-06 08:45:11 +00:00
Urgau
5e1bfb538f Adjust tests for newly added ambiguous_wide_pointer_comparisons lint 2023-12-06 09:03:48 +01:00
bors
1dd4db5062 Auto merge of #118655 - compiler-errors:rollup-vrngyzn, r=compiler-errors
Rollup of 9 pull requests

Successful merges:

 - #117793 (Update variable name to fix `unused_variables` warning)
 - #118123 (Add support for making lib features internal)
 - #118268 (Pretty print `Fn<(..., ...)>` trait refs with parentheses (almost) always)
 - #118346 (Add `deeply_normalize_for_diagnostics`, use it in coherence)
 - #118350 (Simplify Default for tuples)
 - #118450 (Use OnceCell in cell module documentation)
 - #118585 (Fix parser ICE when recovering `dyn`/`impl` after `for<...>`)
 - #118587 (Cleanup error handlers some more)
 - #118642 (bootstrap(builder.rs): Don't explicitly warn against `semicolon_in_expressions_from_macros`)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-12-06 04:20:51 +00:00
zhiqiangxu
75d76c8ffe Don't repeat yourself 2023-12-06 09:02:19 +08:00
bors
e9013ac0e4 Auto merge of #118273 - AngelicosPhosphoros:dedup_2_loops_version_77772_2, r=the8472
Split `Vec::dedup_by` into 2 cycles

The first loop runs until two equal elements are found; the second runs only if the first found any. This avoids all memory writes until an item that should be removed is encountered.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.
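A minimal sketch of the two-phase idea (not the stdlib implementation; it uses swaps plus `truncate` to avoid requiring `Clone`):

```rust
fn dedup_two_phase<T: PartialEq>(v: &mut Vec<T>) {
    // phase 1: scan read-only until the first adjacent duplicate
    let Some(first_dup) = v.windows(2).position(|w| w[0] == w[1]) else {
        return; // nothing to remove: no memory writes at all
    };
    // phase 2: compact from that point on, as a regular dedup would
    let mut write = first_dup + 1;
    for read in first_dup + 1..v.len() {
        if v[read] != v[write - 1] {
            v.swap(read, write);
            write += 1;
        }
    }
    v.truncate(write); // drop the duplicates moved past `write`
}

fn main() {
    let mut v = vec![1, 1, 2, 2, 3];
    dedup_two_phase(&mut v);
    assert_eq!(v, [1, 2, 3]);
}
```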

Results of benchmarks before implementation (including a new benchmark where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   __vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns__
 *   __vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns__
 *   __vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs__
 *   __vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs__
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   __vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns__
 *   __vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns__
 *   __vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns__
 *   __vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs__
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772

P.S. Note that this is the same PR as #92104; I missed the review then and forgot about it.
Also, I cannot reopen that pull request, so I am creating a new one.
I responded to the remaining questions directly by adding comments to my code.
2023-12-05 21:40:02 +00:00
AngelicosPhosphoros
964df019d2 Split Vec::dedup_by into 2 cycles
The first loop runs until two equal elements are found; the second runs only if the first found any. This avoids all memory writes until an item that should be removed is encountered.

This leads to significant performance gains if all `Vec` items are kept: -40% on my benchmark with unique integers.

Results of benchmarks before implementation (including a new benchmark where nothing needs to be removed):
 *   vec::bench_dedup_all_100                 74.00ns/iter  +/- 13.00ns
 *   vec::bench_dedup_all_1000               572.00ns/iter +/- 272.00ns
 *   vec::bench_dedup_all_100000              64.42µs/iter  +/- 19.47µs
 *   __vec::bench_dedup_none_100                67.00ns/iter  +/- 17.00ns__
 *   __vec::bench_dedup_none_1000              662.00ns/iter  +/- 86.00ns__
 *   __vec::bench_dedup_none_10000               9.16µs/iter   +/- 2.71µs__
 *   __vec::bench_dedup_none_100000             91.25µs/iter   +/- 1.82µs__
 *   vec::bench_dedup_random_100             105.00ns/iter  +/- 11.00ns
 *   vec::bench_dedup_random_1000            781.00ns/iter  +/- 10.00ns
 *   vec::bench_dedup_random_10000             9.00µs/iter   +/- 5.62µs
 *   vec::bench_dedup_random_100000          449.81µs/iter  +/- 74.99µs
 *   vec::bench_dedup_slice_truncate_100     105.00ns/iter  +/- 16.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.65µs/iter +/- 481.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.33µs/iter   +/- 5.23µs
 *   vec::bench_dedup_slice_truncate_100000  501.12µs/iter  +/- 46.97µs

Results after implementation:
 *   vec::bench_dedup_all_100                 75.00ns/iter   +/- 9.00ns
 *   vec::bench_dedup_all_1000               494.00ns/iter +/- 117.00ns
 *   vec::bench_dedup_all_100000              58.13µs/iter   +/- 8.78µs
 *   __vec::bench_dedup_none_100                52.00ns/iter  +/- 22.00ns__
 *   __vec::bench_dedup_none_1000              417.00ns/iter +/- 116.00ns__
 *   __vec::bench_dedup_none_10000               4.11µs/iter +/- 546.00ns__
 *   __vec::bench_dedup_none_100000             40.47µs/iter   +/- 5.36µs__
 *   vec::bench_dedup_random_100              77.00ns/iter  +/- 15.00ns
 *   vec::bench_dedup_random_1000            681.00ns/iter  +/- 86.00ns
 *   vec::bench_dedup_random_10000            11.66µs/iter   +/- 2.22µs
 *   vec::bench_dedup_random_100000          469.35µs/iter  +/- 20.53µs
 *   vec::bench_dedup_slice_truncate_100     100.00ns/iter   +/- 5.00ns
 *   vec::bench_dedup_slice_truncate_1000      2.55µs/iter +/- 224.00ns
 *   vec::bench_dedup_slice_truncate_10000    18.95µs/iter   +/- 2.59µs
 *   vec::bench_dedup_slice_truncate_100000  492.85µs/iter  +/- 72.84µs

Resolves #77772
2023-12-05 21:01:00 +01:00
Michael Goulet
19bf749560
Rollup merge of #118123 - RalfJung:internal-lib-features, r=compiler-errors
Add support for making lib features internal

We have the notion of an "internal" lang feature: a feature that is never intended to be stabilized, and whose use can cause ICEs and other issues without that being considered a bug.

This extends that idea to lib features as well. It is an alternative to https://github.com/rust-lang/rust/pull/115623: instead of using an attribute to declare lib features internal, we simply do this based on the name. Everything ending in `_internals` or `_internal` is considered internal.

Then we rename `core_intrinsics` to `core_intrinsics_internal`, which fixes https://github.com/rust-lang/rust/issues/115597.
2023-12-05 14:52:41 -05:00
The 8472
13a843ebcb Fix in-place collect not reallocating when necessary 2023-12-05 20:09:22 +01:00
bors
ec1f21cb04 Auto merge of #118433 - matthiaskrgr:rollup-fi9lrwg, r=matthiaskrgr
Rollup of 7 pull requests

Successful merges:

 - #116839 (Implement thread parking for xous)
 - #118265 (remove the memcpy-on-equal-ptrs assumption)
 - #118269 (Unify `TraitRefs` and `PolyTraitRefs` in `ValuePairs`)
 - #118394 (Remove HIR opkinds)
 - #118398 (Add proper cfgs in std)
 - #118419 (Eagerly return `ExprKind::Err` on `yield`/`await` in wrong coroutine context)
 - #118422 (Fix coroutine validation for mixed panic strategy)

r? `@ghost`
`@rustbot` modify labels: rollup
2023-11-29 08:51:01 +00:00
Matthias Krüger
f8628a179d
Rollup merge of #118383 - shepmaster:unused-tuple-struct-field-cleanup-stdlib, r=m-ou-se
Address unused tuple struct fields in the standard library
2023-11-29 04:23:28 +01:00
Matthias Krüger
b7016ae205
Rollup merge of #118398 - mu001999:std/add_cfgs, r=thomcc
Add proper cfgs in std

Detected by #118257
2023-11-29 04:23:23 +01:00
Jake Goulding
115eac03bb Address unused tuple struct fields in the standard library 2023-11-28 12:00:54 -05:00
bors
df0295f071 Auto merge of #110353 - the8472:in-place-flatten-chunks, r=cuviper
Expand in-place iteration specialization to Flatten, FlatMap and ArrayChunks

This enables the following cases to collect in-place:

```rust
let v = vec![[0u8; 4]; 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v: Vec<Option<NonZeroUsize>> = vec![NonZeroUsize::new(0); 1024];
let v: Vec<_> = v.into_iter().flatten().collect();

let v = vec![0u8; 4096];
let v: Vec<_> = v.into_iter().array_chunks::<4>().collect();
```

The nicheful option flattening in particular should be useful in real code.
2023-11-28 12:22:16 +00:00