rust/library/alloc/src/vec/mod.rs

//! A contiguous growable array type with heap-allocated contents, written
//! `Vec<T>`.
//!
//! Vectors have *O*(1) indexing, amortized *O*(1) push (to the end) and
//! *O*(1) pop (from the end).
//!
//! Vectors ensure they never allocate more than `isize::MAX` bytes.
//!
//! # Examples
//!
//! You can explicitly create a [`Vec`] with [`Vec::new`]:
//!
//! ```
//! let v: Vec<i32> = Vec::new();
//! ```
//!
//! ...or by using the [`vec!`] macro:
//!
//! ```
//! let v: Vec<i32> = vec![];
//!
//! let v = vec![1, 2, 3, 4, 5];
//!
//! let v = vec![0; 10]; // ten zeroes
//! ```
//!
//! You can [`push`] values onto the end of a vector (which will grow the vector
//! as needed):
//!
//! ```
//! let mut v = vec![1, 2];
//!
//! v.push(3);
//! ```
//!
//! Popping values works in much the same way:
//!
//! ```
//! let mut v = vec![1, 2];
//!
//! let two = v.pop();
//! ```
//!
//! Vectors also support indexing (through the [`Index`] and [`IndexMut`] traits):
//!
//! ```
//! let mut v = vec![1, 2, 3];
//! let three = v[2];
//! v[1] = v[1] + 5;
//! ```
//!
//! [`push`]: Vec::push
#![stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(no_global_oom_handling))]
use core::cmp;
use core::cmp::Ordering;
use core::fmt;
use core::hash::{Hash, Hasher};
#[cfg(not(no_global_oom_handling))]
use core::iter;
use core::marker::PhantomData;
use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
use core::ops::{self, Index, IndexMut, Range, RangeBounds};
use core::ptr::{self, NonNull};
use core::slice::{self, SliceIndex};
#[unstable(feature = "extract_if", reason = "recently added", issue = "43244")]
pub use self::extract_if::ExtractIf;
use crate::alloc::{Allocator, Global};
use crate::borrow::{Cow, ToOwned};
use crate::boxed::Box;
use crate::collections::TryReserveError;
use crate::raw_vec::RawVec;
mod extract_if;
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_splice", since = "1.21.0")]
pub use self::splice::Splice;
#[cfg(not(no_global_oom_handling))]
mod splice;
#[stable(feature = "drain", since = "1.6.0")]
pub use self::drain::Drain;
mod drain;
#[cfg(not(no_global_oom_handling))]
mod cow;
#[cfg(not(no_global_oom_handling))]
pub(crate) use self::in_place_collect::AsVecIntoIter;
#[stable(feature = "rust1", since = "1.0.0")]
pub use self::into_iter::IntoIter;
mod into_iter;
#[cfg(not(no_global_oom_handling))]
use self::is_zero::IsZero;
#[cfg(not(no_global_oom_handling))]
mod is_zero;
#[cfg(not(no_global_oom_handling))]
mod in_place_collect;
mod partial_eq;
#[cfg(not(no_global_oom_handling))]
use self::spec_from_elem::SpecFromElem;
#[cfg(not(no_global_oom_handling))]
mod spec_from_elem;
#[cfg(not(no_global_oom_handling))]
use self::set_len_on_drop::SetLenOnDrop;
#[cfg(not(no_global_oom_handling))]
mod set_len_on_drop;
#[cfg(not(no_global_oom_handling))]
use self::in_place_drop::{InPlaceDrop, InPlaceDstDataSrcBufDrop};
#[cfg(not(no_global_oom_handling))]
mod in_place_drop;
#[cfg(not(no_global_oom_handling))]
use self::spec_from_iter_nested::SpecFromIterNested;
#[cfg(not(no_global_oom_handling))]
mod spec_from_iter_nested;
#[cfg(not(no_global_oom_handling))]
use self::spec_from_iter::SpecFromIter;
#[cfg(not(no_global_oom_handling))]
mod spec_from_iter;
#[cfg(not(no_global_oom_handling))]
use self::spec_extend::SpecExtend;
#[cfg(not(no_global_oom_handling))]
mod spec_extend;
/// A contiguous growable array type, written as `Vec<T>`, short for 'vector'.
///
/// # Examples
///
/// ```
/// let mut vec = Vec::new();
/// vec.push(1);
/// vec.push(2);
///
/// assert_eq!(vec.len(), 2);
/// assert_eq!(vec[0], 1);
///
/// assert_eq!(vec.pop(), Some(2));
/// assert_eq!(vec.len(), 1);
///
/// vec[0] = 7;
/// assert_eq!(vec[0], 7);
///
/// vec.extend([1, 2, 3]);
///
/// for x in &vec {
/// println!("{x}");
/// }
/// assert_eq!(vec, [7, 1, 2, 3]);
/// ```
///
/// The [`vec!`] macro is provided for convenient initialization:
///
/// ```
/// let mut vec1 = vec![1, 2, 3];
/// vec1.push(4);
/// let vec2 = Vec::from([1, 2, 3, 4]);
/// assert_eq!(vec1, vec2);
/// ```
///
/// It can also initialize each element of a `Vec<T>` with a given value.
/// This may be more efficient than performing allocation and initialization
/// in separate steps, especially when initializing a vector of zeros:
///
/// ```
/// let vec = vec![0; 5];
/// assert_eq!(vec, [0, 0, 0, 0, 0]);
///
/// // The following is equivalent, but potentially slower:
/// let mut vec = Vec::with_capacity(5);
/// vec.resize(5, 0);
/// assert_eq!(vec, [0, 0, 0, 0, 0]);
/// ```
///
/// For more information, see
/// [Capacity and Reallocation](#capacity-and-reallocation).
///
/// Use a `Vec<T>` as an efficient stack:
///
/// ```
/// let mut stack = Vec::new();
///
/// stack.push(1);
/// stack.push(2);
/// stack.push(3);
///
/// while let Some(top) = stack.pop() {
/// // Prints 3, 2, 1
/// println!("{top}");
/// }
/// ```
///
/// # Indexing
///
/// The `Vec` type allows access to values by index, because it implements the
/// [`Index`] trait. An example makes this concrete:
///
/// ```
/// let v = vec![0, 2, 4, 6];
/// println!("{}", v[1]); // it will display '2'
/// ```
///
/// However, be careful: if you try to access an index which isn't in the `Vec`,
/// your software will panic! You cannot do this:
///
/// ```should_panic
/// let v = vec![0, 2, 4, 6];
/// println!("{}", v[6]); // it will panic!
/// ```
///
/// Use [`get`] and [`get_mut`] if you want to check whether the index is in
/// the `Vec`.
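///
/// For instance, [`get`] returns [`None`] for an out-of-bounds index
/// instead of panicking:
///
/// ```
/// let v = vec![0, 2, 4, 6];
/// assert_eq!(v.get(1), Some(&2));
/// assert_eq!(v.get(6), None); // out of bounds, but no panic
/// ```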
///
/// # Slicing
///
/// A `Vec` can be mutable, whereas slices are read-only objects.
/// To get a [slice][prim@slice], use [`&`]. Example:
///
/// ```
/// fn read_slice(slice: &[usize]) {
/// // ...
/// }
///
/// let v = vec![0, 1];
/// read_slice(&v);
///
/// // ... and that's all!
/// // you can also do it like this:
/// let u: &[usize] = &v;
/// // or like this:
/// let u: &[_] = &v;
/// ```
///
/// In Rust, it's more common to pass slices as arguments rather than vectors
/// when you just want to provide read access. The same goes for [`String`] and
/// [`&str`].
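///
/// For instance, a function taking [`&str`] also accepts a borrowed
/// [`String`], just as `&[T]` accepts a borrowed `Vec<T>` (a minimal
/// sketch, with `read_str` as an illustrative helper):
///
/// ```
/// fn read_str(s: &str) {
///     // ...
/// }
///
/// let owned = String::from("hello");
/// read_str(&owned); // `&String` coerces to `&str`
/// ```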
///
/// # Capacity and reallocation
///
/// The capacity of a vector is the amount of space allocated for any future
/// elements that will be added onto the vector. This is not to be confused with
/// the *length* of a vector, which specifies the number of actual elements
/// within the vector. If a vector's length exceeds its capacity, its capacity
/// will automatically be increased, but its elements will have to be
/// reallocated.
///
/// For example, a vector with capacity 10 and length 0 would be an empty vector
/// with space for 10 more elements. Pushing 10 or fewer elements onto the
/// vector will not change its capacity or cause reallocation to occur. However,
/// if the vector's length is increased to 11, it will have to reallocate, which
/// can be slow. For this reason, it is recommended to use [`Vec::with_capacity`]
/// whenever possible to specify how big the vector is expected to get.
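///
/// A minimal sketch of that advice (note that [`capacity`] only promises a
/// lower bound, so the exact value is not guaranteed):
///
/// ```
/// let mut v = Vec::with_capacity(10);
/// assert_eq!(v.len(), 0);
/// assert!(v.capacity() >= 10);
///
/// // Pushing up to the reserved capacity reuses the same allocation.
/// let cap = v.capacity();
/// for i in 0..10 {
///     v.push(i);
/// }
/// assert_eq!(v.len(), 10);
/// assert_eq!(v.capacity(), cap);
/// ```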
///
/// # Guarantees
///
/// Due to its incredibly fundamental nature, `Vec` makes a lot of guarantees
/// about its design. This ensures that it's as low-overhead as possible in
/// the general case, and can be correctly manipulated in primitive ways
/// by unsafe code. Note that these guarantees refer to an unqualified `Vec<T>`.
/// If additional type parameters are added (e.g., to support custom allocators),
/// overriding their defaults may change the behavior.
///
/// Most fundamentally, `Vec` is and always will be a (pointer, capacity, length)
/// triplet. No more, no less. The order of these fields is completely
/// unspecified, and you should use the appropriate methods to modify these.
/// The pointer will never be null, so this type is null-pointer-optimized.
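///
/// One observable consequence of the never-null pointer (a small sketch,
/// relying only on the niche optimization stated above):
///
/// ```
/// use std::mem::size_of;
///
/// // The niche lets `Option<Vec<T>>` use the null pointer for `None`,
/// // so the wrapper adds no size overhead.
/// assert_eq!(size_of::<Option<Vec<i32>>>(), size_of::<Vec<i32>>());
/// ```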
///
/// However, the pointer might not actually point to allocated memory. In particular,
/// if you construct a `Vec` with capacity 0 via [`Vec::new`], [`vec![]`][`vec!`],
/// [`Vec::with_capacity(0)`][`Vec::with_capacity`], or by calling [`shrink_to_fit`]
/// on an empty Vec, it will not allocate memory. Similarly, if you store zero-sized
/// types inside a `Vec`, it will not allocate space for them. *Note that in this case
/// the `Vec` might not report a [`capacity`] of 0*. `Vec` will allocate if and only
/// if <code>[mem::size_of::\<T>]\() * [capacity]\() > 0</code>. In general, `Vec`'s allocation
/// details are very subtle --- if you intend to allocate memory using a `Vec`
/// and use it for something else (either to pass to unsafe code, or to build your
/// own memory-backed collection), be sure to deallocate this memory by using
/// `from_raw_parts` to recover the `Vec` and then dropping it.
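///
/// A brief sketch of these rules:
///
/// ```
/// // No allocation: the capacity is 0.
/// let v: Vec<i32> = Vec::new();
/// assert_eq!(v.capacity(), 0);
///
/// // Still no allocation, but the reported capacity is not 0,
/// // because `()` is zero-sized.
/// let z: Vec<()> = Vec::new();
/// assert_eq!(z.capacity(), usize::MAX);
/// ```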
///
/// If a `Vec` *has* allocated memory, then the memory it points to is on the heap
2015-08-19 17:13:39 +00:00
/// (as defined by the allocator Rust is configured to use by default), and its
/// pointer points to [`len`] initialized, contiguous elements in order (what
/// you would see if you coerced it to a slice), followed by <code>[capacity] - [len]</code>
/// logically uninitialized, contiguous elements.
///
/// A vector containing the elements `'a'` and `'b'` with capacity 4 can be
/// visualized as below. The top part is the `Vec` struct; it contains a
/// pointer to the head of the allocation on the heap, the length, and the capacity.
/// The bottom part is the allocation on the heap, a contiguous memory block.
///
/// ```text
/// ptr len capacity
/// +--------+--------+--------+
/// | 0x0123 | 2 | 4 |
/// +--------+--------+--------+
/// |
/// v
/// Heap +--------+--------+--------+--------+
/// | 'a' | 'b' | uninit | uninit |
/// +--------+--------+--------+--------+
/// ```
///
/// - **uninit** represents memory that is not initialized, see [`MaybeUninit`].
/// - Note: the ABI is not stable and `Vec` makes no guarantees about its memory
/// layout (including the order of fields).
///
/// `Vec` will never perform a "small optimization" where elements are actually
/// stored on the stack for two reasons:
///
/// * It would make it more difficult for unsafe code to correctly manipulate
/// a `Vec`. The contents of a `Vec` wouldn't have a stable address if it were
/// only moved, and it would be more difficult to determine if a `Vec` had
/// actually allocated memory (a sketch of this address stability follows this list).
///
/// * It would penalize the general case, incurring an additional branch
/// on every access.
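///
/// Because the elements always live in a separate heap allocation, moving
/// the `Vec` itself does not move them. A small sketch of this address
/// stability:
///
/// ```
/// let v = vec![1, 2, 3];
/// let ptr = v.as_ptr();
/// let moved = v; // moves only the (pointer, capacity, length) triplet
/// assert_eq!(ptr, moved.as_ptr());
/// ```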
///
/// `Vec` will never automatically shrink itself, even if completely empty. This
/// ensures no unnecessary allocations or deallocations occur. Emptying a `Vec`
/// and then filling it back up to the same [`len`] should incur no calls to
/// the allocator. If you wish to free up unused memory, use
/// [`shrink_to_fit`] or [`shrink_to`].
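///
/// For example (a small sketch), emptying a vector leaves its capacity
/// untouched until it is explicitly shrunk:
///
/// ```
/// let mut v = Vec::with_capacity(100);
/// v.extend(0..100);
/// v.clear();
/// assert!(v.capacity() >= 100); // the capacity is retained
/// v.shrink_to_fit();
/// // The capacity may now be much smaller, though no exact value is guaranteed.
/// ```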
///
/// [`push`] and [`insert`] will never (re)allocate if the reported capacity is
/// sufficient. [`push`] and [`insert`] *will* (re)allocate if
/// <code>[len] == [capacity]</code>. That is, the reported capacity is completely
/// accurate, and can be relied on. It can even be used to manually free the memory
/// allocated by a `Vec` if desired. Bulk insertion methods *may* reallocate, even
/// when not necessary.
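///
/// A sketch of relying on the reported capacity:
///
/// ```
/// let mut v: Vec<u8> = Vec::new();
/// v.reserve(10);
/// let ptr = v.as_ptr();
/// // Filling up to the reported capacity never reallocates...
/// for _ in 0..v.capacity() {
///     v.push(0);
/// }
/// // ...so the buffer address stays the same.
/// assert_eq!(ptr, v.as_ptr());
/// ```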
///
/// `Vec` does not guarantee any particular growth strategy when reallocating
/// when full, nor when [`reserve`] is called. The current strategy is basic
/// and it may prove desirable to use a non-constant growth factor. Whatever
/// strategy is used will of course guarantee *O*(1) amortized [`push`].
///
/// `vec![x; n]`, `vec![a, b, c, d]`, and
/// [`Vec::with_capacity(n)`][`Vec::with_capacity`] will all produce a `Vec`
/// with at least the requested capacity. If <code>[len] == [capacity]</code>
/// (as is the case for the [`vec!`] macro), then a `Vec<T>` can be converted to
/// and from a [`Box<[T]>`][owned slice] without reallocating or moving the elements.
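///
/// A sketch of that round trip through an owned slice:
///
/// ```
/// let v = vec![1, 2, 3]; // len == capacity
/// let boxed: Box<[i32]> = v.into_boxed_slice(); // no reallocation needed
/// let v = boxed.into_vec(); // nor here
/// assert_eq!(v, [1, 2, 3]);
/// ```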
///
/// `Vec` will not specifically overwrite any data that is removed from it,
/// but also won't specifically preserve it. Its uninitialized memory is
/// scratch space that it may use however it wants. It will generally just do
/// whatever is most efficient or otherwise easy to implement. Do not rely on
/// removed data to be erased for security purposes. Even if you drop a `Vec`, its
/// buffer may simply be reused by another allocation. Even if you zero a `Vec`'s memory
/// first, that might not actually happen because the optimizer does not consider
/// this a side-effect that must be preserved. There is one case which we will
/// not break, however: using `unsafe` code to write to the excess capacity,
/// and then increasing the length to match, is always valid.
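///
/// A sketch of that guaranteed pattern:
///
/// ```
/// let mut v = Vec::with_capacity(3);
/// v.push(0);
///
/// unsafe {
///     let p = v.as_mut_ptr();
///     // Write into the excess capacity...
///     p.add(1).write(1);
///     p.add(2).write(2);
///     // ...then increase the length to match.
///     v.set_len(3);
/// }
/// assert_eq!(v, [0, 1, 2]);
/// ```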
///
/// Currently, `Vec` does not guarantee the order in which elements are dropped.
/// The order has changed in the past and may change again.
///
/// [`get`]: slice::get
/// [`get_mut`]: slice::get_mut
/// [`String`]: crate::string::String
/// [`&str`]: type@str
/// [`shrink_to_fit`]: Vec::shrink_to_fit
/// [`shrink_to`]: Vec::shrink_to
/// [capacity]: Vec::capacity
/// [`capacity`]: Vec::capacity
/// [mem::size_of::\<T>]: core::mem::size_of
/// [len]: Vec::len
/// [`len`]: Vec::len
/// [`push`]: Vec::push
/// [`insert`]: Vec::insert
/// [`reserve`]: Vec::reserve
/// [`MaybeUninit`]: core::mem::MaybeUninit
/// [owned slice]: Box
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg_attr(not(test), rustc_diagnostic_item = "Vec")]
#[rustc_insignificant_dtor]
pub struct Vec<T, #[unstable(feature = "allocator_api", issue = "32838")] A: Allocator = Global> {
buf: RawVec<T, A>,
len: usize,
}
////////////////////////////////////////////////////////////////////////////////
// Inherent methods
////////////////////////////////////////////////////////////////////////////////
impl<T> Vec<T> {
/// Constructs a new, empty `Vec<T>`.
///
/// The vector will not allocate until elements are pushed onto it.
///
/// # Examples
///
/// ```
/// # #![allow(unused_mut)]
/// let mut vec: Vec<i32> = Vec::new();
/// ```
#[inline]
#[rustc_const_stable(feature = "const_vec_new", since = "1.39.0")]
#[stable(feature = "rust1", since = "1.0.0")]
#[must_use]
pub const fn new() -> Self {
Vec { buf: RawVec::NEW, len: 0 }
}
/// Constructs a new, empty `Vec<T>` with at least the specified capacity.
///
/// The vector will be able to hold at least `capacity` elements without
/// reallocating. This method is allowed to allocate for more elements than
/// `capacity`. If `capacity` is 0, the vector will not allocate.
///
/// It is important to note that although the returned vector has the
/// minimum *capacity* specified, the vector will have a zero *length*. For
/// an explanation of the difference between length and capacity, see
/// *[Capacity and reallocation]*.
///
/// If it is important to know the exact allocated capacity of a `Vec`,
/// always use the [`capacity`] method after construction.
///
/// For `Vec<T>` where `T` is a zero-sized type, there will be no allocation
/// and the capacity will always be `usize::MAX`.
///
/// [Capacity and reallocation]: #capacity-and-reallocation
/// [`capacity`]: Vec::capacity
///
/// # Panics
///
/// Panics if the new capacity exceeds `isize::MAX` _bytes_.
///
/// # Examples
///
/// ```
/// let mut vec = Vec::with_capacity(10);
///
/// // The vector contains no items, even though it has capacity for more
/// assert_eq!(vec.len(), 0);
/// assert!(vec.capacity() >= 10);
///
/// // These are all done without reallocating...
/// for i in 0..10 {
/// vec.push(i);
/// }
/// assert_eq!(vec.len(), 10);
/// assert!(vec.capacity() >= 10);
///
/// // ...but this may make the vector reallocate
/// vec.push(11);
/// assert_eq!(vec.len(), 11);
/// assert!(vec.capacity() >= 11);
///
/// // A vector of a zero-sized type will always over-allocate, since no
/// // allocation is necessary
/// let vec_units = Vec::<()>::with_capacity(10);
/// assert_eq!(vec_units.capacity(), usize::MAX);
/// ```
#[cfg(not(no_global_oom_handling))]
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[must_use]
pub fn with_capacity(capacity: usize) -> Self {
Self::with_capacity_in(capacity, Global)
}
/// Constructs a new, empty `Vec<T>` with at least the specified capacity.
///
/// The vector will be able to hold at least `capacity` elements without
/// reallocating. This method is allowed to allocate for more elements than
/// `capacity`. If `capacity` is 0, the vector will not allocate.
///
/// # Errors
///
/// Returns an error if the capacity exceeds `isize::MAX` _bytes_,
/// or if the allocator reports allocation failure.
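///
/// # Examples
///
/// A minimal sketch of fallible preallocation (gated on the unstable
/// `try_with_capacity` feature):
///
/// ```
/// #![feature(try_with_capacity)]
///
/// let mut v = Vec::<u8>::try_with_capacity(8).expect("allocation failed");
/// assert!(v.capacity() >= 8);
/// v.push(1);
/// ```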
#[inline]
#[unstable(feature = "try_with_capacity", issue = "91913")]
pub fn try_with_capacity(capacity: usize) -> Result<Self, TryReserveError> {
Self::try_with_capacity_in(capacity, Global)
}
/// Creates a `Vec<T>` directly from a pointer, a length, and a capacity.
///
/// # Safety
///
/// This is highly unsafe, due to the number of invariants that aren't
/// checked:
///
/// * `ptr` must have been allocated using the global allocator, such as via
/// the [`alloc::alloc`] function.
/// * `T` needs to have the same alignment as what `ptr` was allocated with.
/// (`T` having a less strict alignment is not sufficient, the alignment really
/// needs to be equal to satisfy the [`dealloc`] requirement that memory must be
/// allocated and deallocated with the same layout.)
/// * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs
/// to be the same size as the pointer was allocated with. (Because similar to
/// alignment, [`dealloc`] must be called with the same layout `size`.)
/// * `length` needs to be less than or equal to `capacity`.
/// * The first `length` values must be properly initialized values of type `T`.
/// * `capacity` needs to be the capacity that the pointer was allocated with.
/// * The allocated size in bytes must be no larger than `isize::MAX`.
/// See the safety documentation of [`pointer::offset`].
///
/// These requirements are always upheld by any `ptr` that has been allocated
/// via `Vec<T>`. Other allocation sources are allowed if the invariants are
/// upheld.
///
/// Violating these may cause problems like corrupting the allocator's
/// internal data structures. For example, it is normally **not** safe
/// to build a `Vec<u8>` from a pointer to a C `char` array with length
/// `size_t`; doing so is only safe if the array was initially allocated by
/// a `Vec` or `String`.
/// It's also not safe to build one from a `Vec<u16>` and its length, because
/// the allocator cares about the alignment, and these two types have different
/// alignments. The buffer was allocated with alignment 2 (for `u16`), but after
/// turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid
/// these issues, it is often preferable to do casting/transmuting using
/// [`slice::from_raw_parts`] instead.
///
/// The ownership of `ptr` is effectively transferred to the
/// `Vec<T>` which may then deallocate, reallocate or change the
/// contents of memory pointed to by the pointer at will. Ensure
/// that nothing else uses the pointer after calling this
/// function.
///
/// [`String`]: crate::string::String
/// [`alloc::alloc`]: crate::alloc::alloc
/// [`dealloc`]: crate::alloc::GlobalAlloc::dealloc
///
/// # Examples
///
/// ```
/// use std::ptr;
/// use std::mem;
///
/// let v = vec![1, 2, 3];
///
// FIXME Update this when vec_into_raw_parts is stabilized
/// // Prevent running `v`'s destructor so we are in complete control
/// // of the allocation.
/// let mut v = mem::ManuallyDrop::new(v);
///
/// // Pull out the various important pieces of information about `v`
/// let p = v.as_mut_ptr();
/// let len = v.len();
/// let cap = v.capacity();
///
/// unsafe {
/// // Overwrite memory with 4, 5, 6
/// for i in 0..len {
/// ptr::write(p.add(i), 4 + i);
/// }
///
/// // Put everything back together into a Vec
/// let rebuilt = Vec::from_raw_parts(p, len, cap);
/// assert_eq!(rebuilt, [4, 5, 6]);
/// }
2019-10-22 16:48:52 +00:00
/// ```
///
/// Using memory that was allocated elsewhere:
///
/// ```rust
/// use std::alloc::{alloc, Layout};
///
/// fn main() {
/// let layout = Layout::array::<u32>(16).expect("overflow cannot happen");
///
/// let vec = unsafe {
/// let mem = alloc(layout).cast::<u32>();
/// if mem.is_null() {
/// return;
/// }
///
/// mem.write(1_000_000);
///
/// Vec::from_raw_parts(mem, 1, 16)
/// };
///
/// assert_eq!(vec, &[1_000_000]);
/// assert_eq!(vec.capacity(), 16);
/// }
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn from_raw_parts(ptr: *mut T, length: usize, capacity: usize) -> Self {
unsafe { Self::from_raw_parts_in(ptr, length, capacity, Global) }
}
/// A convenience method for hoisting the non-null precondition out of [`Vec::from_raw_parts`].
///
/// # Safety
///
/// See [`Vec::from_raw_parts`].
#[inline]
#[cfg(not(no_global_oom_handling))] // required by tests/run-make/alloc-no-oom-handling
pub(crate) unsafe fn from_nonnull(ptr: NonNull<T>, length: usize, capacity: usize) -> Self {
unsafe { Self::from_nonnull_in(ptr, length, capacity, Global) }
}
}
impl<T, A: Allocator> Vec<T, A> {
/// Constructs a new, empty `Vec<T, A>`.
///
/// The vector will not allocate until elements are pushed onto it.
///
/// # Examples
///
/// ```
/// #![feature(allocator_api)]
///
/// use std::alloc::System;
///
/// # #[allow(unused_mut)]
/// let mut vec: Vec<i32, _> = Vec::new_in(System);
/// ```
#[inline]
#[unstable(feature = "allocator_api", issue = "32838")]
pub const fn new_in(alloc: A) -> Self {
Vec { buf: RawVec::new_in(alloc), len: 0 }
}
/// Constructs a new, empty `Vec<T, A>` with at least the specified capacity
/// with the provided allocator.
///
/// The vector will be able to hold at least `capacity` elements without
/// reallocating. This method is allowed to allocate for more elements than
/// `capacity`. If `capacity` is 0, the vector will not allocate.
///
/// It is important to note that although the returned vector has the
/// minimum *capacity* specified, the vector will have a zero *length*. For
/// an explanation of the difference between length and capacity, see
/// *[Capacity and reallocation]*.
///
/// If it is important to know the exact allocated capacity of a `Vec`,
/// always use the [`capacity`] method after construction.
///
/// For `Vec<T, A>` where `T` is a zero-sized type, there will be no allocation
/// and the capacity will always be `usize::MAX`.
///
/// [Capacity and reallocation]: #capacity-and-reallocation
/// [`capacity`]: Vec::capacity
///
/// # Panics
///
/// Panics if the new capacity exceeds `isize::MAX` _bytes_.
///
/// # Examples
///
/// ```
/// #![feature(allocator_api)]
///
/// use std::alloc::System;
///
/// let mut vec = Vec::with_capacity_in(10, System);
///
/// // The vector contains no items, even though it has capacity for more
/// assert_eq!(vec.len(), 0);
/// assert!(vec.capacity() >= 10);
///
/// // These are all done without reallocating...
/// for i in 0..10 {
/// vec.push(i);
/// }
/// assert_eq!(vec.len(), 10);
/// assert!(vec.capacity() >= 10);
///
/// // ...but this may make the vector reallocate
/// vec.push(11);
/// assert_eq!(vec.len(), 11);
/// assert!(vec.capacity() >= 11);
///
/// // A vector of a zero-sized type will always over-allocate, since no
/// // allocation is necessary
/// let vec_units = Vec::<(), System>::with_capacity_in(10, System);
/// assert_eq!(vec_units.capacity(), usize::MAX);
/// ```
#[cfg(not(no_global_oom_handling))]
#[inline]
#[unstable(feature = "allocator_api", issue = "32838")]
pub fn with_capacity_in(capacity: usize, alloc: A) -> Self {
Vec { buf: RawVec::with_capacity_in(capacity, alloc), len: 0 }
}
/// Constructs a new, empty `Vec<T, A>` with at least the specified capacity
/// with the provided allocator.
///
/// The vector will be able to hold at least `capacity` elements without
/// reallocating. This method is allowed to allocate for more elements than
/// `capacity`. If `capacity` is 0, the vector will not allocate.
///
/// # Errors
///
/// Returns an error if the capacity exceeds `isize::MAX` _bytes_,
/// or if the allocator reports allocation failure.
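///
/// # Examples
///
/// A minimal sketch with an explicit allocator (gated on the unstable
/// `allocator_api` feature):
///
/// ```
/// #![feature(allocator_api)]
///
/// use std::alloc::System;
///
/// let mut v = Vec::<u8, _>::try_with_capacity_in(8, System).expect("allocation failed");
/// assert!(v.capacity() >= 8);
/// v.push(1);
/// ```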
#[inline]
#[unstable(feature = "allocator_api", issue = "32838")]
// #[unstable(feature = "try_with_capacity", issue = "91913")]
pub fn try_with_capacity_in(capacity: usize, alloc: A) -> Result<Self, TryReserveError> {
Ok(Vec { buf: RawVec::try_with_capacity_in(capacity, alloc)?, len: 0 })
}
/// Creates a `Vec<T, A>` directly from a pointer, a length, a capacity,
/// and an allocator.
///
/// # Safety
///
/// This is highly unsafe, due to the number of invariants that aren't
/// checked:
///
/// * `ptr` must be [*currently allocated*] via the given allocator `alloc`.
/// * `T` needs to have the same alignment as what `ptr` was allocated with.
/// (`T` having a less strict alignment is not sufficient, the alignment really
/// needs to be equal to satisfy the [`dealloc`] requirement that memory must be
/// allocated and deallocated with the same layout.)
/// * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs
/// to be the same size as the pointer was allocated with. (Because similar to
/// alignment, [`dealloc`] must be called with the same layout `size`.)
/// * `length` needs to be less than or equal to `capacity`.
/// * The first `length` values must be properly initialized values of type `T`.
/// * `capacity` needs to [*fit*] the layout size that the pointer was allocated with.
/// * The allocated size in bytes must be no larger than `isize::MAX`.
/// See the safety documentation of [`pointer::offset`].
///
/// These requirements are always upheld by any `ptr` that has been allocated
/// via `Vec<T, A>`. Other allocation sources are allowed if the invariants are
/// upheld.
///
/// Violating these may cause problems like corrupting the allocator's
/// internal data structures. For example, it is **not** safe
/// to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`.
/// It's also not safe to build one from a `Vec<u16>` and its length, because
/// the allocator cares about the alignment, and these two types have different
/// alignments. The buffer was allocated with alignment 2 (for `u16`), but after
/// turning it into a `Vec<u8>` it'll be deallocated with alignment 1.
///
/// The ownership of `ptr` is effectively transferred to the
/// `Vec<T>` which may then deallocate, reallocate or change the
/// contents of memory pointed to by the pointer at will. Ensure
/// that nothing else uses the pointer after calling this
/// function.
///
/// [`String`]: crate::string::String
/// [`dealloc`]: crate::alloc::GlobalAlloc::dealloc
/// [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory
/// [*fit*]: crate::alloc::Allocator#memory-fitting
///
/// # Examples
///
/// ```
/// #![feature(allocator_api)]
///
/// use std::alloc::System;
///
/// use std::ptr;
/// use std::mem;
///
/// let mut v = Vec::with_capacity_in(3, System);
/// v.push(1);
/// v.push(2);
/// v.push(3);
///
// FIXME Update this when vec_into_raw_parts is stabilized
/// // Prevent running `v`'s destructor so we are in complete control
/// // of the allocation.
/// let mut v = mem::ManuallyDrop::new(v);
///
/// // Pull out the various important pieces of information about `v`
/// let p = v.as_mut_ptr();
/// let len = v.len();
/// let cap = v.capacity();
/// let alloc = v.allocator();
///
/// unsafe {
/// // Overwrite memory with 4, 5, 6
/// for i in 0..len {
/// ptr::write(p.add(i), 4 + i);
/// }
///
/// // Put everything back together into a Vec
/// let rebuilt = Vec::from_raw_parts_in(p, len, cap, alloc.clone());
/// assert_eq!(rebuilt, [4, 5, 6]);
/// }
/// ```
///
/// Using memory that was allocated elsewhere:
///
/// ```rust
/// #![feature(allocator_api)]
///
/// use std::alloc::{AllocError, Allocator, Global, Layout};
///
/// fn main() {
/// let layout = Layout::array::<u32>(16).expect("overflow cannot happen");
///
/// let vec = unsafe {
/// let mem = match Global.allocate(layout) {
/// Ok(mem) => mem.cast::<u32>().as_ptr(),
/// Err(AllocError) => return,
/// };
///
/// mem.write(1_000_000);
///
/// Vec::from_raw_parts_in(mem, 1, 16, Global)
/// };
///
/// assert_eq!(vec, &[1_000_000]);
/// assert_eq!(vec.capacity(), 16);
/// }
/// ```
#[inline]
#[unstable(feature = "allocator_api", issue = "32838")]
pub unsafe fn from_raw_parts_in(ptr: *mut T, length: usize, capacity: usize, alloc: A) -> Self {
unsafe { Vec { buf: RawVec::from_raw_parts_in(ptr, capacity, alloc), len: length } }
}
/// A convenience method for hoisting the non-null precondition out of [`Vec::from_raw_parts_in`].
///
/// # Safety
///
/// See [`Vec::from_raw_parts_in`].
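///
/// A sketch of the intended call shape (illustrative only; this helper is
/// crate-internal, so the example is not compiled as a doctest):
///
/// ```ignore (cannot-doctest-crate-internal-helper)
/// let vec = unsafe { Vec::from_nonnull_in(ptr, len, cap, alloc) };
/// ```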
#[inline]
#[cfg(not(no_global_oom_handling))] // required by tests/run-make/alloc-no-oom-handling
pub(crate) unsafe fn from_nonnull_in(
ptr: NonNull<T>,
length: usize,
capacity: usize,
alloc: A,
) -> Self {
unsafe { Vec { buf: RawVec::from_nonnull_in(ptr, capacity, alloc), len: length } }
}
/// Decomposes a `Vec<T>` into its raw components: `(pointer, length, capacity)`.
///
/// Returns the raw pointer to the underlying data, the length of
/// the vector (in elements), and the allocated capacity of the
/// data (in elements). These are the same arguments in the same
/// order as the arguments to [`from_raw_parts`].
///
/// After calling this function, the caller is responsible for the
/// memory previously managed by the `Vec`. The only way to do
/// this is to convert the raw pointer, length, and capacity back
/// into a `Vec` with the [`from_raw_parts`] function, allowing
/// the destructor to perform the cleanup.
///
/// [`from_raw_parts`]: Vec::from_raw_parts
///
/// # Examples
///
/// ```
/// #![feature(vec_into_raw_parts)]
/// let v: Vec<i32> = vec![-1, 0, 1];
///
/// let (ptr, len, cap) = v.into_raw_parts();
///
/// let rebuilt = unsafe {
/// // We can now make changes to the components, such as
/// // transmuting the raw pointer to a compatible type.
/// let ptr = ptr as *mut u32;
///
/// Vec::from_raw_parts(ptr, len, cap)
/// };
/// assert_eq!(rebuilt, [4294967295, 0, 1]);
/// ```
#[unstable(feature = "vec_into_raw_parts", reason = "new API", issue = "65816")]
pub fn into_raw_parts(self) -> (*mut T, usize, usize) {
let mut me = ManuallyDrop::new(self);
(me.as_mut_ptr(), me.len(), me.capacity())
}
/// Decomposes a `Vec<T>` into its raw components: `(pointer, length, capacity, allocator)`.
///
/// Returns the raw pointer to the underlying data, the length of the vector (in elements),
/// the allocated capacity of the data (in elements), and the allocator. These are the same
/// arguments in the same order as the arguments to [`from_raw_parts_in`].
///
/// After calling this function, the caller is responsible for the
/// memory previously managed by the `Vec`. The only way to do
/// this is to convert the raw pointer, length, and capacity back
/// into a `Vec` with the [`from_raw_parts_in`] function, allowing
/// the destructor to perform the cleanup.
///
/// [`from_raw_parts_in`]: Vec::from_raw_parts_in
///
/// # Examples
///
/// ```
/// #![feature(allocator_api, vec_into_raw_parts)]
///
/// use std::alloc::System;
///
/// let mut v: Vec<i32, System> = Vec::new_in(System);
/// v.push(-1);
/// v.push(0);
/// v.push(1);
///
/// let (ptr, len, cap, alloc) = v.into_raw_parts_with_alloc();
///
/// let rebuilt = unsafe {
/// // We can now make changes to the components, such as
/// // transmuting the raw pointer to a compatible type.
/// let ptr = ptr as *mut u32;
///
/// Vec::from_raw_parts_in(ptr, len, cap, alloc)
/// };
/// assert_eq!(rebuilt, [4294967295, 0, 1]);
/// ```
#[unstable(feature = "allocator_api", issue = "32838")]
// #[unstable(feature = "vec_into_raw_parts", reason = "new API", issue = "65816")]
pub fn into_raw_parts_with_alloc(self) -> (*mut T, usize, usize, A) {
let mut me = ManuallyDrop::new(self);
let len = me.len();
let capacity = me.capacity();
let ptr = me.as_mut_ptr();
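// `me` is wrapped in `ManuallyDrop`, so the vector's destructor never runs;
// the allocator can therefore be moved out via a bitwise copy without
// risking a double drop.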
let alloc = unsafe { ptr::read(me.allocator()) };
(ptr, len, capacity, alloc)
}
/// Returns the total number of elements the vector can hold without
/// reallocating.
///
/// # Examples
///
/// ```
/// let mut vec: Vec<i32> = Vec::with_capacity(10);
/// vec.push(42);
/// assert!(vec.capacity() >= 10);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn capacity(&self) -> usize {
self.buf.capacity()
}
/// Reserves capacity for at least `additional` more elements to be inserted
/// in the given `Vec<T>`. The collection may reserve more space to
/// speculatively avoid frequent reallocations. After calling `reserve`,
/// capacity will be greater than or equal to `self.len() + additional`.
/// Does nothing if capacity is already sufficient.
///
/// # Panics
///
/// Panics if the new capacity exceeds `isize::MAX` _bytes_.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1];
/// vec.reserve(10);
/// assert!(vec.capacity() >= 11);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn reserve(&mut self, additional: usize) {
self.buf.reserve(self.len, additional);
}
/// Reserves the minimum capacity for at least `additional` more elements to
/// be inserted in the given `Vec<T>`. Unlike [`reserve`], this will not
/// deliberately over-allocate to speculatively avoid frequent allocations.
/// After calling `reserve_exact`, capacity will be greater than or equal to
/// `self.len() + additional`. Does nothing if the capacity is already
/// sufficient.
///
/// Note that the allocator may give the collection more space than it
/// requests. Therefore, capacity cannot be relied upon to be precisely
/// minimal. Prefer [`reserve`] if future insertions are expected.
///
/// [`reserve`]: Vec::reserve
///
/// # Panics
///
/// Panics if the new capacity exceeds `isize::MAX` _bytes_.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1];
/// vec.reserve_exact(10);
/// assert!(vec.capacity() >= 11);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn reserve_exact(&mut self, additional: usize) {
self.buf.reserve_exact(self.len, additional);
}
/// Tries to reserve capacity for at least `additional` more elements to be inserted
/// in the given `Vec<T>`. The collection may reserve more space to speculatively avoid
/// frequent reallocations. After calling `try_reserve`, capacity will be
/// greater than or equal to `self.len() + additional` if it returns
/// `Ok(())`. Does nothing if capacity is already sufficient. This method
/// preserves the contents even if an error occurs.
///
/// # Errors
///
/// If the capacity overflows, or the allocator reports a failure, then an error
/// is returned.
///
/// # Examples
///
/// ```
/// use std::collections::TryReserveError;
///
/// fn process_data(data: &[u32]) -> Result<Vec<u32>, TryReserveError> {
/// let mut output = Vec::new();
///
/// // Pre-reserve the memory, exiting if we can't
/// output.try_reserve(data.len())?;
///
/// // Now we know this can't OOM in the middle of our complex work
/// output.extend(data.iter().map(|&val| {
/// val * 2 + 5 // very complicated
/// }));
///
/// Ok(output)
/// }
/// # process_data(&[1, 2, 3]).expect("why is the test harness OOMing on 12 bytes?");
/// ```
#[stable(feature = "try_reserve", since = "1.57.0")]
pub fn try_reserve(&mut self, additional: usize) -> Result<(), TryReserveError> {
self.buf.try_reserve(self.len, additional)
}
/// Tries to reserve the minimum capacity for at least `additional`
/// elements to be inserted in the given `Vec<T>`. Unlike [`try_reserve`],
/// this will not deliberately over-allocate to speculatively avoid frequent
/// allocations. After calling `try_reserve_exact`, capacity will be greater
/// than or equal to `self.len() + additional` if it returns `Ok(())`.
/// Does nothing if the capacity is already sufficient.
///
/// Note that the allocator may give the collection more space than it
/// requests. Therefore, capacity cannot be relied upon to be precisely
/// minimal. Prefer [`try_reserve`] if future insertions are expected.
///
/// [`try_reserve`]: Vec::try_reserve
///
/// # Errors
///
/// If the capacity overflows, or the allocator reports a failure, then an error
/// is returned.
///
/// # Examples
///
/// ```
/// use std::collections::TryReserveError;
///
/// fn process_data(data: &[u32]) -> Result<Vec<u32>, TryReserveError> {
/// let mut output = Vec::new();
///
/// // Pre-reserve the memory, exiting if we can't
/// output.try_reserve_exact(data.len())?;
///
/// // Now we know this can't OOM in the middle of our complex work
/// output.extend(data.iter().map(|&val| {
/// val * 2 + 5 // very complicated
/// }));
///
/// Ok(output)
/// }
/// # process_data(&[1, 2, 3]).expect("why is the test harness OOMing on 12 bytes?");
/// ```
#[stable(feature = "try_reserve", since = "1.57.0")]
pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveError> {
self.buf.try_reserve_exact(self.len, additional)
}
/// Shrinks the capacity of the vector as much as possible.
///
/// The behavior of this method depends on the allocator, which may either shrink the vector
/// in-place or reallocate. The resulting vector might still have some excess capacity, just as
/// is the case for [`with_capacity`]. See [`Allocator::shrink`] for more details.
///
/// [`with_capacity`]: Vec::with_capacity
///
/// # Examples
///
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.extend([1, 2, 3]);
/// assert!(vec.capacity() >= 10);
/// vec.shrink_to_fit();
/// assert!(vec.capacity() >= 3);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
pub fn shrink_to_fit(&mut self) {
// The capacity is never less than the length, and there's nothing to do when
// they are equal, so we can avoid the panic case in `RawVec::shrink_to_fit`
// by only calling it with a greater capacity.
if self.capacity() > self.len {
self.buf.shrink_to_fit(self.len);
}
}
/// Shrinks the capacity of the vector with a lower bound.
///
/// The capacity will remain at least as large as both the length
/// and the supplied value.
///
/// If the current capacity is less than the lower limit, this is a no-op.
///
/// # Examples
///
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.extend([1, 2, 3]);
/// assert!(vec.capacity() >= 10);
/// vec.shrink_to(4);
/// assert!(vec.capacity() >= 4);
/// vec.shrink_to(0);
/// assert!(vec.capacity() >= 3);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "shrink_to", since = "1.56.0")]
pub fn shrink_to(&mut self, min_capacity: usize) {
if self.capacity() > min_capacity {
self.buf.shrink_to_fit(cmp::max(self.len, min_capacity));
}
}
/// Converts the vector into [`Box<[T]>`][owned slice].
///
/// Before doing the conversion, this method discards excess capacity like [`shrink_to_fit`].
///
/// [owned slice]: Box
/// [`shrink_to_fit`]: Vec::shrink_to_fit
///
/// # Examples
///
/// ```
/// let v = vec![1, 2, 3];
///
/// let slice = v.into_boxed_slice();
/// ```
///
/// Any excess capacity is removed:
///
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.extend([1, 2, 3]);
///
/// assert!(vec.capacity() >= 10);
/// let slice = vec.into_boxed_slice();
/// assert_eq!(slice.into_vec().capacity(), 3);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn into_boxed_slice(mut self) -> Box<[T], A> {
unsafe {
self.shrink_to_fit();
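// Shrinking first drops any excess capacity, so the box produced by
// `into_box(len)` is sized to the initialized elements alone.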
let me = ManuallyDrop::new(self);
let buf = ptr::read(&me.buf);
let len = me.len();
buf.into_box(len).assume_init()
}
}
/// Shortens the vector, keeping the first `len` elements and dropping
/// the rest.
///
/// If `len` is greater than or equal to the vector's current length, this has
/// no effect.
///
/// The [`drain`] method can emulate `truncate`, but causes the excess
/// elements to be returned instead of dropped.
///
/// Note that this method has no effect on the allocated capacity
/// of the vector.
///
/// # Examples
///
/// Truncating a five element vector to two elements:
///
/// ```
/// let mut vec = vec![1, 2, 3, 4, 5];
/// vec.truncate(2);
/// assert_eq!(vec, [1, 2]);
/// ```
///
/// No truncation occurs when `len` is greater than the vector's current
/// length:
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// vec.truncate(8);
/// assert_eq!(vec, [1, 2, 3]);
/// ```
///
/// Truncating when `len == 0` is equivalent to calling the [`clear`]
/// method.
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// vec.truncate(0);
/// assert_eq!(vec, []);
/// ```
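///
/// As noted above, [`drain`] can emulate `truncate` while returning, rather
/// than dropping, the excess elements (a sketch of that equivalence):
///
/// ```
/// let mut vec = vec![1, 2, 3, 4, 5];
/// let tail: Vec<_> = vec.drain(2..).collect();
/// assert_eq!(vec, [1, 2]);
/// assert_eq!(tail, [3, 4, 5]);
/// ```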
///
/// [`clear`]: Vec::clear
/// [`drain`]: Vec::drain
#[stable(feature = "rust1", since = "1.0.0")]
pub fn truncate(&mut self, len: usize) {
// This is safe because:
//
// * the slice passed to `drop_in_place` is valid; the `len > self.len`
// case avoids creating an invalid slice, and
// * the `len` of the vector is shrunk before calling `drop_in_place`,
// such that no value will be dropped twice in case `drop_in_place`
// were to panic once (if it panics twice, the program aborts).
unsafe {
// Note: It's intentional that this is `>` and not `>=`.
// Changing it to `>=` has negative performance
// implications in some cases. See #78884 for more.
if len > self.len {
return;
}
let remaining_len = self.len - len;
let s = ptr::slice_from_raw_parts_mut(self.as_mut_ptr().add(len), remaining_len);
self.len = len;
ptr::drop_in_place(s);
}
}
/// Extracts a slice containing the entire vector.
///
/// Equivalent to `&s[..]`.
///
/// # Examples
///
/// ```
/// use std::io::{self, Write};
/// let buffer = vec![1, 2, 3, 5, 8];
/// io::sink().write(buffer.as_slice()).unwrap();
/// ```
#[inline]
#[stable(feature = "vec_as_slice", since = "1.7.0")]
pub fn as_slice(&self) -> &[T] {
self
}
/// Extracts a mutable slice of the entire vector.
///
/// Equivalent to `&mut s[..]`.
///
/// # Examples
///
/// ```
/// use std::io::{self, Read};
/// let mut buffer = vec![0; 3];
/// io::repeat(0b101).read_exact(buffer.as_mut_slice()).unwrap();
/// ```
#[inline]
#[stable(feature = "vec_as_slice", since = "1.7.0")]
pub fn as_mut_slice(&mut self) -> &mut [T] {
self
}
/// Returns a raw pointer to the vector's buffer, or a dangling raw pointer
/// valid for zero-sized reads if the vector didn't allocate.
///
/// The caller must ensure that the vector outlives the pointer this
/// function returns, or else it will end up dangling.
/// Modifying the vector may cause its buffer to be reallocated,
/// which would also make any pointers to it invalid.
///
/// The caller must also ensure that the memory the pointer (non-transitively) points to
/// is never written to (except inside an `UnsafeCell`) using this pointer or any pointer
/// derived from it. If you need to mutate the contents of the slice, use [`as_mut_ptr`].
///
/// This method guarantees that, for the purpose of the aliasing model, it
/// does not materialize a reference to the underlying slice, and thus the returned pointer
/// will remain valid when mixed with other calls to [`as_ptr`] and [`as_mut_ptr`].
/// Note that calling other methods that materialize mutable references to the slice,
/// or mutable references to specific elements you are planning on accessing through this pointer,
/// as well as writing to those elements, may still invalidate this pointer.
/// See the second example below for how this guarantee can be used.
///
///
/// # Examples
///
/// ```
/// let x = vec![1, 2, 4];
/// let x_ptr = x.as_ptr();
///
/// unsafe {
/// for i in 0..x.len() {
/// assert_eq!(*x_ptr.add(i), 1 << i);
/// }
/// }
/// ```
///
/// Due to the aliasing guarantee, the following code is legal:
///
/// ```rust
/// unsafe {
/// let mut v = vec![0, 1, 2];
/// let ptr1 = v.as_ptr();
/// let _ = ptr1.read();
/// let ptr2 = v.as_mut_ptr().offset(2);
/// ptr2.write(2);
/// // Notably, the write to `ptr2` did *not* invalidate `ptr1`
/// // because it mutated a different element:
/// let _ = ptr1.read();
/// }
/// ```
///
/// [`as_mut_ptr`]: Vec::as_mut_ptr
/// [`as_ptr`]: Vec::as_ptr
#[stable(feature = "vec_as_ptr", since = "1.37.0")]
#[rustc_never_returns_null_ptr]
#[inline]
pub fn as_ptr(&self) -> *const T {
// We shadow the slice method of the same name to avoid going through
// `deref`, which creates an intermediate reference.
self.buf.ptr()
}
/// Returns an unsafe mutable pointer to the vector's buffer, or a dangling
/// raw pointer valid for zero-sized reads if the vector didn't allocate.
///
/// The caller must ensure that the vector outlives the pointer this
/// function returns, or else it will end up dangling.
/// Modifying the vector may cause its buffer to be reallocated,
/// which would also make any pointers to it invalid.
///
/// This method guarantees that, for the purpose of the aliasing model, it
/// does not materialize a reference to the underlying slice, and thus the returned pointer
/// will remain valid when mixed with other calls to [`as_ptr`] and [`as_mut_ptr`].
/// Note that calling other methods that materialize references to the slice,
/// or references to specific elements you are planning on accessing through this pointer,
/// may still invalidate this pointer.
/// See the second example below for how this guarantee can be used.
///
///
/// # Examples
///
/// ```
/// // Allocate vector big enough for 4 elements.
/// let size = 4;
/// let mut x: Vec<i32> = Vec::with_capacity(size);
/// let x_ptr = x.as_mut_ptr();
///
/// // Initialize elements via raw pointer writes, then set length.
/// unsafe {
/// for i in 0..size {
/// *x_ptr.add(i) = i as i32;
/// }
/// x.set_len(size);
/// }
/// assert_eq!(&*x, &[0, 1, 2, 3]);
/// ```
///
/// Due to the aliasing guarantee, the following code is legal:
///
/// ```rust
/// unsafe {
/// let mut v = vec![0];
/// let ptr1 = v.as_mut_ptr();
/// ptr1.write(1);
/// let ptr2 = v.as_mut_ptr();
/// ptr2.write(2);
/// // Notably, the write to `ptr2` did *not* invalidate `ptr1`:
/// ptr1.write(3);
/// }
/// ```
///
/// [`as_mut_ptr`]: Vec::as_mut_ptr
/// [`as_ptr`]: Vec::as_ptr
#[stable(feature = "vec_as_ptr", since = "1.37.0")]
#[rustc_never_returns_null_ptr]
#[inline]
pub fn as_mut_ptr(&mut self) -> *mut T {
// We shadow the slice method of the same name to avoid going through
// `deref_mut`, which creates an intermediate reference.
self.buf.ptr()
}
/// Returns a reference to the underlying allocator.
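///
/// # Examples
///
/// A minimal sketch of inspecting a vector's allocator (assuming the unstable
/// `allocator_api` feature, as in the other allocator-aware examples here):
///
/// ```
/// #![feature(allocator_api)]
///
/// use std::alloc::System;
///
/// let v: Vec<i32, System> = Vec::new_in(System);
/// let _alloc: &System = v.allocator();
/// ```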
#[unstable(feature = "allocator_api", issue = "32838")]
#[inline]
pub fn allocator(&self) -> &A {
self.buf.allocator()
}
/// Forces the length of the vector to `new_len`.
///
/// This is a low-level operation that maintains none of the normal
/// invariants of the type. Normally changing the length of a vector
/// is done using one of the safe operations instead, such as
/// [`truncate`], [`resize`], [`extend`], or [`clear`].
///
/// [`truncate`]: Vec::truncate
/// [`resize`]: Vec::resize
/// [`extend`]: Extend::extend
/// [`clear`]: Vec::clear
///
/// # Safety
///
/// - `new_len` must be less than or equal to [`capacity()`].
/// - The elements at `old_len..new_len` must be initialized.
///
/// [`capacity()`]: Vec::capacity
///
/// # Examples
///
/// This method can be useful for situations in which the vector
/// is serving as a buffer for other code, particularly over FFI:
///
/// ```no_run
/// # #![allow(dead_code)]
/// # // This is just a minimal skeleton for the doc example;
/// # // don't use this as a starting point for a real library.
/// # pub struct StreamWrapper { strm: *mut std::ffi::c_void }
/// # const Z_OK: i32 = 0;
/// # extern "C" {
/// # fn deflateGetDictionary(
/// # strm: *mut std::ffi::c_void,
/// # dictionary: *mut u8,
/// # dictLength: *mut usize,
/// # ) -> i32;
/// # }
/// # impl StreamWrapper {
/// pub fn get_dictionary(&self) -> Option<Vec<u8>> {
/// // Per the FFI method's docs, "32768 bytes is always enough".
/// let mut dict = Vec::with_capacity(32_768);
/// let mut dict_length = 0;
/// // SAFETY: When `deflateGetDictionary` returns `Z_OK`, it holds that:
/// // 1. `dict_length` elements were initialized.
/// // 2. `dict_length` <= the capacity (32_768)
/// // which makes `set_len` safe to call.
/// unsafe {
/// // Make the FFI call...
/// let r = deflateGetDictionary(self.strm, dict.as_mut_ptr(), &mut dict_length);
/// if r == Z_OK {
/// // ...and update the length to what was initialized.
/// dict.set_len(dict_length);
/// Some(dict)
/// } else {
/// None
/// }
/// }
/// }
/// # }
/// ```
///
/// While the following example is sound, there is a memory leak since
/// the inner vectors were not freed prior to the `set_len` call:
///
/// ```
/// let mut vec = vec![vec![1, 0, 0],
/// vec![0, 1, 0],
/// vec![0, 0, 1]];
/// // SAFETY:
/// // 1. `old_len..0` is empty so no elements need to be initialized.
/// // 2. `0 <= capacity` always holds whatever `capacity` is.
/// unsafe {
/// vec.set_len(0);
/// # // FIXME(https://github.com/rust-lang/miri/issues/3670):
/// # // use -Zmiri-disable-leak-check instead of unleaking in tests meant to leak.
/// # vec.set_len(3);
/// }
/// ```
///
/// Normally, here, one would use [`clear`] instead to correctly drop
/// the contents and thus not leak memory.
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn set_len(&mut self, new_len: usize) {
debug_assert!(new_len <= self.capacity());
self.len = new_len;
}
/// Removes an element from the vector and returns it.
///
/// The removed element is replaced by the last element of the vector.
///
/// This does not preserve ordering of the remaining elements, but is *O*(1).
/// If you need to preserve the element order, use [`remove`] instead.
///
/// [`remove`]: Vec::remove
///
/// # Panics
///
/// Panics if `index` is out of bounds.
///
/// # Examples
///
/// ```
/// let mut v = vec!["foo", "bar", "baz", "qux"];
///
/// assert_eq!(v.swap_remove(1), "bar");
/// assert_eq!(v, ["foo", "qux", "baz"]);
///
/// assert_eq!(v.swap_remove(0), "foo");
/// assert_eq!(v, ["baz", "qux"]);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn swap_remove(&mut self, index: usize) -> T {
#[cold]
#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
#[track_caller]
fn assert_failed(index: usize, len: usize) -> ! {
panic!("swap_remove index (is {index}) should be < len (is {len})");
}
let len = self.len();
if index >= len {
assert_failed(index, len);
}
unsafe {
// We replace self[index] with the last element. Note that if the
// bounds check above succeeds there must be a last element (which
// can be self[index] itself).
let value = ptr::read(self.as_ptr().add(index));
let base_ptr = self.as_mut_ptr();
ptr::copy(base_ptr.add(len - 1), base_ptr.add(index), 1);
self.set_len(len - 1);
value
}
}
/// Inserts an element at position `index` within the vector, shifting all
/// elements after it to the right.
///
/// # Panics
///
/// Panics if `index > len`.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// vec.insert(1, 4);
/// assert_eq!(vec, [1, 4, 2, 3]);
/// vec.insert(4, 5);
/// assert_eq!(vec, [1, 4, 2, 3, 5]);
/// ```
///
/// # Time complexity
///
/// Takes *O*([`Vec::len`]) time. All items after the insertion index must be
/// shifted to the right. In the worst case, all elements are shifted when
/// the insertion index is 0.
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn insert(&mut self, index: usize, element: T) {
#[cold]
#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
#[track_caller]
fn assert_failed(index: usize, len: usize) -> ! {
panic!("insertion index (is {index}) should be <= len (is {len})");
}
let len = self.len();
if index > len {
assert_failed(index, len);
}
// space for the new element
if len == self.buf.capacity() {
self.buf.grow_one();
}
unsafe {
// infallible
// The spot to put the new value
{
let p = self.as_mut_ptr().add(index);
if index < len {
// Shift everything over to make space. (Duplicating the
// `index`th element into two consecutive places.)
ptr::copy(p, p.add(1), len - index);
}
// Write it in, overwriting the first copy of the `index`th
// element.
ptr::write(p, element);
}
self.set_len(len + 1);
}
}
/// Removes and returns the element at position `index` within the vector,
/// shifting all elements after it to the left.
///
/// Note: Because this shifts over the remaining elements, it has a
/// worst-case performance of *O*(*n*). If you don't need the order of elements
/// to be preserved, use [`swap_remove`] instead. If you'd like to remove
/// elements from the beginning of the `Vec`, consider using
/// [`VecDeque::pop_front`] instead.
///
/// [`swap_remove`]: Vec::swap_remove
/// [`VecDeque::pop_front`]: crate::collections::VecDeque::pop_front
///
/// # Panics
///
/// Panics if `index` is out of bounds.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3];
/// assert_eq!(v.remove(1), 2);
/// assert_eq!(v, [1, 3]);
/// ```
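///
/// Repeatedly removing from the front is the case where [`VecDeque::pop_front`]
/// is the better fit, as suggested above (a sketch of that alternative):
///
/// ```
/// use std::collections::VecDeque;
///
/// let mut v: VecDeque<i32> = [1, 2, 3].into();
/// assert_eq!(v.pop_front(), Some(1));
/// assert_eq!(v, [2, 3]);
/// ```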
#[stable(feature = "rust1", since = "1.0.0")]
#[track_caller]
#[rustc_confusables("delete", "take")]
pub fn remove(&mut self, index: usize) -> T {
#[cold]
#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
#[track_caller]
fn assert_failed(index: usize, len: usize) -> ! {
panic!("removal index (is {index}) should be < len (is {len})");
}
let len = self.len();
if index >= len {
assert_failed(index, len);
}
unsafe {
// infallible
let ret;
{
// the place we are taking from.
let ptr = self.as_mut_ptr().add(index);
// copy it out, unsafely having a copy of the value on
// the stack and in the vector at the same time.
ret = ptr::read(ptr);
// Shift everything down to fill in that spot.
ptr::copy(ptr.add(1), ptr, len - index - 1);
}
self.set_len(len - 1);
ret
}
}
/// Retains only the elements specified by the predicate.
///
/// In other words, remove all elements `e` for which `f(&e)` returns `false`.
/// This method operates in place, visiting each element exactly once in the
/// original order, and preserves the order of the retained elements.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3, 4];
/// vec.retain(|&x| x % 2 == 0);
/// assert_eq!(vec, [2, 4]);
/// ```
///
/// Because the elements are visited exactly once in the original order,
/// external state may be used to decide which elements to keep.
///
/// ```
/// let mut vec = vec![1, 2, 3, 4, 5];
/// let keep = [false, true, true, false, true];
/// let mut iter = keep.iter();
/// vec.retain(|_| *iter.next().unwrap());
/// assert_eq!(vec, [2, 3, 5]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn retain<F>(&mut self, mut f: F)
where
F: FnMut(&T) -> bool,
{
self.retain_mut(|elem| f(elem));
}
/// Retains only the elements specified by the predicate, passing a mutable reference to it.
///
/// In other words, remove all elements `e` such that `f(&mut e)` returns `false`.
/// This method operates in place, visiting each element exactly once in the
/// original order, and preserves the order of the retained elements.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3, 4];
/// vec.retain_mut(|x| if *x <= 3 {
/// *x += 1;
/// true
/// } else {
/// false
/// });
/// assert_eq!(vec, [2, 3, 4]);
/// ```
#[stable(feature = "vec_retain_mut", since = "1.61.0")]
pub fn retain_mut<F>(&mut self, mut f: F)
where
F: FnMut(&mut T) -> bool,
{
let original_len = self.len();
if original_len == 0 {
            // Empty case: an explicit return allows better optimization than letting the compiler infer it.
return;
}
// Avoid double drop if the drop guard is not executed,
// since we may make some holes during the process.
unsafe { self.set_len(0) };
// Vec: [Kept, Kept, Hole, Hole, Hole, Hole, Unchecked, Unchecked]
// |<- processed len ->| ^- next to check
// |<- deleted cnt ->|
// |<- original_len ->|
        // Kept: Elements for which the predicate returned `true`.
        // Hole: Slot whose element was moved out or dropped.
        // Unchecked: Valid elements not yet visited.
        //
        // This drop guard is invoked if the predicate or an element's `drop` panics.
        // It shifts unchecked elements down to cover the holes and calls `set_len`
        // with the correct length.
        // When neither the predicate nor `drop` panics, it is optimized out.
struct BackshiftOnDrop<'a, T, A: Allocator> {
v: &'a mut Vec<T, A>,
processed_len: usize,
deleted_cnt: usize,
original_len: usize,
}
impl<T, A: Allocator> Drop for BackshiftOnDrop<'_, T, A> {
fn drop(&mut self) {
if self.deleted_cnt > 0 {
// SAFETY: Trailing unchecked items must be valid since we never touch them.
unsafe {
ptr::copy(
self.v.as_ptr().add(self.processed_len),
self.v.as_mut_ptr().add(self.processed_len - self.deleted_cnt),
self.original_len - self.processed_len,
);
}
}
// SAFETY: After filling holes, all items are in contiguous memory.
unsafe {
self.v.set_len(self.original_len - self.deleted_cnt);
}
}
}
let mut g = BackshiftOnDrop { v: self, processed_len: 0, deleted_cnt: 0, original_len };
fn process_loop<F, T, A: Allocator, const DELETED: bool>(
original_len: usize,
f: &mut F,
g: &mut BackshiftOnDrop<'_, T, A>,
) where
F: FnMut(&mut T) -> bool,
{
while g.processed_len != original_len {
// SAFETY: Unchecked element must be valid.
let cur = unsafe { &mut *g.v.as_mut_ptr().add(g.processed_len) };
if !f(cur) {
// Advance early to avoid double drop if `drop_in_place` panicked.
g.processed_len += 1;
g.deleted_cnt += 1;
// SAFETY: We never touch this element again after dropped.
unsafe { ptr::drop_in_place(cur) };
// We already advanced the counter.
if DELETED {
continue;
} else {
break;
}
}
if DELETED {
// SAFETY: `deleted_cnt` > 0, so the hole slot must not overlap with current element.
// We use copy for move, and never touch this element again.
unsafe {
let hole_slot = g.v.as_mut_ptr().add(g.processed_len - g.deleted_cnt);
ptr::copy_nonoverlapping(cur, hole_slot, 1);
}
}
g.processed_len += 1;
}
}
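        // Splitting the loop into two monomorphized stages lets stage 1 run
        // without any memory writes: until the first deletion there are no
        // holes, so no element has to be shifted back.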
// Stage 1: Nothing was deleted.
process_loop::<F, T, A, false>(original_len, &mut f, &mut g);
// Stage 2: Some elements were deleted.
process_loop::<F, T, A, true>(original_len, &mut f, &mut g);
        // All items are processed. This can be optimized to `set_len` by LLVM.
drop(g);
}
/// Removes all but the first of consecutive elements in the vector that resolve to the same
/// key.
///
/// If the vector is sorted, this removes all duplicates.
///
/// # Examples
///
/// ```
/// let mut vec = vec![10, 20, 21, 30, 20];
///
/// vec.dedup_by_key(|i| *i / 10);
///
/// assert_eq!(vec, [10, 20, 30, 20]);
/// ```
#[stable(feature = "dedup_by", since = "1.16.0")]
#[inline]
pub fn dedup_by_key<F, K>(&mut self, mut key: F)
where
F: FnMut(&mut T) -> K,
K: PartialEq,
{
self.dedup_by(|a, b| key(a) == key(b))
}
/// Removes all but the first of consecutive elements in the vector satisfying a given equality
/// relation.
///
/// The `same_bucket` function is passed references to two elements from the vector and
/// must determine if the elements compare equal. The elements are passed in opposite order
/// from their order in the slice, so if `same_bucket(a, b)` returns `true`, `a` is removed.
///
/// If the vector is sorted, this removes all duplicates.
///
/// # Examples
///
/// ```
/// let mut vec = vec!["foo", "bar", "Bar", "baz", "bar"];
///
/// vec.dedup_by(|a, b| a.eq_ignore_ascii_case(b));
///
/// assert_eq!(vec, ["foo", "bar", "baz", "bar"]);
/// ```
#[stable(feature = "dedup_by", since = "1.16.0")]
pub fn dedup_by<F>(&mut self, mut same_bucket: F)
where
F: FnMut(&mut T, &mut T) -> bool,
{
let len = self.len();
if len <= 1 {
return;
}
        // Check if we ever want to remove anything.
        // This allows using `copy_nonoverlapping` in the next cycle
        // and avoids any memory writes if nothing needs to be removed.
let mut first_duplicate_idx: usize = 1;
let start = self.as_mut_ptr();
while first_duplicate_idx != len {
let found_duplicate = unsafe {
                // SAFETY: first_duplicate_idx is always in range [1..len).
// Note that we start iteration from 1 so we never overflow.
let prev = start.add(first_duplicate_idx.wrapping_sub(1));
let current = start.add(first_duplicate_idx);
// We explicitly say in docs that references are reversed.
same_bucket(&mut *current, &mut *prev)
};
if found_duplicate {
break;
}
first_duplicate_idx += 1;
}
        // Nothing needs to be removed: the index cannot exceed `len`, so
        // equality means no duplicate was found.
if first_duplicate_idx == len {
return;
}
/* INVARIANT: vec.len() > read > write > write-1 >= 0 */
struct FillGapOnDrop<'a, T, A: core::alloc::Allocator> {
/* Offset of the element we want to check if it is duplicate */
read: usize,
/* Offset of the place where we want to place the non-duplicate
* when we find it. */
write: usize,
/* The Vec that would need correction if `same_bucket` panicked */
vec: &'a mut Vec<T, A>,
}
impl<'a, T, A: core::alloc::Allocator> Drop for FillGapOnDrop<'a, T, A> {
fn drop(&mut self) {
/* This code gets executed when `same_bucket` panics */
/* SAFETY: invariant guarantees that `read - write`
* and `len - read` never overflow and that the copy is always
* in-bounds. */
unsafe {
let ptr = self.vec.as_mut_ptr();
let len = self.vec.len();
/* How many items were left when `same_bucket` panicked.
* Basically vec[read..].len() */
let items_left = len.wrapping_sub(self.read);
/* Pointer to first item in vec[write..write+items_left] slice */
let dropped_ptr = ptr.add(self.write);
/* Pointer to first item in vec[read..] slice */
let valid_ptr = ptr.add(self.read);
/* Copy `vec[read..]` to `vec[write..write+items_left]`.
* The slices can overlap, so `copy_nonoverlapping` cannot be used */
ptr::copy(valid_ptr, dropped_ptr, items_left);
/* How many items have been already dropped
* Basically vec[read..write].len() */
let dropped = self.read.wrapping_sub(self.write);
self.vec.set_len(len - dropped);
}
}
}
        /* Drop items while going through the Vec; it should be more efficient
         * than doing slice partition_dedup + truncate. */
// Construct gap first and then drop item to avoid memory corruption if `T::drop` panics.
let mut gap =
FillGapOnDrop { read: first_duplicate_idx + 1, write: first_duplicate_idx, vec: self };
unsafe {
            // SAFETY: we checked above that first_duplicate_idx is in bounds.
// If drop panics, `gap` would remove this item without drop.
ptr::drop_in_place(start.add(first_duplicate_idx));
}
/* SAFETY: Because of the invariant, read_ptr, prev_ptr and write_ptr
* are always in-bounds and read_ptr never aliases prev_ptr */
unsafe {
while gap.read < len {
let read_ptr = start.add(gap.read);
let prev_ptr = start.add(gap.write.wrapping_sub(1));
// We explicitly say in docs that references are reversed.
let found_duplicate = same_bucket(&mut *read_ptr, &mut *prev_ptr);
if found_duplicate {
// Increase `gap.read` now since the drop may panic.
gap.read += 1;
/* We have found duplicate, drop it in-place */
ptr::drop_in_place(read_ptr);
} else {
let write_ptr = start.add(gap.write);
                    /* read_ptr cannot be equal to write_ptr because at this
                     * point we are guaranteed to have skipped at least one
                     * element (before the loop starts). */
ptr::copy_nonoverlapping(read_ptr, write_ptr, 1);
/* We have filled that place, so go further */
gap.write += 1;
gap.read += 1;
}
}
            /* Technically we could let `gap` clean up with its Drop, but
             * when `same_bucket` is guaranteed not to panic, this bloats the
             * codegen a little, so we just do it manually */
gap.vec.set_len(gap.write);
mem::forget(gap);
}
}
/// Appends an element to the back of a collection.
///
/// # Panics
///
/// Panics if the new capacity exceeds `isize::MAX` _bytes_.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2];
/// vec.push(3);
/// assert_eq!(vec, [1, 2, 3]);
/// ```
///
/// # Time complexity
///
/// Takes amortized *O*(1) time. If the vector's length would exceed its
/// capacity after the push, *O*(*capacity*) time is taken to copy the
/// vector's elements to a larger allocation. This expensive operation is
/// offset by the *capacity* *O*(1) insertions it allows.
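    ///
    /// A rough sketch of the amortization (the exact growth strategy is an
    /// implementation detail):
    ///
    /// ```
    /// let mut vec = Vec::new();
    /// for i in 0..100 {
    ///     // Most pushes are O(1); occasionally one pays to grow the buffer.
    ///     vec.push(i);
    /// }
    /// assert!(vec.capacity() >= 100);
    /// ```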
#[cfg(not(no_global_oom_handling))]
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_confusables("push_back", "put", "append")]
pub fn push(&mut self, value: T) {
// Inform codegen that the length does not change across grow_one().
let len = self.len;
// This will panic or abort if we would allocate > isize::MAX bytes
// or if the length increment would overflow for zero-sized types.
if len == self.buf.capacity() {
self.buf.grow_one();
}
unsafe {
let end = self.as_mut_ptr().add(len);
ptr::write(end, value);
self.len = len + 1;
}
}
/// Appends an element if there is sufficient spare capacity, otherwise an error is returned
/// with the element.
///
/// Unlike [`push`] this method will not reallocate when there's insufficient capacity.
/// The caller should use [`reserve`] or [`try_reserve`] to ensure that there is enough capacity.
///
/// [`push`]: Vec::push
/// [`reserve`]: Vec::reserve
/// [`try_reserve`]: Vec::try_reserve
///
/// # Examples
///
/// A manual, panic-free alternative to [`FromIterator`]:
///
/// ```
/// #![feature(vec_push_within_capacity)]
///
/// use std::collections::TryReserveError;
/// fn from_iter_fallible<T>(iter: impl Iterator<Item=T>) -> Result<Vec<T>, TryReserveError> {
/// let mut vec = Vec::new();
/// for value in iter {
/// if let Err(value) = vec.push_within_capacity(value) {
/// vec.try_reserve(1)?;
/// // this cannot fail, the previous line either returned or added at least 1 free slot
/// let _ = vec.push_within_capacity(value);
/// }
/// }
/// Ok(vec)
/// }
/// assert_eq!(from_iter_fallible(0..100), Ok(Vec::from_iter(0..100)));
/// ```
///
/// # Time complexity
///
/// Takes *O*(1) time.
#[inline]
#[unstable(feature = "vec_push_within_capacity", issue = "100486")]
pub fn push_within_capacity(&mut self, value: T) -> Result<(), T> {
if self.len == self.buf.capacity() {
return Err(value);
}
unsafe {
let end = self.as_mut_ptr().add(self.len);
ptr::write(end, value);
self.len += 1;
}
Ok(())
}
/// Removes the last element from a vector and returns it, or [`None`] if it
/// is empty.
///
/// If you'd like to pop the first element, consider using
/// [`VecDeque::pop_front`] instead.
///
/// [`VecDeque::pop_front`]: crate::collections::VecDeque::pop_front
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// assert_eq!(vec.pop(), Some(3));
/// assert_eq!(vec, [1, 2]);
/// ```
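    ///
    /// Popping from the front in *O*(1) needs a different data structure; a
    /// short sketch with [`VecDeque`]:
    ///
    /// ```
    /// use std::collections::VecDeque;
    ///
    /// let mut deque = VecDeque::from(vec![1, 2, 3]);
    /// assert_eq!(deque.pop_front(), Some(1));
    /// assert_eq!(deque, [2, 3]);
    /// ```
    ///
    /// [`VecDeque`]: crate::collections::VecDeque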
///
/// # Time complexity
///
/// Takes *O*(1) time.
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn pop(&mut self) -> Option<T> {
if self.len == 0 {
None
} else {
unsafe {
self.len -= 1;
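                // Hint to the optimizer that the decremented `len` is still
                // within the allocation, which can let it elide later bounds
                // checks.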
core::hint::assert_unchecked(self.len < self.capacity());
Some(ptr::read(self.as_ptr().add(self.len())))
}
}
}
    /// Removes and returns the last element from a vector if the predicate
    /// returns `true`, or [`None`] if the predicate returns `false` or the
    /// vector is empty.
///
/// # Examples
///
/// ```
/// #![feature(vec_pop_if)]
///
/// let mut vec = vec![1, 2, 3, 4];
/// let pred = |x: &mut i32| *x % 2 == 0;
///
/// assert_eq!(vec.pop_if(pred), Some(4));
/// assert_eq!(vec, [1, 2, 3]);
/// assert_eq!(vec.pop_if(pred), None);
/// ```
#[unstable(feature = "vec_pop_if", issue = "122741")]
pub fn pop_if<F>(&mut self, f: F) -> Option<T>
where
F: FnOnce(&mut T) -> bool,
{
let last = self.last_mut()?;
if f(last) { self.pop() } else { None }
}
/// Moves all the elements of `other` into `self`, leaving `other` empty.
///
/// # Panics
///
/// Panics if the new capacity exceeds `isize::MAX` _bytes_.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// let mut vec2 = vec![4, 5, 6];
/// vec.append(&mut vec2);
/// assert_eq!(vec, [1, 2, 3, 4, 5, 6]);
/// assert_eq!(vec2, []);
/// ```
#[cfg(not(no_global_oom_handling))]
#[inline]
#[stable(feature = "append", since = "1.4.0")]
pub fn append(&mut self, other: &mut Self) {
unsafe {
self.append_elements(other.as_slice() as _);
other.set_len(0);
}
}
    /// Appends elements to `self` from another buffer.
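    ///
    /// # Safety
    ///
    /// `other` must point to initialized elements that do not overlap
    /// `self`'s buffer. The elements are bitwise-copied, so the caller must
    /// ensure they are not dropped through their original location afterwards
    /// (e.g. by setting the source's length to zero).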
#[cfg(not(no_global_oom_handling))]
#[inline]
unsafe fn append_elements(&mut self, other: *const [T]) {
let count = unsafe { (*other).len() };
self.reserve(count);
let len = self.len();
unsafe { ptr::copy_nonoverlapping(other as *const T, self.as_mut_ptr().add(len), count) };
self.len += count;
}
/// Removes the specified range from the vector in bulk, returning all
/// removed elements as an iterator. If the iterator is dropped before
/// being fully consumed, it drops the remaining removed elements.
///
/// The returned iterator keeps a mutable borrow on the vector to optimize
/// its implementation.
///
/// # Panics
///
/// Panics if the starting point is greater than the end point or if
/// the end point is greater than the length of the vector.
///
/// # Leaking
///
/// If the returned iterator goes out of scope without being dropped (due to
/// [`mem::forget`], for example), the vector may have lost and leaked
/// elements arbitrarily, including elements outside the range.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3];
/// let u: Vec<_> = v.drain(1..).collect();
/// assert_eq!(v, &[1]);
/// assert_eq!(u, &[2, 3]);
///
/// // A full range clears the vector, like `clear()` does
/// v.drain(..);
/// assert_eq!(v, &[]);
/// ```
#[stable(feature = "drain", since = "1.6.0")]
pub fn drain<R>(&mut self, range: R) -> Drain<'_, T, A>
where
R: RangeBounds<usize>,
{
// Memory safety
//
// When the Drain is first created, it shortens the length of
// the source vector to make sure no uninitialized or moved-from elements
// are accessible at all if the Drain's destructor never gets to run.
//
// Drain will ptr::read out the values to remove.
// When finished, remaining tail of the vec is copied back to cover
// the hole, and the vector length is restored to the new length.
//
let len = self.len();
let Range { start, end } = slice::range(range, ..len);
unsafe {
            // Set `self`'s length to `start`, to be safe in case the Drain is leaked.
self.set_len(start);
let range_slice = slice::from_raw_parts(self.as_ptr().add(start), end - start);
Drain {
tail_start: end,
tail_len: len - end,
iter: range_slice.iter(),
vec: NonNull::from(self),
}
}
}
/// Clears the vector, removing all values.
///
/// Note that this method has no effect on the allocated capacity
/// of the vector.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3];
///
/// v.clear();
///
/// assert!(v.is_empty());
/// ```
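    ///
    /// Since the capacity is kept, clearing and then refilling within the
    /// original capacity does not reallocate; a short sketch:
    ///
    /// ```
    /// let mut v = Vec::with_capacity(10);
    /// v.extend([1, 2, 3]);
    /// v.clear();
    /// assert!(v.is_empty());
    /// assert!(v.capacity() >= 10);
    /// ```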
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn clear(&mut self) {
let elems: *mut [T] = self.as_mut_slice();
// SAFETY:
// - `elems` comes directly from `as_mut_slice` and is therefore valid.
// - Setting `self.len` before calling `drop_in_place` means that,
// if an element's `Drop` impl panics, the vector's `Drop` impl will
// do nothing (leaking the rest of the elements) instead of dropping
// some twice.
unsafe {
self.len = 0;
ptr::drop_in_place(elems);
}
}
/// Returns the number of elements in the vector, also referred to
/// as its 'length'.
///
/// # Examples
///
/// ```
/// let a = vec![1, 2, 3];
/// assert_eq!(a.len(), 3);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_confusables("length", "size")]
pub fn len(&self) -> usize {
self.len
}
/// Returns `true` if the vector contains no elements.
///
/// # Examples
///
/// ```
/// let mut v = Vec::new();
/// assert!(v.is_empty());
///
/// v.push(1);
/// assert!(!v.is_empty());
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn is_empty(&self) -> bool {
self.len() == 0
}
/// Splits the collection into two at the given index.
///
/// Returns a newly allocated vector containing the elements in the range
/// `[at, len)`. After the call, the original vector will be left containing
/// the elements `[0, at)` with its previous capacity unchanged.
///
/// - If you want to take ownership of the entire contents and capacity of
/// the vector, see [`mem::take`] or [`mem::replace`].
/// - If you don't need the returned vector at all, see [`Vec::truncate`].
/// - If you want to take ownership of an arbitrary subslice, or you don't
/// necessarily want to store the removed items in a vector, see [`Vec::drain`].
///
/// # Panics
///
/// Panics if `at > len`.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// let vec2 = vec.split_off(1);
/// assert_eq!(vec, [1]);
/// assert_eq!(vec2, [2, 3]);
/// ```
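    ///
    /// To take ownership of the entire contents and capacity instead, a short
    /// sketch using [`mem::take`]:
    ///
    /// ```
    /// let mut vec = vec![1, 2, 3];
    /// let taken = std::mem::take(&mut vec);
    /// assert_eq!(taken, [1, 2, 3]);
    /// assert!(vec.is_empty());
    /// ```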
#[cfg(not(no_global_oom_handling))]
#[inline]
#[must_use = "use `.truncate()` if you don't need the other half"]
#[stable(feature = "split_off", since = "1.4.0")]
pub fn split_off(&mut self, at: usize) -> Self
where
A: Clone,
{
#[cold]
#[cfg_attr(not(feature = "panic_immediate_abort"), inline(never))]
#[track_caller]
fn assert_failed(at: usize, len: usize) -> ! {
panic!("`at` split index (is {at}) should be <= len (is {len})");
}
if at > self.len() {
assert_failed(at, self.len());
}
let other_len = self.len - at;
let mut other = Vec::with_capacity_in(other_len, self.allocator().clone());
// Unsafely `set_len` and copy items to `other`.
unsafe {
self.set_len(at);
other.set_len(other_len);
ptr::copy_nonoverlapping(self.as_ptr().add(at), other.as_mut_ptr(), other.len());
}
other
}
/// Resizes the `Vec` in-place so that `len` is equal to `new_len`.
///
/// If `new_len` is greater than `len`, the `Vec` is extended by the
/// difference, with each additional slot filled with the result of
/// calling the closure `f`. The return values from `f` will end up
/// in the `Vec` in the order they have been generated.
///
/// If `new_len` is less than `len`, the `Vec` is simply truncated.
///
/// This method uses a closure to create new values on every push. If
/// you'd rather [`Clone`] a given value, use [`Vec::resize`]. If you
/// want to use the [`Default`] trait to generate values, you can
/// pass [`Default::default`] as the second argument.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 3];
/// vec.resize_with(5, Default::default);
/// assert_eq!(vec, [1, 2, 3, 0, 0]);
///
/// let mut vec = vec![];
/// let mut p = 1;
/// vec.resize_with(4, || { p *= 2; p });
/// assert_eq!(vec, [2, 4, 8, 16]);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_resize_with", since = "1.33.0")]
pub fn resize_with<F>(&mut self, new_len: usize, f: F)
where
F: FnMut() -> T,
{
let len = self.len();
if new_len > len {
self.extend_trusted(iter::repeat_with(f).take(new_len - len));
} else {
self.truncate(new_len);
}
}
/// Consumes and leaks the `Vec`, returning a mutable reference to the contents,
/// `&'a mut [T]`.
///
/// Note that the type `T` must outlive the chosen lifetime `'a`. If the type
/// has only static references, or none at all, then this may be chosen to be
/// `'static`.
///
/// As of Rust 1.57, this method does not reallocate or shrink the `Vec`,
/// so the leaked allocation may include unused capacity that is not part
/// of the returned slice.
///
/// This function is mainly useful for data that lives for the remainder of
/// the program's life. Dropping the returned reference will cause a memory
/// leak.
///
/// # Examples
///
/// Simple usage:
///
/// ```
/// let x = vec![1, 2, 3];
/// let static_ref: &'static mut [usize] = x.leak();
/// static_ref[0] += 1;
/// assert_eq!(static_ref, &[2, 2, 3]);
/// # // FIXME(https://github.com/rust-lang/miri/issues/3670):
/// # // use -Zmiri-disable-leak-check instead of unleaking in tests meant to leak.
/// # drop(unsafe { Box::from_raw(static_ref) });
/// ```
#[stable(feature = "vec_leak", since = "1.47.0")]
#[inline]
pub fn leak<'a>(self) -> &'a mut [T]
where
A: 'a,
{
let mut me = ManuallyDrop::new(self);
unsafe { slice::from_raw_parts_mut(me.as_mut_ptr(), me.len) }
}
/// Returns the remaining spare capacity of the vector as a slice of
/// `MaybeUninit<T>`.
///
/// The returned slice can be used to fill the vector with data (e.g. by
/// reading from a file) before marking the data as initialized using the
/// [`set_len`] method.
///
/// [`set_len`]: Vec::set_len
///
/// # Examples
///
/// ```
/// // Allocate vector big enough for 10 elements.
/// let mut v = Vec::with_capacity(10);
///
/// // Fill in the first 3 elements.
/// let uninit = v.spare_capacity_mut();
/// uninit[0].write(0);
/// uninit[1].write(1);
/// uninit[2].write(2);
///
/// // Mark the first 3 elements of the vector as being initialized.
/// unsafe {
/// v.set_len(3);
/// }
///
/// assert_eq!(&v, &[0, 1, 2]);
/// ```
#[stable(feature = "vec_spare_capacity", since = "1.60.0")]
#[inline]
pub fn spare_capacity_mut(&mut self) -> &mut [MaybeUninit<T>] {
// Note:
// This method is not implemented in terms of `split_at_spare_mut`,
// to prevent invalidation of pointers to the buffer.
unsafe {
slice::from_raw_parts_mut(
self.as_mut_ptr().add(self.len) as *mut MaybeUninit<T>,
self.buf.capacity() - self.len,
)
}
}
/// Returns vector content as a slice of `T`, along with the remaining spare
/// capacity of the vector as a slice of `MaybeUninit<T>`.
///
/// The returned spare capacity slice can be used to fill the vector with data
/// (e.g. by reading from a file) before marking the data as initialized using
/// the [`set_len`] method.
///
/// [`set_len`]: Vec::set_len
///
/// Note that this is a low-level API, which should be used with care for
/// optimization purposes. If you need to append data to a `Vec`
/// you can use [`push`], [`extend`], [`extend_from_slice`],
/// [`extend_from_within`], [`insert`], [`append`], [`resize`] or
/// [`resize_with`], depending on your exact needs.
///
/// [`push`]: Vec::push
/// [`extend`]: Vec::extend
/// [`extend_from_slice`]: Vec::extend_from_slice
/// [`extend_from_within`]: Vec::extend_from_within
/// [`insert`]: Vec::insert
/// [`append`]: Vec::append
/// [`resize`]: Vec::resize
/// [`resize_with`]: Vec::resize_with
///
/// # Examples
///
/// ```
/// #![feature(vec_split_at_spare)]
///
/// let mut v = vec![1, 1, 2];
///
/// // Reserve additional space big enough for 10 elements.
/// v.reserve(10);
///
/// let (init, uninit) = v.split_at_spare_mut();
/// let sum = init.iter().copied().sum::<u32>();
///
/// // Fill in the next 4 elements.
/// uninit[0].write(sum);
/// uninit[1].write(sum * 2);
/// uninit[2].write(sum * 3);
/// uninit[3].write(sum * 4);
///
/// // Mark the 4 elements of the vector as being initialized.
/// unsafe {
/// let len = v.len();
/// v.set_len(len + 4);
/// }
///
/// assert_eq!(&v, &[1, 1, 2, 4, 8, 12, 16]);
/// ```
#[unstable(feature = "vec_split_at_spare", issue = "81944")]
#[inline]
pub fn split_at_spare_mut(&mut self) -> (&mut [T], &mut [MaybeUninit<T>]) {
// SAFETY:
// - len is ignored and so never changed
let (init, spare, _) = unsafe { self.split_at_spare_mut_with_len() };
(init, spare)
}
    /// Safety: changing the returned `.2` (`&mut usize`) is considered the same as calling `.set_len(_)`.
///
/// This method provides unique access to all vec parts at once in `extend_from_within`.
unsafe fn split_at_spare_mut_with_len(
&mut self,
) -> (&mut [T], &mut [MaybeUninit<T>], &mut usize) {
let ptr = self.as_mut_ptr();
// SAFETY:
// - `ptr` is guaranteed to be valid for `self.len` elements
// - but the allocation extends out to `self.buf.capacity()` elements, possibly
// uninitialized
let spare_ptr = unsafe { ptr.add(self.len) };
let spare_ptr = spare_ptr.cast::<MaybeUninit<T>>();
let spare_len = self.buf.capacity() - self.len;
// SAFETY:
// - `ptr` is guaranteed to be valid for `self.len` elements
// - `spare_ptr` is pointing one element past the buffer, so it doesn't overlap with `initialized`
unsafe {
let initialized = slice::from_raw_parts_mut(ptr, self.len);
let spare = slice::from_raw_parts_mut(spare_ptr, spare_len);
(initialized, spare, &mut self.len)
}
}
}
impl<T: Clone, A: Allocator> Vec<T, A> {
/// Resizes the `Vec` in-place so that `len` is equal to `new_len`.
///
/// If `new_len` is greater than `len`, the `Vec` is extended by the
/// difference, with each additional slot filled with `value`.
/// If `new_len` is less than `len`, the `Vec` is simply truncated.
///
/// This method requires `T` to implement [`Clone`],
/// in order to be able to clone the passed value.
/// If you need more flexibility (or want to rely on [`Default`] instead of
/// [`Clone`]), use [`Vec::resize_with`].
/// If you only need to resize to a smaller size, use [`Vec::truncate`].
///
/// # Examples
///
/// ```
/// let mut vec = vec!["hello"];
/// vec.resize(3, "world");
/// assert_eq!(vec, ["hello", "world", "world"]);
///
/// let mut vec = vec![1, 2, 3, 4];
/// vec.resize(2, 0);
/// assert_eq!(vec, [1, 2]);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_resize", since = "1.5.0")]
2015-02-05 02:17:19 +00:00
pub fn resize(&mut self, new_len: usize, value: T) {
let len = self.len();
if new_len > len {
self.extend_with(new_len - len, value)
} else {
self.truncate(new_len);
}
}
/// Clones and appends all elements in a slice to the `Vec`.
///
/// Iterates over the slice `other`, clones each element, and then appends
/// it to this `Vec`. The `other` slice is traversed in-order.
///
/// Note that this function is the same as [`extend`] except that it is
/// specialized to work with slices instead. If and when Rust gets
/// specialization this function will likely be deprecated (but still
/// available).
///
/// # Examples
///
/// ```
/// let mut vec = vec![1];
/// vec.extend_from_slice(&[2, 3, 4]);
/// assert_eq!(vec, [1, 2, 3, 4]);
/// ```
///
/// [`extend`]: Vec::extend
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_extend_from_slice", since = "1.6.0")]
pub fn extend_from_slice(&mut self, other: &[T]) {
self.spec_extend(other.iter())
}
/// Copies elements from `src` range to the end of the vector.
///
/// # Panics
///
/// Panics if the starting point is greater than the end point or if
/// the end point is greater than the length of the vector.
///
/// # Examples
///
/// ```
/// let mut vec = vec![0, 1, 2, 3, 4];
///
/// vec.extend_from_within(2..);
/// assert_eq!(vec, [0, 1, 2, 3, 4, 2, 3, 4]);
///
/// vec.extend_from_within(..2);
/// assert_eq!(vec, [0, 1, 2, 3, 4, 2, 3, 4, 0, 1]);
///
/// vec.extend_from_within(4..8);
/// assert_eq!(vec, [0, 1, 2, 3, 4, 2, 3, 4, 0, 1, 4, 2, 3, 4]);
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_extend_from_within", since = "1.53.0")]
pub fn extend_from_within<R>(&mut self, src: R)
where
R: RangeBounds<usize>,
{
let range = slice::range(src, ..self.len());
self.reserve(range.len());
// SAFETY:
// - `slice::range` guarantees that the given range is valid for indexing self
unsafe {
self.spec_extend_from_within(range);
}
}
}
impl<T, A: Allocator, const N: usize> Vec<[T; N], A> {
/// Takes a `Vec<[T; N]>` and flattens it into a `Vec<T>`.
///
/// # Panics
///
/// Panics if the length of the resulting vector would overflow a `usize`.
///
/// This is only possible when flattening a vector of arrays of zero-sized
/// types, and thus tends to be irrelevant in practice. If
/// `size_of::<T>() > 0`, this will never panic.
///
/// # Examples
///
/// ```
/// let mut vec = vec![[1, 2, 3], [4, 5, 6], [7, 8, 9]];
/// assert_eq!(vec.pop(), Some([7, 8, 9]));
///
/// let mut flattened = vec.into_flattened();
/// assert_eq!(flattened.pop(), Some(6));
/// ```
#[stable(feature = "slice_flatten", since = "1.80.0")]
pub fn into_flattened(self) -> Vec<T, A> {
let (ptr, len, cap, alloc) = self.into_raw_parts_with_alloc();
let (new_len, new_cap) = if T::IS_ZST {
(len.checked_mul(N).expect("vec len overflow"), usize::MAX)
} else {
// SAFETY:
// - `cap * N` cannot overflow because the allocation is already in
// the address space.
// - Each `[T; N]` has `N` valid elements, so there are `len * N`
// valid elements in the allocation.
unsafe { (len.unchecked_mul(N), cap.unchecked_mul(N)) }
};
// SAFETY:
// - `ptr` was allocated by `self`
// - `ptr` is well-aligned because `[T; N]` has the same alignment as `T`.
// - `new_cap` refers to the same sized allocation as `cap` because
// `new_cap * size_of::<T>()` == `cap * size_of::<[T; N]>()`
// - `len` <= `cap`, so `len * N` <= `cap * N`.
unsafe { Vec::<T, A>::from_raw_parts_in(ptr.cast(), new_len, new_cap, alloc) }
}
}
impl<T: Clone, A: Allocator> Vec<T, A> {
#[cfg(not(no_global_oom_handling))]
/// Extend the vector by `n` clones of `value`.
fn extend_with(&mut self, n: usize, value: T) {
self.reserve(n);
unsafe {
let mut ptr = self.as_mut_ptr().add(self.len());
// Use SetLenOnDrop to work around a compiler bug where the compiler
// might not realize that the stores through `ptr` and the length update
// via `self.set_len()` don't alias.
let mut local_len = SetLenOnDrop::new(&mut self.len);
// Write all elements except the last one
for _ in 1..n {
ptr::write(ptr, value.clone());
ptr = ptr.add(1);
// Increment the length in every step in case clone() panics
local_len.increment_len(1);
}
if n > 0 {
// We can write the last element directly without cloning needlessly
ptr::write(ptr, value);
local_len.increment_len(1);
}
// len set by scope guard
}
}
}
impl<T: PartialEq, A: Allocator> Vec<T, A> {
/// Removes consecutive repeated elements in the vector according to the
/// [`PartialEq`] trait implementation.
///
/// If the vector is sorted, this removes all duplicates.
///
/// # Examples
///
/// ```
/// let mut vec = vec![1, 2, 2, 3, 2];
///
/// vec.dedup();
///
/// assert_eq!(vec, [1, 2, 3, 2]);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[inline]
pub fn dedup(&mut self) {
self.dedup_by(|a, b| a == b)
}
}
////////////////////////////////////////////////////////////////////////////////
// Internal methods and functions
////////////////////////////////////////////////////////////////////////////////
#[doc(hidden)]
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
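/// Backing function for the `vec![elem; n]` macro form: creates a `Vec`
/// containing `n` clones of `elem`.
///
/// A minimal sketch of the observable behavior:
///
/// ```
/// // `vec![7; 3]` expands to a call to this function.
/// let v = vec![7; 3];
/// assert_eq!(v, [7, 7, 7]);
/// ```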
pub fn from_elem<T: Clone>(elem: T, n: usize) -> Vec<T> {
<T as SpecFromElem>::from_elem(elem, n, Global)
}
#[doc(hidden)]
#[cfg(not(no_global_oom_handling))]
#[unstable(feature = "allocator_api", issue = "32838")]
pub fn from_elem_in<T: Clone, A: Allocator>(elem: T, n: usize, alloc: A) -> Vec<T, A> {
<T as SpecFromElem>::from_elem(elem, n, alloc)
}
#[cfg(not(no_global_oom_handling))]
trait ExtendFromWithinSpec {
/// # Safety
///
/// - `src` needs to be a valid index
/// - `self.capacity() - self.len()` must be `>= src.len()`
unsafe fn spec_extend_from_within(&mut self, src: Range<usize>);
}
#[cfg(not(no_global_oom_handling))]
impl<T: Clone, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
default unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
// SAFETY:
// - len is increased only after initializing elements
let (this, spare, len) = unsafe { self.split_at_spare_mut_with_len() };
// SAFETY:
// - caller guarantees that src is a valid index
let to_clone = unsafe { this.get_unchecked(src) };
iter::zip(to_clone, spare)
.map(|(src, dst)| dst.write(src.clone()))
// Note:
// - Element was just initialized with `MaybeUninit::write`, so it's ok to increase len
// - len is increased after each element to prevent leaks (see issue #82533)
.for_each(|_| *len += 1);
}
}
#[cfg(not(no_global_oom_handling))]
impl<T: Copy, A: Allocator> ExtendFromWithinSpec for Vec<T, A> {
unsafe fn spec_extend_from_within(&mut self, src: Range<usize>) {
let count = src.len();
{
let (init, spare) = self.split_at_spare_mut();
// SAFETY:
// - caller guarantees that `src` is a valid index
let source = unsafe { init.get_unchecked(src) };
// SAFETY:
// - Both pointers are created from unique slice references (`&mut [_]`)
// so they are valid and do not overlap.
// - Elements are `Copy`, so it's OK to copy them without doing
// anything with the original values
// - `count` is equal to the len of `source`, so source is valid for
// `count` reads
// - `.reserve(count)` guarantees that `spare.len() >= count` so spare
// is valid for `count` writes
unsafe { ptr::copy_nonoverlapping(source.as_ptr(), spare.as_mut_ptr() as _, count) };
}
// SAFETY:
// - The elements were just initialized by `copy_nonoverlapping`
self.len += count;
}
}
////////////////////////////////////////////////////////////////////////////////
// Common trait implementations for Vec
////////////////////////////////////////////////////////////////////////////////
#[stable(feature = "rust1", since = "1.0.0")]
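/// `Vec<T>` dereferences to `[T]`, which is what makes every slice method
/// callable directly on a vector.
///
/// A small sketch of the effect:
///
/// ```
/// let v = vec![3, 1, 2];
/// // `first` and `contains` are slice methods, reached through deref coercion.
/// assert_eq!(v.first(), Some(&3));
/// assert!(v.contains(&2));
/// ```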
impl<T, A: Allocator> ops::Deref for Vec<T, A> {
type Target = [T];
#[inline]
fn deref(&self) -> &[T] {
unsafe { slice::from_raw_parts(self.as_ptr(), self.len) }
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T, A: Allocator> ops::DerefMut for Vec<T, A> {
#[inline]
fn deref_mut(&mut self) -> &mut [T] {
unsafe { slice::from_raw_parts_mut(self.as_mut_ptr(), self.len) }
}
}
#[unstable(feature = "deref_pure_trait", issue = "87121")]
unsafe impl<T, A: Allocator> ops::DerefPure for Vec<T, A> {}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Clone, A: Allocator + Clone> Clone for Vec<T, A> {
#[cfg(not(test))]
fn clone(&self) -> Self {
let alloc = self.allocator().clone();
<[T]>::to_vec_in(&**self, alloc)
}
// HACK(japaric): with cfg(test) the inherent `[T]::to_vec` method, which is
// required for this method definition, is not available. Instead use the
// `slice::to_vec` function which is only available with cfg(test)
// NB see the slice::hack module in slice.rs for more information
#[cfg(test)]
fn clone(&self) -> Self {
let alloc = self.allocator().clone();
crate::slice::to_vec(&**self, alloc)
}
/// Overwrites the contents of `self` with a clone of the contents of `source`.
///
/// This method is preferred over simply assigning `source.clone()` to `self`,
/// as it avoids reallocation if possible. Additionally, if the element type
/// `T` overrides `clone_from()`, this will reuse the resources of `self`'s
/// elements as well.
///
/// # Examples
///
/// ```
/// let x = vec![5, 6, 7];
/// let mut y = vec![8, 9, 10];
/// let yp: *const i32 = y.as_ptr();
///
/// y.clone_from(&x);
///
/// // The value is the same
/// assert_eq!(x, y);
///
/// // And no reallocation occurred
/// assert_eq!(yp, y.as_ptr());
/// ```
fn clone_from(&mut self, source: &Self) {
crate::slice::SpecCloneIntoVec::clone_into(source.as_slice(), self);
}
}
/// The hash of a vector is the same as that of the corresponding slice,
/// as required by the `core::borrow::Borrow` implementation.
///
/// ```
/// use std::hash::BuildHasher;
///
/// let b = std::hash::RandomState::new();
/// let v: Vec<u8> = vec![0xa8, 0x3c, 0x09];
/// let s: &[u8] = &[0xa8, 0x3c, 0x09];
/// assert_eq!(b.hash_one(v), b.hash_one(s));
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Hash, A: Allocator> Hash for Vec<T, A> {
#[inline]
fn hash<H: Hasher>(&self, state: &mut H) {
Hash::hash(&**self, state)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_on_unimplemented(
message = "vector indices are of type `usize` or ranges of `usize`",
label = "vector indices are of type `usize` or ranges of `usize`"
)]
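/// Indexing with a `usize` yields a single element; indexing with a range
/// yields a subslice.
///
/// A small sketch:
///
/// ```
/// let v = vec![10, 20, 30];
/// assert_eq!(v[1], 20);
/// assert_eq!(v[1..], [20, 30]);
/// ```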
impl<T, I: SliceIndex<[T]>, A: Allocator> Index<I> for Vec<T, A> {
type Output = I::Output;
#[inline]
fn index(&self, index: I) -> &Self::Output {
Index::index(&**self, index)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_on_unimplemented(
message = "vector indices are of type `usize` or ranges of `usize`",
label = "vector indices are of type `usize` or ranges of `usize`"
)]
impl<T, I: SliceIndex<[T]>, A: Allocator> IndexMut<I> for Vec<T, A> {
#[inline]
fn index_mut(&mut self, index: I) -> &mut Self::Output {
IndexMut::index_mut(&mut **self, index)
}
}
/// Collects an iterator into a Vec, commonly called via [`Iterator::collect()`]
///
/// # Allocation behavior
///
/// In general `Vec` does not guarantee any particular growth or allocation strategy.
/// That also applies to this trait impl.
///
/// **Note:** This section covers implementation details and is therefore exempt from
/// stability guarantees.
///
/// Vec may use any or none of the following strategies,
/// depending on the supplied iterator:
///
/// * preallocate based on [`Iterator::size_hint()`]
/// * and panic if the number of items is outside the provided lower/upper bounds
/// * use an amortized growth strategy similar to `pushing` one item at a time
/// * perform the iteration in-place on the original allocation backing the iterator
///
/// The last case warrants some attention. It is an optimization that in many cases reduces peak memory
/// consumption and improves cache locality. But when a big, short-lived allocation is created,
/// only a small fraction of its items get collected, no further use is made of the spare capacity,
/// and the resulting `Vec` is moved into a longer-lived structure, this can unnecessarily extend
/// the lifetime of the large allocation and thereby increase the memory footprint.
///
/// In cases where this is an issue, the excess capacity can be discarded with [`Vec::shrink_to()`],
/// [`Vec::shrink_to_fit()`] or by collecting into [`Box<[T]>`][owned slice] instead, which additionally reduces
/// the size of the long-lived struct.
///
/// [owned slice]: Box
///
/// ```rust
/// # use std::sync::Mutex;
/// static LONG_LIVED: Mutex<Vec<Vec<u16>>> = Mutex::new(Vec::new());
///
/// for i in 0..10 {
/// let big_temporary: Vec<u16> = (0..1024).collect();
/// // discard most items
/// let mut result: Vec<_> = big_temporary.into_iter().filter(|i| i % 100 == 0).collect();
/// // without this a lot of unused capacity might be moved into the global
/// result.shrink_to_fit();
/// LONG_LIVED.lock().unwrap().push(result);
/// }
/// ```
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> FromIterator<T> for Vec<T> {
#[inline]
fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Vec<T> {
<Self as SpecFromIter<T, I::IntoIter>>::from_iter(iter.into_iter())
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T, A: Allocator> IntoIterator for Vec<T, A> {
type Item = T;
type IntoIter = IntoIter<T, A>;
/// Creates a consuming iterator, that is, one that moves each value out of
/// the vector (from start to end). The vector cannot be used after calling
/// this.
///
/// # Examples
///
/// ```
/// let v = vec!["a".to_string(), "b".to_string()];
/// let mut v_iter = v.into_iter();
///
/// let first_element: Option<String> = v_iter.next();
///
/// assert_eq!(first_element, Some("a".to_string()));
/// assert_eq!(v_iter.next(), Some("b".to_string()));
/// assert_eq!(v_iter.next(), None);
/// ```
#[inline]
fn into_iter(self) -> Self::IntoIter {
unsafe {
let me = ManuallyDrop::new(self);
let alloc = ManuallyDrop::new(ptr::read(me.allocator()));
let buf = me.buf.non_null();
let begin = buf.as_ptr();
let end = if T::IS_ZST {
begin.wrapping_byte_add(me.len())
} else {
begin.add(me.len()) as *const T
};
let cap = me.buf.capacity();
IntoIter { buf, phantom: PhantomData, cap, alloc, ptr: buf, end }
}
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T, A: Allocator> IntoIterator for &'a Vec<T, A> {
type Item = &'a T;
type IntoIter = slice::Iter<'a, T>;
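/// Creates a borrowing iterator over the vector's elements, equivalent to
/// calling `.iter()`.
///
/// A small sketch:
///
/// ```
/// let v = vec![1, 2, 3];
/// let mut sum = 0;
/// for x in &v {
///     sum += *x;
/// }
/// assert_eq!(sum, 6);
/// ```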
fn into_iter(self) -> Self::IntoIter {
self.iter()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a, T, A: Allocator> IntoIterator for &'a mut Vec<T, A> {
type Item = &'a mut T;
type IntoIter = slice::IterMut<'a, T>;
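/// Creates a borrowing iterator over mutable references to the vector's
/// elements, equivalent to calling `.iter_mut()`.
///
/// A small sketch:
///
/// ```
/// let mut v = vec![1, 2, 3];
/// for x in &mut v {
///     *x *= 10;
/// }
/// assert_eq!(v, [10, 20, 30]);
/// ```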
fn into_iter(self) -> Self::IntoIter {
self.iter_mut()
}
}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
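/// Appends every item yielded by the iterator, and may reserve capacity
/// ahead of time based on the iterator's `size_hint()`.
///
/// A small sketch:
///
/// ```
/// let mut v = vec![1, 2];
/// v.extend([3, 4]);
/// v.extend((1..3).map(|x| x * 10));
/// assert_eq!(v, [1, 2, 3, 4, 10, 20]);
/// ```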
impl<T, A: Allocator> Extend<T> for Vec<T, A> {
#[inline]
fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
<Self as SpecExtend<T, I::IntoIter>>::spec_extend(self, iter.into_iter())
}
#[inline]
fn extend_one(&mut self, item: T) {
self.push(item);
}
#[inline]
fn extend_reserve(&mut self, additional: usize) {
self.reserve(additional);
}
#[inline]
unsafe fn extend_one_unchecked(&mut self, item: T) {
// SAFETY: Our preconditions ensure the space has been reserved, and `extend_reserve` is implemented correctly.
unsafe {
let len = self.len();
ptr::write(self.as_mut_ptr().add(len), item);
self.set_len(len + 1);
}
}
}
impl<T, A: Allocator> Vec<T, A> {
// leaf method to which various SpecFrom/SpecExtend implementations delegate when
// they have no further optimizations to apply
#[cfg(not(no_global_oom_handling))]
fn extend_desugared<I: Iterator<Item = T>>(&mut self, mut iterator: I) {
// This is the case for a general iterator.
//
// This function should be the moral equivalent of:
//
// for item in iterator {
// self.push(item);
// }
while let Some(element) = iterator.next() {
let len = self.len();
if len == self.capacity() {
let (lower, _) = iterator.size_hint();
self.reserve(lower.saturating_add(1));
}
unsafe {
ptr::write(self.as_mut_ptr().add(len), element);
// Since next() executes user code which can panic we have to bump the length
// after each step.
// NB can't overflow since we would have had to alloc the address space
self.set_len(len + 1);
}
}
}
// specific extend for `TrustedLen` iterators, called both by the specializations
// and internal places where resolving specialization makes compilation slower
#[cfg(not(no_global_oom_handling))]
fn extend_trusted(&mut self, iterator: impl iter::TrustedLen<Item = T>) {
let (low, high) = iterator.size_hint();
if let Some(additional) = high {
debug_assert_eq!(
low,
additional,
"TrustedLen iterator's size hint is not exact: {:?}",
(low, high)
);
self.reserve(additional);
unsafe {
let ptr = self.as_mut_ptr();
let mut local_len = SetLenOnDrop::new(&mut self.len);
iterator.for_each(move |element| {
ptr::write(ptr.add(local_len.current_len()), element);
// Since the loop executes user code which can panic we have to update
// the length every step to correctly drop what we've written.
// NB can't overflow since we would have had to alloc the address space
local_len.increment_len(1);
});
}
} else {
// Per TrustedLen contract a `None` upper bound means that the iterator length
// truly exceeds usize::MAX, which would eventually lead to a capacity overflow anyway.
// Since the other branch already panics eagerly (via `reserve()`) we do the same here.
// This avoids additional codegen for a fallback code path which would eventually
// panic anyway.
panic!("capacity overflow");
}
}
/// Creates a splicing iterator that replaces the specified range in the vector
/// with the given `replace_with` iterator and yields the removed items.
/// `replace_with` does not need to be the same length as `range`.
///
/// `range` is removed even if the iterator is not consumed until the end.
///
2019-06-01 09:26:08 +00:00
/// It is unspecified how many elements are removed from the vector
2017-04-08 20:55:53 +00:00
/// if the `Splice` value is leaked.
///
/// The input iterator `replace_with` is only consumed when the `Splice` value is dropped.
///
/// This is optimal if:
///
/// * The tail (elements in the vector after `range`) is empty,
/// * or `replace_with` yields fewer than or exactly as many elements as `range`'s length
/// * or the lower bound of its `size_hint()` is exact.
///
/// Otherwise, a temporary vector is allocated and the tail is moved twice.
///
/// # Panics
///
/// Panics if the starting point is greater than the end point or if
/// the end point is greater than the length of the vector.
///
/// # Examples
///
/// ```
/// let mut v = vec![1, 2, 3, 4];
/// let new = [7, 8, 9];
/// let u: Vec<_> = v.splice(1..3, new).collect();
/// assert_eq!(v, &[1, 7, 8, 9, 4]);
/// assert_eq!(u, &[2, 3]);
/// ```
#[cfg(not(no_global_oom_handling))]
#[inline]
#[stable(feature = "vec_splice", since = "1.21.0")]
pub fn splice<R, I>(&mut self, range: R, replace_with: I) -> Splice<'_, I::IntoIter, A>
where
R: RangeBounds<usize>,
I: IntoIterator<Item = T>,
2017-04-08 20:55:53 +00:00
{
Splice { drain: self.drain(range), replace_with: replace_with.into_iter() }
}
/// Creates an iterator which uses a closure to determine if an element should be removed.
///
/// If the closure returns true, then the element is removed and yielded.
/// If the closure returns false, the element will remain in the vector and will not be yielded
/// by the iterator.
///
/// If the returned `ExtractIf` is not exhausted, e.g. because it is dropped without iterating
/// or the iteration short-circuits, then the remaining elements will be retained.
/// Use [`retain`] with a negated predicate if you do not need the returned iterator.
///
/// [`retain`]: Vec::retain
///
/// Using this method is equivalent to the following code:
///
/// ```
/// # let some_predicate = |x: &mut i32| { *x == 2 || *x == 3 || *x == 6 };
/// # let mut vec = vec![1, 2, 3, 4, 5, 6];
/// let mut i = 0;
/// while i < vec.len() {
/// if some_predicate(&mut vec[i]) {
/// let val = vec.remove(i);
/// // your code here
/// } else {
/// i += 1;
/// }
/// }
///
/// # assert_eq!(vec, vec![1, 4, 5]);
/// ```
///
/// But `extract_if` is easier to use. `extract_if` is also more efficient,
/// because it can backshift the elements of the array in bulk.
///
/// Note that `extract_if` also lets you mutate every element in the filter closure,
/// regardless of whether you choose to keep or remove it.
///
/// # Examples
///
/// Splitting an array into evens and odds, reusing the original allocation:
///
/// ```
/// #![feature(extract_if)]
/// let mut numbers = vec![1, 2, 3, 4, 5, 6, 8, 9, 11, 13, 14, 15];
///
/// let evens = numbers.extract_if(|x| *x % 2 == 0).collect::<Vec<_>>();
/// let odds = numbers;
///
/// assert_eq!(evens, vec![2, 4, 6, 8, 14]);
/// assert_eq!(odds, vec![1, 3, 5, 9, 11, 13, 15]);
/// ```
#[unstable(feature = "extract_if", reason = "recently added", issue = "43244")]
pub fn extract_if<F>(&mut self, filter: F) -> ExtractIf<'_, T, F, A>
where
F: FnMut(&mut T) -> bool,
{
let old_len = self.len();
// Guard against us getting leaked (leak amplification)
unsafe {
self.set_len(0);
}
ExtractIf { vec: self, idx: 0, del: 0, old_len, pred: filter }
}
}
/// Extend implementation that copies elements out of references before pushing them onto the Vec.
///
/// This implementation is specialized for slice iterators, where it uses [`copy_from_slice`] to
/// append the entire slice at once.
///
/// [`copy_from_slice`]: slice::copy_from_slice
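///
/// A small sketch (the `Copy` bound is what allows copying out of the
/// references):
///
/// ```
/// let mut v = vec![1];
/// let src = [2, 3];
/// v.extend(&src); // yields `&i32`s; the items are copied into `v`
/// assert_eq!(v, [1, 2, 3]);
/// ```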
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "extend_ref", since = "1.2.0")]
impl<'a, T: Copy + 'a, A: Allocator> Extend<&'a T> for Vec<T, A> {
fn extend<I: IntoIterator<Item = &'a T>>(&mut self, iter: I) {
self.spec_extend(iter.into_iter())
}
#[inline]
fn extend_one(&mut self, &item: &'a T) {
self.push(item);
}
#[inline]
fn extend_reserve(&mut self, additional: usize) {
self.reserve(additional);
}
#[inline]
unsafe fn extend_one_unchecked(&mut self, &item: &'a T) {
// SAFETY: Our preconditions ensure the space has been reserved, and `extend_reserve` is implemented correctly.
unsafe {
let len = self.len();
ptr::write(self.as_mut_ptr().add(len), item);
self.set_len(len + 1);
}
}
}
/// Implements comparison of vectors, [lexicographically](Ord#lexicographical-comparison).
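///
/// A small illustration:
///
/// ```
/// let a = vec![1, 2, 3];
/// let b = vec![1, 3];
/// // decided by the first unequal pair: 2 < 3
/// assert!(a < b);
/// ```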
#[stable(feature = "rust1", since = "1.0.0")]
impl<T, A1, A2> PartialOrd<Vec<T, A2>> for Vec<T, A1>
where
T: PartialOrd,
A1: Allocator,
A2: Allocator,
{
#[inline]
fn partial_cmp(&self, other: &Vec<T, A2>) -> Option<Ordering> {
PartialOrd::partial_cmp(&**self, &**other)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Eq, A: Allocator> Eq for Vec<T, A> {}
/// Implements ordering of vectors, [lexicographically](Ord#lexicographical-comparison).
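///
/// A small illustration:
///
/// ```
/// let mut v = vec![vec![2, 1], vec![1, 2, 3], vec![1, 2]];
/// v.sort(); // uses this `Ord` implementation
/// assert_eq!(v, [vec![1, 2], vec![1, 2, 3], vec![2, 1]]);
/// ```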
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord, A: Allocator> Ord for Vec<T, A> {
#[inline]
fn cmp(&self, other: &Self) -> Ordering {
Ord::cmp(&**self, &**other)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl<#[may_dangle] T, A: Allocator> Drop for Vec<T, A> {
fn drop(&mut self) {
unsafe {
// use drop for [T]
// use a raw slice to refer to the elements of the vector with the weakest necessary type;
// which could avoid questions of validity in certain cases
ptr::drop_in_place(ptr::slice_from_raw_parts_mut(self.as_mut_ptr(), self.len))
}
// RawVec handles deallocation
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T> Default for Vec<T> {
    /// Creates an empty `Vec<T>`.
    ///
    /// The vector will not allocate until elements are pushed onto it.
    fn default() -> Vec<T> {
        Vec::new()
    }
}
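
// Illustrative sketch (not part of the source): `default()` is just
// `Vec::new()`, so a defaulted vector has length 0 and capacity 0 (nothing
// allocated yet):
//
//     let v: Vec<i32> = Default::default();
//     assert_eq!((v.len(), v.capacity()), (0, 0));
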
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: fmt::Debug, A: Allocator> fmt::Debug for Vec<T, A> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        fmt::Debug::fmt(&**self, f)
    }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T, A: Allocator> AsRef<Vec<T, A>> for Vec<T, A> {
    fn as_ref(&self) -> &Vec<T, A> {
        self
    }
}
#[stable(feature = "vec_as_mut", since = "1.5.0")]
impl<T, A: Allocator> AsMut<Vec<T, A>> for Vec<T, A> {
    fn as_mut(&mut self) -> &mut Vec<T, A> {
        self
    }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<T, A: Allocator> AsRef<[T]> for Vec<T, A> {
    fn as_ref(&self) -> &[T] {
        self
    }
}
#[stable(feature = "vec_as_mut", since = "1.5.0")]
impl<T, A: Allocator> AsMut<[T]> for Vec<T, A> {
    fn as_mut(&mut self) -> &mut [T] {
        self
    }
}
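
// Illustrative sketch (not part of the source): together, the `AsRef`/`AsMut`
// impls above let generic code accept a `Vec` wherever a (mutable) slice view
// is enough. `bump_first` is a hypothetical helper, shown only for
// demonstration:
//
//     fn bump_first<S: AsMut<[i32]>>(mut s: S) -> i32 {
//         let slice = s.as_mut();
//         slice[0] += 1;
//         slice[0]
//     }
//     assert_eq!(bump_first(vec![9, 8, 7]), 10);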
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Clone> From<&[T]> for Vec<T> {
    /// Allocates a `Vec<T>` and fills it by cloning `s`'s items.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Vec::from(&[1, 2, 3][..]), vec![1, 2, 3]);
    /// ```
    #[cfg(not(test))]
    fn from(s: &[T]) -> Vec<T> {
        s.to_vec()
    }

    #[cfg(test)]
    fn from(s: &[T]) -> Vec<T> {
        crate::slice::to_vec(s, Global)
    }
}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_from_mut", since = "1.19.0")]
impl<T: Clone> From<&mut [T]> for Vec<T> {
    /// Allocates a `Vec<T>` and fills it by cloning `s`'s items.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Vec::from(&mut [1, 2, 3][..]), vec![1, 2, 3]);
    /// ```
    #[cfg(not(test))]
    fn from(s: &mut [T]) -> Vec<T> {
        s.to_vec()
    }

    #[cfg(test)]
    fn from(s: &mut [T]) -> Vec<T> {
        crate::slice::to_vec(s, Global)
    }
}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_from_array_ref", since = "1.74.0")]
impl<T: Clone, const N: usize> From<&[T; N]> for Vec<T> {
    /// Allocates a `Vec<T>` and fills it by cloning `s`'s items.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Vec::from(&[1, 2, 3]), vec![1, 2, 3]);
    /// ```
    fn from(s: &[T; N]) -> Vec<T> {
        Self::from(s.as_slice())
    }
}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_from_array_ref", since = "1.74.0")]
impl<T: Clone, const N: usize> From<&mut [T; N]> for Vec<T> {
    /// Allocates a `Vec<T>` and fills it by cloning `s`'s items.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Vec::from(&mut [1, 2, 3]), vec![1, 2, 3]);
    /// ```
    fn from(s: &mut [T; N]) -> Vec<T> {
        Self::from(s.as_mut_slice())
    }
}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "vec_from_array", since = "1.44.0")]
impl<T, const N: usize> From<[T; N]> for Vec<T> {
    /// Allocates a `Vec<T>` and moves `s`'s items into it.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Vec::from([1, 2, 3]), vec![1, 2, 3]);
    /// ```
    #[cfg(not(test))]
    fn from(s: [T; N]) -> Vec<T> {
        <[T]>::into_vec(Box::new(s))
    }

    #[cfg(test)]
    fn from(s: [T; N]) -> Vec<T> {
        crate::slice::into_vec(Box::new(s))
    }
}
#[stable(feature = "vec_from_cow_slice", since = "1.14.0")]
impl<'a, T> From<Cow<'a, [T]>> for Vec<T>
where
    [T]: ToOwned<Owned = Vec<T>>,
{
    /// Converts a clone-on-write slice into a vector.
    ///
    /// If `s` already owns a `Vec<T>`, it will be returned directly.
    /// If `s` is borrowing a slice, a new `Vec<T>` will be allocated and
    /// filled by cloning `s`'s items into it.
    ///
    /// # Examples
    ///
    /// ```
    /// # use std::borrow::Cow;
    /// let o: Cow<'_, [i32]> = Cow::Owned(vec![1, 2, 3]);
    /// let b: Cow<'_, [i32]> = Cow::Borrowed(&[1, 2, 3]);
    /// assert_eq!(Vec::from(o), Vec::from(b));
    /// ```
    fn from(s: Cow<'a, [T]>) -> Vec<T> {
        s.into_owned()
    }
}
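
// Illustrative sketch (not part of the source): in the `Cow::Owned` case,
// `into_owned` returns the existing vector as-is, so no allocation or cloning
// happens:
//
//     use std::borrow::Cow;
//     let v = vec![1, 2, 3];
//     let ptr = v.as_ptr();
//     let roundtrip = Vec::from(Cow::<'_, [i32]>::Owned(v));
//     assert_eq!(roundtrip.as_ptr(), ptr); // same buffer handed back
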
// note: test pulls in std, which causes errors here
#[cfg(not(test))]
#[stable(feature = "vec_from_box", since = "1.18.0")]
impl<T, A: Allocator> From<Box<[T], A>> for Vec<T, A> {
    /// Converts a boxed slice into a vector by transferring ownership of
    /// the existing heap allocation.
    ///
    /// # Examples
    ///
    /// ```
    /// let b: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();
    /// assert_eq!(Vec::from(b), vec![1, 2, 3]);
    /// ```
    fn from(s: Box<[T], A>) -> Self {
        s.into_vec()
    }
}
// note: test pulls in std, which causes errors here
#[cfg(not(no_global_oom_handling))]
#[cfg(not(test))]
#[stable(feature = "box_from_vec", since = "1.20.0")]
impl<T, A: Allocator> From<Vec<T, A>> for Box<[T], A> {
    /// Converts a vector into a boxed slice.
    ///
    /// Before doing the conversion, this method discards excess capacity like [`Vec::shrink_to_fit`].
    ///
    /// [`Vec::shrink_to_fit`]: Vec::shrink_to_fit
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Box::from(vec![1, 2, 3]), vec![1, 2, 3].into_boxed_slice());
    /// ```
    ///
    /// Any excess capacity is removed:
    /// ```
    /// let mut vec = Vec::with_capacity(10);
    /// vec.extend([1, 2, 3]);
    ///
    /// assert_eq!(Box::from(vec), vec![1, 2, 3].into_boxed_slice());
    /// ```
    fn from(v: Vec<T, A>) -> Self {
        v.into_boxed_slice()
    }
}
#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl From<&str> for Vec<u8> {
    /// Allocates a `Vec<u8>` and fills it with a UTF-8 string.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(Vec::from("123"), vec![b'1', b'2', b'3']);
    /// ```
    fn from(s: &str) -> Vec<u8> {
        From::from(s.as_bytes())
    }
}
#[stable(feature = "array_try_from_vec", since = "1.48.0")]
impl<T, A: Allocator, const N: usize> TryFrom<Vec<T, A>> for [T; N] {
    type Error = Vec<T, A>;

    /// Gets the entire contents of the `Vec<T>` as an array,
    /// if its size exactly matches that of the requested array.
    ///
    /// # Examples
    ///
    /// ```
    /// assert_eq!(vec![1, 2, 3].try_into(), Ok([1, 2, 3]));
    /// assert_eq!(<Vec<i32>>::new().try_into(), Ok([]));
    /// ```
    ///
    /// If the length doesn't match, the input comes back in `Err`:
    /// ```
    /// let r: Result<[i32; 4], _> = (0..10).collect::<Vec<_>>().try_into();
    /// assert_eq!(r, Err(vec![0, 1, 2, 3, 4, 5, 6, 7, 8, 9]));
    /// ```
    ///
    /// If you're fine with just getting a prefix of the `Vec<T>`,
    /// you can call [`.truncate(N)`](Vec::truncate) first.
    /// ```
    /// let mut v = String::from("hello world").into_bytes();
    /// v.sort();
    /// v.truncate(2);
    /// let [a, b]: [_; 2] = v.try_into().unwrap();
    /// assert_eq!(a, b' ');
    /// assert_eq!(b, b'd');
    /// ```
    fn try_from(mut vec: Vec<T, A>) -> Result<[T; N], Vec<T, A>> {
        if vec.len() != N {
            return Err(vec);
        }

        // SAFETY: `.set_len(0)` is always sound.
        unsafe { vec.set_len(0) };

        // SAFETY: A `Vec`'s pointer is always aligned properly, and
        // the alignment the array needs is the same as the items.
        // We checked earlier that we have sufficient items.
        // The items will not double-drop as the `set_len`
        // tells the `Vec` not to also drop them.
        let array = unsafe { ptr::read(vec.as_ptr() as *const [T; N]) };
        Ok(array)
    }
}
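
// Illustrative sketch (not part of the source): the conversion moves the
// elements out (no `Clone` bound), and the `set_len(0)` above is what
// prevents a double drop:
//
//     let v = vec![String::from("a"), String::from("b")];
//     let arr: [String; 2] = v.try_into().unwrap();
//     assert_eq!(arr[0], "a"); // strings moved, dropped exactly once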