Commit Graph

8380 Commits

Author SHA1 Message Date
bors
b04bfb4aea Auto merge of #97437 - jyn514:impl-asrawfd-arc, r=dtolnay
`impl<T: AsRawFd> AsRawFd for {Arc,Box}<T>`

This allows implementing traits that require a raw FD on Arc and Box.

Previously, you'd have to add the function to the trait itself:

```rust
trait MyTrait {
    fn as_raw_fd(&self) -> RawFd;
}

impl<T: MyTrait> MyTrait for Arc<T> {
    fn as_raw_fd(&self) -> RawFd {
        (**self).as_raw_fd()
    }
}
```

In particular, this leads to lots of "multiple applicable items in scope" errors because you have to disambiguate `MyTrait::as_raw_fd` from `AsRawFd::as_raw_fd` at each call site. In generic contexts, when passing the type to a function that takes `impl AsRawFd`, it's also sometimes required to use `T: MyTrait + AsRawFd`, which wouldn't be necessary if I could write `MyTrait: AsRawFd`.

After this PR, the code can be simpler:
```rust
trait MyTrait: AsRawFd {}

impl<T: MyTrait> MyTrait for Arc<T> {}
```
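
As a usage illustration (a hedged sketch, not code from the PR; Unix-only and assuming a toolchain that includes this change), a function taking `impl AsRawFd` now accepts an `Arc<File>` directly:

```rust
use std::fs::File;
use std::os::unix::io::{AsRawFd, RawFd};
use std::sync::Arc;

fn raw_fd_of(f: impl AsRawFd) -> RawFd {
    f.as_raw_fd()
}

fn main() -> std::io::Result<()> {
    let file = Arc::new(File::open("/dev/null")?);
    // With `impl<T: AsRawFd> AsRawFd for Arc<T>`, the Arc can be passed as-is.
    let _fd = raw_fd_of(Arc::clone(&file));
    Ok(())
}
```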
2022-07-03 12:17:19 +00:00
bors
f99f9e48ed Auto merge of #98755 - nnethercote:faster-vec-insert, r=cuviper
Optimize `Vec::insert` for the case where `index == len`.

This is done by skipping the call to `copy` when the length is zero, which makes
the operation closer to `push` (a simplified sketch follows the notes below).

I did this recently for `SmallVec`
(https://github.com/servo/rust-smallvec/pull/282) and it was a big perf win in
one case. Although I don't have a specific use case in mind, it seems
worth doing it for `Vec` as well.

Things to note:
- In the `index < len` case, the number of conditions checked is
  unchanged.
- In the `index == len` case, the number of conditions checked increases
  by one, but the more expensive zero-length copy is avoided.
- In the `index > len` case the code now reserves space for the extra
  element before panicking. This seems like an unimportant change.
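
A simplified sketch of the shape of the change (not the actual `Vec::insert` source):

```rust
use std::ptr;

/// Simplified sketch of `insert` (illustration only): reserve, shift the tail
/// right by one, and write the new element.
fn insert_sketch<T>(v: &mut Vec<T>, index: usize, element: T) {
    let len = v.len();
    assert!(index <= len, "insertion index (is {index}) should be <= len (is {len})");
    v.reserve(1);
    unsafe {
        let p = v.as_mut_ptr().add(index);
        if index < len {
            // Shift everything after `index` one slot to the right.
            ptr::copy(p, p.add(1), len - index);
        }
        // When `index == len`, the zero-length copy is skipped entirely,
        // making this path behave much like `push`.
        ptr::write(p, element);
        v.set_len(len + 1);
    }
}

fn main() {
    let mut v = vec![1, 2, 4];
    insert_sketch(&mut v, 2, 3); // middle: shifts the tail
    insert_sketch(&mut v, 4, 5); // end: no copy needed
    assert_eq!(v, [1, 2, 3, 4, 5]);
}
```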

r? `@cuviper`
2022-07-03 09:36:37 +00:00
bors
ada8c80bed Auto merge of #98673 - pietroalbini:pa-bootstrap-update, r=Mark-Simulacrum
Bump bootstrap compiler

r? `@Mark-Simulacrum`
2022-07-03 06:55:50 +00:00
bors
6a10920564 Auto merge of #97235 - nbdd0121:unwind, r=Amanieu
Fix FFI-unwind unsoundness with mixed panic mode

UB may be introduced when an FFI exception happens in a `C-unwind` foreign function and propagates through a crate compiled with `-C panic=unwind` into a crate compiled with `-C panic=abort` (#96926).

To prevent this unsoundness from happening, we will disallow a crate compiled with `-C panic=unwind` from being linked into `panic-abort` *if* it contains a call to a `C-unwind` foreign function or function pointer. If no such call exists, then we continue to allow such mixed panic mode linking, because it's sound (and stable). In fact we still need the ability to do mixed panic mode linking for std, because we only compile std once with `-C panic=unwind` and link it regardless of the panic strategy.

For libraries that wish to remain compile-once-and-linkable-to-both-panic-runtimes, an `ffi_unwind_calls` lint is added (gated under the `c_unwind` feature gate) to flag any FFI-unwind calls that would cause the linkable panic runtime to be restricted.

In summary:
```rust
#![warn(ffi_unwind_calls)]

mod foo {
    #[no_mangle]
    pub extern "C-unwind" fn foo() {}
}

extern "C-unwind" {
    fn foo();
}

fn main() {
    // Calling a Rust function is fine regardless of ABI.
    foo::foo();
    // Call to foreign function, will cause the crate to be unlinkable to panic-abort if compiled with `-Cpanic=unwind`.
    unsafe { foo(); }
    //~^ WARNING call to foreign function with FFI-unwind ABI
    let ptr: extern "C-unwind" fn() = foo::foo;
    // Call to function pointer, will cause the crate to be unlinkable to panic-abort if compiled with `-Cpanic=unwind`.
    ptr();
    //~^ WARNING call to function pointer with FFI-unwind ABI
}
```

Fix #96926

`@rustbot` label: T-compiler F-c_unwind
2022-07-02 14:06:27 +00:00
Mark Rousskov
d8bfae4f99 Adjust for rustfmt order change 2022-07-01 18:13:55 -04:00
Dylan DPC
9dd3288557
Rollup merge of #98585 - cuviper:covariant-thinbox, r=thomcc
Make `ThinBox<T>` covariant in `T`

Just like `Box<T>`, we want `ThinBox<T>` to be covariant in `T`, but the
projection in `WithHeader<<T as Pointee>::Metadata>` was making it
invariant. This is now hidden as `WithOpaqueHeader`, which we type-cast
whenever the real `WithHeader<H>` type is needed.

Fixes the problem noted in <https://github.com/rust-lang/rust/issues/92791#issuecomment-1104636249>.
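
A minimal sketch of the variance issue, with a hypothetical trait and stand-in types rather than the actual std internals:

```rust
use core::marker::PhantomData;

trait HasMeta {
    type Metadata;
}

// Invariant in `T`: the field type goes through the projection
// `<T as HasMeta>::Metadata`, and the compiler treats projections as invariant.
struct WithProjectedHeader<T: ?Sized + HasMeta>(PhantomData<<T as HasMeta>::Metadata>);

// Covariant in `T`: the header is stored behind a concrete "opaque" type and
// only cast back to the real header type when it is actually needed.
struct OpaqueHeader;
struct WithOpaqueHeader<T: ?Sized>(OpaqueHeader, PhantomData<T>);

// Covariance in action: `WithOpaqueHeader<&'static str>` can shrink to a
// shorter lifetime. The analogous function for `WithProjectedHeader` would
// not compile.
fn shrink<'a>(x: WithOpaqueHeader<&'static str>) -> WithOpaqueHeader<&'a str> {
    x
}

fn main() {
    let header = WithOpaqueHeader::<&'static str>(OpaqueHeader, PhantomData);
    let _shorter: WithOpaqueHeader<&str> = shrink(header);
}
```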
2022-07-01 20:19:17 +05:30
Pietro Albini
6b2d3d5f3c
update cfg(bootstrap)s 2022-07-01 15:48:23 +02:00
bors
ca1e68b322 Auto merge of #98730 - matthiaskrgr:rollup-2c4d4x5, r=matthiaskrgr
Rollup of 10 pull requests

Successful merges:

 - #97629 ([core] add `Exclusive` to sync)
 - #98503 (fix data race in thread::scope)
 - #98670 (llvm-wrapper: adapt for LLVMConstExtractValue removal)
 - #98671 (Fix source sidebar bugs)
 - #98677 (For diagnostic information of Boolean, remind it as use the type: 'bool')
 - #98684 (add test for 72793)
 - #98688 (interpret: add From<&MplaceTy> for PlaceTy)
 - #98695 (use "or pattern")
 - #98709 (Remove unneeded methods declaration for old web browsers)
 - #98717 (get rid of tidy 'unnecessarily ignored' warnings)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-07-01 11:09:35 +00:00
Nicholas Nethercote
679c5ee244 Optimize Vec::insert for the case where index == len.
This is done by skipping the call to `copy` when the length is zero, which makes
the operation closer to `push`.

I did this recently for `SmallVec`
(https://github.com/servo/rust-smallvec/pull/282) and it was a big perf win in
one case. Although I don't have a specific use case in mind, it seems
worth doing it for `Vec` as well.

Things to note:
- In the `index < len` case, the number of conditions checked is
  unchanged.
- In the `index == len` case, the number of conditions checked increases
  by one, but the more expensive zero-length copy is avoided.
- In the `index > len` case the code now reserves space for the extra
  element before panicking. This seems like an unimportant change.
2022-07-01 06:46:30 +10:00
Matthias Krüger
ebecc13106
Rollup merge of #98503 - RalfJung:scope-race, r=m-ou-se
fix data race in thread::scope

Puts the `ScopeData` into an `Arc` so it sticks around as long as we need it.
This means one extra `Arc::clone` per spawned scoped thread, which I hope is fine.
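
A generic sketch of the pattern described, with hypothetical types rather than the actual `std::thread::scope` internals: the shared bookkeeping lives in an `Arc`, so it outlives the stack frame that created it, at the cost of one clone per spawn.

```rust
use std::sync::{
    atomic::{AtomicUsize, Ordering},
    Arc,
};
use std::thread;

struct ScopeData {
    running: AtomicUsize,
}

fn spawn_tracked(data: &Arc<ScopeData>) -> thread::JoinHandle<()> {
    // One extra `Arc::clone` per spawned thread keeps the bookkeeping alive
    // for as long as any thread might still touch it.
    let data = Arc::clone(data);
    data.running.fetch_add(1, Ordering::Relaxed);
    thread::spawn(move || {
        // ... the scoped closure would run here ...
        data.running.fetch_sub(1, Ordering::Release);
    })
}

fn main() {
    let data = Arc::new(ScopeData { running: AtomicUsize::new(0) });
    spawn_tracked(&data).join().unwrap();
    assert_eq!(data.running.load(Ordering::Acquire), 0);
}
```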

Fixes https://github.com/rust-lang/rust/issues/98498
r? `@m-ou-se`
2022-06-30 19:55:51 +02:00
Matthias Krüger
0e71d1f237
Rollup merge of #97629 - guswynn:exclusive_struct, r=m-ou-se
[core] add `Exclusive` to sync

(discussed here: https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Adding.20.60SyncWrapper.60.20to.20std)

`Exclusive` is a wrapper that only allows mutable access to the inner value if you have exclusive access to the wrapper. It acts like a compile-time mutex and holds an unconditional `Sync` implementation.

## Justification for inclusion into std
- This wrapper unblocks actual problems:
  - The example that I hit was a vector of `futures::future::BoxFuture`s causing a central struct in a script to be non-`Sync`. To work around it, you either write really difficult code or wrap the futures in a needless mutex.
- Easy to maintain: this struct is as simple as a wrapper can get, and its `Sync` implementation has very clear reasoning
- Fills a gap: `&/&mut` are to `RwLock` as `Exclusive` is to `Mutex`

## Public Api
```rust
// core::sync
#[derive(Default)]
pub struct Exclusive<T: ?Sized> { ... }

unsafe impl<T: ?Sized> Sync for Exclusive<T> {}

impl<T> Exclusive<T> {
    pub const fn new(t: T) -> Self;
    pub const fn into_inner(self) -> T;
}

impl<T: ?Sized> Exclusive<T> {
    pub const fn get_mut(&mut self) -> &mut T;
    pub const fn get_pin_mut(self: Pin<&mut Self>) -> Pin<&mut T>;
    pub const fn from_mut(r: &mut T) -> &mut Exclusive<T>;
    pub const fn from_pin_mut(r: Pin<&mut T>) -> Pin<&mut Exclusive<T>>;
}

impl<T: Future> Future for Exclusive<T> { ... }

impl<T> From<T> for Exclusive<T> { ... }
impl<T: ?Sized> Debug for Exclusive<T> { ... }
```
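
A minimal sketch of why the unconditional `Sync` impl is sound (an illustration, not the actual std implementation): the inner value is only reachable through `&mut self` or by ownership, so a shared `&Exclusive<T>` grants no access at all.

```rust
struct Exclusive<T: ?Sized> {
    inner: T,
}

// SAFETY: the inner value can only be reached via `&mut self` or by consuming
// `self`, so handing `&Exclusive<T>` to other threads exposes nothing of `T`.
unsafe impl<T: ?Sized> Sync for Exclusive<T> {}

impl<T> Exclusive<T> {
    const fn new(t: T) -> Self {
        Self { inner: t }
    }

    fn into_inner(self) -> T {
        self.inner
    }
}

impl<T: ?Sized> Exclusive<T> {
    fn get_mut(&mut self) -> &mut T {
        &mut self.inner
    }
}

fn assert_sync<T: Sync>(_: &T) {}

fn main() {
    // `Cell<i32>` is not `Sync`, but `Exclusive<Cell<i32>>` is.
    let mut cell = Exclusive::new(std::cell::Cell::new(5));
    assert_sync(&cell);
    cell.get_mut().set(6);
    assert_eq!(cell.into_inner().get(), 6);
}
```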

## Naming
This is a big bikeshed, but I felt that `Exclusive` captured its general purpose quite well.

## Stability and location
As this is so simple, it can be in `core`. I feel that it can be stabilized quite soon after it is merged, if the libs team feels it's reasonable to add. Also, I don't really know how unstable features work in std/core's codebases, so I might need help fixing them.

## Tips for review
The docs are probably the thing that most needs review! I tried my best, but I'm sure people have more experience than me writing docs for `core`.

### Implementation:
The API is mostly pulled from https://docs.rs/sync_wrapper/latest/sync_wrapper/struct.SyncWrapper.html (which is Apache-2.0 licensed), and the implementation is trivial:
- there's an unsafe justification for pinning
- there's an unsafe justification for the `Sync` impl (mostly reasoned about by `@danielhenrymantilla` here: https://github.com/Actyx/sync_wrapper/pull/2)
- and forwarding impls, starting with derivable ones and `Future`
2022-06-30 19:55:50 +02:00
The 8472
3fcf84a68e clarify that ExactSizeIterator::len returns the remaining length 2022-06-30 19:45:36 +02:00
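
A small illustration of the clarified wording:

```rust
fn main() {
    let data = [1, 2, 3];
    let mut iter = data.iter();
    iter.next();
    // `len` reports the number of remaining items, not the original length.
    assert_eq!(iter.len(), 2);
}
```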
Matthias Krüger
7dedec7be7
Rollup merge of #98652 - ojeda:warning-free-no_global_oom_handling, r=joshtriplett
`alloc`: clean and ensure `no_global_oom_handling` builds are warning-free

Rust 1.62.0 introduced a couple new `unused_imports` warnings
in `no_global_oom_handling` builds, making a total of 5 warnings.

<details>

```txt
warning: unused import: `Unsize`
 --> library/alloc/src/boxed/thin.rs:6:33
  |
6 | use core::marker::{PhantomData, Unsize};
  |                                 ^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: unused import: `from_fn`
  --> library/alloc/src/string.rs:51:18
   |
51 | use core::iter::{from_fn, FusedIterator};
   |                  ^^^^^^^

warning: unused import: `core::ops::Deref`
  --> library/alloc/src/vec/into_iter.rs:12:5
   |
12 | use core::ops::Deref;
   |     ^^^^^^^^^^^^^^^^

warning: associated function `shrink` is never used
   --> library/alloc/src/raw_vec.rs:424:8
    |
424 |     fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
    |        ^^^^^^
    |
    = note: `#[warn(dead_code)]` on by default

warning: associated function `forget_remaining_elements` is never used
   --> library/alloc/src/vec/into_iter.rs:126:19
    |
126 |     pub(crate) fn forget_remaining_elements(&mut self) {
    |                   ^^^^^^^^^^^^^^^^^^^^^^^^^
```

</details>

This PR cleans them and ensures no new ones are introduced
so that projects compiling `alloc` without infallible allocations
do not see them (and may want to enable `-Dwarnings`).

The couple `dead_code` ones may be reverted when some fallible
allocation support starts using them.
2022-06-29 20:35:04 +02:00
Dylan DPC
375ab3e44f
Rollup merge of #98516 - dlrobertson:uefi_va_list, r=joshtriplett
library: fix uefi va_list type definition

For uefi the `va_list` should always be the void pointer variant.

Related to: https://github.com/rust-lang/rust/issues/44930
2022-06-29 17:59:34 +05:30
Dylan DPC
3f2ba25159
Rollup merge of #98479 - leocth:atomic-bool-fetch-not, r=joshtriplett
Add `fetch_not` method on `AtomicBool`

This PR adds a `fetch_not` method on `AtomicBool` that performs a logical NOT on the inner value.
Internally, this just calls the `fetch_xor` method with the value `true`.

[See this IRLO discussion](https://internals.rust-lang.org/t/could-we-have-fetch-not-for-atomicbool-s/16881)
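
A sketch of the described equivalence using the stable `fetch_xor` (the new `fetch_not` itself was still feature-gated at the time):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let b = AtomicBool::new(true);
    // `fetch_not(order)` behaves like XOR with `true`: it flips the stored
    // value and returns the previous one.
    let previous = b.fetch_xor(true, Ordering::SeqCst);
    assert!(previous);
    assert!(!b.load(Ordering::SeqCst));
}
```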
2022-06-29 17:59:32 +05:30
Dylan DPC
45740acd34
Rollup merge of #97423 - m-ou-se:memory-ordering-intrinsics, r=tmiasko
Simplify memory ordering intrinsics

This changes the names of the atomic intrinsics to always fully include their memory ordering arguments.

```diff
- atomic_cxchg
+ atomic_cxchg_seqcst_seqcst

- atomic_cxchg_acqrel
+ atomic_cxchg_acqrel_release

- atomic_cxchg_acqrel_failrelaxed
+ atomic_cxchg_acqrel_relaxed

// And so on.
```

- `seqcst` is no longer implied
- The failure ordering on cxchg is no longer implied in some cases, but is now always explicitly part of the name.
- `release` is no longer shortened to just `rel`. That was especially confusing, since `relaxed` also starts with `rel`.
- `acquire` is no longer shortened to just `acq`, such that the names now all match the `std::sync::atomic::Ordering` variants exactly.
- This now allows for more combinations on the compare exchange operations, such as `atomic_cxchg_acquire_release`, which is necessary for #68464.
- This PR only exposes the new possibilities through unstable intrinsics, but not yet through the stable API. That's for [a separate PR](https://github.com/rust-lang/rust/pull/98383) that requires an FCP.

Suffixes for operations with a single memory order:

| Order   | Before       | After      |
|---------|--------------|------------|
| Relaxed | `_relaxed`   | `_relaxed` |
| Acquire | `_acq`       | `_acquire` |
| Release | `_rel`       | `_release` |
| AcqRel  | `_acqrel`    | `_acqrel`  |
| SeqCst  | (none)       | `_seqcst`  |

Suffixes for compare-and-exchange operations with two memory orderings:

| Success | Failure | Before                   | After              |
|---------|---------|--------------------------|--------------------|
| Relaxed | Relaxed | `_relaxed`               | `_relaxed_relaxed` |
| Relaxed | Acquire |                       | `_relaxed_acquire` |
| Relaxed | SeqCst  |                       | `_relaxed_seqcst`  |
| Acquire | Relaxed | `_acq_failrelaxed`       | `_acquire_relaxed` |
| Acquire | Acquire | `_acq`                   | `_acquire_acquire` |
| Acquire | SeqCst  |                       | `_acquire_seqcst`  |
| Release | Relaxed | `_rel`                   | `_release_relaxed` |
| Release | Acquire |                       | `_release_acquire` |
| Release | SeqCst  |                       | `_release_seqcst`  |
| AcqRel  | Relaxed | `_acqrel_failrelaxed`    | `_acqrel_relaxed`  |
| AcqRel  | Acquire | `_acqrel`                | `_acqrel_acquire`  |
| AcqRel  | SeqCst  |                       | `_acqrel_seqcst`   |
| SeqCst  | Relaxed | `_failrelaxed`           | `_seqcst_relaxed`  |
| SeqCst  | Acquire | `_failacq`               | `_seqcst_acquire`  |
| SeqCst  | SeqCst  | (none)                   | `_seqcst_seqcst`   |
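
For orientation, the stable-API combination corresponding to, e.g., the new `_acquire_relaxed` suffix (success = Acquire, failure = Relaxed) is an ordinary `compare_exchange` call; the renamed intrinsics themselves remain unstable implementation details:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let x = AtomicUsize::new(0);
    // success ordering = Acquire, failure ordering = Relaxed; after this PR
    // this lowers to the fully spelled-out `atomic_cxchg_acquire_relaxed`.
    let _ = x.compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed);
}
```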
2022-06-29 10:28:18 +05:30
Miguel Ojeda
83addf2540 alloc: fix no_global_oom_handling warnings
Rust 1.62.0 introduced a couple new `unused_imports` warnings
in `no_global_oom_handling` builds, making a total of 5 warnings:

```txt
warning: unused import: `Unsize`
 --> library/alloc/src/boxed/thin.rs:6:33
  |
6 | use core::marker::{PhantomData, Unsize};
  |                                 ^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: unused import: `from_fn`
  --> library/alloc/src/string.rs:51:18
   |
51 | use core::iter::{from_fn, FusedIterator};
   |                  ^^^^^^^

warning: unused import: `core::ops::Deref`
  --> library/alloc/src/vec/into_iter.rs:12:5
   |
12 | use core::ops::Deref;
   |     ^^^^^^^^^^^^^^^^

warning: associated function `shrink` is never used
   --> library/alloc/src/raw_vec.rs:424:8
    |
424 |     fn shrink(&mut self, cap: usize) -> Result<(), TryReserveError> {
    |        ^^^^^^
    |
    = note: `#[warn(dead_code)]` on by default

warning: associated function `forget_remaining_elements` is never used
   --> library/alloc/src/vec/into_iter.rs:126:19
    |
126 |     pub(crate) fn forget_remaining_elements(&mut self) {
    |                   ^^^^^^^^^^^^^^^^^^^^^^^^^
```

This patch cleans them so that projects compiling `alloc` without
infallible allocations do not see the warnings. It also enables
the use of `-Dwarnings`.

The couple `dead_code` ones may be reverted when some fallible
allocation support starts using them.

Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2022-06-29 04:44:23 +02:00
bors
8308806403 Auto merge of #98632 - matthiaskrgr:rollup-peg868d, r=matthiaskrgr
Rollup of 11 pull requests

Successful merges:

 - #98548 (rustdoc-json: Allow Typedef to be different in sanity assert)
 - #98560 (Add regression test for #85907)
 - #98564 (Remove references to `./tmp` in-tree)
 - #98602 (Add regression test for #80074)
 - #98606 (⬆️ rust-analyzer)
 - #98609 (Fix ICE for associated constant generics)
 - #98611 (Fix glob import ICE in rustdoc JSON format)
 - #98617 (Remove feature `const_option` from std)
 - #98619 (Fix mir-opt wg name)
 - #98621 (llvm-wrapper: adapt for removal of the ASanGlobalsMetadataAnalysis LLVM API)
 - #98623 (fix typo in comment)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-06-28 18:36:42 +00:00
Matthias Krüger
a3bdd46431
Rollup merge of #98617 - ChrisDenton:const-unwrap, r=Mark-Simulacrum
Remove feature `const_option` from std

This is part of the effort to reduce the number of unstable features used by std. This one is easy as it's only used in one place.
2022-06-28 18:34:33 +02:00
bors
94e93749ab Auto merge of #98188 - mystor:fast_group_punct, r=eddyb
proc_macro/bridge: stop using a remote object handle for proc_macro Punct and Group

This is the third part of https://github.com/rust-lang/rust/pull/86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. This patch transforms the `Punct` and `Group` types into structs serialized over IPC rather than handles, making them more efficient to create and manipulate from within proc-macros.
2022-06-28 16:10:30 +00:00
Nika Layzell
64a7d57046 review changes
longer names for RPC generics and reduced dependency on macros in the server.
2022-06-28 09:54:29 -04:00
Chris Denton
720c430822
Add a fixme comment 2022-06-28 12:18:16 +01:00
Chris Denton
2ee92419dd
Remove feature const_option from std 2022-06-28 11:37:48 +01:00
Dylan DPC
e5b82de04c
Rollup merge of #98595 - cuviper:send-sync-thinbox, r=m-ou-se
Implement `Send` and `Sync` for `ThinBox<T>`

Just like `Box<T>`, `ThinBox<T>` owns its data on the heap, so it should
implement `Send` and `Sync` when `T` does.

This extends tracking issue #92791.
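
The impls described presumably take the usual owning-container shape; a self-contained sketch with a stand-in type (not the actual `ThinBox` source):

```rust
use core::marker::PhantomData;
use core::ptr::NonNull;

// Stand-in for `ThinBox<T>`: a thin heap pointer plus an ownership marker.
struct ThinBoxLike<T: ?Sized> {
    ptr: NonNull<u8>,
    _owned: PhantomData<T>,
}

// SAFETY (sketch): the container uniquely owns the heap data holding a `T`,
// so sending or sharing the container is as safe as sending or sharing the
// `T` itself; the same reasoning applies to `Box<T>`.
unsafe impl<T: ?Sized + Send> Send for ThinBoxLike<T> {}
unsafe impl<T: ?Sized + Sync> Sync for ThinBoxLike<T> {}

fn main() {}
```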
2022-06-28 15:30:07 +05:30
Dylan DPC
f181ae9946
Rollup merge of #98555 - mkroening:hermit-lock-init, r=m-ou-se
Hermit: Fix initializing lazy locks

Closes https://github.com/hermitcore/rusty-hermit/issues/322.

The initialization function of hermit's `Condvar` is not called since https://github.com/rust-lang/rust/pull/97647 and was erroneously removed in https://github.com/rust-lang/rust/pull/97879.

r? ``@m-ou-se``

CC: ``@stlankes``
2022-06-28 15:30:06 +05:30
Dylan DPC
ff223ff297
Rollup merge of #98430 - camsteffen:flatten-refactor, r=joshtriplett
Refactor iter adapters with less macros

Just some code cleanup. Introduced a utility function `and_then_or_clear` for each of the chain, flatten and fuse iter adapter impls. This reduces code nicely for flatten, but admittedly the other modules are more of a lateral move, replacing macros with a function. But I think consistency across the modules and avoiding macros when possible is good.
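
A sketch of what a helper with that name plausibly looks like (the exact body is an assumption): run a closure on the inner value and clear the `Option` when it yields `None`, which is the pattern repeated in `Chain`, `Flatten` and `Fuse`.

```rust
fn and_then_or_clear<T, U>(
    opt: &mut Option<T>,
    f: impl FnOnce(&mut T) -> Option<U>,
) -> Option<U> {
    // Empty slot: nothing to do.
    let result = f(opt.as_mut()?);
    // A `None` result means this side is exhausted; clear the slot so later
    // calls can skip it cheaply (e.g. the front iterator of a `Chain`).
    if result.is_none() {
        *opt = None;
    }
    result
}

fn main() {
    let mut front = Some(vec![1, 2].into_iter());
    assert_eq!(and_then_or_clear(&mut front, |it| it.next()), Some(1));
    assert_eq!(and_then_or_clear(&mut front, |it| it.next()), Some(2));
    assert_eq!(and_then_or_clear(&mut front, |it| it.next()), None);
    assert!(front.is_none()); // the exhausted side was cleared
}
```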
2022-06-28 15:30:05 +05:30
Mara Bos
4982a59986 Rename/restructure memory ordering intrinsics. 2022-06-28 08:58:27 +02:00
bors
64eb9ab869 Auto merge of #98324 - conradludgate:write-vectored-vec, r=Mark-Simulacrum
attempt to optimise vectored write

benchmarked:

old:
```
test io::cursor::tests::bench_write_vec                     ... bench:          68 ns/iter (+/- 2)
test io::cursor::tests::bench_write_vec_vectored            ... bench:         913 ns/iter (+/- 31)
```

new:
```
test io::cursor::tests::bench_write_vec                     ... bench:          64 ns/iter (+/- 0)
test io::cursor::tests::bench_write_vec_vectored            ... bench:         747 ns/iter (+/- 27)
```

More unsafe than I wanted (and smaller gains) in the end, but it still does the job.
2022-06-28 06:25:19 +00:00
Josh Stone
6400736142 Implement Send and Sync for ThinBox<T>
Just like `Box<T>`, `ThinBox<T>` owns its data on the heap, so it should
implement `Send` and `Sync` when `T` does.
2022-06-27 15:49:59 -07:00
Ralf Jung
af0c1fe83d fix data race in thread::scope 2022-06-27 16:50:42 -04:00
Matthias Krüger
f266821d8f
Rollup merge of #98587 - RalfJung:core-tests, r=thomcc
libcore tests: avoid int2ptr casts

We don't need any of these pointers to actually be dereferenceable, so using `ptr::invalid` should be fine. And then we can run Miri with strict provenance enforcement on the tests.
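
A sketch of the kind of rewrite described, assuming the then-unstable strict-provenance API (`#![feature(strict_provenance)]`, nightly-only at the time):

```rust
#![feature(strict_provenance)]

fn main() {
    // Before: an int-to-pointer cast, which strict-provenance Miri flags.
    let _old: *const u8 = 0x100 as *const u8;
    // After: an address-only pointer with no provenance; fine for tests that
    // never dereference it.
    let _new: *const u8 = core::ptr::invalid(0x100);
}
```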
2022-06-27 22:35:14 +02:00
Matthias Krüger
b52c362b8b
Rollup merge of #98579 - RalfJung:alloc-tests, r=thomcc
liballoc tests: avoid int2ptr cast

I think we don't need `ptr::from_exposed_addr` here; `ptr::invalid` should be enough for this test. (And this makes Miri less unhappy when running these tests.)
2022-06-27 22:35:13 +02:00
Ralf Jung
8c977cfda8 libcore tests: avoid int2ptr casts 2022-06-27 13:30:44 -04:00
Josh Stone
e67e165585 Make ThinBox<T> covariant in T
Just like `Box<T>`, we want `ThinBox<T>` to be covariant in `T`, but the
projection in `WithHeader<<T as Pointee>::Metadata>` was making it
invariant. This is now hidden as `WithOpaqueHeader`, which we type-cast
whenever the real `WithHeader<H>` type is needed.
2022-06-27 10:05:55 -07:00
Ralf Jung
9b497abb9a liballoc tests: avoid int2ptr cast 2022-06-27 10:50:56 -04:00
Nika Layzell
f28dfdf1c7 proc_macro: stop using a remote object handle for Group
This greatly reduces round-trips to fetch relevant extra information about the
token in proc macro code, and avoids RPC messages to create Group tokens.
2022-06-26 22:20:33 -04:00
Nika Layzell
72bfe618fa proc_macro: stop using a remote object handle for Punct
This greatly reduces round-trips to fetch relevant extra information about the
token in proc macro code, and avoids RPC messages to create Punct tokens.
2022-06-26 22:20:33 -04:00
Wilfred Hughes
1c1ae78db7
Fix spelling in SAFETY comment
"can not" should be "cannot", and add punctuation.
2022-06-26 19:17:34 -07:00
bors
3b0d4813ab Auto merge of #98187 - mystor:fast_span_call_site, r=eddyb
proc_macro/bridge: cache static spans in proc_macro's client thread-local state

This is the second part of https://github.com/rust-lang/rust/pull/86822, split off as requested in https://github.com/rust-lang/rust/pull/86822#pullrequestreview-1008655452. This patch removes the RPC calls required for the very common operations of `Span::call_site()`, `Span::def_site()` and `Span::mixed_site()`.

Some notes:

This part is one of the ones I don't love as a final solution from a design standpoint, because I don't like how the spans are serialized immediately at macro invocation. I think a more elegant solution might've been to reserve special IDs for `call_site`, `def_site`, and `mixed_site` at compile time (either starting at 1 or from `u32::MAX`) and making reading a Span handle automatically map these IDs to the relevant values, rather than doing extra serialization.

This would also have an advantage for potential future work to allow `proc_macro` to operate more independently from the compiler (e.g. to reduce the necessity of `proc-macro2`), as methods like `Span::call_site()` could be made to function without access to the compiler backend.

That was unfortunately tricky to do at the time, as this was the first part of the patches that I wrote. After the later parts (#98188, #98189), the other uses of `InternedStore` are removed, meaning that a custom serialization strategy for `Span` is easier to implement.

If we want to go that path, we'll still need the majority of the work to split the bridge object and introduce the `Context` trait for free methods, and it will be easier to do after `Span` is the only user of `InternedStore` (after #98189).
2022-06-26 21:28:24 +00:00
Martin Kröning
0c8860273c Hermit: Make Mutex::init a no-op 2022-06-26 23:20:41 +02:00
Martin Kröning
f954f7b23b Hermit: Fix initializing lazy locks 2022-06-26 23:19:38 +02:00
Matthias Krüger
935958e6e4
Rollup merge of #98541 - Veykril:patch-2, r=Dylan-DPC
Update `std::alloc::System` doc example code style

`return` on the last line of a block is unidiomatic, so I don't think the example should be using it here.
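
For illustration (a hypothetical snippet, not the actual doc example):

```rust
// Unidiomatic: explicit `return` on the last line of the block.
fn double_explicit(x: usize) -> usize {
    return x * 2;
}

// Idiomatic: the block's trailing expression is its value.
fn double(x: usize) -> usize {
    x * 2
}

fn main() {
    assert_eq!(double_explicit(3), double(3));
}
```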
2022-06-26 19:47:09 +02:00
Matthias Krüger
e8a2e265b5
Rollup merge of #97908 - iago-lito:stabilize_nonzero_checked_ops_constness, r=scottmcm
Stabilize NonZero* checked operations constness.

Partial stabilization for #97547 (continued).
2022-06-26 19:47:02 +02:00
Matthias Krüger
c348beacea
Rollup merge of #97140 - joboet:solid_parker, r=m-ou-se
std: use an event-flag-based thread parker on SOLID

`Mutex` and `Condvar` are being replaced by more efficient implementations, which need thread parking themselves (see #93740). Therefore, the generic `Parker` needs to be replaced on all platforms where the new lock implementation will be used, which, after #96393, are SOLID, SGX and Hermit (more PRs coming soon).

SOLID, conforming to the [μITRON specification](http://www.ertl.jp/ITRON/SPEC/FILE/mitron-400e.pdf), has event flags, which are thread-parking primitives very similar to `Parker`. However, they do not make any atomic ordering guarantees (even though those can probably be assumed) and necessitate a system call even when the thread token is already available. Hence, this `Parker`, like the Windows parker, uses an extra atomic state variable.

I future-proofed the code by wrapping the event flag in a `WaitFlag` structure, as both SGX and Hermit can share the Parker implementation; they just have slightly different primitives (SGX uses signals and Hermit has a thread-blocking API).

`@kawadakk` I assume you are the target maintainer? Could you test this for me?
2022-06-26 19:46:59 +02:00
Nika Layzell
e32ee19b3a proc_macro: Rename ExpnContext to ExpnGlobals, and unify method on Server trait 2022-06-26 12:48:33 -04:00
Conrad Ludgate
803083a9d9 attempt to optimise vectored write 2022-06-26 17:15:31 +01:00
bors
788ddedb0d Auto merge of #98190 - nnethercote:optimize-derive-Debug-code, r=scottmcm
Improve `derive(Debug)`

r? `@ghost`
2022-06-26 15:00:04 +00:00
Lukas Wirth
756118e2b9
Update std::alloc::System docs 2022-06-26 16:31:29 +02:00
scottmcm
2339bb20a6
Update since to 1.64 (since we're after 1.63) 2022-06-26 08:45:53 +00:00
leocth
9c5ae20c59 forgot about the feature flag in the doctest 2022-06-26 10:49:05 +08:00