Don't force include Windows goop when documenting
Why do we need to include all the windows bits on non-windows platforms? Let's try not doing that.
Possible alternative to #111394, if it works.
Fix incorrect implication of transmuting slices
`transmute::<&[u8]>` would be useful, and as a beginner it is confusing to see the documentation casually conflate the types `&[u8]` and `[u8; SZ]`.
Remove some `assume`s from slice iterators that don't do anything
Because the start pointer in the iterators is already a `NonNull`, we emit the appropriate `!nonnull` metadata when loading the pointer to tell LLVM that it's non-null.
Probably the best way to see that it's the metadata that's important (and not the `assume`) is to observe that LLVM actually *removes* the `assume` from the optimized IR: <https://rust.godbolt.org/z/KhE6G963n>.
(I also checked that, yes, the if-not-ZST `assume` on `end` is still doing something: it's how there's a `!nonnull` metadata on its load, even though it's an ordinary raw pointer. The codegen test added in this PR fails if the other `assume` is removed.)
Revert "Populate effective visibilities in `rustc_privacy`"
This reverts commit cff85f22f5, cc #110907. It needs to be fixed, but there are too many issues being reported, so I wanted to put up a revert until a proper fix can be committed.
Fixes a ton of issues where private but still reachable impls were missing during codegen:
Fixes #111320
Fixes #111321
Fixes #111334
Fixes #111357
Fixes #111368
Fixes #111373
Fixes #111377
Fixes #111386
Fixes #111387
`@bors` p=1
r? `@petrochenkov`
Implement builtin # syntax and use it for offset_of!(...)
Add `builtin #` syntax to the parser, as well as a generic infrastructure to support both item and expression position builtin syntaxes. The PR also uses this infrastructure for the implementation of the `offset_of!` macro, added by #106934.
cc `@petrochenkov` `@DrMeepster`
cc #110680 `builtin #` tracking issue
cc #106655 `offset_of!` tracking issue
Start using `windows-sys` for Windows FFI bindings in std
Switch to using windows-sys for FFI. In order to avoid some currently contentious issues, this uses windows-bindgen to generate a smaller set of bindings instead of using the full crate.
Unlike the windows-sys crate, the generated bindings use `*mut c_void` for handle types instead of `isize`. This is to sidestep opsem concerns about mixing pointer types and integers between languages. Note that `SOCKET` remains defined as an integer, but instead of being a `usize` it's changed to fit the [standard library definition](a41fc00eaf/library/std/src/os/windows/raw.rs (L12-L16)):
```rust
#[cfg(target_pointer_width = "32")]
pub type SOCKET = u32;
#[cfg(target_pointer_width = "64")]
pub type SOCKET = u64;
```
The generated bindings also customize the `#[link]` imports. I hope to switch to using raw-dylib, but I don't want to tie that too closely to the switch to windows-sys.
---
Changes outside of the bindings are, for the most part, fairly minimal (e.g. some differences in `*mut` vs. `*const`, or a few types that differ). One issue is that our own bindings sometimes mix in higher-level types, like `BorrowedHandle`. This is pretty ad hoc though.
Add `#[inline]` to functions that are never called
This reduces libcore's binary size by ~300 bytes. Not much, but these functions are never called, so it doesn't make sense for them to end up in the binary anyway.
Always const-evaluate the GCD in `slice::align_to_offsets`
Use an inline `const`-block to force the compiler to calculate the GCD at compile time, even in debug mode. This shouldn't affect the behavior of the program at all, but it drastically cuts down on the number of instructions emitted with optimizations disabled.
With the current implementation, a single `slice::align_to` instantiation (specifically `<[u8]>::align_to::<u128>()`) generates 676 instructions (on x86-64). Forcing the GCD computation to be const cuts it down to 327 instructions, so just over 50% less. This is obviously not representative of actual runtime gains, but I still see it as a significant win as long as it doesn't degrade compile times.
Not having to worry about LLVM const-evaluating the GCD function also allows it to use the textbook recursive Euclidean algorithm instead of a much more complicated iterative implementation with multiple `unsafe` blocks.
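For illustration, a minimal sketch of the pattern being described (the helper names here are made up, this is not the actual library code): a recursive `const fn` GCD whose result is forced to be computed at compile time via an inline `const` block, even in debug builds.

```rust
const fn gcd(a: usize, b: usize) -> usize {
    if b == 0 { a } else { gcd(b, a % b) }
}

fn element_counts<T, U>() -> usize {
    // The inline `const` block is evaluated at compile time for every
    // instantiation, even with optimizations disabled.
    const { gcd(core::mem::size_of::<T>(), core::mem::size_of::<U>()) }
}

fn main() {
    assert_eq!(element_counts::<u8, u128>(), 1);
    assert_eq!(element_counts::<u32, u128>(), 4);
}
```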
Remove `identity_future` from stdlib
This function/lang_item was introduced in #104321 as a temporary workaround of future lowering. The usage and need for it went away in #104833.
After a bootstrap update, the function itself can be removed from `std`.
STD support for PSVita
This PR adds std support for `armv7-sony-vita-newlibeabihf` target.
The work here is fairly similar to #95897, just for a different target platform.
This depends on the following pull requests:
- rust-lang/backtrace-rs#523
- rust-lang/libc#3209
enable `rust_2018_idioms` lint group for doctests
With this change, `rust_2018_idioms` lint group will be enabled for compiler/libstd doctests.
Resolves #106086
Resolves #99144
Signed-off-by: ozkanonur <work@onurozkan.dev>
Constify `[u8]::is_ascii` (unstably)
UTF-8 checking in `const fn` stabilized back in 1.63 (#97367), but apparently somehow ASCII checking was never const-ified, despite being simpler.
New constness-tracking issue for `is_ascii`: #111090
I noticed this working on `ascii::Char`: #110998
Remove calls to `mem::forget` and `mem::replace` in `Option::get_or_insert_with`.
This removes the unneeded calls to `mem::forget` and `mem::replace` in `Option::get_or_insert_with`.
clean up `transmute`s in `core`
* Use `transmute_unchecked` instead of `transmute_copy` for `MaybeUninit::transpose`.
* Use manual transmute for `Option<Ordering>` → `i8`.
Fix MXCSR configuration dependent timing
Depending on the (potentially secret) data some vector instructions operate on and the contents of MXCSR, instruction retirement may be delayed by one cycle. This is a potential side channel.
This PR fixes this vulnerability for the `x86_64-fortanix-unknown-sgx` platform by loading MXCSR with `0x1fbf` through an `xrstor` instruction when the enclave is entered and executing an `lfence` immediately after. Other changes of the MXCSR happen only when the enclave is about to be exited and no vector instructions will be executed before it will actually do so. Users of EDP who change the MXCSR and do wish to defend against this side channel, will need to implement the software mitigation described [here](https://www.intel.com/content/www/us/en/developer/articles/technical/software-security-guidance/best-practices/mxcsr-configuration-dependent-timing.html).
cc: `@jethrogb` `@monokles`
Add FreeBSD cpuset support to `std::thread::available_concurrency`
Use libc::cpuset_getaffinity to determine the CPUs available to the current process.
The existing sysconf and sysctl paths are left as fallback.
btree_map: `Cursor{,Mut}::peek_prev` must agree
Our `Cursor::peek_prev` and `CursorMut::peek_prev` must agree on how to behave when they are called on the "null element". This will fix rust-lang#111228.
r? `@Amanieu`
Fix `checked_{add,sub}_duration` incorrectly returning `None` when `other` has more than `i64::MAX` seconds
Use `checked_{add,sub}_unsigned` in `checked_{add,sub}_duration` so that the correct result is returned when adding/subtracting durations with more than `i64::MAX` seconds.
avoid duplicating TLS state between test std and realstd
This basically re-lands https://github.com/rust-lang/rust/pull/100201 and https://github.com/rust-lang/rust/pull/106638, which got reverted by https://github.com/rust-lang/rust/pull/110861. This works around 2 Miri limitations:
- Miri doesn't support the magic linker section that our Windows TLS support relies on, and instead knows where in std to find the symbol that stores the thread callback.
- For macOS, Miri only supports at most one destructor to be registered per thread.
The 2nd would not be very hard to fix (though the intended destructor order is unclear); the first would be a lot of work to fix. Neither of these is a problem for regular Rust code, but in the std test suite we have essentially 2 copies of the std code and then these both become issues. To avoid that we have the std test crate import the TLS code from the real std instead of having its own copy.
r? `@m-ou-se`
Add a `sysroot` crate to represent the standard library crates
This adds a dummy crate named `sysroot` to represent the standard library target instead of using the `test` crate. This allows the removal of `proc_macro` as a dependency of `test`, allowing these 2 crates to build in parallel and saving around 9 seconds locally.
Replace generic thread parker with explicit no-op parker
With #98391 merged, all platforms supporting threads now have their own parking implementations. Therefore, the generic implementation can be removed. On the remaining platforms (really just WASM without atomics), parking is not supported, so calls to `thread::park` now return instantly, which is [allowed by their API](https://doc.rust-lang.org/nightly/std/thread/fn.park.html). This is a change in behaviour, as spurious wakeups do not currently occur since all platforms guard against them. It is invalid to depend on this, but I'm still going to tag this as libs-api for confirmation.
`@rustbot` label +T-libs +T-libs-api +A-atomic
r? rust-lang/libs
Add cross-language LLVM CFI support to the Rust compiler
This PR adds cross-language LLVM Control Flow Integrity (CFI) support to the Rust compiler by adding the `-Zsanitizer-cfi-normalize-integers` option to be used with Clang `-fsanitize-cfi-icall-normalize-integers` for normalizing integer types (see https://reviews.llvm.org/D139395).
It provides forward-edge control flow protection for C/C++ and Rust "mixed binaries" (i.e., for when C- or C++-compiled code and Rust-compiled code share the same virtual address space). For more information about LLVM CFI and cross-language LLVM CFI support for the Rust compiler, see the design document in the tracking issue #89653.
Cross-language LLVM CFI can be enabled with `-Zsanitizer=cfi` and `-Zsanitizer-cfi-normalize-integers`, and requires proper (i.e., non-rustc) LTO (i.e., `-Clinker-plugin-lto`).
Thank you again, `@bjorn3`, `@nikic`, `@samitolvanen`, and the Rust community for all the help!
Implement tuple<->array conversions via `From`
This PR adds the following impls that convert between homogeneous tuples and arrays of the corresponding lengths:
```rust
impl<T> From<[T; 1]> for (T,) { ... }
impl<T> From<[T; 2]> for (T, T) { ... }
/* ... */
impl<T> From<[T; 12]> for (T, T, T, T, T, T, T, T, T, T, T, T) { ... }
impl<T> From<(T,)> for [T; 1] { ... }
impl<T> From<(T, T)> for [T; 2] { ... }
/* ... */
impl<T> From<(T, T, T, T, T, T, T, T, T, T, T, T)> for [T; 12] { ... }
```
IMO these are quite uncontroversial but note that they are, just like any other trait impls, insta-stable.
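For illustration, a small usage example of the impls listed above (assuming the impls are available, i.e. after this change lands):

```rust
fn main() {
    let array: [i32; 3] = [1, 2, 3];
    // Array -> homogeneous tuple of the same length.
    let tuple: (i32, i32, i32) = array.into();
    assert_eq!(tuple, (1, 2, 3));

    // And back again: tuple -> array.
    let back: [i32; 3] = tuple.into();
    assert_eq!(back, array);
}
```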
Some data-independent-timing vector instructions may have subtle data-dependent timing due to the MXCSR configuration; depending on (potentially secret) data, instruction retirement may be delayed by one cycle.
Currently it simply strips off the prefix, which could lead to the wrong path being returned (e.g. if it's not a drive path or if the path contains trailing spaces, etc.). This can be fixed by simply changing the `\??\` prefix to `\\?\` and then attempting to convert to a user path.
Ensure the test library issues json strings line-by-line
#108659 introduces a custom test display implementation. It does so by using libtest to output json. The stdout is read line by line and parsed. The code trims the line read and checks whether it starts with a `{` and ends with a `}`.
Unfortunately, there is a race condition in how json data is written to stdout. The `write_message` function calls `self.out.write_all` repeatedly to write a buffer that contains (partial) json data, or a new line. There is no lock around the `self.out.write_all` functions. Similarly, the `write_message` function itself is called with only partial json data. As these functions are called from concurrent threads, this may result in json data ending up on the same stdout line. This PR avoids this by buffering the complete json data before issuing a single `self.out.write_all`.
(#109484 implemented a partial fix for this issue; it only avoids that failed json parsing would result in a panic.)
cc: `@jethrogb`, `@pietroalbini`
Remove `all` in target_thread_local cfg
I think it was left there by mistake after the previous refactoring. I just came across it while rebasing to master.
Add `ConstParamTy` trait
This is a bit sketch, but idk.
r? `@BoxyUwU`
Yet to be done:
- [x] ~~Figure out if it's okay to implement `StructuralEq` for primitives / possibly remove their special casing~~ (it should be okay, but maybe not in this PR...)
- [ ] Maybe refactor the code a little bit
- [x] Use a macro to make impls a bit nicer
Future work:
- [ ] Actually™ use the trait when checking if a `const` generic type is allowed
- [ ] _Really_ refactor the surrounding code
- [ ] Refactor `marker.rs` into multiple modules for each "theme" of markers
Refactor core::char::EscapeDefault and co. structures
Change the core::char::{EscapeUnicode, EscapeDefault, EscapeDebug} structures from using a state machine to computing the escaped sequence upfront, with iteration then just going through the characters.
This is arguably simpler since it’s easier to think about having
a buffer and start..end range to iterate over rather than thinking
about a state machine.
This also harmonises the implementation of the aforementioned iterators and the core::ascii::EscapeDefault struct. This is done by introducing a new helper EscapeIterInner struct which holds the buffer and offers simple methods for iterating over the range.
As a side effect, this probably optimises the Display implementation for those types since, rather than calling write_char repeatedly, write_str is invoked once. On 64-bit platforms, it also reduces the size of some of the structs:
| Struct | Before (bytes) | After (bytes) |
|----------------------------|----------------|---------------|
| core::char::EscapeUnicode | 16 | 12 |
| core::char::EscapeDefault | 16 | 12 |
| core::char::EscapeDebug | 16 | 16 |
My ulterior motive, and the reason I started looking into this, is the addition of an as_str method to the iterators. With this change that becomes trivial. It's also going to be trivial to implement DoubleEndedIterator if that's ever desired.
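To illustrate the shape of the change, here is a simplified sketch (field and type names are made up for illustration, this is not the actual core internals): the escaped form is computed up front into a fixed buffer, and iteration just advances a start..end range over it.

```rust
use core::ops::Range;

/// Simplified stand-in for the helper described above: a pre-computed
/// escape buffer plus the range of bytes not yet yielded.
struct EscapeIterInner<const N: usize> {
    data: [u8; N],
    alive: Range<u8>,
}

impl<const N: usize> EscapeIterInner<N> {
    fn next(&mut self) -> Option<char> {
        self.alive.next().map(|i| self.data[usize::from(i)] as char)
    }

    fn as_str(&self) -> &str {
        // Escaped output is always ASCII, so the slice is valid UTF-8.
        let range = usize::from(self.alive.start)..usize::from(self.alive.end);
        core::str::from_utf8(&self.data[range]).unwrap()
    }
}

fn main() {
    // b"\\n" is the escaped form of '\n'.
    let mut it = EscapeIterInner { data: *b"\\n", alive: 0..2 };
    assert_eq!(it.as_str(), "\\n");
    assert_eq!(it.next(), Some('\\'));
    assert_eq!(it.next(), Some('n'));
    assert_eq!(it.next(), None);
}
```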
Make sure the implementation of TcpStream::as_raw_fd is fully inlined
Currently the following function:
```rust
use std::os::fd::{AsRawFd, RawFd};
use std::net::TcpStream;
pub fn as_raw_fd(socket: &TcpStream) -> RawFd {
socket.as_raw_fd()
}
```
is optimized to the following:
```asm
example::as_raw_fd:
push rax
call qword ptr [rip + <std::net::tcp::TcpStream as std::sys_common::AsInner<std::sys_common::net::TcpStream>>::as_inner@GOTPCREL]
mov rdi, rax
call qword ptr [rip + std::sys_common::net::TcpStream::socket@GOTPCREL]
mov rdi, rax
pop rax
jmp qword ptr [rip + _ZN73_$LT$std..sys..unix..net..Socket$u20$as$u20$std..os..fd..raw..AsRawFd$GT$9as_raw_fd17h633bcf7e481df8bbE@GOTPCREL]
```
I think it would make more sense to inline the trivial functions used within `TcpStream::as_raw_fd`.
Update wasi_clock_time_api ref.
Closes #110809
>Preview0 corresponded to the import module name wasi_unstable. It was also called snapshot_0 in some places. It was short-lived, and the changes to preview1 were minor, so the focus here is on preview1.
We use the `preview1` doc according to the above quote from the [WASI legacy Readme](https://github.com/WebAssembly/WASI/blob/main/legacy/README.md).
Add 64-bit `time_t` support on 32-bit glibc Linux to `set_times`
Add support to `set_times` for 64-bit `time_t` on 32-bit glibc Linux platforms which have a 32-bit `time_t`. Split from #109773.
Tracking issue: #98245
std docs: edit `PathBuf::set_file_name` example
To make explicit that `set_file_name` might replace or remove the
extension, not just the file stem.
Also edit docs for `Path::with_file_name`, which calls `set_file_name`.
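For context, a small illustration of the behaviour the edited example is meant to highlight (this snippet is mine, not the docs example itself):

```rust
use std::path::PathBuf;

fn main() {
    // `set_file_name` replaces the entire final component, so the old
    // extension goes away together with the old file stem.
    let mut path = PathBuf::from("/tmp/archive.tar.gz");
    path.set_file_name("notes.txt");
    assert_eq!(path, PathBuf::from("/tmp/notes.txt"));

    // It can also effectively remove the extension.
    path.set_file_name("notes");
    assert_eq!(path, PathBuf::from("/tmp/notes"));
}
```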
Some of these relations were already mentioned in the text, but the fact that Send is implemented for &mut impl Send was not mentioned, nor did the docs list when &T is Sync.
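The relations in question can be checked at compile time with a tiny snippet (the helper functions here are illustrative, not part of the docs):

```rust
fn assert_send<T: Send>() {}
fn assert_sync<T: Sync>() {}

fn main() {
    // &mut T is Send when T is Send.
    assert_send::<&mut Vec<i32>>();
    // &T is Sync when T is Sync.
    assert_sync::<&String>();
}
```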
Make `mem::replace` simpler in codegen
Since they'd mentioned more intrinsics for simplifying stuff recently,
r? `@WaffleLapkin`
This is a continuation of me looking at foundational stuff that ends up with more instructions than it really needs. Specifically I noticed this one because `Range::next` isn't MIR-inlining, and one of the largest parts of it is a `replace::<usize>` that's a good dozen instructions instead of the two it could be.
So this means that `ptr::write` with a `Copy` type no longer generates worse IR than manually dereferencing (well, at least in LLVM -- MIR still has bonus pointer casts), and in doing so means that we're finally down to just the two essential `memcpy`s when emitting `mem::replace` for a large type, rather than the bonus-`alloca` and three `memcpy`s we emitted before this ([or the 6 we currently emit in 1.69 stable](https://rust.godbolt.org/z/67W8on6nP)). That said, LLVM does _usually_ manage to optimize the extra code away. But it's still nice for it not to have to do as much, thanks to (for example) not going through an `alloca` when `replace`ing a primitive like a `usize`.
(This is a new intrinsic, but one that's immediately lowered to existing MIR constructs, so not anything that MIRI or the codegen backends or MIR semantics needs to do work to handle.)
Tweak await span to not contain dot
Fixes a discrepancy between method calls and await expressions where the latter are desugared to have a span that *contains* the dot (i.e. `.await`) but method call identifiers don't contain the dot. This leads to weird suggestions in borrowck -- see the linked issue.
Fixes#110761
This mostly touches a bunch of tests to tighten their `await` span.
`inline(always)` for `lt`/`le`/`ge`/`gt` on integers and floats
I happened to notice one of these not getting inlined as part of `Range::next` in <https://rust.godbolt.org/z/4WKWWxj1G>
```rust
bb1: {
StorageLive(_5);
_6 = &mut _4;
StorageLive(_21);
StorageLive(_14);
StorageLive(_15);
_15 = &((*_6).0: usize);
StorageLive(_16);
_16 = &((*_6).1: usize);
_14 = <usize as PartialOrd>::lt(move _15, move _16) -> bb7;
}
```
So since a call for something that's just one instruction is never the right choice, `#[inline(always)]` seems appropriate, like we have it on things like the rotate methods on integers.
Improve internal field comments on `slice::Iter(Mut)`
I wrote these in a previous PR that I ended up withdrawing, so might as well submit them separately.
`@bors` rollup=always
Make sure that some stdlib method signatures aren't accidental refinements
In the process of implementing https://rust-lang.github.io/rfcs/3245-refined-impls.html, I found a bunch of stdlib implementations that accidentally "refined" their method signatures by dropping (unnecessary) bounds.
This isn't currently a problem, but may become one if/when method signature refining is stabilized in the future. Shouldn't hurt to make these signatures a bit more accurate anyways.
NOTE (just to be clear lol): This does not affect behavior at all, since we don't actually take advantage of refined implementations yet!
Use MIR's `Offset` for pointer `add` too
~~Status: draft while waiting for #110822 to land, since this is built atop that.~~
~~r? `@ghost`~~
Canonical Rust code has mostly moved to `add`/`sub` on pointers, which take `usize`, instead of `offset` which takes `isize`. (And, relatedly, when `sub_ptr` was added it turned out it replaced every single in-tree use of `offset_from`, because `usize` is just so much more useful than `isize` in Rust.)
Unfortunately, `intrinsics::offset` could only accept `*const` and `isize`, so there's a *huge* amount of type conversions back and forth being done. They're identity conversions in the backend, but still end up producing quite a lot of unhelpful MIR.
This PR changes `intrinsics::offset` to accept `*const` *and* `*mut` along with `isize` *and* `usize`. Conveniently, the backends and CTFE already handle this, since MIR's `BinOp::Offset` [already supports all four combinations](adaac6b166/compiler/rustc_const_eval/src/transform/validate.rs (L523-L528)).
To demonstrate the difference, I added some `mir-opt/pre-codegen/` tests around slice indexing. Here's the difference to `[T]::get_mut`, since it uses `<*mut _>::add` internally:
```diff
@@ -79,30 +70,21 @@ fn slice_get_mut_usize(_1: &mut [u32], _2: usize) -> Option<&mut u32> {
StorageLive(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageLive(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
_9 = _8 as *mut u32 (PtrToPtr); // scope 11 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _13 = _2 as isize (IntToInt); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageLive(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _15 = _9 as *const u32 (Pointer(MutToConstPointer)); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _14 = Offset(move _15, _13); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_15); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- _7 = move _14 as *mut u32 (PtrToPtr); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_14); // scope 15 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
- StorageDead(_13); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
+ _7 = Offset(_9, _2); // scope 13 at $SRC_DIR/core/src/ptr/mut_ptr.rs:LL:COL
StorageDead(_9); // scope 6 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_12); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
StorageDead(_11); // scope 3 at $SRC_DIR/core/src/slice/index.rs:LL:COL
```
1c1c8e442a (diff-a841b6a4538657add3f39bc895744331453d0625e7aace128b1f604f0b63c8fdR80)
`remove_dir_all`: try deleting the directory even if `FILE_LIST_DIRECTORY` access is denied
If opening a directory with `FILE_LIST_DIRECTORY` access fails then we should try opening without requesting that access. We may still be able to delete it if it's empty or a link.
Fixes https://github.com/rust-lang/cargo/issues/12042
More core::fmt::rt cleanup.
- Removes the `V1` suffix from the `Argument` and `Flag` types.
- Moves more of the format_args lang items into the `core::fmt::rt` module. (The only remaining lang item in `core::fmt` is `Arguments` itself, which is a public type.)
Part of https://github.com/rust-lang/rust/issues/99012
Follow-up to https://github.com/rust-lang/rust/pull/110616
Document `const {}` syntax for `std::thread_local`.
It exists and is pretty cool. More people should use it.
It was added in #83416 and stabilized in #91355 with the tracking issue #84223.
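For reference, the syntax being documented looks like this (a minimal illustrative example, not the exact docs text):

```rust
use std::cell::Cell;

thread_local! {
    // The `const { ... }` initializer allows a more efficient
    // thread-local implementation when the initial value is a constant.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| assert_eq!(c.get(), 1));
}
```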
Change memory ordering in System wrapper example
Currently, the `SeqCst` ordering is used, which seems unnecessary:
+ Even `Relaxed` ordering guarantees that all updates are atomic and are executed in total order
+ User code only reads atomic for monitoring purposes, no "happens-before" relationships with actual allocations and deallocations are needed for this
If the argumentation above is correct, I propose changing the ordering to `Relaxed` to clarify that no synchronization is required here, and to improve performance (if somebody copy-pastes this example into their code).
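For context, a condensed version of the kind of example under discussion (not the exact docs text), with the proposed `Relaxed` ordering:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering::Relaxed};

/// A `System` wrapper that tracks the number of currently allocated bytes.
struct Counter;

static ALLOCATED: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counter {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let ret = System.alloc(layout);
        if !ret.is_null() {
            // Only the counter itself needs to be consistent, so
            // `Relaxed` is enough here.
            ALLOCATED.fetch_add(layout.size(), Relaxed);
        }
        ret
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        System.dealloc(ptr, layout);
        ALLOCATED.fetch_sub(layout.size(), Relaxed);
    }
}

#[global_allocator]
static A: Counter = Counter;

fn main() {
    println!("allocated bytes before main: {}", ALLOCATED.load(Relaxed));
}
```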
Correct `std::prelude` comment
(Read the changed file first for context.)
First, `alloc` has no prelude.
Second, the docs for `v1` don't matter since the [prelude module] already has all the doc links. The `rust_2021` module for instance also doesn't have a convenient doc page. However, as I understand it, glob imports still can't be used because the items don't have the same stabilisation versions.
[prelude module]: https://doc.rust-lang.org/std/prelude/index.html
docs(std): clarify remove_dir_all errors
When using `remove_dir_all`, I assumed that the function was idempotent and that I could always call it to remove a directory if it existed. That's not the case and it bit me in production, so I figured I'd submit this to clarify the docs.
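Callers who want idempotent behaviour need to handle the missing-directory case themselves. A sketch of that pattern (the helper name is made up):

```rust
use std::{fs, io};

/// Remove a directory tree, treating "it was already gone" as success.
fn remove_dir_all_if_exists(path: &str) -> io::Result<()> {
    match fs::remove_dir_all(path) {
        Err(e) if e.kind() == io::ErrorKind::NotFound => Ok(()),
        other => other,
    }
}

fn main() -> io::Result<()> {
    // Safe to call even if the directory never existed.
    remove_dir_all_if_exists("some-scratch-dir")?;
    Ok(())
}
```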
Restructure and rename std thread_local internals to make it less of a maze
Every time I try to work on std's thread local internals, it feels like I'm trying to navigate a confusing maze made of macros, deeply nested modules, and types with multiple names/aliases. Time to clean it up a bit.
This PR:
- Exports `Key` with its own name (`Key`), instead of `__LocalKeyInner`
- Uses `pub macro` to put `__thread_local_inner` into an (unstable, hidden) module, removing `#[macro_export]` and removing it from the crate root.
- Removes the `__` from `__thread_local_inner`.
- Removes a few unnecessary `allow_internal_unstable` features from the macros
- Removes the `libstd_thread_internals` feature. (Merged with `thread_local_internals`.)
- And removes it from the unstable book
- Gets rid of the deeply nested modules for the `Key` definitions (`mod fast` / `mod os` / `mod statik`).
- Turns a `#[cfg]` mess into a single `cfg_if`, now that there's no `#[macro_export]` anymore that breaks with `cfg_if`.
- Simplifies the `cfg_if` conditions to not repeat the conditions.
- Removes useless `normalize-stderr-test`, which were left over from when the `Key` types had different names on different platforms.
- Removes a seemingly unnecessary `realstd` re-export on `cfg(test)`.
This PR changes nothing about the thread local implementation. That's for a later PR. (Which should hopefully be easier once all this stuff is a bit cleaned up.)
Spelling library
Split per https://github.com/rust-lang/rust/pull/110392
I can squash once people are happy w/ the changes. It's really uncommon for large sets of changes to be perfectly acceptable w/o at least some changes.
I probably won't have time to respond until tomorrow or the next day
Fix `std` compilation error for wasi+atomics
Fix https://github.com/rust-lang/rust/issues/109727
It seems that the `unsupported/once.rs` module isn't meant to exist at the same time as the `futex` module, as they have conflicting definitions.
I've solved this by defining the `once` module only if `not(target_feature = "atomics")`.
The `wasm32-unknown-unknown` target [similarly only defines the `once` module if `not(target_feature = "atomics")`](01c4f31927/library/std/src/sys/wasm/mod.rs (L69-L70)).
As show in [this block of code](01c4f31927/library/std/src/sys_common/once/mod.rs (L10-L34)), the `sys::once` module doesn't need to exist if `all(target_arch = "wasm32", target_feature = "atomics")`.
Update documentation wording on path 'try_exists' functions
Just eliminate the quadruple negation in `doesn't silently ignore errors unrelated to ... not existing.`
black_box doc corrections for clarification - Issue #107957
Made a complete pass through the docs to help resolve https://github.com/rust-lang/rust/issues/107957
No code changes, just documentation
`@rustbot` label +T-libs-api -T-libs
Updating Wake example to use new 'pin!' macro
Closes: https://github.com/rust-lang/rust/issues/109965
I have already had this reviewed and approved here: https://github.com/rust-lang/rust/pull/110026. But because I had some git issues and chose the "nuke it" option as my solution, it didn't get merged. I nuked it too quickly. I am sorry for the trouble of reviewing twice.
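For reference, a condensed version of the kind of example being updated: a minimal `block_on` built on `std::task::Wake`, using the `pin!` macro instead of `Box::pin` (slightly simplified relative to the actual docs):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

/// Wakes the parked thread that is driving the future.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<T>(fut: impl Future<Output = T>) -> T {
    // Pin the future on the stack instead of boxing it.
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(value) => return value,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    assert_eq!(block_on(async { 40 + 2 }), 42);
}
```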
Fix no_global_oom_handling build
`provide_sorted_batch` in core is incorrectly marked with `#[cfg(not(no_global_oom_handling))]` which prevents core from building with the cfg enabled.
Nothing in `core` allocates memory (including this function). The `cfg` gate is incorrect.
cc `@dpaoliello`
r? `@wesleywiser`
The cfg was added by #107191
Add shortcut for Grisu3 algorithm.
While Grisu3 is way faster for most numbers compared to Dragon4, falling back to the Dragon4 procedure for certain numbers can cause performance regressions compared to using Dragon4 directly. Mitigating the regression caused by the fallback is important for a widely used core library.
In the Grisu3 algorithm implementation, there's a shortcut to jump out early when the fractional or integral parts cannot meet the requirement of the requested digits. This can significantly improve the performance of converting a floating-point number to a string, as it falls back without even starting the algorithm.
The original idea is from the [.NET implementation](https://github.com/dotnet/runtime/blob/main/src/libraries/System.Private.CoreLib/src/System/Number.Grisu3.cs#L602-L615) and the code was originally added in [this PR](https://github.com/dotnet/coreclr/pull/14646#issuecomment-350942050). This shortcut shipped a long time ago and has proven to work.
Fixes #110129
Check the requested digit length and the fractional or integral parts of the number. Fall back earlier, without trying the Grisu algorithm, if the specific condition is met.
Fixes #110129
Add `intrinsics::transmute_unchecked`
This takes a whole 3 lines in `compiler/` since it lowers to `CastKind::Transmute` in MIR *exactly* the same as the existing `intrinsics::transmute` does, it just doesn't have the fancy checking in `hir_typeck`.
Added to enable experimenting with the request in <https://github.com/rust-lang/rust/pull/106281#issuecomment-1496648190> and because the portable-simd folks might be interested for dependently-sized array-vector conversions.
It also simplifies a couple places in `core`.
See also https://github.com/rust-lang/rust/pull/108442#issuecomment-1474777273, where `CastKind::Transmute` was added having exactly these semantics before the lang meeting (which I wasn't in) independently expressed interest.
Limit read size in `File::read_to_end` loop
Fixes#110650.
Windows file reads have perf overhead that's proportional to the buffer size. When we have a reasonable expectation that we know the file size, we can set a reasonable upper bound for the size of the buffer in one read call.
Report allocation errors as panics
OOM is now reported as a panic but with a custom payload type (`AllocErrorPanicPayload`) which holds the layout that was passed to `handle_alloc_error`.
This should be reviewed one commit at a time:
- The first commit adds `AllocErrorPanicPayload` and changes allocation errors to always be reported as panics.
- The second commit removes `#[alloc_error_handler]` and the `alloc_error_hook` API.
ACP: https://github.com/rust-lang/libs-team/issues/192
Closes #51540
Closes #51245
More `IS_ZST` in `library`
I noticed that `post_inc_start` and `pre_dec_end` were doing this check in different ways
d19b64fb54/library/core/src/slice/iter/macros.rs (L76-L93)
so started making this PR, then added a few more I found since I was already making changes anyway.
Add offset_of! macro (RFC 3308)
Implements https://github.com/rust-lang/rfcs/pull/3308 (tracking issue #106655) by adding the built in macro `core::mem::offset_of`. Two of the future possibilities are also implemented:
* Nested field accesses (without array indexing)
* DST support (for `Sized` fields)
I wrote this a few months ago, before the RFC merged. Now that it's merged, I decided to rebase and finish it.
cc `@thomcc` (RFC author)
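A quick usage sketch of the macro (it was feature-gated at the time of this PR; the `Packet` struct is illustrative, with `#[repr(C)]` so the expected offsets are well-defined):

```rust
use core::mem::offset_of;

#[repr(C)]
struct Packet {
    tag: u16,
    // 2 bytes of padding here so `len` is 4-aligned.
    len: u32,
}

fn main() {
    assert_eq!(offset_of!(Packet, tag), 0);
    assert_eq!(offset_of!(Packet, len), 4);
}
```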