Commit Graph

791 Commits

Author SHA1 Message Date
Mark Rousskov
43f9a5ec0c Mark more entries in rustc_data_structures as no_inline for docs
This is a workaround for #122758, but it's not clear why 1.79 requires
more extensive use of no_inline than the previous release. Something
relatively subtle seems to be happening here.
2024-05-01 21:01:51 -04:00
León Orell Valerian Liehr
dec1d16a9b
Give an item related to issue 27438 a more meaningful name 2024-04-30 22:27:19 +02:00
Nicholas Nethercote
4814fd0a4b Remove extern crate rustc_macros from numerous crates. 2024-04-29 10:21:54 +10:00
Markus Reiter
33e68aadc9
Stabilize generic NonZero. 2024-04-22 18:48:47 +02:00
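A quick illustration of the stabilized generic type in use (a minimal sketch; `NonZero` lives in `std::num`):

```rust
use std::num::NonZero;

fn main() {
    // One generic type instead of the per-width NonZeroU8/NonZeroU16/... family
    // (those names remain available as aliases).
    let n: NonZero<u32> = NonZero::new(42).expect("value is non-zero");
    assert_eq!(n.get(), 42);
    assert!(NonZero::<u32>::new(0).is_none());
}
```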
Maybe Waffle
523fe2b67b Add tests for predecessor-aware VecGraph mode 2024-04-18 17:32:42 +00:00
Maybe Waffle
fa134b5e0f Add graph::depth_first_search_as_undirected 2024-04-15 23:20:52 +00:00
Maybe Waffle
7d2cb3dda7 Make graph::DepthFirstSearch accept G by value
It's required for the next commit.

Note that you can still have `G = &H`, since there are implementations of all
the graph traits for references.
2024-04-15 23:20:52 +00:00
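A simplified sketch of the pattern described in the commit above; the trait, struct, and function names here are illustrative stand-ins, not the actual `rustc_data_structures::graph` definitions:

```rust
// Illustrative stand-ins, not the real rustc graph traits.
trait Successors {
    fn successors(&self, node: usize) -> Vec<usize>;
}

// Blanket impl for references: this is what keeps `G = &H` working even
// though the search now takes its graph parameter by value.
impl<H: Successors + ?Sized> Successors for &H {
    fn successors(&self, node: usize) -> Vec<usize> {
        (**self).successors(node)
    }
}

struct AdjList(Vec<Vec<usize>>);

impl Successors for AdjList {
    fn successors(&self, node: usize) -> Vec<usize> {
        self.0[node].clone()
    }
}

fn start_search<G: Successors>(graph: G, root: usize) -> Vec<usize> {
    // `graph` is owned here, but callers can still pass `&AdjList`
    // thanks to the blanket impl above.
    graph.successors(root)
}

fn main() {
    let g = AdjList(vec![vec![1, 2], vec![], vec![]]);
    assert_eq!(start_search(&g, 0), vec![1, 2]); // by reference
    assert_eq!(start_search(g, 0), vec![1, 2]);  // by value
}
```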
Maybe Waffle
86a576528c Add an opt-in to store incoming edges in VecGraph + some docs 2024-04-15 23:20:52 +00:00
许杰友 Jieyou Xu (Joe)
5580ae9795
Rollup merge of #123934 - WaffleLapkin:graph-mini-refactor, r=fmease
`rustc_data_structures::graph` mini refactor

Who doesn't love to breathe dust from the ancient times?
2024-04-15 16:56:18 +01:00
Maybe Waffle
435db9b9bd Use RPITIT for Successors and Predecessors traits
Now with RPITIT instead of GAT!
2024-04-15 13:34:08 +00:00
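A minimal sketch of the RPITIT shape referred to above (illustrative; the real traits in `rustc_data_structures::graph` carry more methods):

```rust
trait DirectedGraph {
    type Node: Copy;
}

trait Successors: DirectedGraph {
    // Return-position `impl Trait` in a trait method (RPITIT, stable since
    // Rust 1.75) replaces the earlier GAT-based associated iterator type.
    fn successors(&self, node: Self::Node) -> impl Iterator<Item = Self::Node>;
}

struct AdjList(Vec<Vec<usize>>);

impl DirectedGraph for AdjList {
    type Node = usize;
}

impl Successors for AdjList {
    fn successors(&self, node: usize) -> impl Iterator<Item = usize> {
        self.0[node].iter().copied()
    }
}

fn main() {
    let g = AdjList(vec![vec![1, 2], vec![], vec![]]);
    assert_eq!(g.successors(0).collect::<Vec<_>>(), vec![1, 2]);
}
```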
Maybe Waffle
e8d2221e3b Make depth_first_search into a standalone function
This does not change much by itself, but we never override it, so I see no reason
for it to be in the `Successors` trait (we already have a similar standalone `is_cyclic`).
2024-04-14 16:03:08 +00:00
Maybe Waffle
3124fa9310 Document ControlFlowGraph 2024-04-14 15:53:38 +00:00
Maybe Waffle
f5144938bd Rename WithNumEdges => NumEdges and WithStartNode => StartNode 2024-04-14 15:51:29 +00:00
Maybe Waffle
0d5fc9bf58 Merge {With,Graph}{Successors,Predecessors} into {Successors,Predecessors}
Now with GAT!
2024-04-14 15:48:53 +00:00
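For contrast, a sketch of the GAT-based shape this commit used (later swapped for RPITIT in the commit further up; names are illustrative):

```rust
trait DirectedGraph {
    type Node: Copy;
}

trait Successors: DirectedGraph {
    // Generic associated type: the iterator type may borrow the graph for 'g.
    type Succs<'g>: Iterator<Item = Self::Node>
    where
        Self: 'g;

    fn successors(&self, node: Self::Node) -> Self::Succs<'_>;
}
```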
Maybe Waffle
398da593a5 Merge WithNumNodes into DirectedGraph 2024-04-14 15:46:40 +00:00
bors
af6a1613b3 Auto merge of #123175 - Nilstrieb:debug-strict-overflow, r=wesleywiser
Add add/sub methods that only panic with debug assertions to rustc

This mitigates the perf impact of enabling overflow checks on rustc. The change to use overflow checks will be done in a later PR.

For rust-lang/compiler-team#724, based on data gathered in #119440.
2024-04-13 17:18:42 +00:00
Nilstrieb
5039160c5b Add add/sub methods that only panic with debug assertions to rustc
This mitigates the perf impact of enabling overflow checks on rustc.
The change to use overflow checks will be done in a later PR.
2024-04-13 17:03:12 +02:00
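A hedged sketch of what such a method can look like; the helper name and exact behaviour in rustc may differ:

```rust
/// Adds two values, panicking on overflow only when debug assertions are
/// enabled; release builds fall back to wrapping arithmetic instead of
/// paying for an unconditional overflow check.
#[inline]
pub fn debug_strict_add(a: u32, b: u32) -> u32 {
    if cfg!(debug_assertions) {
        a.checked_add(b).expect("attempt to add with overflow")
    } else {
        a.wrapping_add(b)
    }
}

fn main() {
    assert_eq!(debug_strict_add(2, 3), 5);
}
```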
Vadim Petrochenkov
b40ea03f8a rustc_index: Add a ZERO constant to index types
It is commonly used.
2024-04-03 19:06:22 +03:00
bors
bf71daedc2 Auto merge of #121851 - michaelwoerister:mcp-533-effective-vis, r=cjgillot
Use FxIndexMap instead of FxHashMap to stabilize iteration order in EffectiveVisibilities

Part of [MCP 533](https://github.com/rust-lang/compiler-team/issues/533).
2024-03-31 16:22:38 +00:00
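The point of the switch is deterministic iteration order. A small sketch using the `indexmap` crate, which `FxIndexMap` wraps with the Fx hasher:

```rust
use indexmap::IndexMap;

fn main() {
    let mut m = IndexMap::new();
    m.insert("b", 2);
    m.insert("a", 1);
    // Iteration follows insertion order rather than hash order, so anything
    // derived from iterating the map is stable across runs.
    let keys: Vec<_> = m.keys().copied().collect();
    assert_eq!(keys, ["b", "a"]);
}
```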
Urgau
16d11c539f Add support for NonNull in ambiguous_wide_ptr_comparisions 2024-03-29 22:02:07 +01:00
Michael Woerister
7e4bc4a373 Remove and disallow HashStable impl of HashMap. 2024-03-27 14:57:01 +01:00
bors
df8ac8f1d7 Auto merge of #122568 - RalfJung:mentioned-items, r=oli-obk
recursively evaluate the constants in everything that is 'mentioned'

This is another attempt at fixing https://github.com/rust-lang/rust/issues/107503. The previous attempt at https://github.com/rust-lang/rust/pull/112879 seems stuck in figuring out where the [perf regression](https://perf.rust-lang.org/compare.html?start=c55d1ee8d4e3162187214692229a63c2cc5e0f31&end=ec8de1ebe0d698b109beeaaac83e60f4ef8bb7d1&stat=instructions:u) comes from. In  https://github.com/rust-lang/rust/pull/122258 I learned some things, which informed the approach this PR is taking.

Quoting from the new collector docs, which explain the high-level idea:
```rust
//! One important role of collection is to evaluate all constants that are used by all the items
//! which are being collected. Codegen can then rely on only encountering constants that evaluate
//! successfully, and if a constant fails to evaluate, the collector has much better context to be
//! able to show where this constant comes up.
//!
//! However, the exact set of "used" items (collected as described above), and therefore the exact
//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
//! a function call that uses a failing constant, so an unoptimized build may fail where an
//! optimized build succeeds. This is undesirable.
//!
//! To fix this, the collector has the concept of "mentioned" items. Some time during the MIR
//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
//! dead code and get optimized away (which makes them no longer "used"), they are still
//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
//! whether we are visiting a used item or merely a mentioned item.
//!
//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
//! need to stay in sync in the following sense:
//!
//! - For every item that the collector gathers that could eventually lead to build failure (most
//!   likely due to containing a constant that fails to evaluate), a corresponding mentioned item
//!   must be added. This should use the exact same strategy as the collector to make sure they are
//!   in sync. However, while the collector works on monomorphized types, mentioned items are
//!   collected on generic MIR -- so any time the collector checks for a particular type (such as
//!   `ty::FnDef`), we have to just unconditionally add this as a mentioned item.
//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
//!   would have done during regular MIR visiting. Basically you can think of the collector having
//!   two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
//!   literally separated by a call to `self.monomorphize`); the pre-monomorphization stage is
//!   duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
//!   `visit_mentioned_item`.
//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
//!   its MIR traversal with exactly what mentioned item gathering would have added in the same
//!   situation. This detects mentioned items that have *not* been optimized away and hence don't
//!   need a dedicated traversal.

enum CollectionMode {
    /// Collect items that are used, i.e., actually needed for codegen.
    ///
    /// Which items are used can depend on optimization levels, as MIR optimizations can remove
    /// uses.
    UsedItems,
    /// Collect items that are mentioned. The goal of this mode is that it is independent of
    /// optimizations: the set of "mentioned" items is computed before optimizations are run.
    ///
    /// The exact contents of this set are *not* a stable guarantee. (For instance, it is currently
    /// computed after drop-elaboration. If we ever do some optimizations even in debug builds, we
    /// might decide to run them before computing mentioned items.) The key property of this set is
    /// that it is optimization-independent.
    MentionedItems,
}
```
And the `mentioned_items` MIR body field docs:
```rust
    /// Further items that were mentioned in this function and hence *may* become monomorphized,
    /// depending on optimizations. We use this to avoid optimization-dependent compile errors: the
    /// collector recursively traverses all "mentioned" items and evaluates all their
    /// `required_consts`.
    ///
    /// This is *not* soundness-critical and the contents of this list are *not* a stable guarantee.
    /// All that's relevant is that this set is optimization-level-independent, and that it includes
    /// everything that the collector would consider "used". (For example, we currently compute this
    /// set after drop elaboration, so some drop calls that can never be reached are not considered
    /// "mentioned".) See the documentation of `CollectionMode` in
    /// `compiler/rustc_monomorphize/src/collector.rs` for more context.
    pub mentioned_items: Vec<Spanned<MentionedItem<'tcx>>>,
```

Fixes #107503
2024-03-21 09:01:18 +00:00
Mark Rousskov
283db5abfc Workaround for rustdoc bug in new beta
Filed #122758 to track a proper fix, but this seems to solve the
problem in the meantime and is probably OK in terms of impact on
(internal) doc quality.
2024-03-20 08:49:13 -04:00
Ralf Jung
712fe36611 collector: recursively traverse 'mentioned' items to evaluate their constants 2024-03-20 11:07:12 +01:00
Guillaume Yziquel
3fc5ed8067 Issue 122262: MAP_PRIVATE for more reliability on virtualised filesystems.
Adds support for quirky filesystems that occur in virtualised settings and lack
full POSIX support for memory-mapped files. Example: current virtiofs with cache
disabled, as found in Incus/LXD or Kata Containers. This has been hitting various
virtualised filesystems since 2016, depending on their level of maturity at the
time. The situation may improve once the virtiofs DAX support patches make it
into the qemu mainline.

On the reliability level, using the MAP_PRIVATE flag instead of the MAP_SHARED
flag for the mmap() system call does have some undefined behaviour when the
caller updates the memory mapping of the mmap()ed file, but MAP_SHARED allows
not only the calling process but also other processes to modify the memory
mapping. Thus, in the current context, using MAP_PRIVATE copy-on-write is
marginally more reliable than MAP_SHARED.

This discussion of reliability is orthogonal to Rust's type-system-enforced
safety policy, which does not claim to handle modification of memory-mapped
files triggered through the operating system rather than the running Rust
process.
2024-03-15 18:31:07 -04:00
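For context, a hedged sketch of a private, read-only file mapping via the `libc` crate; this is illustrative only, not the mapping code actually used by the compiler:

```rust
use std::fs::File;
use std::os::unix::io::AsRawFd;

/// Maps `len` bytes of `file` read-only with MAP_PRIVATE.
///
/// Safety: `len` must not exceed the file's size, and the mapping must be
/// unmapped before the file contents it relies on go away.
unsafe fn map_file_private(file: &File, len: usize) -> *mut libc::c_void {
    let ptr = libc::mmap(
        std::ptr::null_mut(),
        len,
        libc::PROT_READ,
        // MAP_PRIVATE: any modification stays in this process's
        // copy-on-write pages; other processes cannot mutate our view.
        libc::MAP_PRIVATE,
        file.as_raw_fd(),
        0,
    );
    assert_ne!(ptr, libc::MAP_FAILED, "mmap failed");
    ptr
}
```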
Matthias Krüger
706fe0b7d8
Rollup merge of #120976 - matthiaskrgr:constify_TL_statics, r=lcnr
constify a couple thread_local statics
2024-03-04 22:16:30 +01:00
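This presumably refers to the `const { ... }` initializer form of `thread_local!`, which avoids the lazy-initialization branch that a runtime initializer needs; a minimal sketch:

```rust
use std::cell::Cell;

thread_local! {
    // The const initializer is evaluated at compile time, so accesses don't
    // have to check whether the value has been initialized yet.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| assert_eq!(c.get(), 1));
}
```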
Pavel Grigorenko
613cb3262d
compiler: use addr_of! 2024-02-24 18:53:48 +03:00
bors
c9c83cca51 Auto merge of #121265 - klensy:bump-18-02-24, r=Mark-Simulacrum
bump some deps

First commit dedupes darling* crates and remove one more syn 1.* dep
Second one bumps windows crate to 0.52
2024-02-18 16:54:15 +00:00
klensy
35fe26757a windows bump to 0.52 2024-02-18 16:02:16 +03:00
surechen
a61126cef6 By tracking import use types, distinguishing scope uses from other situations like module-relative uses, we can do more accurate redundant-import checking.
fixes #117448

For example, unnecessary imports in std::prelude can be eliminated:

```rust
use std::option::Option::Some;//~ WARNING the item `Some` is imported redundantly
use std::option::Option::None; //~ WARNING the item `None` is imported redundantly
```
2024-02-18 16:38:11 +08:00
bors
1be468815c Auto merge of #120486 - reitermarkus:use-generic-nonzero, r=dtolnay
Use generic `NonZero` internally.

Tracking issue: https://github.com/rust-lang/rust/issues/120257
2024-02-16 07:46:31 +00:00
bors
fa9f77ff35 Auto merge of #120931 - chenyukang:yukang-cleanup-hashmap, r=michaelwoerister
Clean up potential_query_instability with FxIndexMap and UnordMap

From https://github.com/rust-lang/rust/pull/120485#issuecomment-1916437191

r? `@michaelwoerister`
2024-02-15 12:36:37 +00:00
Markus Reiter
a90cc05233
Replace NonZero::<_>::new with NonZero::new. 2024-02-15 08:09:42 +01:00
Markus Reiter
746a58d435
Use generic NonZero internally. 2024-02-15 08:09:42 +01:00
Eric Huss
217e5e484d Fix SmallCStr conversion from CStr 2024-02-14 18:40:53 -08:00
yukang
3f27e4b3ea clean up potential_query_instability with FxIndexMap and UnordMap 2024-02-14 18:36:37 +08:00
Matthias Krüger
d0873c7a11 constify a couple thread_local statics 2024-02-12 16:25:39 +01:00
Matthias Krüger
317c372284
Rollup merge of #120846 - petrochenkov:jobs, r=oli-obk
Update jobserver-rs to 0.1.28

Fixes the issues found in https://github.com/rust-lang/rust/issues/120515 besides the diagnostic wording.
2024-02-10 00:58:38 +01:00
Vadim Petrochenkov
83f3bc4271 Update jobserver-rs to 0.1.28 2024-02-09 19:13:07 +03:00
Matthias Krüger
46a0448405
Rollup merge of #120693 - nnethercote:invert-diagnostic-lints, r=davidtwco
Invert diagnostic lints.

That is, change `diagnostic_outside_of_impl` and `untranslatable_diagnostic` from `allow` to `deny`, because more than half of the compiler has been converted to use translated diagnostics.

This commit removes more `deny` attributes than it adds `allow` attributes, which proves that this change is warranted.

r? ````@davidtwco````
2024-02-09 14:41:50 +01:00
Nicholas Nethercote
0ac1195ee0 Invert diagnostic lints.
That is, change `diagnostic_outside_of_impl` and
`untranslatable_diagnostic` from `allow` to `deny`, because more than
half of the compiler has been converted to use translated diagnostics.

This commit removes more `deny` attributes than it adds `allow`
attributes, which proves that this change is warranted.
2024-02-06 13:12:33 +11:00
Matthias Krüger
ca36ed27be
Rollup merge of #119600 - aDotInTheVoid:comment-fix, r=compiler-errors
Remove outdated references to librustc_middle

The relevant comment is now in 791a53f380/compiler/rustc_middle/src/tests.rs (L3-L13)
2024-02-05 06:37:14 +01:00
clubby789
fd29f74ff8 Remove unused features 2024-01-25 14:01:33 +00:00
Josh Stone
8f3af4c6e2 rustc_data_structures: use either instead of itertools 2024-01-24 15:36:57 -08:00
bors
3066253050 Auto merge of #120080 - cuviper:128-align-packed, r=nikic
Pack u128 in the compiler to mitigate new alignment

This is based on #116672, adding a new `#[repr(packed(8))]` wrapper on `u128` to avoid changing any of the compiler's size assertions. This is needed in two places:

* `SwitchTargets`, otherwise its `SmallVec<[u128; 1]>` gets padded up to 32 bytes.
* `LitKind::Int`, so that entire `enum` can stay 24 bytes.
  * This change definitely has far-reaching effects though, since it's public.
2024-01-22 13:08:19 +00:00
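A sketch of the wrapper described above (the real `Pu128` lives in `rustc_data_structures`; the derives here are assumptions):

```rust
/// A u128 whose alignment is capped at 8 bytes, so enclosing types keep the
/// layout they had before u128's natural alignment was raised to 16.
#[repr(packed(8))]
#[derive(Copy, Clone)]
pub struct Pu128(pub u128);

fn main() {
    assert_eq!(std::mem::align_of::<Pu128>(), 8);
    assert_eq!(std::mem::size_of::<Pu128>(), 16);
}
```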
bors
6745c6000a Auto merge of #116185 - Zoxc:rem-one-thread, r=cjgillot
Remove `OneThread`

This removes `OneThread` by switching `incr_comp_session` over to `RwLock`.
2024-01-20 13:18:33 +00:00
Josh Stone
cb7d863e74 Add Pu128 = #[repr(packed(8))] u128 2024-01-19 20:10:38 -08:00
bors
1bd42be8cb Auto merge of #120076 - Mark-Simulacrum:unhash, r=cjgillot
Use UnhashMap for a few more maps

This avoids a few cases of hashing data that's already hashed.

cc https://github.com/rust-lang/rust/issues/56308
2024-01-19 04:43:17 +00:00
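A sketch of the idea behind `UnhashMap`: a `HashMap` whose hasher passes an already-hashed `u64` key straight through (illustrative; the real definition lives alongside rustc's hashing utilities):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// No-op hasher: the key is assumed to already be a good hash value.
#[derive(Default)]
struct Unhasher(u64);

impl Hasher for Unhasher {
    fn finish(&self) -> u64 {
        self.0
    }
    fn write(&mut self, _: &[u8]) {
        panic!("Unhasher only supports u64 keys");
    }
    fn write_u64(&mut self, n: u64) {
        self.0 = n;
    }
}

type UnhashMap<K, V> = HashMap<K, V, BuildHasherDefault<Unhasher>>;

fn main() {
    let mut m: UnhashMap<u64, &str> = UnhashMap::default();
    m.insert(0xDEAD_BEEF, "key is already a hash");
    assert_eq!(m[&0xDEAD_BEEF], "key is already a hash");
}
```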
bors
25f8d01fd8 Auto merge of #114231 - ttsugriy:binary_search_slice, r=cjgillot
[rustc_data_structures] Use partition_point to find  slice range end.

This PR uses the approach introduced in https://github.com/rust-lang/rust/pull/114152 to find
the end of the range. It's much easier to understand and reason about the invariants of such an
implementation.
Technically it would be possible to make it even shorter by returning `&[start..end]` unconditionally,
because even if the searched item is not present in the slice, `start` and `end` would point at
the same index, so the range would be empty. I decided not to use this shorter
implementation because it would involve more comparisons when there are no elements
in the slice with a key equal to `key`.

Also, not that it matters much, but this implementation improves perf according to the
benchmark below:
https://gist.github.com/ttsugriy/63c0ed39ae132b131931fa1f8a3dea55

The results on my M1 macbook air are:
```
Running benches/bin_search_slice_benchmark.rs (target/release/deps/bin_search_slice_benchmark-90fa6d68c3bd1298)
Benchmarking multiply add/binary_search_slice: Collecting 100 samples in estimated 5.0002 s (1
multiply add/binary_search_slice
                        time:   [44.719 ns 44.918 ns 45.158 ns]
                        No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe
Benchmarking multiply add/binary_search_slice_new: Collecting 100 samples in estimated 5.0001
multiply add/binary_search_slice_new
                        time:   [36.955 ns 37.060 ns 37.221 ns]
                        No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe
```
2024-01-18 18:54:54 +00:00
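A self-contained sketch of the technique: two `partition_point` calls delimit the half-open range of entries with a given key in a slice sorted by that key (names and signature here are illustrative, not the actual `binary_search_slice` API):

```rust
fn key_range<'a, T, K: Ord>(data: &'a [T], key_fn: impl Fn(&T) -> K, key: &K) -> &'a [T] {
    // First partition point: index of the first element with key >= `key`.
    let start = data.partition_point(|x| key_fn(x) < *key);
    if start == data.len() || key_fn(&data[start]) != *key {
        return &[];
    }
    // Second partition point, restricted to the tail, finds where the run of
    // equal keys ends; skipping it when the key is absent avoids the extra
    // comparisons mentioned in the PR description.
    let end = start + data[start..].partition_point(|x| key_fn(x) == *key);
    &data[start..end]
}

fn main() {
    let data = [(1, 'a'), (2, 'b'), (2, 'c'), (3, 'd')];
    assert_eq!(key_range(&data, |x| x.0, &2), &[(2, 'b'), (2, 'c')][..]);
    assert!(key_range(&data, |x| x.0, &5).is_empty());
}
```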
John Kåre Alsaker
63446d00ee Remove OneThread 2024-01-18 03:30:05 +01:00