Commit Graph

825 Commits

Author SHA1 Message Date
Nilstrieb
5039160c5b Add add/sub methods that only panic with debug assertions to rustc
This mitigates the perf impact of enabling overflow checks on rustc.
The change to use overflow checks will be done in a later PR.
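A minimal sketch of the idea (with a hypothetical helper name, not the actual methods added here): arithmetic that panics on overflow only when debug assertions are enabled, and wraps otherwise.
```rust
/// Hypothetical illustration: add with an overflow panic only in
/// debug-assertion builds, falling back to wrapping arithmetic otherwise.
#[inline]
pub fn debug_checked_add(a: u64, b: u64) -> u64 {
    if cfg!(debug_assertions) {
        a.checked_add(b).expect("attempt to add with overflow")
    } else {
        a.wrapping_add(b)
    }
}

fn main() {
    assert_eq!(debug_checked_add(2, 3), 5);
}
```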
2024-04-13 17:03:12 +02:00
Vadim Petrochenkov
b40ea03f8a rustc_index: Add a ZERO constant to index types
It is commonly used.
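A tiny sketch (with a made-up index type, not the actual `newtype_index!` output) of what such a constant looks like on a newtype index:
```rust
// Hypothetical newtype index; in rustc these are generated by `newtype_index!`.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
struct ItemIdx(u32);

impl ItemIdx {
    /// The kind of constant this commit adds alongside the usual constructors.
    const ZERO: ItemIdx = ItemIdx(0);

    fn new(raw: u32) -> Self {
        ItemIdx(raw)
    }
}

fn main() {
    assert_eq!(ItemIdx::ZERO, ItemIdx::new(0));
}
```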
2024-04-03 19:06:22 +03:00
bors
bf71daedc2 Auto merge of #121851 - michaelwoerister:mcp-533-effective-vis, r=cjgillot
Use FxIndexMap instead of FxHashMap to stabilize iteration order in EffectiveVisibilities

Part of [MCP 533](https://github.com/rust-lang/compiler-team/issues/533).
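A small illustration (assuming the `indexmap` crate; `FxIndexMap` is an `IndexMap` keyed with the Fx hasher) of why the swap stabilizes iteration order:
```rust
use indexmap::IndexMap;

fn main() {
    // An IndexMap iterates in insertion order, so anything derived from the
    // iteration (diagnostics, metadata, query results) is deterministic.
    // A plain hash map gives no such guarantee.
    let mut effective_vis = IndexMap::new();
    effective_vis.insert("crate::foo", "pub");
    effective_vis.insert("crate::bar", "pub(crate)");

    let keys: Vec<_> = effective_vis.keys().copied().collect();
    assert_eq!(keys, ["crate::foo", "crate::bar"]);
}
```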
2024-03-31 16:22:38 +00:00
Urgau
16d11c539f Add support for NonNull in ambiguous_wide_ptr_comparisions 2024-03-29 22:02:07 +01:00
Michael Woerister
7e4bc4a373 Remove and disallow HashStable impl of HashMap. 2024-03-27 14:57:01 +01:00
bors
df8ac8f1d7 Auto merge of #122568 - RalfJung:mentioned-items, r=oli-obk
recursively evaluate the constants in everything that is 'mentioned'

This is another attempt at fixing https://github.com/rust-lang/rust/issues/107503. The previous attempt at https://github.com/rust-lang/rust/pull/112879 seems stuck in figuring out where the [perf regression](https://perf.rust-lang.org/compare.html?start=c55d1ee8d4e3162187214692229a63c2cc5e0f31&end=ec8de1ebe0d698b109beeaaac83e60f4ef8bb7d1&stat=instructions:u) comes from. In https://github.com/rust-lang/rust/pull/122258 I learned some things, which informed the approach this PR is taking.

Quoting from the new collector docs, which explain the high-level idea:
```rust
//! One important role of collection is to evaluate all constants that are used by all the items
//! which are being collected. Codegen can then rely on only encountering constants that evaluate
//! successfully, and if a constant fails to evaluate, the collector has much better context to be
//! able to show where this constant comes up.
//!
//! However, the exact set of "used" items (collected as described above), and therefore the exact
//! set of used constants, can depend on optimizations. Optimizing away dead code may optimize away
//! a function call that uses a failing constant, so an unoptimized build may fail where an
//! optimized build succeeds. This is undesirable.
//!
//! To fix this, the collector has the concept of "mentioned" items. Some time during the MIR
//! pipeline, before any optimization-level-dependent optimizations, we compute a list of all items
//! that syntactically appear in the code. These are considered "mentioned", and even if they are in
//! dead code and get optimized away (which makes them no longer "used"), they are still
//! "mentioned". For every used item, the collector ensures that all mentioned items, recursively,
//! do not use a failing constant. This is reflected via the [`CollectionMode`], which determines
//! whether we are visiting a used item or merely a mentioned item.
//!
//! The collector and "mentioned items" gathering (which lives in `rustc_mir_transform::mentioned_items`)
//! need to stay in sync in the following sense:
//!
//! - For every item that the collector gathers that could eventually lead to build failure (most
//!   likely due to containing a constant that fails to evaluate), a corresponding mentioned item
//!   must be added. This should use the exact same strategy as the collector to make sure they are
//!   in sync. However, while the collector works on monomorphized types, mentioned items are
//!   collected on generic MIR -- so any time the collector checks for a particular type (such as
//!   `ty::FnDef`), we have to just unconditionally add this as a mentioned item.
//! - In `visit_mentioned_item`, we then do with that mentioned item exactly what the collector
//!   would have done during regular MIR visiting. Basically you can think of the collector having
//!   two stages, a pre-monomorphization stage and a post-monomorphization stage (usually quite
//!   literally separated by a call to `self.monomorphize`); the pre-monomorphization stage is
//!   duplicated in mentioned items gathering and the post-monomorphization stage is duplicated in
//!   `visit_mentioned_item`.
//! - Finally, as a performance optimization, the collector should fill `used_mentioned_item` during
//!   its MIR traversal with exactly what mentioned item gathering would have added in the same
//!   situation. This detects mentioned items that have *not* been optimized away and hence don't
//!   need a dedicated traversal.

enum CollectionMode {
    /// Collect items that are used, i.e., actually needed for codegen.
    ///
    /// Which items are used can depend on optimization levels, as MIR optimizations can remove
    /// uses.
    UsedItems,
    /// Collect items that are mentioned. The goal of this mode is that it is independent of
    /// optimizations: the set of "mentioned" items is computed before optimizations are run.
    ///
    /// The exact contents of this set are *not* a stable guarantee. (For instance, it is currently
    /// computed after drop-elaboration. If we ever do some optimizations even in debug builds, we
    /// might decide to run them before computing mentioned items.) The key property of this set is
    /// that it is optimization-independent.
    MentionedItems,
}
```
And the `mentioned_items` MIR body field docs:
```rust
    /// Further items that were mentioned in this function and hence *may* become monomorphized,
    /// depending on optimizations. We use this to avoid optimization-dependent compile errors: the
    /// collector recursively traverses all "mentioned" items and evaluates all their
    /// `required_consts`.
    ///
    /// This is *not* soundness-critical and the contents of this list are *not* a stable guarantee.
    /// All that's relevant is that this set is optimization-level-independent, and that it includes
    /// everything that the collector would consider "used". (For example, we currently compute this
    /// set after drop elaboration, so some drop calls that can never be reached are not considered
    /// "mentioned".) See the documentation of `CollectionMode` in
    /// `compiler/rustc_monomorphize/src/collector.rs` for more context.
    pub mentioned_items: Vec<Spanned<MentionedItem<'tcx>>>,
```

Fixes #107503
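For context, a minimal program shaped like that issue (an illustration, not code from the PR; it is expected to produce a compile-time error): the failing constant is only reachable through dead code, so before this change whether it was reported could depend on the optimization level.
```rust
struct Fail<T>(T);

impl<T> Fail<T> {
    // Evaluating this constant fails; evaluation only happens once the item
    // is monomorphized and collected.
    const C: () = panic!("compile-time failure");
}

fn not_called<T>() {
    // This use of the constant sits in a function that is never dynamically
    // reached from `main`.
    let _ = Fail::<T>::C;
}

fn main() {
    if false {
        not_called::<i32>(); // dead code that still "mentions" `not_called`
    }
}
```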
2024-03-21 09:01:18 +00:00
Mark Rousskov
283db5abfc Workaround for rustdoc bug in new beta
Filed #122758 to track a proper fix, but this seems to solve the
problem in the meantime and is probably OK in terms of impact on
(internal) doc quality.
2024-03-20 08:49:13 -04:00
Ralf Jung
712fe36611 collector: recursively traverse 'mentioned' items to evaluate their constants 2024-03-20 11:07:12 +01:00
Guillaume Yziquel
3fc5ed8067 Issue 122262: MAP_PRIVATE for more reliability on virtualised filesystems.
Adds support for quirky filesystems found in virtualised settings that lack
full POSIX support for memory-mapped files. Example: current virtiofs with
cache disabled, as occurs in Incus/LXD or Kata Containers. This has been
hitting various virtualised filesystems since 2016, depending on their level
of maturity at the time. The situation will perhaps improve once the virtiofs
DAX support patches have made it into the qemu mainline.

On the reliability level, using the MAP_PRIVATE flag instead of the
MAP_SHARED flag for the mmap() system call does have some undefined
behaviour when the caller updates the memory mapping of the mmap()ed file, but
MAP_SHARED allows not only the calling process but also other processes to
modify the memory mapping. Thus, in the current context, using MAP_PRIVATE
copy-on-write is marginally more reliable than MAP_SHARED.

This discussion of reliability is orthogonal to the type-system-enforced safety
policy of Rust, which does not claim to handle modifications of memory-mapped
files triggered through the operating system rather than the running Rust
process.
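A minimal sketch (assuming the `libc` crate; not the compiler's actual memory-map wrapper) of the flag choice under discussion: a private, copy-on-write, read-only mapping of a file.
```rust
use std::{fs::File, os::fd::AsRawFd, ptr};

fn main() -> std::io::Result<()> {
    let file = File::open("/etc/hostname")?; // any readable, non-empty file
    let len = file.metadata()?.len() as usize;

    let addr = unsafe {
        libc::mmap(
            ptr::null_mut(),
            len,
            libc::PROT_READ,
            // MAP_PRIVATE: modifications stay private to this process
            // (copy-on-write), unlike MAP_SHARED, where writes by other
            // processes become visible through the mapping.
            libc::MAP_PRIVATE,
            file.as_raw_fd(),
            0,
        )
    };
    assert_ne!(addr, libc::MAP_FAILED);

    unsafe { libc::munmap(addr, len) };
    Ok(())
}
```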
2024-03-15 18:31:07 -04:00
Matthias Krüger
706fe0b7d8
Rollup merge of #120976 - matthiaskrgr:constify_TL_statics, r=lcnr
constify a couple thread_local statics
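For reference, "constifying" a thread-local generally means using the `const` initializer form of `thread_local!`, which avoids lazy-initialization overhead on each access; a small sketch (not the specific statics touched here):
```rust
use std::cell::Cell;

thread_local! {
    // The `const { ... }` form lets the value be computed at compile time,
    // so accesses skip the lazy-init check of the plain form.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| assert_eq!(c.get(), 1));
}
```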
2024-03-04 22:16:30 +01:00
Pavel Grigorenko
613cb3262d
compiler: use addr_of! 2024-02-24 18:53:48 +03:00
bors
c9c83cca51 Auto merge of #121265 - klensy:bump-18-02-24, r=Mark-Simulacrum
bump some deps

The first commit dedupes the darling* crates and removes one more syn 1.* dep
The second one bumps the windows crate to 0.52
2024-02-18 16:54:15 +00:00
klensy
35fe26757a windows bump to 0.52 2024-02-18 16:02:16 +03:00
surechen
a61126cef6 By tracking import use types to check whether a use is a scope use or one of the other situations like module-relative uses, we can do more accurate redundant import checking.
fixes #117448

For example unnecessary imports in std::prelude that can be eliminated:

```rust
use std::option::Option::Some;//~ WARNING the item `Some` is imported redundantly
use std::option::Option::None; //~ WARNING the item `None` is imported redundantly
```
2024-02-18 16:38:11 +08:00
bors
1be468815c Auto merge of #120486 - reitermarkus:use-generic-nonzero, r=dtolnay
Use generic `NonZero` internally.

Tracking issue: https://github.com/rust-lang/rust/issues/120257
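A short illustration (requires Rust 1.79+ on stable; not compiler-internal code) of the generic `NonZero` type replacing the per-integer aliases such as `NonZeroUsize`:
```rust
use std::num::NonZero;

fn main() {
    // `NonZero<usize>` instead of the old `NonZeroUsize` alias.
    let threads: NonZero<usize> = NonZero::new(4).expect("non-zero");
    assert_eq!(threads.get(), 4);
    assert!(NonZero::<usize>::new(0).is_none());
}
```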
2024-02-16 07:46:31 +00:00
bors
fa9f77ff35 Auto merge of #120931 - chenyukang:yukang-cleanup-hashmap, r=michaelwoerister
Clean up potential_query_instability with FxIndexMap and UnordMap

From https://github.com/rust-lang/rust/pull/120485#issuecomment-1916437191

r? `@michaelwoerister`
2024-02-15 12:36:37 +00:00
Markus Reiter
a90cc05233
Replace NonZero::<_>::new with NonZero::new. 2024-02-15 08:09:42 +01:00
Markus Reiter
746a58d435
Use generic NonZero internally. 2024-02-15 08:09:42 +01:00
Eric Huss
217e5e484d Fix SmallCStr conversion from CStr 2024-02-14 18:40:53 -08:00
yukang
3f27e4b3ea clean up potential_query_instability with FxIndexMap and UnordMap 2024-02-14 18:36:37 +08:00
Matthias Krüger
d0873c7a11 constify a couple thread_local statics 2024-02-12 16:25:39 +01:00
Matthias Krüger
317c372284
Rollup merge of #120846 - petrochenkov:jobs, r=oli-obk
Update jobserver-rs to 0.1.28

Fixes the issues found in https://github.com/rust-lang/rust/issues/120515 besides the diagnostic wording.
2024-02-10 00:58:38 +01:00
Vadim Petrochenkov
83f3bc4271 Update jobserver-rs to 0.1.28 2024-02-09 19:13:07 +03:00
Matthias Krüger
46a0448405
Rollup merge of #120693 - nnethercote:invert-diagnostic-lints, r=davidtwco
Invert diagnostic lints.

That is, change `diagnostic_outside_of_impl` and `untranslatable_diagnostic` from `allow` to `deny`, because more than half of the compiler has been converted to use translated diagnostics.

This commit removes more `deny` attributes than it adds `allow` attributes, which proves that this change is warranted.

r? `@davidtwco`
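An illustration of the allow-to-deny inversion using an ordinary lint (the real change uses the internal `rustc::` tool lints, which only exist inside the compiler workspace): deny at the crate root and allow locally where exceptions remain.
```rust
// Deny globally; code that has not been migrated yet opts out locally.
#![deny(unused_variables)]

#[allow(unused_variables)] // local opt-out replaces the old crate-wide allow
fn still_being_migrated() {
    let x = 1;
}

fn main() {
    still_being_migrated();
}
```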
2024-02-09 14:41:50 +01:00
Nicholas Nethercote
0ac1195ee0 Invert diagnostic lints.
That is, change `diagnostic_outside_of_impl` and
`untranslatable_diagnostic` from `allow` to `deny`, because more than
half of the compiler has been converted to use translated diagnostics.

This commit removes more `deny` attributes than it adds `allow`
attributes, which proves that this change is warranted.
2024-02-06 13:12:33 +11:00
Matthias Krüger
ca36ed27be
Rollup merge of #119600 - aDotInTheVoid:comment-fix, r=compiler-errors
Remove outdated references to librustc_middle

The relevant comment is now in 791a53f380/compiler/rustc_middle/src/tests.rs (L3-L13)
2024-02-05 06:37:14 +01:00
clubby789
fd29f74ff8 Remove unused features 2024-01-25 14:01:33 +00:00
Josh Stone
8f3af4c6e2 rustc_data_structures: use either instead of itertools 2024-01-24 15:36:57 -08:00
bors
3066253050 Auto merge of #120080 - cuviper:128-align-packed, r=nikic
Pack u128 in the compiler to mitigate new alignment

This is based on #116672, adding a new `#[repr(packed(8))]` wrapper on `u128` to avoid changing any of the compiler's size assertions. This is needed in two places:

* `SwitchTargets`, otherwise its `SmallVec<[u128; 1]>` gets padded up to 32 bytes.
* `LitKind::Int`, so that entire `enum` can stay 24 bytes.
  * This change definitely has far-reaching effects though, since it's public.
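A standalone sketch of the wrapper described above (the real type lives in rustc_data_structures): packing a `u128` to 8-byte alignment keeps containing types from growing when `u128`'s natural alignment becomes 16.
```rust
// Packed to 8 bytes: alignment is capped at 8, size stays 16.
#[repr(packed(8))]
pub struct Pu128(pub u128);

fn main() {
    assert_eq!(std::mem::align_of::<Pu128>(), 8);
    assert_eq!(std::mem::size_of::<Pu128>(), 16);
}
```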
2024-01-22 13:08:19 +00:00
bors
6745c6000a Auto merge of #116185 - Zoxc:rem-one-thread, r=cjgillot
Remove `OneThread`

This removes `OneThread` by switching `incr_comp_session` over to `RwLock`.
2024-01-20 13:18:33 +00:00
Josh Stone
cb7d863e74 Add Pu128 = #[repr(packed(8))] u128 2024-01-19 20:10:38 -08:00
bors
1bd42be8cb Auto merge of #120076 - Mark-Simulacrum:unhash, r=cjgillot
Use UnhashMap for a few more maps

This avoids a few cases of hashing data that's already hashed.

cc https://github.com/rust-lang/rust/issues/56308
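A rough sketch (not rustc's actual `UnhashMap`) of the idea: when the key is already a hash, route it through an identity hasher instead of hashing it again.
```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

/// A hasher that passes a pre-hashed `u64` key straight through.
#[derive(Default)]
struct IdentityHasher(u64);

impl Hasher for IdentityHasher {
    fn write(&mut self, _: &[u8]) {
        panic!("only u64 keys are supported");
    }
    fn write_u64(&mut self, n: u64) {
        self.0 = n;
    }
    fn finish(&self) -> u64 {
        self.0
    }
}

type UnhashMap<V> = HashMap<u64, V, BuildHasherDefault<IdentityHasher>>;

fn main() {
    let mut map: UnhashMap<&str> = UnhashMap::default();
    map.insert(0xdead_beef, "key is already a hash");
    assert_eq!(map.get(&0xdead_beef), Some(&"key is already a hash"));
}
```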
2024-01-19 04:43:17 +00:00
bors
25f8d01fd8 Auto merge of #114231 - ttsugriy:binary_search_slice, r=cjgillot
[rustc_data_structures] Use partition_point to find slice range end.

This PR uses the approach introduced in https://github.com/rust-lang/rust/pull/114152 to find
the end of the range. It's much easier to understand and reason about the invariants of such an
implementation.
Technically it's possible to make it even shorter by returning `&[start..end]` unconditionally,
because even if the searched item is not present in the slice, `start` and `end` would point at
the same index, so the range would be empty. The reason I decided not to use this shorter
implementation is that it would involve more comparisons when there are no elements
in the slice with a key equal to `key`.

Also, not that it matters much, but this implementation also improves perf according to the
benchmark below:
https://gist.github.com/ttsugriy/63c0ed39ae132b131931fa1f8a3dea55

The results on my M1 macbook air are:
```
Running benches/bin_search_slice_benchmark.rs (target/release/deps/bin_search_slice_benchmark-90fa6d68c3bd1298)
Benchmarking multiply add/binary_search_slice: Collecting 100 samples in estimated 5.0002 s (1
multiply add/binary_search_slice
                        time:   [44.719 ns 44.918 ns 45.158 ns]
                        No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
  1 (1.00%) high mild
  2 (2.00%) high severe
Benchmarking multiply add/binary_search_slice_new: Collecting 100 samples in estimated 5.0001
multiply add/binary_search_slice_new
                        time:   [36.955 ns 37.060 ns 37.221 ns]
                        No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
  3 (3.00%) high mild
  4 (4.00%) high severe
```
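For reference, a sketch of the shorter, unconditional variant discussed in the description (the merged code adds an early return when the key is absent, to avoid the extra comparisons):
```rust
fn binary_search_slice<'a, T, K: Ord>(
    data: &'a [T],
    key_fn: impl Fn(&T) -> K,
    key: &K,
) -> &'a [T] {
    // `partition_point` returns the first index at which the predicate is
    // false, so these two calls bracket the run of elements equal to `key`.
    let start = data.partition_point(|x| key_fn(x) < *key);
    let end = data.partition_point(|x| key_fn(x) <= *key);
    &data[start..end]
}

fn main() {
    let v = [(1, 'a'), (2, 'b'), (2, 'c'), (3, 'd')];
    assert_eq!(binary_search_slice(&v, |t| t.0, &2), &[(2, 'b'), (2, 'c')]);
    assert!(binary_search_slice(&v, |t| t.0, &9).is_empty());
}
```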
2024-01-18 18:54:54 +00:00
John Kåre Alsaker
63446d00ee Remove OneThread 2024-01-18 03:30:05 +01:00
Mark Rousskov
510fcd318b Use UnhashMap for a few more maps
This avoids hashing data that's already hashed.
2024-01-17 17:09:55 -05:00
Michael Woerister
ac58f9ae03 Update measureme crate to version 11 2024-01-13 16:32:03 +01:00
Guillaume Gomez
d3574beb5d
Rollup merge of #119527 - klensy:ordering, r=compiler-errors
don't reexport atomic::ordering via rustc_data_structures, use std import

This looks simpler.
2024-01-09 13:23:17 +01:00
Matthias Krüger
909f2b63a3
Rollup merge of #119591 - Enselic:DestinationPropagation-stable, r=cjgillot
rustc_mir_transform: Make DestinationPropagation stable for queries

By using `FxIndexMap` instead of `FxHashMap`, so that the order of visiting of locals is deterministic.

We also need to bless
`copy_propagation_arg.foo.DestinationPropagation.panic*.diff`. Do not review the diff of the diff. Instead look at the diff files before and after this commit. Both before and after this commit, 3 statements are replaced with nop. It's just that due to change in ordering, different statements are replaced. But the net result is the same. In other words, compare this diff (before fix):
* 090d5eac72/tests/mir-opt/dest-prop/copy_propagation_arg.foo.DestinationPropagation.panic-unwind.diff

With this diff (after fix):
* f603babd63/tests/mir-opt/dest-prop/copy_propagation_arg.foo.DestinationPropagation.panic-unwind.diff

and you can see that both before and after the fix, we replace 3 statements with `nop`s.

I find it _slightly_ surprising that the test this PR affects did not previously fail spuriously due to the indeterminism of `FxHashMap`, but I guess it can be explained by the predictability of small `FxHashMap`s with `usize` (`Local`) keys, or something along those lines.

This should fix [this](https://github.com/rust-lang/rust/pull/119252#discussion_r1436101791) comment, but I wanted to make a separate PR for this fix for a simpler development and review process.

Part of https://github.com/rust-lang/rust/issues/84447 which is E-help-wanted.

r? `@cjgillot` who is reviewer for the highly related PR https://github.com/rust-lang/rust/pull/119252.
2024-01-06 16:07:47 +01:00
klensy
56173611d6 don't reexport atomic::ordering via rustc_data_structures, use std import 2024-01-06 15:01:10 +03:00
bors
e21f4cd98f Auto merge of #119478 - bjorn3:no_serialize_specialization, r=wesleywiser
Avoid specialization in the metadata serialization code

With the exception of a perf-only specialization for byte slices and byte vectors.

This uses the same trick as TyEncoder/TyDecoder: introducing a new trait and having the Encodable and Decodable derives add a bound to it. The new code is clearer about which encoder/decoder uses which impl, and it reduces the dependency of rustc on specialization, making it easier to remove support for specialization entirely or to turn it into a construct that is only allowed for perf optimizations, if we decide to do this.
2024-01-06 09:56:00 +00:00
bors
d62f05b842 Auto merge of #119459 - cjgillot:inline-mir-utils, r=compiler-errors
Inline a few utility functions around MIR

Most of them are small enough to benefit from inlining.
2024-01-06 04:01:09 +00:00
Martin Nordholts
95eb5bcb67 rustc_mir_transform: Make DestinationPropagation stable for queries
By using FxIndexMap instead of FxHashMap, so that the order of visiting
of locals is deterministic.

We also need to bless
copy_propagation_arg.foo.DestinationPropagation.panic*.diff. Do not
review the diff of the diff. Instead look at the diff file before and
after this commit. Both before and after this commit, 3 statements are
replaced with nop. It's just that due to change in ordering, different
statements are replaced. But the net result is the same.
2024-01-05 20:55:32 +01:00
Alona Enraght-Moony
16e117cf96 Remove outdated references to librustc_middle. 2024-01-05 16:34:52 +00:00
Michael Woerister
077540cedf Address review comments and add back some #[inline] attrs from removed commits. 2024-01-04 13:51:06 +01:00
Michael Woerister
900a11bbec Provide generalized collect methods for UnordItems 2024-01-04 13:48:57 +01:00
Michael Woerister
739e5ef49e Split StableCompare trait out of StableOrd trait.
StableCompare is a companion trait to `StableOrd`. Some types like `Symbol` can be compared in a cross-session stable way, but their `Ord` implementation is not stable. In such cases, a `StableCompare` implementation can be provided to offer a lightweight way for stable sorting. (The more heavyweight option is to sort via `ToStableHashKey`, but then sorting needs to have access to a stable hashing context and `ToStableHashKey` can also be expensive as in the case of `Symbol` where it has to allocate a `String`.)
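A hypothetical sketch of the split being described (names follow the description; the bodies are illustrative, not rustc's code):
```rust
use std::cmp::Ordering;

/// A cross-session-stable comparison for types whose `Ord` impl is not stable.
pub trait StableCompare {
    fn stable_cmp(&self, other: &Self) -> Ordering;
}

/// A stand-in for an interned symbol: its `Ord` would be based on the unstable
/// interner index, so a stable comparison goes through the string instead.
struct Symbol {
    #[allow(dead_code)]
    index: u32,
    text: &'static str,
}

impl StableCompare for Symbol {
    fn stable_cmp(&self, other: &Self) -> Ordering {
        self.text.cmp(other.text)
    }
}

fn main() {
    let mut syms = vec![
        Symbol { index: 7, text: "b" },
        Symbol { index: 3, text: "a" },
    ];
    syms.sort_by(|a, b| a.stable_cmp(b));
    assert_eq!(syms[0].text, "a");
}
```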
2024-01-04 13:32:42 +01:00
bjorn3
6ed37bdc42 Avoid specialization for the Span Encodable and Decodable impls 2023-12-31 20:42:17 +00:00
Camille GILLOT
f83468c89b Inline dominator check. 2023-12-31 00:37:45 +00:00
Nilstrieb
ffafcd8819 Update to bitflags 2 in the compiler
This involves lots of breaking changes. There are two big changes that
force us to adapt. The first is that the bitflag types no longer
automatically implement the normal derive traits, so we need to derive them
manually.

Additionally, bitflags now have a hidden inner type by default, which
breaks our custom derives. The bitflags docs recommend using the impl
form in these cases, which I did.
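A small sketch (assuming the `bitflags` 2.x crate) of the "impl form" mentioned above: the struct is declared as an ordinary newtype, so custom derives keep working, and the macro only generates the flag constants and operations.
```rust
// Derives live on a normal tuple struct instead of inside the macro.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct Flags(u8);

bitflags::bitflags! {
    impl Flags: u8 {
        const A = 1 << 0;
        const B = 1 << 1;
    }
}

fn main() {
    let f = Flags::A | Flags::B;
    assert!(f.contains(Flags::A));
}
```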
2023-12-30 18:17:28 +01:00
Camille GILLOT
821920b2a3 Do not store stable crate id in on-disk hash map. 2023-12-24 17:22:48 +00:00
Pietro Albini
f9f5840eb4
update cfg(bootstrap)s 2023-12-22 11:14:11 +01:00
Matthias Krüger
8479945c08 NFC don't convert types to identical types 2023-12-15 23:56:24 +01:00
bors
f651b436ce Auto merge of #117050 - c410-f3r:here-we-go-again, r=petrochenkov
[`RFC 3086`] Attempt to try to resolve blocking concerns

Implements what is described at https://github.com/rust-lang/rust/issues/83527#issuecomment-1744822345 to hopefully make some progress.

It is unknown whether such an approach is desired, due to the lack of further feedback; as such, it is probably best to nominate this PR to the official entities.

`@rustbot` labels +I-compiler-nominated
2023-12-13 06:37:08 +00:00
surechen
40ae34194c remove redundant imports
detects redundant imports that can be eliminated.

for #117772:

In order to facilitate review and modification, the checking code and the code
that removes redundant imports are split into two PRs.
2023-12-10 10:56:22 +08:00
oksbsb
dabedb711f 1. Fix jobserver GLOBAL_CLIENT_CHECKED being uninitialized before use
2. jobserver::initialize_checked should be called before build_session and should still use EarlyErrorHandler, so revert the stderr change in #118635
2023-12-08 09:50:28 +08:00
Caio
0278505691 Attempt to try to resolve blocking concerns 2023-12-01 21:19:22 -03:00
belovdv
45e6342346 jobserver: check file descriptors 2023-11-29 18:00:03 +03:00
Nicholas Nethercote
b12851ce5d Avoid an unnecessary by_ref. 2023-11-28 14:32:40 +11:00
Mark Rousskov
ee9223ff97 Enforce NonZeroUsize on thread count
This allows avoiding some `!= 0` checks when allocating worker-local
datasets.
2023-11-23 20:10:44 -05:00
Nicholas Nethercote
7060fc8327 Replace no_ord_impl with orderable.
Similar to the previous commit, this replaces `newtype_index`'s opt-out
`no_ord_impl` attribute with the opt-in `orderable` attribute.
2023-11-22 18:38:17 +11:00
bors
cc4bb0de20 Auto merge of #117928 - nnethercote:rustc_ast_pretty, r=fee1-dead
`rustc_ast_pretty` cleanups

Some improvements I found while looking at this code.

r? `@fee1-dead`
2023-11-22 05:09:33 +00:00
Nicholas Nethercote
3eadc6844b Update itertools to 0.11.
Because the API for `with_position` improved in 0.11 and I want to use
it.
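For reference, a small usage sketch (assuming itertools 0.11) of the improved `with_position` API, which now yields a plain `(Position, item)` tuple:
```rust
use itertools::{Itertools, Position};

fn main() {
    for (pos, x) in [10, 20, 30].iter().with_position() {
        match pos {
            Position::First => println!("first: {x}"),
            Position::Middle => println!("middle: {x}"),
            Position::Last => println!("last: {x}"),
            Position::Only => println!("only: {x}"),
        }
    }
}
```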
2023-11-22 08:13:21 +11:00
Nilstrieb
21a870515b Fix clippy::needless_borrow in the compiler
`x clippy compiler -Aclippy::all -Wclippy::needless_borrow --fix`.

Then I had to remove a few unnecessary parens and muts that were exposed
now.
2023-11-21 20:13:40 +01:00
Mark Rousskov
db3e2bacb6 Bump cfg(bootstrap)s 2023-11-15 19:41:28 -05:00
cui fliter
a44a4edc0e Fix some typos
Signed-off-by: cui fliter <imcusg@gmail.com>
2023-11-14 23:06:50 +08:00
bors
d8dbf7ca0e Auto merge of #117557 - Zoxc:panic-prio, r=petrochenkov
Make `FatalErrorMarker` lower priority than other panics

This makes `FatalErrorMarker` lower priority than other panics in parallel sections. If any other panics occur, they will be unwound instead of `FatalErrorMarker`. This ensures `rustc` will exit with the correct error code on ICEs.

This fixes https://github.com/rust-lang/rust/issues/116659.
2023-11-09 00:39:02 +00:00
bors
f9b644636f Auto merge of #117435 - SparrowLii:nightly_parallel, r=oli-obk,davidtwco
enable parallel rustc front end in nightly builds

Per the [MCP](https://github.com/rust-lang/compiler-team/issues/681), this PR does the following:
1. Enable the parallel front end in nightly builds, keeping the default number of threads at 1. Users can then use the parallel rustc front end via the -Z threads=n option.

2. Set the front end to serial for beta/stable builds via bootstrap.

3. Switch the alt builders from parallel rustc to serial, so we have artifacts without the parallel front end to test against the artifacts with it.

r? `@oli-obk`

cc `@cjgillot` `@nnethercote` `@bjorn3` `@Kobzol`
2023-11-06 07:41:22 +00:00
SparrowLii
f2a40e99ff use portable AtomicU64 for PowerPC and MIPS 2023-11-06 09:58:51 +08:00
Taras Tsugrii
97bc528877
Remove invariant comments 2023-11-05 17:35:37 -06:00
John Kåre Alsaker
ff1858e2aa Make FatalErrorMarker lower priority than other panics 2023-11-03 19:46:00 +01:00
Josh Stone
3984914aff Use filter_map in try_par_for_each_in
This simplifies the expression, especially for the rayon part, and also
lets us drop the `E: Copy` constraint.
2023-11-03 09:17:16 -07:00
Nicholas Nethercote
8ff624a9f2 Clean up rustc_*/Cargo.toml.
- Sort dependencies and features sections.
- Add `tidy` markers to the sorted sections so they stay sorted.
- Remove empty `[lib]` sections.
- Remove "See more keys..." comments.

Excluded files:
- rustc_codegen_{cranelift,gcc}, because they're external.
- rustc_lexer, because it has external use.
- stable_mir, because it has external use.
2023-10-30 08:46:02 +11:00
bors
a56bd2b944 Auto merge of #116849 - oli-obk:error_shenanigans, r=cjgillot
Avoid a `track_errors` by bubbling up most errors from `check_well_formed`

I believe `track_errors` is mostly papering over issues that a sufficiently convoluted query graph can hit. I made this change, while the actual change I want to do is to stop bailing out early on errors, and instead use this new `ErrorGuaranteed` to invoke `check_well_formed` for individual items before doing all the `typeck` logic on them.

This works towards resolving https://github.com/rust-lang/rust/issues/97477 and various other ICEs, as well as allowing us to use parallel rustc more (which is currently rather limited/bottlenecked due to the very sequential nature in which we do `rustc_hir_analysis::check_crate`)

cc `@SparrowLii` `@Zoxc` for the new `try_par_for_each_in` function
2023-10-23 09:59:40 +00:00
Oli Scherer
fd9ef69adf Avoid a track_errors by bubbling up most errors from check_well_formed 2023-10-20 08:46:27 +00:00
Caio
6379013876 Initiate the inner usage of cfg_match 2023-10-19 20:18:51 -03:00
bors
862bba6093 Auto merge of #116830 - nnethercote:rustc_type_ir, r=compiler-errors
Remove `IdFunctor` trait.

It's defined in `rustc_data_structures` but is only used in
`rustc_type_ir`. The code is shorter and easier to read if we remove
this layer of abstraction and just do the things directly where they are
needed.

r? `@BoxyUwU`
2023-10-18 03:55:36 +00:00
Nicholas Nethercote
847c8ba70d Remove IdFunctor trait.
It's defined in `rustc_data_structures` but is only used in
`rustc_type_ir`. The code is shorter and easier to read if we remove
this layer of abstraction and just do the things directly where they are
needed.
2023-10-17 16:26:35 +11:00
Nicholas Nethercote
4175c9b595 Remove unused features from rustc_data_structures. 2023-10-17 16:26:12 +11:00
Michael Howell
2ff2624722 docs: add Rust logo to more compiler crates
c6e6ecb1af added it to some of the
compiler's crates, but avoided adding it to all of them to reduce
bit-rot. This commit adds it to more.
2023-10-16 15:38:08 -07:00
Tomasz Miąsko
b61a6d59e4 Remove unused dominator iterator 2023-10-10 21:39:59 +02:00
Tomasz Miąsko
0528d378b6 Optimize dominators for small path graphs
Generalizes the small dominators approach from #107449.
2023-10-05 23:45:59 +02:00
Tomasz Miąsko
ba694e301c Remove redundant Dominators::start_node field 2023-10-05 23:45:58 +02:00
Tomasz Miąsko
a8ec7ddf0e Test immediate dominators using public API 2023-10-05 23:45:58 +02:00
John Kåre Alsaker
2c507cae36 Rename cold_path to outline 2023-09-25 22:54:07 +02:00
Florian Schmiderer
3409ca65d8 Add OwnedTargetMachine to manage llvm::TargetMachine. It uses pointers
instead of &'static mut and provides a safe interface to create/dispose
it.
2023-09-24 21:11:37 +02:00
bors
f73d376fb6 Auto merge of #115230 - Vtewari2311:mod-hurd-latest, r=b-naber
added support for GNU/Hurd

adding support for i686-unknown-hurd-gnu
2023-09-21 19:24:01 +00:00
Samuel Thibault
dcea7709f2 added support for GNU/Hurd 2023-09-21 17:31:25 +02:00
Ralf Jung
57444cf9f3 use pretty_print_const_value from MIR constant 'extra' printing 2023-09-19 11:06:32 +02:00
Zalathar
01b67f4b26 coverage: Simplify sorting of coverage spans extracted from MIR
Switching to `Ordering::then_with` makes control-flow less complicated, and
there is no need to use `partial_cmp` here.
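A standalone sketch (not the actual coverage-span code) of the `Ordering::then_with` pattern: chain the tie-breaking comparison instead of nesting control flow or reaching for `partial_cmp`.
```rust
use std::cmp::Ordering;

#[derive(Debug)]
struct Span {
    lo: u32,
    hi: u32,
}

// Sort by start position; on ties, put the longer span first.
fn compare_spans(a: &Span, b: &Span) -> Ordering {
    a.lo.cmp(&b.lo).then_with(|| b.hi.cmp(&a.hi))
}

fn main() {
    let mut spans = vec![
        Span { lo: 2, hi: 5 },
        Span { lo: 2, hi: 9 },
        Span { lo: 1, hi: 3 },
    ];
    spans.sort_by(compare_spans);
    assert_eq!((spans[0].lo, spans[0].hi), (1, 3));
    assert_eq!((spans[1].lo, spans[1].hi), (2, 9));
}
```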
2023-09-18 23:15:25 +10:00
Matthias Krüger
f3cc59b741
Rollup merge of #115548 - Zoxc:parallel-extract, r=wesleywiser
Extract parallel operations in `rustc_data_structures::sync` into a new `parallel` submodule

This extracts parallel operations in `rustc_data_structures::sync` into a new `parallel` submodule. This cuts down on the size of the large `cfg_if!` in `sync` and makes it easier to compare between serial and parallel variants.
2023-09-11 21:16:20 +02:00
bors
9b72cc9abf Auto merge of #115388 - Zoxc:sharded-lock, r=SparrowLii
Add optimized lock methods for `Sharded` and refactor `Lock`

This adds methods to `Sharded` which pick a shard and also locks it. These branch on parallelism just once instead of twice, improving performance.

Benchmark for `cfg(parallel_compiler)` and 1 thread:
| Benchmark | Time (before) | Time (after) | % |
|---|---|---|---|
| 🟣 **clap**:check | 1.6461s | 1.6345s | -0.70% |
| 🟣 **hyper**:check | 0.2414s | 0.2394s | -0.83% |
| 🟣 **regex**:check | 0.9205s | 0.9143s | -0.67% |
| 🟣 **syn**:check | 1.4981s | 1.4869s | -0.75% |
| 🟣 **syntex_syntax**:check | 5.7629s | 5.7256s | -0.65% |
| Total | 10.0690s | 10.0008s | -0.68% |
| Summary | 1.0000s | 0.9928s | -0.72% |

cc `@SparrowLii`
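A rough sketch (not rustc's actual `Sharded` type) of the combined "pick a shard and lock it" idea: hashing the key selects a shard, and the same call returns its guard, so callers go through the sharding logic once per operation.
```rust
use std::collections::hash_map::RandomState;
use std::collections::HashMap;
use std::hash::{BuildHasher, Hash};
use std::sync::{Mutex, MutexGuard};

const SHARD_BITS: usize = 5;
const SHARDS: usize = 1 << SHARD_BITS;

/// A toy sharded map: each shard is an independently locked hash map.
struct Sharded<K, V> {
    hasher: RandomState,
    shards: Vec<Mutex<HashMap<K, V>>>,
}

impl<K: Hash + Eq, V> Sharded<K, V> {
    fn new() -> Self {
        Sharded {
            hasher: RandomState::new(),
            shards: (0..SHARDS).map(|_| Mutex::new(HashMap::new())).collect(),
        }
    }

    /// Pick the shard for `key` and lock it in a single call.
    fn lock_shard_by_key(&self, key: &K) -> MutexGuard<'_, HashMap<K, V>> {
        let hash = self.hasher.hash_one(key);
        let index = (hash as usize) & (SHARDS - 1);
        self.shards[index].lock().unwrap()
    }
}

fn main() {
    let sharded: Sharded<u32, &str> = Sharded::new();
    sharded.lock_shard_by_key(&1).insert(1, "one");
    assert_eq!(sharded.lock_shard_by_key(&1).get(&1).copied(), Some("one"));
}
```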
2023-09-11 01:43:29 +00:00
John Kåre Alsaker
1d8fdc0332 Use FreezeLock for CStore 2023-09-09 16:02:11 +02:00
John Kåre Alsaker
c83eba9251 Add Freeze::clone 2023-09-08 14:14:57 +02:00
John Kåre Alsaker
9690142bef Remove the LockMode enum and dispatch 2023-09-08 10:15:12 +02:00
John Kåre Alsaker
61cc00d238 Refactor Lock implementation 2023-09-08 08:48:44 +02:00
John Kåre Alsaker
8fc160b742 Add optimized lock methods for Sharded 2023-09-08 08:48:44 +02:00
John Kåre Alsaker
f49382c050 Use Freeze for SourceFile.lines 2023-09-07 13:05:05 +02:00
John Kåre Alsaker
c5996b80be Use Freeze for SourceFile.external_src 2023-09-07 13:04:23 +02:00
John Kåre Alsaker
35e8b67fc2 Use a reference to the lock in the guards 2023-09-06 11:44:06 +02:00
John Kåre Alsaker
fe614712dd Extract parallel operations in rustc_data_structures::sync into a new parallel submodule 2023-09-06 09:47:34 +02:00