This largely avoids remapping from and to the 'real' indices, with the exception
of predecessor lookup and the final merge back, and is conceptually cleaner.
As the paper indicates, the unprocessed vertices in the DFS tree and the
processed vertices are disjoint, so we can store both in the same space,
tracking only the index of the split.
This replaces the previous implementation with the simple variant of
Lengauer-Tarjan, which performs better in the general case. Performance on the
keccak benchmark is about equivalent between the two, but we see no
regressions (and indeed see improvements) on other benchmarks, even with a
partially optimized implementation.
The implementation here follows the pseudocode in the thesis "Linear-Time
Algorithms for Dominators and Related Problems" by Loukas Georgiadis. The
next few commits will optimize the implementation as suggested in the thesis.
Several related works are cited in the comments within the implementation as
well.
Implement the simple Lengauer-Tarjan algorithm
This replaces the previous implementation (from #34169), which has not been
optimized since, with the simple variant of Lengauer-Tarjan, which performs
better in the general case. An earlier attempt -- not kept in commit history --
replaced the implementation with a bitset-based one, but this led to
regressions on perf.rust-lang.org benchmarks with only equivalent wins on the
keccak benchmark, so it was rejected.
On the keccak benchmark, we were previously spending 15% of our cycles computing
the NCA / intersect function; this function is quite expensive, especially on
modern CPUs, as it chases pointers on every iteration in a tight loop. With this
commit, we spend ~0.05% of our time in dominator computation.
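For context, here is a hedged reconstruction of the kind of intersect/NCA loop being replaced; it follows the shape of the Cooper-Harvey-Kennedy "simple, fast dominance" approach, with names assumed rather than taken from the old code:
```rust
// Walk both nodes up the immediate-dominator tree until they meet. Every
// iteration loads `idom[...]`, a data-dependent pointer chase that stalls
// modern out-of-order CPUs.
fn intersect(post_order_rank: &[usize], idom: &[usize], mut a: usize, mut b: usize) -> usize {
    while a != b {
        while post_order_rank[a] < post_order_rank[b] {
            a = idom[a];
        }
        while post_order_rank[b] < post_order_rank[a] {
            b = idom[b];
        }
    }
    a
}
```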
There's a conversation in the tracking issue about possibly unaccepting `in_band_lifetimes`, but it's used heavily in the compiler, and thus there'd need to be a bunch of PRs like this if that were to happen.
So here's one to see how much of an impact it has.
(Oh, and I removed `nll` while I was here too, since it didn't seem needed. Let me know if I should put that back.)
Implement write() method for Box<MaybeUninit<T>>
This adds a method similar to `MaybeUninit::write`, the main difference being
that it returns the owned `Box`. This can be used to safely elide a copy from
the stack, although it is not currently tested that the optimization actually
occurs.
Analogous methods are not provided for `Rc` and `Arc` as those need to
handle the possibility of sharing. Some version of them may be added in
the future.
This was discussed in #63291, which this change extends.
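A minimal usage sketch (hedged: the method is unstable, and the exact feature gate name here is an assumption):
```rust
#![feature(new_uninit)] // assumed gate; see the tracking issue #63291

use std::mem::MaybeUninit;

fn main() {
    // Allocate uninitialized heap storage first...
    let boxed: Box<MaybeUninit<[u8; 1024]>> = Box::new_uninit();
    // ...then write the value, ideally constructed in place rather than
    // copied from the stack (not guaranteed, as noted above).
    let boxed: Box<[u8; 1024]> = Box::write(boxed, [0u8; 1024]);
    assert_eq!(boxed[42], 0);
}
```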
Revert "Add rustc lint, warning when iterating over hashmaps"
Fixes perf regressions introduced in https://github.com/rust-lang/rust/pull/90235 by temporarily reverting the relevant PR.
Particularly for ctfe-stress-4, the hashing of byte slices as part of the
MIR Allocation is quite hot. Previously, we were falling back on byte-by-byte
copying of the slice into the SipHash buffer (64 bytes long) before hashing
each 64-byte chunk, over and over again.
This should hopefully be an improvement for that code.
Stabilize `const_panic`
Closes #51999
FCP completed in #89006
`@rustbot` label +A-const-eval +A-const-fn +T-lang
cc `@oli-obk` for review (not `r?`'ing as not on lang team)
Rollup of 12 pull requests
Successful merges:
- #88795 (Print a note if a character literal contains a variation selector)
- #89015 (core::ascii::escape_default: reduce struct size)
- #89078 (Cleanup: Remove needless reference in ParentHirIterator)
- #89086 (Stabilize `Iterator::map_while`)
- #89096 ([bootstrap] Improve the error message when `ninja` is not found to link to installation instructions)
- #89113 (dont `.ensure()` the `thir_abstract_const` query call in `mir_build`)
- #89114 (Fixes a technicality regarding the size of C's `char` type)
- #89115 (⬆️ rust-analyzer)
- #89126 (Fix ICE when `indirect_structural_match` is allowed)
- #89141 (Impl `Error` for `FromSecsError` without foreign type)
- #89142 (Fix match for placeholder region)
- #89147 (add case for checking const refs in check_const_value_eq)
Failed merges:
r? `@ghost`
`@rustbot` modify labels: rollup
Migrate in-tree crates to 2021
This replaces #89075 (cherry picking some of the commits from there), and closes #88637 and fixes #89074.
It excludes a migration of the library crates for now (see tidy diff) because we have some pending bugs around macro spans to fix there.
I instrumented bootstrap during the migration to make sure all crates moved from 2018 to 2021 had the compatibility warnings applied first.
Originally, the intent was to support `cargo fix --edition` within bootstrap, but this proved fairly difficult to pull off. We'd need to architect the check functionality to support running `cargo check` and `cargo fix` within the same `x.py` invocation, and only resetting sysroots on check. Further, it was found that `cargo fix` doesn't behave too well with "not quite workspaces", such as Clippy, which has several crates. Bootstrap runs with `--manifest-path ...` for all the tools, and this makes `cargo fix` only attempt migration for that crate. We can't use e.g. `--workspace` due to needing to maintain sysroots for different phases of compilation appropriately.
It is recommended to skip the mass migration of Cargo.toml's to 2021 for review purposes; you can also use `git diff d6cd2c6c87 -I'^edition = .20...$'` to ignore the edition = 2018/21 lines in the diff.
Rework DepthFirstSearch API
This expands the API to be more flexible, allowing for more visitation patterns
on graphs. This will be useful to avoid extra datasets (and allocations) in
cases where the expanded DFS API is sufficient.
This also fixes a bug with the previous DFS constructor, which left the start
node not marked as visited (even though it was immediately returned).
Commit written by `@nikomatsakis` originally, cherry-picked from several commits in work on never type stabilization, but stands alone.
Remove SmallVector mention
SmallVector is long gone: it was first replaced
by OneVector in commit e5e6375352,
which was then removed entirely in favour of SmallVec in
commit 130a32fa72.
Since RFC 3052 soft deprecated the authors field anyway, hiding it from
crates.io and docs.rs, and making Cargo not add it by default, and since the
information it holds is not generally up to date or useful, we should remove
it from crates in this repo.
Make mir borrowck's use of opaque types independent of the typeck query's result
fixes #87218, fixes #86465
We used to use the typeck results only to generate an obligation for the mir borrowck type to be equal to the typeck result.
When I removed the `fixup_opaque_types` function in #87200, I exposed a bug showing that mir borrowck doesn't get enough information from typeck to build the correct lifetime mapping from opaque type usage to the actual concrete type. We therefore now fully compute the information within mir borrowck (we already did that, but we only used it to verify the typeck result) and stop using the typeck information.
We will likely be able to remove most opaque type information from the borrowck results in the future and just have all current callers use the mir borrowck result instead.
r? `@spastorino`
Profile incremental compilation hashing fingerprints
Adds profiling instrumentation for the hashing of incremental compilation fingerprints per query.
This will eventually feed into the `measureme` and `rustc-perf` infrastructure for tracking if computing hashes changes over time.
TODOs:
* [x] Address the FIXME where we are including node interning in the hash timing.
* [ ] Update measureme/summarize to handle this new data: https://github.com/rust-lang/measureme/pull/166
* [ ] ~Update rustc-perf to handle the new data from measureme~ (will be done at a later time)
r? `@ghost`
cc `@michaelwoerister`
rustc_data_structures has a dependency on crossbeam-utils but never uses
it. It appears to have originally had this dependency in order to set
the "nightly" feature; however, its other dependencies use a different
version of crossbeam-utils, so this doesn't actually affect anything.
Furthermore, in current crossbeam-utils, the "nightly" feature has
become a no-op.
Don't use a generator for BoxedResolver
The generator is non-trivial and requires unsafe code anyway. Using regular unsafe code without a generator is much easier to follow.
Based on #85810 as it touches rustc_interface too.
Remove unused feature gates
The first commit removes a usage of a feature gate, but I don't expect it to be controversial as the feature gate was only used to workaround a limitation of rust in the past. (closures never being `Clone`)
The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.
The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
don't enable parking_lot nightly features
Having the compiler itself depend on external libraries that use nightly features can lead to "fun" bootstrap situations. Within the rustc repo we use `cfg(bootstrap)` to resolve those, but that is not a reasonable option for external dependencies.
So I propose we stop enabling the "nightly" feature of `parking_lot` here. In my experiments, this then indeed leads to the feature not being enabled (i.e., nothing else enables it), and everything still builds. However, this means parking_lot's `RwLock` will no longer have hardware lock elision for readers -- I hope that is okay to lose in exchange for less bootstrap brain twisting. ;)
Cc `@Amanieu`
With the arrival of min const generics, many alt-vec libraries have
updated to use it in some way and arrayvec is no exception. Use the
latest with minor refactoring.
Also, rustc_workspace_hack is the only user of smallvec 0.6 in the
entire tree, so drop it.
Add `FromIterator` and `IntoIterator` impls for `ThinVec`
These should make using `ThinVec` feel much more like using `Vec`.
They will allow users of `Vec` to switch to `ThinVec` while continuing
to use `collect()`, `for` loops, and other parts of the iterator API.
I don't know if there were use cases before for using the iterator API
with `ThinVec`, but I would like to start using `ThinVec` in rustdoc,
and having it conform to the iterator API would make the transition
*a lot* easier.
I added a `FromIterator` impl, an `IntoIterator` impl that yields owned
elements, and `IntoIterator` impls that yield immutable or mutable
references to elements. I also added some unit tests for `ThinVec`.
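A usage sketch (hedged: assumes `ThinVec` from `rustc_data_structures::thin_vec`, which dereferences to its element storage):
```rust
use rustc_data_structures::thin_vec::ThinVec;

fn main() {
    // FromIterator: collect() now works directly.
    let v: ThinVec<u32> = (1..=4).collect();

    // IntoIterator for &ThinVec<T>: `for` loops over references.
    let mut sum = 0;
    for x in &v {
        sum += *x;
    }
    assert_eq!(sum, 10);

    // IntoIterator for ThinVec<T>: iterate owned elements.
    let doubled: ThinVec<u32> = v.into_iter().map(|x| x * 2).collect();
    assert_eq!(doubled.len(), 4);
}
```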
Add an Mmap wrapper to rustc_data_structures
This wrapper implements StableAddress and falls back to directly reading the file on wasm32.
Taken from #83640, which I will close due to the perf regression.
There isn't currently a good reviewer for these, and I don't want to
remove things that will just be added again. I plan to make a separate
PR for these changes so the rest of the cleanup can land.
- Add back `HirIdVec`, with a comment that it will soon be used.
- Add back `*_region` functions, with a comment they may soon be used.
- Remove `-Z borrowck_stats` completely. It didn't do anything.
- Remove `make_nop` completely.
- Add back `current_loc`, which is used by an out-of-tree tool.
- Fix style nits
- Remove `AtomicCell` with `cfg(parallel_compiler)` for consistency.
Found with https://github.com/est31/warnalyzer.
Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
wants to use it in the future?
- Don't change rustc_serialize
I plan to scrap most of the json module in the near future (see
https://github.com/rust-lang/compiler-team/issues/418) and fixing the
tests needed more work than I expected.
TODO: check if any of the comments on the deleted code should be kept.
Allow for reading raw bytes from rustc_serialize::Decoder without unsafe code
The current `read_raw_bytes` method requires using `MaybeUninit` and `unsafe`. I don't think this is necessary. Let's see if a safe interface has any performance drawbacks.
This is a followup to #83273 and will make it easier to rebase #82183.
r? `@cjgillot`
Update to rustc-rayon 0.3.1
This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t `@ammaraskar`)
That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
Let a portion of DefPathHash uniquely identify the DefPath's crate.
This makes it possible to map directly from a `DefPathHash` to the crate it originates from, without constructing side tables to do that mapping -- something that is useful for incremental compilation, where we deal with `DefPathHash` instead of `DefId` a lot.
It also makes it possible to reliably and cheaply check for `DefPathHash` collisions, which allows the compiler to gracefully abort compilation instead of running into a subsequent ICE at some random place in the code.
The following new piece of documentation describes the most interesting aspects of the changes:
```rust
/// A `DefPathHash` is a fixed-size representation of a `DefPath` that is
/// stable across crate and compilation session boundaries. It consists of two
/// separate 64-bit hashes. The first uniquely identifies the crate this
/// `DefPathHash` originates from (see [StableCrateId]), and the second
/// uniquely identifies the corresponding `DefPath` within that crate. Together
/// they form a unique identifier within an entire crate graph.
///
/// There is a very small chance of hash collisions, which would mean that two
/// different `DefPath`s map to the same `DefPathHash`. Proceeding compilation
/// with such a hash collision would very probably lead to an ICE and, in the
/// worst case, to a silent mis-compilation. The compiler therefore actively
/// and exhaustively checks for such hash collisions and aborts compilation if
/// it finds one.
///
/// `DefPathHash` uses 64-bit hashes for both the crate-id part and the
/// crate-internal part, even though it is likely that there are many more
/// `LocalDefId`s in a single crate than there are individual crates in a crate
/// graph. Since we use the same number of bits in both cases, the collision
/// probability for the crate-local part will be quite a bit higher (though
/// still very small).
///
/// This imbalance is not by accident: A hash collision in the
/// crate-local part of a `DefPathHash` will be detected and reported while
/// compiling the crate in question. Such a collision does not depend on
/// outside factors and can be easily fixed by the crate maintainer (e.g. by
/// renaming the item in question or by bumping the crate version in a harmless
/// way).
///
/// A collision between crate-id hashes on the other hand is harder to fix
/// because it depends on the set of crates in the entire crate graph of a
/// compilation session. Again, using the same crate with a different version
/// number would fix the issue with a high probability -- but that might be
/// easier said than done if the crates in question are dependencies of
/// third-party crates.
///
/// That being said, given a high quality hash function, the collision
/// probabilities in question are very small. For example, for a big crate like
/// `rustc_middle` (with ~50000 `LocalDefId`s as of the time of writing) there
/// is a probability of roughly 1 in 14,750,000,000 of a crate-internal
/// collision occurring. For a big crate graph with 1000 crates in it, there is
/// a probability of 1 in 36,890,000,000,000 of a `StableCrateId` collision.
```
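For reference, the quoted figures follow from the standard birthday approximation (a sketch of the arithmetic, assuming uniformly distributed 64-bit hashes):
```latex
p \approx \binom{n}{2} / 2^{64}

\binom{50000}{2} / 2^{64} \approx 1.25 \times 10^{9} / 1.84 \times 10^{19} \approx 1 / (1.475 \times 10^{10})
\binom{1000}{2}  / 2^{64} \approx 5.0  \times 10^{5} / 1.84 \times 10^{19} \approx 1 / (3.69 \times 10^{13})
```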
Given the probabilities involved I hope that no one will ever actually see the error messages. Nonetheless, I'd be glad about some feedback on how to improve them. Should we create a GH issue describing the problem and possible solutions to point to? Or a page in the rustc book?
r? `@pnkfelix` (feel free to re-assign)
Update measureme dependency to the latest version
This version adds the ability to use `rdpmc` hardware-based performance
counters instead of wall-clock time for measuring duration. This also
introduces a dependency on the `perf-event-open-sys` crate on Linux
which is used when using hardware counters.
r? `@oli-obk`
Replace const_cstr with cstr crate
This PR replaces the `const_cstr` macro inside `rustc_data_structures` with `cstr` macro from [cstr](https://crates.io/crates/cstr) crate.
The two macros basically serve the same purpose, which is to generate `&'static CStr` from a string literal. `cstr` is better because it validates the literal at compile time, while the existing `const_cstr` does it at runtime when `debug_assertions` is enabled. In addition, the value `cstr` generates can be used in constant context (which is seemingly not needed anywhere currently, though).
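A brief usage sketch of the new macro (hedged, based on the crate's documented interface):
```rust
use cstr::cstr;
use std::ffi::CStr;

// Validated at compile time: an embedded NUL byte would be a compile error,
// not a debug_assertions-only runtime check. Usable in constant context.
static VERSION_KEY: &CStr = cstr!("rustc_version");

fn main() {
    assert_eq!(VERSION_KEY.to_bytes(), b"rustc_version");
}
```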
Check the result cache before the DepGraph when ensuring queries
Split out of https://github.com/rust-lang/rust/pull/70951
Calling `ensure` on already forced queries is a common operation.
Looking at the results cache first is faster than checking the DepGraph for a green node.
Indicate change in RSS from start to end of pass in time-passes output
Previously, this was omitted because it could be misleading, but the
functionality seems too useful not to include.
r? `@oli-obk`
This makes it possible to map directly from a DefPathHash to the crate it
originates from, without constructing side tables to do that mapping.
It also makes it possible to reliably and cheaply check for DefPathHash collisions.
Indicate both start and end of pass RSS in time-passes output
Previously, only the end of pass RSS was indicated. This could easily
lead one to believe that the change in RSS from one pass to the next was
attributable to the second pass, when in fact it occurred between the
end of the first pass and the start of the second.
Also, improve alignment of columns.
Sample of output:
```
time:  0.739; rss:  607MB ->  637MB  item_types_checking
time:  8.429; rss:  637MB ->  775MB  item_bodies_checking
time: 11.063; rss:  470MB ->  775MB  type_check_crate
time:  0.232; rss:  775MB ->  777MB  match_checking
time:  0.139; rss:  777MB ->  779MB  liveness_and_intrinsic_checking
time:  0.372; rss:  775MB ->  779MB  misc_checking_2
time:  8.188; rss:  779MB -> 1019MB  MIR_borrow_checking
time:  0.062; rss: 1019MB -> 1021MB  MIR_effect_checking
```
Enforce that query results implement Debug
Currently, we require that query keys implement `Debug`, but we do not do the same for query values. This can make incremental compilation bugs difficult to debug - there isn't a good place to print out the result loaded from disk.
This PR adds `Debug` bounds to several query-related functions, allowing us to debug-print the query value when an 'unstable fingerprint' error occurs. This required adding `#[derive(Debug)]` to a fairly large number of types - hopefully, this doesn't have much of an impact on compiler bootstrapping times.
Before:
```
thread 'rustc' panicked at 'attempt to read from stolen value', /home/joshua/rustc/compiler/rustc_data_structures/src/steal.rs:43:15
```
After:
```
thread 'rustc' panicked at 'attempt to steal from stolen value', compiler/rustc_mir/src/transform/mod.rs:423:25
```
Reduce a large memory spike that happens during serialization by writing
the incr comp structures to file by way of a fixed-size buffer, rather
than an unbounded vector.
Effort was made to keep the instruction count close to that of the
previous implementation. However, buffered writing to a file inherently
has more overhead than writing to a vector, because each write may
result in a handleable error. To reduce this overhead, arrangements are
made so that each LEB128-encoded integer can be written to the buffer
with only one capacity and error check. Higher-level optimizations in
which entire composite structures can be written with one capacity and
error check are possible, but would require much more work.
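A sketch of the single-check idea for one integer (illustrative only; the real encoder works against the fixed-size file buffer rather than a generic `Write` sink):
```rust
use std::io::{self, Write};

// Encode a u64 as ULEB128 into a small stack buffer, then flush it with a
// single fallible call, instead of one capacity/error check per byte.
fn write_u64_leb128<W: Write>(out: &mut W, mut value: u64) -> io::Result<()> {
    let mut tmp = [0u8; 10]; // maximum ULEB128 length of a u64
    let mut i = 0;
    loop {
        if value < 0x80 {
            tmp[i] = value as u8;
            i += 1;
            break;
        }
        tmp[i] = (value as u8 & 0x7f) | 0x80;
        i += 1;
        value >>= 7;
    }
    out.write_all(&tmp[..i]) // the one error check for this integer
}
```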
The performance is mostly on par with the previous implementation, with
small to moderate instruction count regressions. The memory reduction is
significant, however, so it seems like a worthwhile trade-off.
Stabilize or_insert_with_key
Stabilizes the `or_insert_with_key` feature from https://github.com/rust-lang/rust/issues/71024. This allows inserting key-derived values when a `HashMap`/`BTreeMap` entry is vacant.
The difference between this and `.or_insert_with(|| ... )` is that this provides a reference to the key to the closure after it is moved with `.entry(key_being_moved)`, avoiding the need to copy or clone the key.
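For example:
```rust
use std::collections::HashMap;

fn main() {
    let mut lengths: HashMap<String, usize> = HashMap::new();
    // `entry` consumes the key, but the closure still receives `&String`,
    // so no clone is needed to derive the value from the key.
    let len = *lengths
        .entry("stabilize".to_string())
        .or_insert_with_key(|key| key.len());
    assert_eq!(len, 9);
}
```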
They were originally called "opt-in, built-in traits" (OIBITs), but
people realized that the name was too confusing and a mouthful, and so
they were renamed to just "auto traits". The feature flag's name wasn't
updated, though, so that's what this PR does.
There are some other spots in the compiler that still refer to OIBITs,
but I don't think changing those now is worth it since they are internal
and not particularly relevant to this PR.
Also see <https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/opt-in.2C.20built-in.20traits.20(auto.20traits).20feature.20name>.
Rework SCC computation to iteration instead of recursion
Linear graphs, producing as many SCCs as nodes, would recurse once for every node when entered from the start of the list. This adds a test that exhausted the stack, at least on my machine, with this error:
```
thread 'graph::scc::tests::test_deep_linear' has overflowed its stack
fatal runtime error: stack overflow
```
This may or may not be connected to #78567; I was only reminded that I started this rework some time ago. It seems plausible, as borrow checking a long function with many borrow regions nested inside each other -- ((((((…)))))) -- may produce the linear list setup that triggers this stack overflow, but I don't know enough about borrow check to say for sure.
This is best read as two separate commits. The first addresses only `find_state` internally. This is the classical union phase from union-find. There's also a common solution of using the parent pointers in the (virtual) linked list to track the backreferences while traversing upwards, and then following them backwards in a second, path-compressing phase.
The second is more involved, as it rewrites the mutually recursive `walk_node` and `walk_unvisited_node`. Firstly, the caller is required to handle the unvisited case of `walk_node`, so a new `start_walk_from` method is added to handle that by walking the unvisited node if necessary. Then `walk_unvisited_node`, which we would previously recurse into in the unvisited case, is rewritten to maintain a manual stack of its frames. The state fields consist of the previous stack slots.
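To illustrate the general shape of that rewrite (a hedged sketch, not the actual `walk_unvisited_node` code, whose frames carry more state), each recursive call becomes a pushed frame and each return becomes a pop:
```rust
struct Frame {
    node: usize,
    next_successor: usize, // where to resume iterating this node's successors
}

fn start_walk_from(graph: &[Vec<usize>], start: usize, visited: &mut [bool]) {
    if visited[start] {
        return;
    }
    visited[start] = true;
    let mut stack = vec![Frame { node: start, next_successor: 0 }];
    while let Some(frame) = stack.last_mut() {
        let node = frame.node;
        let i = frame.next_successor;
        if let Some(&succ) = graph[node].get(i) {
            frame.next_successor += 1;
            if !visited[succ] {
                visited[succ] = true;
                // the old recursive call becomes a pushed frame
                stack.push(Frame { node: succ, next_successor: 0 });
            }
        } else {
            // all successors handled: this is where the recursion returned
            stack.pop();
        }
    }
}
```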
[self-profiling] Include the estimated size of each cgu in the profile
This is helpful when looking for CGUs where the size estimate isn't a
good indicator of compilation time.
I verified that moving the profiling timer call doesn't affect the
results.
Results:
<img width="297" alt="Screen Shot 2020-11-03 at 7 25 04 AM" src="https://user-images.githubusercontent.com/831192/97985503-5901d100-1da6-11eb-9f10-f3e399702952.png">
`measureme` doesn't have support for custom arg names yet so `arg0` is the CGU name and `arg1` is the estimated size.
Move likely/unlikely argument outside of invisible unsafe block
The previous `likely!`/`unlikely!` macros were unsound because they permitted the caller's expr to contain arbitrary unsafe code.
```rust
pub fn huh() -> bool {
    likely!(std::ptr::read(&() as *const () as *const bool))
}
```
**Before:** compiles cleanly.
**After:**
```console
error[E0133]: call to unsafe function is unsafe and requires unsafe function or block
|
70 | likely!(std::ptr::read(&() as *const () as *const bool))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ call to unsafe function
|
= note: consult the function's documentation for information on how to avoid undefined behavior
```
This allows constructing the SCCs of large graphs that visit many nodes
before finding a single cycle, for example lists. When used to find
dependencies in borrow checking, the list case is what occurs in very
long functions.
The basic conversion is a straightforward rewrite of the linear
recursion into a loop with forwards and backwards propagation of the
result. But this uses an optimization to avoid the extra space that
would otherwise be necessary to store the stack of unfinished states,
as the function is not tail recursive.
Observe that only non-root nodes in cycles have a recursive call and
that every such call overwrites their own node state. Thus we reuse the
node state itself as temporary storage for the stack of unfinished
states, by inverting the links into a chain back to the previous state
update. When we hit the root, or the end of the fully explored chain,
we propagate the node state update backwards by following the chain
until a node that links to itself.
The previous recursive approach might overflow the stack when walking a
particularly deep, list-like graph. In particular, dominator
calculation for borrow checking does such a traversal, and very long
functions might lead to a region dependency graph with this
problematic structure.
Avoid BorrowMutError with RUSTC_LOG=debug
```console
$ touch empty.rs
$ env RUSTC_LOG=debug rustc +stage1 --crate-type=lib empty.rs
```
Fails with a `BorrowMutError` because source map files are already
borrowed while `features_query` attempts to format a log message
containing a span.
Release the borrow before the query to avoid the issue.
perf: buffer SipHasher128
This is an attempt to improve SipHasher128 performance by buffering input. Although it reduces instruction count, I'm not confident the effect on wall times, or lack thereof, is worth the change.
---
Additional notes not reflected in source comments:
* Implementation choices were guided by a combination of results from rustc-perf and micro-benchmarks, mostly the former.
* ~~I tried a couple of different struct layouts that might be more cache friendly with no obvious effect.~~ Update: a particular struct layout was chosen, but it's not critical to performance. See comments in source and discussion below.
* I suspect that buffering would be important to a SIMD-accelerated algorithm, but from what I've read and my own tests, SipHash does not seem very amenable to SIMD acceleration, at least by SSE.
Simplify query proc-macros
The query code generation is split between proc-macros and regular macros in `rustc_middle::ty::query`.
This PR removes unused capabilities of the proc-macros, and tends to use regular macros for the logic.
Try to make ObligationForest more efficient
This PR tries to decrease the number of allocations in ObligationForest, as well as moves some cold path code to an uninlined function.
This is a combination of 18 commits.
Commit #2:
Additional examples and some small improvements.
Commit #3:
fixed mir-opt non-mir extensions and spanview title elements
Corrected a fairly recent assumption in runtest.rs that all MIR dump
files end in .mir. (It was appending .mir to the graphviz .dot and
spanview .html file names when generating blessed output files. That
also left outdated files in the baseline alongside the files with the
incorrect names, which I've now removed.)
Updated spanview HTML title elements to match their content, replacing a
hardcoded and incorrect name that was left in accidentally when
originally submitted.
Commit #4:
added more test examples
also improved Makefiles with support for non-zero exit status and to
force validation of tests unless a specific test overrides it with a
specific comment.
Commit #5:
Fixed rare issues after testing on real-world crate
Commit #6:
Addressed PR feedback, and removed temporary -Zexperimental-coverage
-Zinstrument-coverage once again supports the latest capabilities of
LLVM instrprof coverage instrumentation.
Also fixed a bug in spanview.
Commit #7:
Fix closure handling, add tests for closures and inner items
And cleaned up other tests for consistency, and to make it more clear
where spans start/end by breaking up lines.
Commit #8:
renamed "typical" test results "expected"
Now that the `llvm-cov show` tests are improved to normally expect
matching actuals, and to allow individual tests to override that
expectation.
Commit #9:
test coverage of inline generic struct function
Commit #10:
Addressed review feedback
* Removed unnecessary Unreachable filter.
* Replaced a match wildcard with the remaining variants.
* Added more comments to help clarify the role of successors() in the
CFG traversal
Commit #11:
refactoring based on feedback
* refactored `fn coverage_spans()`.
* changed the way I expand an empty coverage span to improve performance
* fixed a typo that I had accidentally left in, in visit.rs
Commit #12:
Optimized use of SourceMap and SourceFile
Commit #13:
Fixed a regression, and synched with upstream
Some generated test file names changed due to some new change upstream.
Commit #14:
Stripping out crate disambiguators from demangled names
These can vary depending on the test platform.
Commit #15:
Ignore llvm-cov show diff on test with generics, expand IO error message
Tests with generics produce llvm-cov show results with demangled names
that can include an unstable "crate disambiguator" (hex value). The
value changes when run in the Rust CI Windows environment. I added a sed
filter to strip them out (in a prior commit), but sed also appears to
fail in the same environment. Until I can figure out a workaround, I'm
just going to ignore this specific test result. I added a FIXME to
follow up later, but it's not that critical.
I also saw an error with Windows GNU, but the IO error did not
specify a path for the directory or file that triggered it. I
updated the error messages to provide more info for next time, but also
noticed some other tests with similar steps did not fail. Looks
spurious.
Commit #16:
Modify rust-demangler to strip disambiguators by default
Commit #17:
Remove std::process::exit from coverage tests
Due to Issue #77553, programs that call std::process::exit() do not
generate coverage results on Windows MSVC.
Commit #18:
fix: test file paths exceeding Windows max path len
SipHasher128 implements short_write in an endian-independent way, yet
its write_xxx Hasher trait methods undo this endian-independence by byte
swapping the integer inputs on big-endian hardware. StableHasher then
adds endian-independence back by also byte-swapping on big-endian
hardware prior to invoking SipHasher128.
This double swap may have the appearance of being a no-op, but is in
fact by design. In particular, we really do want SipHasher128 to be
platform-dependent, in order to be consistent with the libstd SipHasher.
Try to clarify this intent. Also, add and update a couple of unit tests.
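A minimal sketch of how the two swaps compose (stand-in types, not the actual rustc code):
```rust
use std::hash::Hasher;

struct Sip128(std::collections::hash_map::DefaultHasher); // stand-in core

impl Sip128 {
    // Endian-independent core: consumes raw bytes.
    fn short_write(&mut self, bytes: &[u8]) {
        self.0.write(bytes);
    }
    // Hasher-trait style method: native byte order, i.e. platform-dependent,
    // consistent with libstd's SipHasher.
    fn write_u64(&mut self, x: u64) {
        self.short_write(&x.to_ne_bytes());
    }
}

struct Stable(Sip128); // stand-in for StableHasher

impl Stable {
    // Byte-swap on big-endian first (`to_le`), cancelling the swap above,
    // so the same bytes are fed on every platform.
    fn write_u64(&mut self, x: u64) {
        self.0.write_u64(x.to_le());
    }
}
```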
SsoHashSet::replace had to be removed because
it requires API that is missing from SsoHashMap.
It's not a widely used function, so I think it's ok
to omit it for now.
EitherIter moved into its own file.
Also sprinkled code with #[inline] attributes where appropriate.
Make `ensure_sufficient_stack()` non-generic, using cargo-llvm-lines
Inspired by [this blog post](https://blog.mozilla.org/nnethercote/2020/08/05/how-to-speed-up-the-rust-compiler-some-more-in-2020/) from `@nnethercote`, I used [cargo-llvm-lines](https://github.com/dtolnay/cargo-llvm-lines/) on the rust compiler itself, to improve its compile time. This PR contains only one low-hanging fruit, but I also want to share some measurements.
The function `ensure_sufficient_stack()` was monomorphized 1500 times, and with it the `stacker` and `psm` crates, for a total of 1.5% of all llvm IR lines. With some trickery I convert the generic closure into a dynamic one, and thus all that code is only monomorphized once.
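The pattern looks roughly like this (a hedged sketch with assumed constant values; the closure is type-erased to `&mut dyn FnMut()` so the `stacker`/`psm` machinery is instantiated only once):
```rust
const RED_ZONE: usize = 100 * 1024; // assumed values
const STACK_PER_RECURSION: usize = 1024 * 1024;

// The only monomorphization of stacker::maybe_grow in the crate.
fn grow(f: &mut dyn FnMut()) {
    stacker::maybe_grow(RED_ZONE, STACK_PER_RECURSION, || f());
}

// Generic shim: erases the closure, so the stack-growing machinery is not
// duplicated for every caller's closure type.
pub fn ensure_sufficient_stack<R>(f: impl FnOnce() -> R) -> R {
    let mut f = Some(f);
    let mut ret = None;
    grow(&mut || ret = Some((f.take().unwrap())()));
    ret.unwrap()
}
```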
# Measurements
Getting these numbers took some fiddling with CLI flags and I [modified](https://github.com/Julian-Wollersberger/cargo-llvm-lines/blob/master/src/main.rs#L115) cargo-llvm-lines to read from a folder instead of invoking cargo. Commands I used:
```
./x.py clean
RUSTFLAGS="--emit=llvm-ir -C link-args=-fuse-ld=lld -Z self-profile=profile" CARGOFLAGS_BOOTSTRAP="-Ztimings" RUSTC_BOOTSTRAP=1 ./x.py build -i --stage 1 library/std
# Then manually copy all .ll files into a folder I hardcoded in cargo-llvm-lines in main.rs#L115
cd ../cargo-llvm-lines
cargo run llvm-lines
```
The result is this list (see [first 500 lines](https://github.com/Julian-Wollersberger/cargo-llvm-lines/blob/master/llvm-lines-rustc-before.txt) ), before the change:
```
Lines Copies Function name
----- ------ -------------
16894211 (100%) 58417 (100%) (TOTAL)
2223855 (13.2%) 502 (0.9%) rustc_query_system::query::plumbing::get_query_impl::{{closure}}
1331918 (7.9%) 1287 (2.2%) hashbrown::raw::RawTable<T>::reserve_rehash
774434 (4.6%) 12043 (20.6%) core::ptr::drop_in_place
294170 (1.7%) 499 (0.9%) rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl
245410 (1.5%) 1552 (2.7%) psm::on_stack::with_on_stack
210311 (1.2%) 1 (0.0%) rustc_target::spec::load_specific
200962 (1.2%) 513 (0.9%) rustc_query_system::query::plumbing::get_query_impl
190704 (1.1%) 1 (0.0%) rustc_middle::ty::query::<impl rustc_middle::ty::context::TyCtxt>::alloc_self_profile_query_strings
180272 (1.1%) 468 (0.8%) rustc_query_system::query::plumbing::load_from_disk_and_cache_in_memory
177396 (1.1%) 114 (0.2%) rustc_query_system::query::plumbing::force_query_impl
161134 (1.0%) 445 (0.8%) rustc_query_system::dep_graph::graph::DepGraph<K>::with_anon_task
141551 (0.8%) 186 (0.3%) rustc_query_system::query::plumbing::incremental_verify_ich
110191 (0.7%) 7 (0.0%) rustc_middle::ty::context::_DERIVE_rustc_serialize_Decodable_D_FOR_TypeckResults::<impl rustc_serialize::serialize::Decodable<__D> for rustc_middle::ty::context::TypeckResults>::decode::{{closure}}
108590 (0.6%) 420 (0.7%) core::ops::function::FnOnce::call_once
88488 (0.5%) 21 (0.0%) rustc_query_system::dep_graph::graph::DepGraph<K>::try_mark_previous_green
86368 (0.5%) 1 (0.0%) rustc_middle::ty::query::stats::query_stats
85654 (0.5%) 3973 (6.8%) <&T as core::fmt::Debug>::fmt
84475 (0.5%) 1 (0.0%) rustc_middle::ty::query::Queries::try_collect_active_jobs
81220 (0.5%) 862 (1.5%) <hashbrown::raw::RawIterHash<T> as core::iter::traits::iterator::Iterator>::next
77636 (0.5%) 54 (0.1%) core::slice::sort::recurse
66484 (0.4%) 461 (0.8%) <hashbrown::raw::RawIter<T> as core::iter::traits::iterator::Iterator>::next
```
All `.ll` files together had 4.4GB. After my change they had 4.2GB. So a few percent less code LLVM has to process. Hurray!
Sadly, I couldn't measure an actual wall-time improvement. Watching YouTube while compiling added too much noise...
Here is the top of the list after the change:
```
16460866 (100%) 58341 (100%) (TOTAL)
1903085 (11.6%) 504 (0.9%) rustc_query_system::query::plumbing::get_query_impl::{{closure}}
1331918 (8.1%) 1287 (2.2%) hashbrown::raw::RawTable<T>::reserve_rehash
777796 (4.7%) 12031 (20.6%) core::ptr::drop_in_place
551462 (3.4%) 1519 (2.6%) rustc_data_structures::stack::ensure_sufficient_stack::{{closure}}
```
Note that the total was reduced by 430 000 lines and `psm::on_stack::with_on_stack` has disappeared. Instead `rustc_data_structures::stack::ensure_sufficient_stack::{{closure}}` appeared. I'm confused about that one, but it seems to consist of inlined calls to `rustc_query_system::*` stuff.
Further note the other two big culprits in this list: `rustc_query_system` and `hashbrown`. These two are monomorphized many times, the query system summing to more than 20% of all lines, not even counting code that's probably inlined elsewhere.
Assuming compile times scale linearly with llvm-lines, that means a possible 20% compile time reduction.
Reducing eg. `get_query_impl` would probably need a major refactoring of the query system though. _Everything_ in there is generic over multiple types, has associated types and passes generic Self arguments by value. Which means you can't simply make things `dyn`.
---------------------------------------
This PR is a small step to make rustc compile faster and thus make contributing to rustc less painful. Nonetheless I love Rust and I find the work around rustc fascinating :)
use `array_windows` instead of `windows` in the compiler
I do think these changes are beautiful, but I have to admit that using type inference for the window length
can easily be confusing. This seems like a general issue with const generics, where inferring constants adds additional
complexity that users have to learn and keep in mind.
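For illustration, both the inferred and the explicit form (using std's unstable `array_windows` feature):
```rust
#![feature(array_windows)]

fn main() {
    let xs = [1u32, 2, 3, 4];

    // Window length inferred from the destructuring pattern:
    for &[a, b] in xs.array_windows() {
        assert!(a < b);
    }

    // ...or spelled out via the const generic parameter:
    let sums: Vec<u32> = xs.array_windows::<3>().map(|w| w.iter().sum()).collect();
    assert_eq!(sums, vec![6, 9]);
}
```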
Remove redundant nightly features
Removes a bunch of redundant/outdated nightly features. The first commit removes a `core_intrinsics` use for which a stable wrapper has been provided since. The second commit replaces the `const_generics` feature with `min_const_generics` which might get stabilized this year. The third commit is the result of a trial/error run of removing every single feature and then adding it back if compile failed. A bunch of unused features are the result that the third commit removes.
Avoid rehashing Fingerprint as a map key
This introduces a no-op `Unhasher` for map keys that are already hash-
like, for example `Fingerprint` and its wrapper `DefPathHash`. For these
we can directly produce the `u64` hash for maps. The first use of this
is `def_path_hash_to_def_id: Option<UnhashMap<DefPathHash, DefId>>`.
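A minimal stand-alone sketch of the idea (hedged; the real type lives in rustc_data_structures, and this version assumes keys that hash themselves with a single `write_u64`):
```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

#[derive(Default)]
struct Unhasher {
    value: u64,
}

impl Hasher for Unhasher {
    fn finish(&self) -> u64 {
        self.value
    }
    fn write(&mut self, _: &[u8]) {
        unimplemented!("Unhasher is only for keys that write a single u64");
    }
    // The key is already hash-like: pass it through untouched.
    fn write_u64(&mut self, value: u64) {
        self.value = value;
    }
}

type UnhashMap<K, V> = HashMap<K, V, BuildHasherDefault<Unhasher>>;
```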
cc #56308
r? @eddyb