Commit Graph

626 Commits

Author SHA1 Message Date
Joshua Nelson
4856affd90 Make HandleCycleError an enum instead of a macro-generated closure
- Add a `HandleCycleError` enum to rustc_query_system, along with a `handle_cycle_error` function
- Move `Value` to rustc_query_system, so `handle_cycle_error` can use it
- Move the `Value` impls from rustc_query_impl to rustc_middle. This is necessary due to orphan rules.
2022-09-06 19:26:08 -05:00
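
A minimal sketch of the shape described in the commit above; the variant names and the handler body are illustrative stand-ins, not the actual rustc_query_system definitions:

```rust
// Hedged sketch: an enum replacing a macro-generated closure for cycle
// handling. Variant names are illustrative.
#[derive(Clone, Copy, Debug)]
pub enum HandleCycleError {
    /// Report the cycle as an ordinary error and continue.
    Error,
    /// Abort compilation immediately.
    Fatal,
    /// Delay the error, expecting a later pass to report it.
    DelayBug,
}

// A single function can now match on the enum where the old code generated a
// dedicated closure per query.
fn handle_cycle_error(strategy: HandleCycleError, msg: &str) {
    match strategy {
        HandleCycleError::Error => eprintln!("cycle error: {msg}"),
        HandleCycleError::Fatal => panic!("fatal cycle error: {msg}"),
        HandleCycleError::DelayBug => eprintln!("delayed bug: {msg}"),
    }
}

fn main() {
    handle_cycle_error(HandleCycleError::Error, "example cycle");
}
```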
Jhonny Bill Mena
321e60bf34 UPDATE - into_diagnostic to take a Handler instead of a ParseSess
Suggested by the team in this Zulip Topic https://rust-lang.zulipchat.com/#narrow/stream/336883-i18n/topic/.23100717.20SessionDiagnostic.20on.20Handler

Handler already has almost all the capabilities of ParseSess when it comes to diagnostic emission; in this migration we only needed to add the ability to access source_map from the emitter in order to get a Snippet and the start_point. Not sure if this is the best way to address this gap.
2022-09-05 02:18:45 -04:00
Joshua Nelson
4e09a13bb8 Don't create two new closures for each query
- Parameterize DepKindStruct over `'tcx`

    This allows passing in an invariant function pointer in `query_callback`,
    rather than having to try and make it work for any lifetime.

- Add a new `execute_query` function to `QueryDescription` so we can call `tcx.$name` without needing to be in a macro context
2022-09-01 18:47:54 -05:00
bors
eac6c33bc6 Auto merge of #100869 - nnethercote:replace-ThinVec, r=spastorino
Replace `rustc_data_structures::thin_vec::ThinVec` with `thin_vec::ThinVec`

`rustc_data_structures::thin_vec::ThinVec` looks like this:
```
pub struct ThinVec<T>(Option<Box<Vec<T>>>);
```
It's just a zero word if the vector is empty, but requires two
allocations if it is non-empty. So it's only usable in cases where the
vector is empty most of the time.

This commit removes it in favour of `thin_vec::ThinVec`, which is also
word-sized, but stores the length and capacity in the same allocation as
the elements. It's good in a wider variety of situations, e.g. in enum
variants where the vector is usually/always non-empty.

The commit also:
- Sorts some `Cargo.toml` dependency lists, to make additions easier.
- Sorts some `use` item lists, to make additions easier.
- Changes `clean_trait_ref_with_bindings` to take a
  `ThinVec<TypeBinding>` rather than a `&[TypeBinding]`, because this
  avoids some unnecessary allocations.

r? `@spastorino`
2022-09-01 08:01:06 +00:00
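
A small, self-contained illustration of the layout trade-off described above; `OldThinVec` mirrors the quoted one-field definition, while the closing comment restates the PR's claim about `thin_vec::ThinVec` rather than showing that crate's internals:

```rust
// The old rustc_data_structures-style ThinVec: one word when empty, but two
// heap allocations (Box + Vec buffer) once it holds anything.
struct OldThinVec<T>(Option<Box<Vec<T>>>);

fn main() {
    // Still a single word thanks to the non-null Box niche.
    assert_eq!(
        std::mem::size_of::<OldThinVec<u32>>(),
        std::mem::size_of::<usize>()
    );

    // Making it non-empty allocates twice: once for the Box, once for the
    // Vec's element buffer.
    let mut v = OldThinVec(None);
    v.0.get_or_insert_with(|| Box::new(Vec::new())).push(1u32);
    assert_eq!(v.0.as_ref().unwrap().len(), 1);

    // thin_vec::ThinVec is also one word, but keeps length, capacity and
    // elements in a single allocation, so it stays cheap even when the
    // vector is usually non-empty (e.g. in enum variants).
}
```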
Li Yuanheng
7ce59ebf49 SessionDiagnostic for QueryOverflow error 2022-08-31 19:43:23 +08:00
Li Yuanheng
166aef90fb delete leftover comment 2022-08-31 19:43:23 +08:00
Yuanheng Li
ac638c1f5f use derive proc macro to impl SessionDiagnostic
fixes `SessionSubdiagnostic` to accept multiple attributes
emitting a list of Fluent messages remains unresolved
2022-08-31 19:43:12 +08:00
Li Yuanheng
d7e07c0b6b link related issue to FIXME
Co-authored-by: David Wood <agile.lion3441@fuligin.ink>
2022-08-31 19:33:51 +08:00
Yuanheng Li
b8075e4550 migrate rustc_query_system to use SessionDiagnostic
with manual impl SessionDiagnostic
2022-08-31 19:33:51 +08:00
Nicholas Nethercote
b38106b6d8 Replace rustc_data_structures::thin_vec::ThinVec with thin_vec::ThinVec.
`rustc_data_structures::thin_vec::ThinVec` looks like this:
```
pub struct ThinVec<T>(Option<Box<Vec<T>>>);
```
It's just a zero word if the vector is empty, but requires two
allocations if it is non-empty. So it's only usable in cases where the
vector is empty most of the time.

This commit removes it in favour of `thin_vec::ThinVec`, which is also
word-sized, but stores the length and capacity in the same allocation as
the elements. It's good in a wider variety of situations, e.g. in enum
variants where the vector is usually/always non-empty.

The commit also:
- Sorts some `Cargo.toml` dependency lists, to make additions easier.
- Sorts some `use` item lists, to make additions easier.
- Changes `clean_trait_ref_with_bindings` to take a
  `ThinVec<TypeBinding>` rather than a `&[TypeBinding]`, because this
  avoids some unnecessary allocations.
2022-08-29 15:42:13 +10:00
SparrowLii
cbc6bd2019 add depth_limit in QueryVTable 2022-08-24 09:42:12 +08:00
bors
14a459bf37 Auto merge of #100441 - nnethercote:shrink-ast-Attribute, r=petrochenkov
Shrink `ast::Attribute`.

r? `@ghost`
2022-08-16 07:54:22 +00:00
Nicholas Nethercote
85a6cd6a47 Shrink ast::Attribute. 2022-08-16 11:10:13 +10:00
Camille GILLOT
5d75ca5ef4 Remove unused hashing infra. 2022-08-07 17:51:55 +02:00
Camille GILLOT
8a4cbcf220 Derive HashStable for HIR Expr and Ty. 2022-08-07 17:30:45 +02:00
Camille GILLOT
b0047c18cb Stop forcing the hashing of bodies in types and expressions. 2022-08-07 17:30:45 +02:00
Camille GILLOT
2134dd3b48 Remove useless closure. 2022-07-29 22:11:23 +02:00
Camille GILLOT
6733bc3066 Remove guess_head_span. 2022-07-28 23:14:04 +02:00
Ralf Jung
3dad266f40 consistently use VTable over Vtable (matching stable stdlib API RawWakerVTable) 2022-07-20 17:12:07 -04:00
Joshua Nelson
3c9765cff1 Rename debugging_opts to unstable_opts
This is no longer used only for debugging options (e.g. `-Zoutput-width`, `-Zallow-features`).
Rename it to be more clear.
2022-07-13 17:47:06 -05:00
bors
86b8dd5389 Auto merge of #99028 - tmiasko:inline, r=estebank
Miscellaneous inlining improvements

Add `#[inline]` to a few trivial non-generic methods from a perf report
that otherwise wouldn't be candidates for inlining.
2022-07-09 04:34:51 +00:00
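
A trivial illustration of the kind of change described above, with an invented type and method: a small non-generic method gets `#[inline]` so it becomes a candidate for cross-crate inlining, which it otherwise would not be.

```rust
pub struct Counter {
    value: u64,
}

impl Counter {
    // Non-generic functions are normally codegen'd only in their own crate;
    // #[inline] makes this tiny accessor available for cross-crate inlining.
    #[inline]
    pub fn value(&self) -> u64 {
        self.value
    }
}
```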
Tomasz Miąsko
87374de3ad Miscellaneous inlining improvements
Add `#[inline]` to a few trivial non-generic methods from a perf report
that otherwise wouldn't be candidates for inlining.
2022-07-07 22:20:08 +02:00
bors
0f573a0c54 Auto merge of #95573 - cjgillot:lower-query, r=michaelwoerister
Make lowering a query

Split from https://github.com/rust-lang/rust/pull/88186.

This PR refactors the relationship between lowering and the resolver outputs in order to make lowering itself a query.
In a first part, lowering is changed to avoid modifying resolver outputs, by maintaining its own data structures for creating new `NodeId`s and so on.

Then, the `TyCtxt` is modified to allow creating new `LocalDefId`s from inside it. This is done by:
- enclosing `Definitions` in a lock, so as to allow modification;
- creating a query `register_def` whose purpose is to declare a `LocalDefId` to the query system.

See `TyCtxt::create_def` and `TyCtxt::iter_local_def_id` for more detailed explanations of the design.
2022-07-07 18:14:44 +00:00
Camille GILLOT
e912c8dfe0 Use a dedicated DepKind for the forever-red node. 2022-07-06 23:20:12 +02:00
Camille GILLOT
15530a1c84 Create a forever red node and use it to force side effects. 2022-07-06 23:11:44 +02:00
Camille GILLOT
250c71b85d Make AST lowering a query. 2022-07-06 23:04:55 +02:00
Camille GILLOT
43bb31b954 Allow to create definitions inside the query system. 2022-07-06 22:50:55 +02:00
David Wood
8371a036ea incr: cache dwarf objects in work products
Cache DWARF objects alongside object files in work products when those
exist so that DWARF object files are available for thorin in packed mode
in incremental scenarios.

Signed-off-by: David Wood <david.wood@huawei.com>
2022-07-06 11:15:13 +01:00
bors
8c52a83c45 Auto merge of #98570 - SparrowLii:deadlock, r=cjgillot
get rid of `tcx` in deadlock handler when parallel compilation

This is a very obscure and hard-to-trace problem that affects thread scheduling. If we copy `tcx` to the deadlock handler thread, it will behave unpredictably and cause very strange problems when executing `try_collect_active_jobs` (for example, the deadlock handler thread suddenly preempts the content of the blocked worker thread and executes an unexpected branch, as in #94654).
Fortunately, we can avoid this behavior by precomputing `query_map`. This change fixes the following ui test failures in my environment when `parallel-compiler = true` is set:
```
    [ui] src/test\ui\async-await\no-const-async.rs
    [ui] src/test\ui\infinite\infinite-struct.rs
    [ui] src/test\ui\infinite\infinite-tag-type-recursion.rs
    [ui] src/test\ui\issues\issue-3008-1.rs
    [ui] src/test\ui\issues\issue-3008-2.rs
    [ui] src/test\ui\issues\issue-32326.rs
    [ui] src/test\ui\issues\issue-57271.rs
    [ui] src/test\ui\issues\issue-72554.rs
    [ui] src/test\ui\parser\fn-header-semantic-fail.rs
    [ui] src/test\ui\union\union-nonrepresentable.rs
```

Updates #75760
Fixes #94654
2022-07-03 02:05:14 +00:00
SparrowLii
fbca21edd2 get rid of tcx in deadlock handler when parallel compilation 2022-06-29 10:02:30 +08:00
Nicholas Nethercote
7c40661ddb Update smallvec to 1.8.1.
This pulls in https://github.com/servo/rust-smallvec/pull/282, which
gives some small wins for rustc.
2022-06-27 08:48:55 +10:00
Gary Guo
8b7299dd12 Remove likely! and unlikely! macro from compiler 2022-06-18 04:52:11 +01:00
bors
3a8b0144c8 Auto merge of #98106 - cjgillot:split-definitions, r=michaelwoerister
Split up `Definitions` and `ResolverAstLowering`.

Split off https://github.com/rust-lang/rust/pull/95573

r? `@michaelwoerister`
2022-06-17 10:00:11 +00:00
Nicholas Nethercote
bb02cc47c4 Move finish out of the Encoder trait.
This simplifies things, but requires making `CacheEncoder` non-generic.

(This was previously merged as commit 4 in #94732 and then was reverted
in #97905 because it caused a perf regression.)
2022-06-16 16:20:32 +10:00
bors
ebe184a693 Auto merge of #98084 - nnethercote:rm-thread-local-IGNORED_ATTRIBUTES, r=michaelwoerister
Remove thread-local `IGNORED_ATTRIBUTES`.

It's just a copy of the read-only global `ich::IGNORED_ATTRIBUTES`, and
can be removed without any effect.

r? `@michaelwoerister`
2022-06-15 08:20:19 +00:00
Yuki Okushi
97b9347c93
Rollup merge of #98083 - nnethercote:rename-Encoder, r=bjorn3
Rename rustc_serialize::opaque::Encoder as MemEncoder.

This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).

(This was previously merged as commit 5 in #94732 and then was reverted
in #97905 because of a perf regression caused by commit 4 in #94732.)

r? `@bjorn3`
2022-06-15 12:02:04 +09:00
Camille GILLOT
34e4d72929 Separate source_span and expn_that_defined from Definitions. 2022-06-14 22:45:51 +02:00
Nicholas Nethercote
a2801625cd Remove thread-local IGNORED_ATTRIBUTES.
It's just a copy of the read-only global `ich::IGNORED_ATTRIBUTES`, and
can be removed without any effect.
2022-06-14 15:20:54 +10:00
Nicholas Nethercote
abe45a9ffa Rename rustc_serialize::opaque::Encoder as MemEncoder.
This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots of qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).

(This was previously merged as commit 5 in #94732 and then was reverted
in #97905 because of a perf regression caused by commit 4 in #94732.)
2022-06-14 14:52:01 +10:00
klensy
4ea4e2e76d remove currently unused deps 2022-06-13 22:20:51 +03:00
Nicholas Nethercote
3186e311e5 Revert dc08bc51f2. 2022-06-10 11:58:29 +10:00
Nicholas Nethercote
7f51a1b976 Revert b983e42936. 2022-06-10 08:35:03 +10:00
Nicholas Nethercote
b983e42936 Rename rustc_serialize::opaque::Encoder as MemEncoder.
This avoids the name clash with `rustc_serialize::Encoder` (a trait),
and allows lots of qualifiers to be removed and imports to be simplified
(e.g. fewer `as` imports).
2022-06-08 09:50:44 +10:00
Nicholas Nethercote
dc08bc51f2 Move finish out of the Encoder trait.
This simplifies things, but requires making `CacheEncoder` non-generic.
2022-06-08 09:21:05 +10:00
Nicholas Nethercote
1acbe7573d Use delayed error handling for Encodable and Encoder infallible.
There are two impls of the `Encoder` trait: `opaque::Encoder` and
`opaque::FileEncoder`. The former encodes into memory and is infallible, the
latter writes to file and is fallible.

Currently, standard `Result`/`?`/`unwrap` error handling is used, but this is a
bit verbose and has non-trivial cost, which is annoying given how rare failures
are (especially in the infallible `opaque::Encoder` case).

This commit changes how `Encoder` fallibility is handled. All the `emit_*`
methods are now infallible. `opaque::Encoder` requires no great changes for
this. `opaque::FileEncoder` now implements a delayed error handling strategy.
If a failure occurs, it records this via the `res` field, and all subsequent
encoding operations are skipped if `res` indicates an error has occurred. Once
encoding is complete, the new `finish` method is called, which returns a
`Result`. In other words, there is now a single `Result`-producing method
instead of many of them.

This has very little effect on how any file errors are reported if
`opaque::FileEncoder` has any failures.

Much of this commit is boring mechanical changes, removing `Result` return
values and `?` or `unwrap` from expressions. The more interesting parts are as
follows.
- serialize.rs: The `Encoder` trait gains an `Ok` associated type. The
  `into_inner` method is changed into `finish`, which returns
  `Result<Vec<u8>, !>`.
- opaque.rs: The `FileEncoder` adopts the delayed error handling
  strategy. Its `Ok` type is a `usize`, returning the number of bytes
  written, replacing previous uses of `FileEncoder::position`.
- Various methods that take an encoder now consume it, rather than being
  passed a mutable reference, e.g. `serialize_query_result_cache`.
2022-06-08 07:01:26 +10:00
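
A compact sketch of the delayed-error-handling strategy the commit describes, using invented names and plain `std::io` rather than the actual `FileEncoder`:

```rust
use std::fs::File;
use std::io::{self, Write};

// Invented stand-in: emit_* methods are infallible, a stored `res` remembers
// the first failure, and `finish` is the single Result-producing method.
struct SketchFileEncoder {
    file: File,
    res: Result<(), io::Error>,
}

impl SketchFileEncoder {
    fn emit_bytes(&mut self, bytes: &[u8]) {
        // Skip all further work once an error has been recorded.
        if self.res.is_ok() {
            if let Err(e) = self.file.write_all(bytes) {
                self.res = Err(e);
            }
        }
    }

    fn finish(self) -> Result<(), io::Error> {
        self.res
    }
}

fn main() -> io::Result<()> {
    let mut enc = SketchFileEncoder { file: File::create("out.bin")?, res: Ok(()) };
    enc.emit_bytes(b"hello ");
    enc.emit_bytes(b"world");
    enc.finish() // errors from any of the writes surface here, once
}
```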
bjorn3
e16c3b4a44 Make saved_file field of WorkProduct non-optional
A WorkProduct without a saved file is useless
2022-06-06 12:39:32 +00:00
bjorn3
85136dbe56 Remove unnecessary cgu name length hash
This is a tiny optimization
2022-06-06 12:29:44 +00:00
Nicholas Nethercote
72de7c4102 Address review comments. 2022-06-02 12:22:04 +10:00
Nicholas Nethercote
0b81d7cdc6 Lazify SourceFile::lines.
`SourceFile::lines` is a big part of metadata. It's stored in a compressed form
(a difference list) to save disk space. Decoding it is a big fraction of
compile time for very small crates/programs.

This commit introduces a new type `SourceFileLines` which has a `Lines`
form and a `Diffs` form. The latter is used when the metadata is first
read, and it is only decoded into the `Lines` form when line data is
actually needed. This avoids the decoding cost for many files,
especially in `std`. It's a performance win of up to 15% for tiny
crates/programs where metadata decoding is a large part of compilation
costs.

A `Lock` is needed because the methods that access lines data (which can
trigger decoding) take `&self` rather than `&mut self`. To allow for this,
`SourceFile::lines` now takes a `FnMut` that operates on the lines slice rather
than returning the lines slice.
2022-06-01 10:36:39 +10:00
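
A simplified sketch of the lazy decoding described above; a `Mutex` stands in for the compiler's `Lock`, the closure is `FnOnce` instead of `FnMut`, and all names are invented:

```rust
use std::sync::Mutex;

// Either the decoded absolute line starts, or the compressed difference list
// as read from metadata.
enum LinesSketch {
    Lines(Vec<u32>),
    Diffs { start: u32, diffs: Vec<u32> },
}

struct SourceFileSketch {
    lines: Mutex<LinesSketch>,
}

impl SourceFileSketch {
    // Takes &self; decoding happens behind the lock on first use, and callers
    // receive the slice through a closure rather than a returned reference.
    fn lines<R>(&self, f: impl FnOnce(&[u32]) -> R) -> R {
        let mut guard = self.lines.lock().unwrap();
        if let LinesSketch::Diffs { start, diffs } = &*guard {
            let mut pos = *start;
            let mut decoded = Vec::with_capacity(diffs.len() + 1);
            decoded.push(pos);
            for d in diffs {
                pos += d;
                decoded.push(pos);
            }
            *guard = LinesSketch::Lines(decoded);
        }
        match &*guard {
            LinesSketch::Lines(lines) => f(lines),
            LinesSketch::Diffs { .. } => unreachable!("decoded above"),
        }
    }
}

fn main() {
    let file = SourceFileSketch {
        lines: Mutex::new(LinesSketch::Diffs { start: 0, diffs: vec![10, 25, 7] }),
    };
    // The first access pays the decoding cost; later accesses reuse the result.
    file.lines(|lines| assert_eq!(lines, &[0u32, 10, 35, 42][..]));
}
```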
bors
0f06824013 Auto merge of #97287 - compiler-errors:type-interner, r=jackh726,oli-obk
Move things to `rustc_type_ir`

Finishes some work proposed in https://github.com/rust-lang/compiler-team/issues/341.

r? `@ghost`
2022-05-29 08:20:13 +00:00
Michael Goulet
4638915940 Make TyCtxt implement Interner, make HashStable generic and move to rustc_type_ir 2022-05-28 12:16:05 -07:00
Wilco Kusee
a7015fe816 Move things to rustc_type_ir 2022-05-28 11:38:22 -07:00
Josh Stone
ab57e36268 Update to rebased rustc-rayon 0.4 2022-05-27 20:20:41 -07:00
bors
574830f573 Auto merge of #96094 - Elliot-Roberts:fix_doctests, r=compiler-errors
Begin fixing all the broken doctests in `compiler/`

Begins to fix #95994.
All of them pass now, but I've marked 24 of them with `ignore HELP (<explanation>)` (asking for help), as I'm unsure how to get them to work or whether we should leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring after it in parentheses, with

- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.

Those reason notes are a bit inconsistently applied and messy though. If that's important I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions other people have used for this sort of thing. I could try unifying them all if that would be helpful.

I'm not sure if there was a better existing way to do this but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
2022-05-07 06:30:29 +00:00
Yuki Okushi
ade123275d
Rollup merge of #96697 - oli-obk:trace_queries, r=michaelwoerister
Enable tracing for all queries

This allows you to log everything within a specific query, e.g.

```
env RUSTC_LOG=[mir_borrowck]
```

dumping all borrowck queries may be a bit verbose, so you can also restrict it to just an item of your choice:

```
env RUSTC_LOG=[mir_borrowck{key=\.\*name_of_item\.\*}]
```

The regex `.*` in the key name is needed because the key is a debug-printed DefId, so you'd get all kinds of things like hashes in there. The tracing logs will show you the key, so you can restrict it further if you want.
2022-05-05 10:20:38 +09:00
Oli Scherer
0d5a738b8b Enable tracing for all queries 2022-05-04 16:15:26 +00:00
Josh Triplett
0fc5c524f5 Stabilize bool::then_some 2022-05-04 13:22:08 +02:00
Elliot Roberts
7907385999 fix most compiler/ doctests 2022-05-02 17:40:30 -07:00
Camille GILLOT
443333dc1f Remove NodeIdHashingMode. 2022-04-12 19:59:32 +02:00
bors
e980c62955 Auto merge of #95524 - oli-obk:cached_stable_hash_cleanups, r=nnethercote
Cached stable hash cleanups

r? `@nnethercote`

Add a sanity assertion in debug mode to check that the cached hashes are actually the ones we get if we compute the hash each time.

Add a new data structure that bundles all the hash-caching work to make it easier to re-use it for different interned data structures
2022-04-09 02:31:24 +00:00
David Wood
7f91697b50 errors: implement fallback diagnostic translation
This commit updates the signatures of all diagnostic functions to accept
types that can be converted into a `DiagnosticMessage`. This enables
existing diagnostic calls to continue to work as before and Fluent
identifiers to be provided. The `SessionDiagnostic` derive just
generates normal diagnostic calls, so these APIs had to be modified to
accept Fluent identifiers.

In addition, loading of the "fallback" Fluent bundle, which contains the
built-in English messages, has been implemented.

Each diagnostic now has "arguments" which correspond to variables in the
Fluent messages (necessary to render a Fluent message) but no API for
adding arguments has been added yet. Therefore, diagnostics (that do not
require interpolation) can be converted to use Fluent identifiers and
will be output as before.
2022-04-05 07:01:02 +01:00
Oli Scherer
00c24dd8ce Move stable hash from TyS into a datastructure that can be shared with other interned types. 2022-03-31 14:54:04 +00:00
Yuri Astrakhan
5160f8f843 Spellchecking compiler comments
This PR cleans up the rest of the spelling mistakes in the compiler comments. This PR does not change any literal or code spelling issues.
2022-03-30 15:14:15 -04:00
bors
f132bcf3bd Auto merge of #94081 - oli-obk:lazy_tait_take_two, r=nikomatsakis
Lazy type-alias-impl-trait take two

### user visible change 1: RPIT inference from recursive call sites

Lazy TAIT has an insta-stable change. The following snippet now compiles, because opaque types can now have their hidden type set from wherever the opaque type is mentioned.

```rust
fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return 42
    }
    let x: u32 = bar(false); // this errors on stable
    99
}
```

The return type of `bar` stays opaque; you can't do `bar(false) + 42`, you need to actually mention the hidden type.

### user visible change 2: divergence between RPIT and TAIT in return statements

Note that `return` statements and the trailing return expression are special with RPIT (but not TAIT). So

```rust
#![feature(type_alias_impl_trait)]
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![42];
    }
    std::iter::empty().collect() //~ ERROR `Foo` cannot be built from an iterator
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![42]
    }
    std::iter::empty().collect() // Works, magic (accidentally stabilized, not intended)
}
```

But when we are working with the return value of a recursive call, the behavior of RPIT and TAIT is the same:

```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![];
    }
    let mut x = foo(false);
    x = std::iter::empty().collect(); //~ ERROR `Foo` cannot be built from an iterator
    vec![]
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![];
    }
    let mut x = bar(false);
    x = std::iter::empty().collect(); //~ ERROR `impl Debug` cannot be built from an iterator
    vec![]
}
```

### user visible change 3: TAIT does not merge types across branches

In contrast to RPIT, TAIT does not merge types across branches, so the following does not compile.

```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        vec![42_i32]
    } else {
        std::iter::empty().collect()
        //~^ ERROR `Foo` cannot be built from an iterator over elements of type `_`
    }
}
```

It is easy to support, but we should make an explicit decision to include the additional complexity in the implementation (it's not much, see a721052457cf513487fb4266e3ade65c29b272d2 which needs to be reverted to enable this).

### PR formalities

previous attempt: #92007

This PR also includes #92306 and #93783, as they were reverted along with #92007 in #93893

fixes #93411
fixes #88236
fixes #89312
fixes #87340
fixes #86800
fixes #86719
fixes #84073
fixes #83919
fixes #82139
fixes #77987
fixes #74282
fixes #67830
fixes #62742
fixes #54895
2022-03-30 05:04:45 +00:00
Oli Scherer
264cd05b16 Revert "Auto merge of #93893 - oli-obk:sad_revert, r=oli-obk"
This reverts commit 6499c5e7fc, reversing
changes made to 78450d2d60.
2022-03-28 16:27:14 +00:00
klensy
008fc79dcd Propagate parallel_compiler feature through rustc crates. With the feature turned off, the number of built crates drops from 238 to 224. 2022-03-28 08:41:12 +03:00
Camille GILLOT
056951d628 Take &mut Diagnostic in emit_diagnostic.
Taking a Diagnostic by move would break the usual pattern
`diag.label(..).emit()`.
2022-03-20 20:36:08 +01:00
mark
e489a94dee rename ErrorReported -> ErrorGuaranteed 2022-03-02 09:45:25 -06:00
bors
3b1fe7e7c9 Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, this is just adding needless overhead at compilation time (since there is only one shard statically anyway). This amounts to a roughly 10-second reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.

Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
2022-02-27 14:04:07 +00:00
Mark Rousskov
22c3a71de1 Switch bootstrap cfgs 2022-02-25 08:00:52 -05:00
Eduard-Mihai Burtescu
b7e95dee65 rustc_errors: let DiagnosticBuilder::emit return a "guarantee of emission". 2022-02-23 06:38:52 +00:00
bors
58a721af9f Auto merge of #93839 - Mark-Simulacrum:delete-json-rust-deserialization, r=nnethercote
Simplify rustc_serialize by dropping support for decoding into JSON

This PR currently bundles two (somewhat separate) tasks.

First, it removes the JSON Decoder trait impl, which permitted going from JSON to Rust structs. For now, we keep supporting JSON deserialization, but only to `Json` (an equivalent of serde_json::Value). The primary hard-to-remove user there is custom targets -- which need some form of JSON deserialization -- but they already have a custom ad-hoc pass for moving from Json to a Rust struct.

A [comment](e7aca89598/compiler/rustc_target/src/spec/mod.rs (L1653)) there suggests that it would be impractical to move them to a Decodable-based impl, at least without backwards compatibility concerns. I suspect that if we were widely breaking compat there, it would make sense to use serde_json at this point which would produce better error messages; the types in rustc_target are relatively isolated so we would not particularly suffer from using serde_derive.

The second part of the PR (all but the first commit) is to simplify the Decoder API by removing the non-primitive `read_*` functions. These primarily add indirection (through a closure), which doesn't directly cause a performance issue (the unique closure types essentially guarantee monomorphization), but does increase the amount of work rustc and LLVM need to do. This could be split out to a separate PR, but is included here in part to help motivate the first part.

Future work might consist of:

* Specializing enum discriminant encoding to avoid leb128 for small enums (since we know the variant count, we can directly use read/write u8 in almost all cases)
* Adding new methods to support faster deserialization (e.g., access to the underlying byte stream)
   * Currently these are somewhat ad-hoc supported by specializations for e.g. `Vec<u8>`, but other types which could benefit don't today.
* Removing the Decoder trait entirely in favor of a concrete type -- today, we only really have one impl of it modulo wrappers used for specialization-based dispatch.

Highly recommend review with whitespace changes off, as the removal of closures frequently causes things to be de-indented.
2022-02-22 07:54:22 +00:00
Mark Rousskov
42904b0219 Delete Decoder::read_seq 2022-02-20 18:58:23 -05:00
Mark Rousskov
24dc052132 Delete Decoder::read_seq_elt 2022-02-20 18:58:22 -05:00
Mark Rousskov
19288951e1 Delete Decoder::read_struct_field 2022-02-20 18:58:22 -05:00
Mark Rousskov
c021ba48a7 Delete Decoder::read_struct 2022-02-20 18:58:22 -05:00
Mark Rousskov
594ea74bf0 Refactor Sharded out of non-parallel active query map 2022-02-20 15:10:19 -05:00
Mark Rousskov
41f124c824 Avoid sharding query caches entirely in single-threaded mode 2022-02-20 14:57:34 -05:00
Mark Rousskov
8443816176 Inline QueryStateShard into QueryState 2022-02-20 12:11:28 -05:00
Mark Rousskov
75ef068920 Delete QueryLookup
This was largely just caching the shard value at this point, which is not
particularly useful -- in the use sites the key was being hashed nearby anyway.
2022-02-20 12:11:28 -05:00
Mark Rousskov
9deed6f74e Move Sharded maps into each QueryCache impl 2022-02-20 12:10:46 -05:00
Mark Rousskov
ddda851fd5 Remove SimpleDefKind 2022-02-17 18:08:45 -05:00
Oli Scherer
d54195db22 Revert "Auto merge of #92007 - oli-obk:lazy_tait2, r=nikomatsakis"
This reverts commit e7cc3bddbe, reversing
changes made to 734368a200.
2022-02-11 07:18:06 +00:00
bors
e7aca89598 Auto merge of #93741 - Mark-Simulacrum:global-job-id, r=cjgillot
Refactor query system to maintain a global job id counter

This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.

The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
2022-02-09 18:54:30 +00:00
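
A minimal sketch of the single-counter design described above, with `Cell<u64>` as the backing storage; the type and method names are invented:

```rust
use std::cell::Cell;

struct JobIdCounter {
    // Cell<u64> suffices in the non-parallel case; a parallel compiler would
    // need an atomic or, as the PR notes, a thread-local component.
    next: Cell<u64>,
}

impl JobIdCounter {
    fn new() -> Self {
        JobIdCounter { next: Cell::new(1) }
    }

    // Hands out unique ids; a u64 will not realistically overflow here.
    fn next_job_id(&self) -> u64 {
        let id = self.next.get();
        self.next.set(id + 1);
        id
    }
}

fn main() {
    let counter = JobIdCounter::new();
    assert_eq!(counter.next_job_id(), 1);
    assert_eq!(counter.next_job_id(), 2);
}
```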
Mark Rousskov
e240783a4d Switch QueryJobId to a single global counter
This replaces the per-shard counters with a single global counter, simplifying
the JobId struct down to just a u64 and removing the need to pipe a DepKind
generic through a bunch of code. The performance implications on non-parallel
compilers are likely minimal (this switches to `Cell<u64>` as the backing
storage over a `u64`, but the latter was already inside a `RefCell` so it's not
really a significant divergence). On parallel compilers, the cost of a single
global u64 counter may be more significant: it adds a serialization point in
theory. On the other hand, we can imagine changing the counter to have a
thread-local component if it becomes worrisome or some similar structure.

The new design is sufficiently simpler that it warrants the potential for slight
changes down the line if/when we get parallel compilation to be more of a
default.

A u64 counter, instead of u32 (the old per-shard width), is chosen to avoid
possibly overflowing it and causing problems; it is effectively impossible that
we would overflow a u64 counter in this context.
2022-02-08 18:49:55 -05:00
bors
e7cc3bddbe Auto merge of #92007 - oli-obk:lazy_tait2, r=nikomatsakis
Lazy type-alias-impl-trait

Previously opaque types were processed by

1. replacing all mentions of them with inference variables
2. memorizing these inference variables in a side-table
3. at the end of typeck, resolve the inference variables in the side table and use the resolved type as the hidden type of the opaque type

This worked okayish for `impl Trait` in return position, but required lots of roundabout type inference hacks and processing.

This PR instead stops this process of replacing opaque types with inference variables, and just keeps the opaque types around.
Whenever an opaque type `O` is compared with another type `T`, we make the comparison succeed and record `T` as the hidden type. If `O` is compared to `U` while there is a recorded hidden type for it, we grab the recorded type (`T`) and compare that against `U`. This makes implementing

* https://github.com/rust-lang/rfcs/pull/2515

much simpler (previous attempts on the inference based scheme were very prone to ICEs and general misbehaviour that was not explainable except by random implementation defined oddities).

r? `@nikomatsakis`

fixes #93411
fixes #88236
2022-02-07 23:40:26 +00:00
Oli Scherer
0f6e06b7c0 Lazily resolve type-alias-impl-trait defining uses
by using an opaque type obligation to bubble up comparisons between opaque types and other types

Also uses proper obligation causes so that the body id works, because for some reason nll uses body ids for logic instead of just diagnostics.
2022-02-02 15:40:11 +00:00
est31
08be313feb Use Option::then in two places 2022-02-02 16:10:16 +01:00
lcnr
a1a30f7548 add a rustc::query_stability lint 2022-02-01 10:15:59 +01:00
Nicholas Nethercote
37fbd91eb5 Address review comments. 2022-01-22 10:38:34 +11:00
Nicholas Nethercote
416399dc10 Make Decodable and Decoder infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
  currently panics on failure (e.g. if the input is too short, or on a
  bad `Result` discriminant), and in some places it returns an error
  (e.g. on a bad `Option` discriminant). The number of places where
  either happens is surprisingly small, just because the binary
  representation has very little redundancy and a lot of input reading
  can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
  `.rlink` file production, and there's a `FIXME` comment suggesting it
  should change to a binary format, and (b) in a few tests in
  non-fundamental ways. Indeed #85993 is open to remove it entirely.

And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.

Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
  optimization for small counts that the impl for `Result<T, E>` has,
  because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
  `collect`, which is nice; the one for `Vec` uses unsafe code, because
  that gave better perf on some benchmarks.
2022-01-22 10:38:31 +11:00
bors
42852d7857 Auto merge of #92740 - cuviper:update-rayons, r=Mark-Simulacrum
Update rayon and rustc-rayon

This updates rayon for various tools and rustc-rayon for the compiler's parallel mode.

- rayon v1.3.1 -> v1.5.1
- rayon-core v1.7.1 -> v1.9.1
- rustc-rayon v0.3.1 -> v0.3.2
- rustc-rayon-core v0.3.1 -> v0.3.2

... and indirectly, this updates all of crossbeam-* to their latest versions.

Fixes #92677 by removing crossbeam-queue, but there's still a lingering question about how tidy discovers "runtime" dependencies. None of this is truly in the standard library's dependency tree at all.
2022-01-16 08:12:23 +00:00
bors
02c9e73e6c Auto merge of #92681 - Aaron1011:task-deps-ref, r=cjgillot
Introduce new `TaskDepsRef` enum to track allow/ignore/forbid status
2022-01-14 14:20:17 +00:00
Josh Stone
f3b8812f24 Update rayon and rustc-rayon 2022-01-10 11:34:07 -08:00
bors
d63a8d965e Auto merge of #92278 - Aaron1011:fix-fingerprint-caching, r=michaelwoerister
Ensure that `Fingerprint` caching respects hashing configuration

Fixes #92266

In some `HashStable` impls, we use a cache to avoid re-computing
the same `Fingerprint` from the same structure (e.g. an `AdtDef`).
However, the `StableHashingContext` used can be configured to
perform hashing in different ways (e.g. skipping `Span`s). This
configuration information is not included in the cache key,
which will cause an incorrect `Fingerprint` to be used if
we hash the same structure with different `StableHashingContext`
settings.

To fix this, the configuration settings of `StableHashingContext`
are split out into a separate `HashingControls` struct. This
struct is used as part of the cache key, ensuring that our caches
always produce the correct result for the given settings.

With this in place, we now turn off `Span` hashing during the
entire process of computing the hash included in legacy symbols.
This currently has no effect, but will matter when a future PR
starts hashing more `Span`s that we currently skip.
2022-01-10 00:26:07 +00:00
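
An illustrative sketch of the fix described above, with invented types: the hashing configuration becomes part of the cache key, so the same structure hashed under different settings can no longer share a cached fingerprint.

```rust
use std::collections::HashMap;

// Invented stand-in for the hashing configuration (e.g. whether Spans are hashed).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct HashingControlsSketch {
    hash_spans: bool,
}

// Invented fingerprint cache keyed by (item, controls) instead of item alone.
struct FingerprintCacheSketch {
    cache: HashMap<(u64, HashingControlsSketch), u128>,
}

impl FingerprintCacheSketch {
    fn get_or_compute(
        &mut self,
        item: u64,
        controls: HashingControlsSketch,
        compute: impl FnOnce() -> u128,
    ) -> u128 {
        // Different controls -> different key -> no stale reuse.
        *self.cache.entry((item, controls)).or_insert_with(compute)
    }
}

fn main() {
    let mut cache = FingerprintCacheSketch { cache: HashMap::new() };
    let with_spans = HashingControlsSketch { hash_spans: true };
    let without_spans = HashingControlsSketch { hash_spans: false };
    let a = cache.get_or_compute(42, with_spans, || 0xAAAA);
    let b = cache.get_or_compute(42, without_spans, || 0xBBBB);
    assert_ne!(a, b); // the two configurations no longer collide
}
```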
Aaron Hill
f64cd87ca6
Introduce new TaskDepsRef enum to track allow/ignore/forbid status 2022-01-08 18:22:06 -05:00
bors
a7e2e33960 Auto merge of #91919 - Aaron1011:query-recursive-read, r=michaelwoerister
Don't perform any new queries while reading a query result on disk

In addition to being very confusing, this can cause us to add dep node edges between two queries that would not otherwise have an edge.

We now panic if any new dep node edges are created during the deserialization of a query result. This requires serializing the full `AdtDef` to disk, instead of just serializing the `DefId` and invoking the `adt_def` query during deserialization.

I'll probably split this up into several smaller PRs for perf runs.
2022-01-08 18:32:31 +00:00
Aaron Hill
4ca275add0
Address review comments 2022-01-05 10:30:49 -05:00
Aaron Hill
560c90f5df
Adjust assert_default_hashing_controls 2022-01-05 10:13:29 -05:00
Aaron Hill
5580e5e1dd
Ensure that Fingerprint caching respects hashing configuration
Fixes #92266

In some `HashStable` impls, we use a cache to avoid re-computing
the same `Fingerprint` from the same structure (e.g. an `AdtDef`).
However, the `StableHashingContext` used can be configured to
perform hashing in different ways (e.g. skipping `Span`s). This
configuration information is not included in the cache key,
which will cause an incorrect `Fingerprint` to be used if
we hash the same structure with different `StableHashingContext`
settings.

To fix this, the configuration settings of `StableHashingContext`
are split out into a separate `HashingControls` struct. This
struct is used as part of the cache key, ensuring that our caches
always produce the correct result for the given settings.

With this in place, we now turn off `Span` hashing during the
entire process of computing the hash included in legacy symbols.
This currently has no effect, but will matter when a future PR
starts hashing more `Span`s that we currently skip.
2022-01-05 10:13:28 -05:00
bors
2b681ac06b Auto merge of #92259 - Aaron1011:normal-mod-hashing, r=michaelwoerister
Remove special-cased stable hashing for HIR module

All other 'containers' (e.g. `impl` blocks) hashed their contents
in the normal, order-dependent way. However, `Mod` was hashing
its contents in a (sort-of) order-independent way. However, the
exact order is exposed to consumers through `Mod.item_ids`,
and through query results like `hir_module_items`. Therefore,
stable hashing needs to take the order of items into account,
to avoid fingerprint ICEs.

Unfortunately, I was unable to directly build a reproducer
for the ICE, due to the behavior of `Fingerprint::combine_commutative`.
This operation swaps the upper and lower `u64` when constructing the
result, which makes the function non-associative. Since we start
the hashing of module items by combining `Fingerprint::ZERO` with
the first item, it's difficult to actually build an example where
changing the order of module items leaves the final hash unchanged.

However, this appears to have been hit in practice in #92218.
While we're not able to reproduce it, the fact that proc-macros
are involved (which can give an entire module the same span, preventing
any span-related invalidations) makes me confident that the root
cause of that issue is our method of hashing module items.

This PR removes all of the special handling for `Mod`, instead deriving
a `HashStable` implementation. This makes `Mod` consistent with other
'contains' like `Impl`, which hash their contents through the typical
derive of `HashStable`.
2022-01-04 00:25:23 +00:00
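
A worked illustration of the non-associativity mentioned above, using an invented two-word "fingerprint" and plain additions rather than the real `Fingerprint::combine_commutative`; only the half-swapping step is taken from the description.

```rust
// Commutative in its two arguments, but swapping the halves of the result
// makes repeated combination order-sensitive.
fn combine_commutative(a: (u64, u64), b: (u64, u64)) -> (u64, u64) {
    let lo = a.0.wrapping_add(b.0);
    let hi = a.1.wrapping_add(b.1);
    (hi, lo) // the swap
}

fn main() {
    let (x, y, z) = ((1, 2), (3, 4), (5, 100));
    let left = combine_commutative(combine_commutative(x, y), z);
    let right = combine_commutative(x, combine_commutative(y, z));
    // (104, 11) vs (10, 105): associating differently changes the result.
    assert_ne!(left, right);
}
```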
Aaron Hill
da3f196a4e
Remove special-cased stable hashing for HIR module
All other 'containers' (e.g. `impl` blocks) hashed their contents
in the normal, order-dependent way. However, `Mod` was hashing
its contents in a (sort-of) order-independent way, even though the
exact order is exposed to consumers through `Mod.item_ids`,
and through query results like `hir_module_items`. Therefore,
stable hashing needs to take the order of items into account,
to avoid fingerprint ICEs.

Unfortunately, I was unable to directly build a reproducer
for the ICE, due to the behavior of `Fingerprint::combine_commutative`.
This operation swaps the upper and lower `u64` when constructing the
result, which makes the function non-associative. Since we start
the hashing of module items by combining `Fingerprint::ZERO` with
the first item, it's difficult to actually build an example where
changing the order of module items leaves the final hash unchanged.

However, this appears to have been hit in practice in #92218.
While we're not able to reproduce it, the fact that proc-macros
are involved (which can give an entire module the same span, preventing
any span-related invalidations) makes me confident that the root
cause of that issue is our method of hashing module items.

This PR removes all of the special handling for `Mod`, instead deriving
a `HashStable` implementation. This makes `Mod` consistent with other
'containers' like `Impl`, which hash their contents through the typical
derive of `HashStable`.
2021-12-24 12:38:29 -05:00
Aaron Hill
27ed52c0a2
Adjust wording of comment 2021-12-23 13:44:04 -05:00
Aaron Hill
28f19f62c7
Address review comments 2021-12-23 13:38:54 -05:00
Aaron Hill
ab168e69ac
Some cleanup 2021-12-23 13:38:53 -05:00
Aaron Hill
49560e9c49
Ban deps only during query loading from disk 2021-12-23 13:38:53 -05:00
Aaron Hill
75181dc22f
Error if we try to read dep during deserialization 2021-12-23 13:38:53 -05:00
Aaron Hill
f1d682334d
Add #[rustc_clean(loaded_from_disk)] to assert loading of query result
Currently, you can use `#[rustc_clean]` to assert that a particular
query (technically, a `DepNode`) is green or red. However, a green
`DepNode` does not mean that the query result was actually deserialized
from disk - we might have never re-run a query that needed the result.

Some incremental tests are written as regression tests for ICEs that
occurred during query result decoding. Using
`#[rustc_clean(loaded_from_disk="typeck")]`, you can now assert
that the result of a particular query (e.g. `typeck`) was actually
loaded from disk, in addition to being green.
2021-12-21 16:34:12 -05:00
PFPoitras
304ede6bcc Stabilize iter::zip. 2021-12-14 18:50:31 -04:00
Alex Crichton
a0c959750a std: Stabilize the thread_local_const_init feature
This commit is intended to follow the stabilization disposition of the
FCP that has now finished in #84223. This stabilizes the ability to flag
thread local initializers as `const` expressions which enables the macro
to generate more efficient code for accessing it, notably removing
runtime checks for initialization.

More information can also be found in #84223 as well as the tests where
the feature usage was removed in this PR.

Closes #84223
2021-11-29 07:23:46 -08:00
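
Usage sketch of the stabilized form described above; this is the stable `std::thread_local!` syntax, and the counter itself is just an example:

```rust
use std::cell::Cell;

thread_local! {
    // The `const` block lets the macro generate more efficient access code,
    // notably removing runtime checks for initialization.
    static COUNTER: Cell<u32> = const { Cell::new(0) };
}

fn main() {
    COUNTER.with(|c| c.set(c.get() + 1));
    COUNTER.with(|c| println!("counter = {}", c.get()));
}
```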
Mark Rousskov
dc65b22901 Manually outline error on incremental_verify_ich
This reduces codegen for rustc_query_impl by 169k lines of LLVM IR, representing
a 1.2% improvement.
2021-11-22 21:32:20 -05:00
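
A generic sketch of the outlining trick mentioned above (all names invented): the cold error-reporting path moves into a separate, never-inlined, non-generic function so it is not duplicated into every monomorphized caller.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[cold]
#[inline(never)]
fn report_mismatch(expected: u64, found: u64) -> ! {
    // All the bulky formatting/panicking code lives here, once.
    panic!("fingerprint mismatch: expected {expected:#x}, found {found:#x}");
}

fn verify_sketch<T: Hash>(value: &T, expected: u64) {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    let found = hasher.finish();
    if found != expected {
        // The generic caller only carries a tiny call, not the error path.
        report_mismatch(expected, found);
    }
}

fn main() {
    // Hash once to obtain a matching "expected" value, then verify.
    let mut hasher = DefaultHasher::new();
    "hello".hash(&mut hasher);
    verify_sketch(&"hello", hasher.finish());
}
```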
bors
495322d776 Auto merge of #90361 - Mark-Simulacrum:always-verify, r=michaelwoerister
Enable verification for 1/32th of queries loaded from disk

This is a limited enabling of incremental verification for query results loaded from disk, which previously did not run without -Zincremental-verify-ich. If enabled for all queries, we see a probably unacceptable hit of ~50% in the worst case, so this pares back the verification to a more limited set based on the hash key.

Per collected [perf results](https://github.com/rust-lang/rust/pull/84227#issuecomment-953350582), this is a regression of at most 7% on coercions opt incr-unchanged, and typically less than 0.5% on other benchmarks (largely limited to incr-unchanged). I believe this is acceptable performance to land, and we can either ratchet it up or down fairly easily.

We have no real sense of whether this will lead to a large amount of assertions in the wild, but since those assertions may lead to miscompilations today, it seems potentially warranted. We have a good bit of lead time until the next stable release, though the holiday season will also start soon; we may wish to discuss the timing of enabling this and weigh the desire to prevent (possible) miscompilations against assertions.

cc `@rust-lang/wg-incr-comp`
2021-11-08 13:38:08 +00:00
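
A sketch of "a more limited set based on the hash key"; the real predicate and hasher in rustc differ, this only shows the shape of hashing the key and verifying roughly one key in 32:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Verify roughly 1/32th of keys, chosen deterministically from the key's hash.
fn should_verify<K: Hash>(key: &K) -> bool {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish() % 32 == 0
}

fn main() {
    let sampled = (0u32..1000).filter(should_verify).count();
    println!("{sampled} of 1000 keys selected for verification");
}
```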
bors
88a5a984fe Auto merge of #90380 - Mark-Simulacrum:revert-89558-query-stable-lint, r=lcnr
Revert "Add rustc lint, warning when iterating over hashmaps"

Fixes perf regressions introduced in https://github.com/rust-lang/rust/pull/90235 by temporarily reverting the relevant PR.
2021-10-29 04:55:51 +00:00
Mark Rousskov
3215eeb99f
Revert "Add rustc lint, warning when iterating over hashmaps" 2021-10-28 11:01:42 -04:00
Mark Rousskov
49e7c993ee Enable verification for 1/32th of queries loaded from disk 2021-10-28 09:57:31 -04:00
bors
c4ff03f689 Auto merge of #90145 - cjgillot:sorted-map, r=michaelwoerister
Use SortedMap in HIR.

Closes https://github.com/rust-lang/rust/issues/89788
r? `@ghost`
2021-10-28 13:04:40 +00:00
bors
28d0e75269 Auto merge of #90210 - cjgillot:qarray2, r=Mark-Simulacrum
Build the query vtable directly.

Continuation of https://github.com/rust-lang/rust/pull/89978.

This shrinks the query interface and attempts to reduce the number of function pointer calls.
2021-10-25 01:10:50 +00:00
Matthias Krüger
87822b27ee
Rollup merge of #89558 - lcnr:query-stable-lint, r=estebank
Add rustc lint, warning when iterating over hashmaps

r? rust-lang/wg-incr-comp
2021-10-24 15:48:42 +02:00
Camille GILLOT
138e96b719 Do not require QueryCtxt for cache_on_disk. 2021-10-23 18:12:43 +02:00
Camille GILLOT
7c0920f5fb Build the query vtable directly. 2021-10-23 16:59:19 +02:00
Camille GILLOT
6f6fa8b954 Use SortedMap in HIR. 2021-10-21 23:08:57 +02:00
Camille GILLOT
0a5666b838 Do not depend on the stored value when trying to cache on disk. 2021-10-21 20:00:45 +02:00
Camille GILLOT
b11ec29e28 Address review. 2021-10-20 18:51:15 +02:00
Camille GILLOT
8785b70774 Inline DepNodeParams methods. 2021-10-20 18:46:25 +02:00
Camille GILLOT
df71d0874a Compute query vtable manually. 2021-10-20 18:41:28 +02:00
Camille GILLOT
69a3594635 Store node_intern_event_id in CurrentDepGraph. 2021-10-20 18:37:11 +02:00
Camille GILLOT
bd5c107672 Build jump table at runtime. 2021-10-20 18:32:29 +02:00
Camille GILLOT
602d3cbce3 Invoke callbacks from rustc_middle. 2021-10-20 18:29:33 +02:00
Camille GILLOT
b09de95fab Merge two query callbacks arrays. 2021-10-20 18:29:27 +02:00
Camille GILLOT
dc7143367c Drop has_params. 2021-10-20 18:29:22 +02:00
Camille GILLOT
aa404c24dd Make hash_result an Option. 2021-10-20 18:29:18 +02:00
Yuki Okushi
3d95330230
Rollup merge of #87404 - rylev:artifact-size-profiling, r=wesleywiser
Add support for artifact size profiling

This adds support for profiling artifact file sizes (incremental compilation artifacts and query cache to begin with).

Eventually we want to track this in perf.rlo so we can ensure that file sizes do not change dramatically on each pull request.

This relies on support in measureme: https://github.com/rust-lang/measureme/pull/169. Once that lands we can update this PR to not point to a git dependency.

This was worked on together with `@michaelwoerister`.

r? `@wesleywiser`
2021-10-20 04:35:11 +09:00
bors
1af55d19c7 Auto merge of #89933 - est31:let_else, r=michaelwoerister
Adopt let_else across the compiler

This performs a substitution of code following the pattern:

```
let <id> = if let <pat> = ... { identity } else { ... : ! };
```

To simplify it to:

```
let <pat> = ... else { ... : ! };
```

By adopting the `let_else` feature (cc #87335).

The PR also updates the syn crate because the currently used version of the crate doesn't support `let_else` syntax yet.

Note: Generally I'm the person who *removes* usages of unstable features from the compiler, not adds more usages of them, but in this instance I think it hopefully helps the feature get stabilized sooner and in a better state. I have written a [comment](https://github.com/rust-lang/rust/issues/87335#issuecomment-944846205) on the tracking issue about my experience and what I feel could be improved before stabilization of `let_else`.
2021-10-19 14:41:39 +00:00
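
A concrete instance of the substitution described above; the function is invented, and `let`-`else` is now stable:

```rust
// Before:
//     let x = if let Some(&x) = values.iter().find(|&&v| v % 2 == 0) {
//         x
//     } else {
//         return None;
//     };
// After:
fn first_even(values: &[u32]) -> Option<u32> {
    let Some(&x) = values.iter().find(|&&v| v % 2 == 0) else {
        return None;
    };
    Some(x)
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4, 7]), Some(4));
    assert_eq!(first_even(&[1, 3, 7]), None);
}
```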
bors
bd41e09da3 Auto merge of #89124 - cjgillot:owner-info, r=michaelwoerister
Index and hash HIR as part of lowering

Part of https://github.com/rust-lang/rust/pull/88186
~Based on https://github.com/rust-lang/rust/pull/88880 (see merge commit).~

Once HIR is lowered, it is later indexed by the `index_hir` query and hashed for `crate_hash`. This PR moves those post-processing steps to lowering itself. As a side objective, the HIR crate data structure is refactored as an `IndexVec<LocalDefId, Option<OwnerInfo<'hir>>>` where `OwnerInfo` stores all the relevant information for an HIR owner.

r? `@michaelwoerister`
cc `@petrochenkov`
2021-10-18 19:53:05 +00:00
est31
1418df5888 Adopt let_else across the compiler
This performs a substitution of code following the pattern:

let <id> = if let <pat> = ... { identity } else { ... : ! };

To simplify it to:

let <pat> = ... else { ... : ! };

By adopting the let_else feature.
2021-10-16 07:18:05 +02:00
lcnr
00e5abe9b6 allow potential_query_instability everywhere 2021-10-15 10:58:18 +02:00
Mark Rousskov
127373822e Remove built-in cache_hit tracking
This was already only enabled in debug_assertions builds. Generally, it seems
like most use cases that would use this could also use the -Zself-profile flag
which also tracks cache hits (in all builds), and so the extra cfg's and such
are not really necessary.

This is largely just a small cleanup though, which primarily is intended to make
other changes easier by avoiding the need to deal with this field.
2021-10-11 16:33:49 -04:00
Camille GILLOT
0431fdb113 Compute full HIR hash during lowering. 2021-10-10 00:05:49 +02:00
Camille GILLOT
457de08487 Forbid hashing HIR outside of indexing. 2021-10-09 18:38:28 +02:00
Ryan Levick
947a33bf20 Add support for artifact size profiling 2021-10-07 14:22:29 +02:00
Mark Rousskov
6f78eed1c7 Query the fingerprint style during key reconstruction
Keys can be reconstructed from fingerprints that are not DefPathHash, but then
we cannot extract a DefId from them.
2021-10-06 22:19:48 -04:00
Camille GILLOT
b2ed9c4007 Add some inlining. 2021-10-03 16:08:57 +02:00
Camille GILLOT
fedd7785fe Access StableHashingContext in rustc_query_system. 2021-10-03 16:08:55 +02:00
Camille GILLOT
c355b2e5cd Move ICH to rustc_query_system. 2021-10-03 16:08:53 +02:00
Mark Rousskov
c746be2219 Migrate to 2021 2021-09-20 22:21:42 -04:00
bors
d6cd2c6c87 Auto merge of #82183 - michaelwoerister:lazier-defpathhash-loading2, r=wesleywiser
Simplify lazy DefPathHash decoding by using an on-disk hash table.

This PR simplifies the logic around mapping `DefPathHash` values encountered during incremental compilation to valid `DefId`s in the current session. It is able to do so by using an on-disk hash table encoding that allows for looking up values directly, i.e. without deserializing the entire table.

The main simplification comes from not having to keep track of `DefPathHashes` being used during the compilation session.
2021-09-18 14:37:39 +00:00
Tomasz Miąsko
b6b19f3b6c Use explicit log level in tracing instrument macro
Specify a log level in tracing instrument macro explicitly.

Additionally reduce the used log level from a default info level to a
debug level (all of those appear to be developer oriented logs, so there
should be no need to include them in release builds).
2021-09-15 19:02:10 +02:00
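
Usage sketch of an explicit level on `tracing`'s `#[instrument]` attribute, as described above; it assumes the `tracing` and `tracing_subscriber` crates, and the function itself is invented:

```rust
use tracing::instrument;

// An explicit `level = "debug"` instead of the default info level, matching
// the commit's rationale that these are developer-oriented logs that need not
// appear in release builds.
#[instrument(level = "debug", skip(data))]
fn process(id: u32, data: &[u8]) -> usize {
    data.len()
}

fn main() {
    tracing_subscriber::fmt().with_max_level(tracing::Level::DEBUG).init();
    process(7, b"payload");
}
```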
Michael Woerister
5445715c20 Remove RawDefId tracking infrastructure from incr. comp. framework.
This infrastructure is obsolete now with the new encoding scheme for
the DefPathHash->DefIndex maps in crate metadata.
2021-09-14 13:56:33 +02:00
bors
8c2b6ea37d Auto merge of #78780 - cjgillot:req, r=Mark-Simulacrum
Refactor query forcing

The control flow in those functions was very complex, with several layers of continuations.

I tried to simplify the implementation, while keeping essentially the same logic.
Now, all code paths go through `try_execute_query` for the actual query execution.
Communication with the `dep_graph` and the live caches are the only difference between query getting/ensuring/forcing.
2021-09-11 20:39:47 +00:00
Noah Lev
4553a4baf2 Remove redundant Span in QueryJobInfo
Previously, `QueryJobInfo` was composed of two parts: a `QueryInfo` and
a `QueryJob`. However, both `QueryInfo` and `QueryJob` have a `span`
field, which seem to be the same. So, the `span` was recorded twice.

Now, `QueryJobInfo` is composed of a `QueryStackFrame` (the other field
of `QueryInfo`) and a `QueryJob`. So, now, the `span` is only recorded
once.
2021-09-01 11:10:58 -07:00
Noah Lev
c861964735 Note that trait aliases cannot be recursive 2021-08-27 14:50:52 -07:00
Noah Lev
cd0fc444fb Note that type aliases cannot be recursive 2021-08-27 14:50:51 -07:00
Camille GILLOT
31330bfce1 Use variable. 2021-08-22 20:23:32 +02:00
Camille GILLOT
eeb3c8f4b7 Unify with_task functions.
Remove with_eval_always_task.
2021-08-22 20:23:32 +02:00
Camille GILLOT
f2c8707abb Remove force_query_with_job. 2021-08-22 20:23:31 +02:00
Camille GILLOT
ef4becdce4 Split try_execute_query. 2021-08-22 20:23:31 +02:00
Camille GILLOT
307aacaf05 Decouple JobOwner from cache. 2021-08-22 20:23:31 +02:00
Camille GILLOT
d2304008c1 Complete job outside of force_query_with_job. 2021-08-22 20:23:30 +02:00
Camille GILLOT
13d4eb92b8 Do not compute the dep_node twice. 2021-08-22 20:23:30 +02:00
Camille GILLOT
283a8e1445 Make all query forcing go through try_execute_query.
try_execute_query is now able to centralize the path for query
get/ensure/force.

try_execute_query now takes the dep_node as a parameter, so it can
accommodate `force`. This dep_node is an Option to avoid computing it in
the `get` fast path.

try_execute_query now returns both the result and the dep_node_index to
allow the caller to handle the dep graph.

The caller is responsible for marking the dependency.
2021-08-22 20:23:29 +02:00
Camille GILLOT
45d6decc19 Remove try_mark_green_and_read. 2021-08-22 20:23:29 +02:00
Camille GILLOT
c3bf3969d4 Move assertion inwards.
`with_task_impl` is only called from `with_eval_always_task` and
`with_task` . The former is only used in query invocation, while the
latter is also used to start the `tcx` and to trigger codegen.

This move should not significantly change the number of calls to this
assertion.
2021-08-22 20:23:29 +02:00
Camille GILLOT
cd1cb3449e Simplify control flow. 2021-08-22 20:23:20 +02:00
Noah Lev
2f48bfa88c Improve errors for recursive type aliases 2021-08-21 18:30:25 -07:00
Camille GILLOT
0edc775b90 Only clone key when needed. 2021-08-22 01:06:19 +02:00
Camille GILLOT
5e35fadddb Move dep_graph checking into try_load_from_disk_and_cache_in_memory. 2021-08-22 01:00:01 +02:00
Aaron Hill
77b02eed7b
Prevent double panic when handling incremental fingerprint mismatch
When an incremental fingerprint mismatch occurs, we debug-print
our `DepNode` and query result. Unfortunately, the debug printing
process may cause us to run additional queries, which can result
in a re-entrant fingerprint mismatch error.

To avoid a double panic, this commit adds a thread-local variable
to detect re-entrant calls.
2021-08-12 15:11:39 -05:00
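
A sketch of the thread-local re-entrancy guard the commit describes; the names are invented, and the real code guards a debug-printing panic path inside the query system:

```rust
use std::cell::Cell;

thread_local! {
    static INSIDE_REPORT: Cell<bool> = const { Cell::new(false) };
}

fn report_mismatch(details: &str) {
    INSIDE_REPORT.with(|flag| {
        if flag.get() {
            // We re-entered while already reporting (e.g. debug printing ran
            // more work that failed again): avoid a double panic.
            eprintln!("re-entrant mismatch while reporting: {details}");
            return;
        }
        flag.set(true);
        // Formatting the details in the real code may run further queries,
        // which is what can call back into this function.
        panic!("fingerprint mismatch: {details}");
    });
}

fn main() {
    // Demonstrate only the guarded, non-panicking path here.
    INSIDE_REPORT.with(|flag| flag.set(true));
    report_mismatch("demo");
}
```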
bors
b53a93db2d Auto merge of #87535 - lf-:authors, r=Mark-Simulacrum
rfc3052 followup: Remove authors field from Cargo manifests

Since RFC 3052 soft deprecated the authors field, hiding it from
crates.io, docs.rs, and making Cargo not add it by default, and it is
not generally up to date/useful information for contributors, we may as well
remove it from crates in this repo.
2021-08-02 05:49:17 +00:00
bors
aadd6189ad Auto merge of #87449 - matthiaskrgr:clippyy_v2, r=nagisa
more clippy::complexity fixes

(also a couple of clippy::perf fixes)
2021-08-01 09:15:15 +00:00
Jade
3cf820e17d rfc3052: Remove authors field from Cargo manifests
Since RFC 3052 soft deprecated the authors field anyway, hiding it from
crates.io, docs.rs, and making Cargo not add it by default, and it is
not generally up to date/useful information, we should remove it from
crates in this repo.
2021-07-29 14:56:05 -07:00
Aaron Hill
87740bac64
Restrict field visibility 2021-07-25 20:43:27 -05:00
Aaron Hill
e6a5231238
Create QuerySideEffects and use it for diagnostics 2021-07-25 20:27:58 -05:00
Matthias Krüger
3fd8cbb404 clippy::useless_format 2021-07-25 12:26:03 +02:00
Ryan Levick
b5bec17184 Add docs to new methods 2021-07-07 11:14:14 +02:00
Ryan Levick
6e33dce9c2 Profile incremental hashing 2021-07-07 10:43:30 +02:00
bors
12d0849f9d Auto merge of #85154 - cjgillot:lessfn, r=bjorn3
Reduce amount of function pointers in query invocation.

r? `@ghost`
2021-06-15 14:52:58 +00:00
bors
e4a6032706 Auto merge of #85903 - bjorn3:rustc_serialize_cleanup, r=varkor
Remove unused functions and arguments from rustc_serialize
2021-06-07 14:40:26 +00:00
Yuki Okushi
36f1ed6de2
Rollup merge of #85850 - bjorn3:less_feature_gates, r=jyn514
Remove unused feature gates

The first commit removes a usage of a feature gate, but I don't expect it to be controversial as the feature gate was only used to work around a limitation of rust in the past (closures never being `Clone`).

The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.

The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
2021-06-04 13:42:54 +09:00
Camille GILLOT
b51f24f021 Make the reasoning more explicit. 2021-06-01 21:46:30 +02:00
Camille GILLOT
3a6d5c2beb Avoid creating anonymous nodes with zero or one dependency. 2021-06-01 21:43:30 +02:00
bjorn3
a2c4affe86 Remove unused functions and arguments from rustc_serialize 2021-06-01 19:29:11 +02:00
bjorn3
312f964478 Remove unused feature gates 2021-05-31 13:55:43 +02:00
Camille GILLOT
fd318a2f9b Reduce amount of function pointers. 2021-05-30 15:15:22 +02:00
bors
f60a670256 Auto merge of #85319 - cjgillot:query-simp, r=Mark-Simulacrum
Simplification of query forcing

Extracted from #78780
2021-05-30 10:11:23 +00:00
bors
9a72afa7dd Auto merge of #83772 - jhpratt:revamp-step-trait, r=Mark-Simulacrum
Make `Step` trait safe to implement

This PR makes a few modifications to the `Step` trait that I believe better position it for stabilization in the short term. In particular,

1. `unsafe trait TrustedStep` is introduced, indicating that the implementation of `Step` for a given type upholds all stated invariants (which have remained unchanged). This is gated behind a new `trusted_step` feature, as stabilization is realistically blocked on min_specialization.
2. The `Step` trait is internally specialized on the `TrustedStep` trait, which avoids a serious performance regression.
3. `TrustedLen` is implemented for `T: TrustedStep` as the latter's invariants subsume the former's.
4. The `Step` trait is no longer `unsafe`, as the invariants must not be relied upon by unsafe code (unless the type implements `TrustedStep`).
5. `TrustedStep` is implemented for all types that implement `Step` in the standard library and compiler.
6. The `step_trait_ext` feature is merged into the `step_trait` feature. I was unable to find any reasoning for the features being split; the `_unchecked` methods need not necessarily be stabilized at the same time, but I think it is useful to have them under the same feature flag.

All existing implementations of `Step` will be broken, as it is not possible to `unsafe impl` a safe trait. Given this trait only exists on nightly, I feel this breakage is acceptable. The blanket `impl<T: Step> TrustedLen for T` will likely cause some minor breakage, but this should be covered by the equivalent impl for `TrustedStep`.

Hopefully these changes are sufficient to place `Step` in decent position for stabilization, which would allow user-defined types to be used with `a..b` syntax.
2021-05-30 01:21:39 +00:00
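
A rough model of the trait split described in this PR, using simplified stand-ins (`MyStep`, `MyTrustedStep`) rather than the real `std::iter::Step` API:

```rust
// The stepping trait itself is safe to implement...
trait MyStep: Clone + PartialOrd + Sized {
    fn forward_checked(start: Self, count: usize) -> Option<Self>;
}

/// ...while an unsafe marker trait vouches that the invariants hold,
/// so unsafe code (and specialization) may rely on them.
///
/// # Safety
/// Implementors promise `forward_checked` upholds the documented invariants.
unsafe trait MyTrustedStep: MyStep {}

impl MyStep for u32 {
    fn forward_checked(start: Self, count: usize) -> Option<Self> {
        u32::try_from(count).ok().and_then(|c| start.checked_add(c))
    }
}

// Only the trusted marker requires `unsafe impl`.
unsafe impl MyTrustedStep for u32 {}

fn main() {
    assert_eq!(MyStep::forward_checked(40u32, 2), Some(42));
}
```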
Camille GILLOT
f3ed997254 Move reconstruct test inwards. 2021-05-29 22:38:51 +02:00
Jacob Pratt
bc2f0fb5a9
Specialize implementations
Implementations in stdlib are now optimized as they were before.
2021-05-26 18:07:09 -04:00
Camille GILLOT
a50f1e949b Get rid of PreviousDepGraph. 2021-05-22 14:14:23 +02:00
Camille GILLOT
c95a5682f7 Remove def_path_str. 2021-05-15 10:37:30 +02:00
Camille GILLOT
eb82187b13 Make the fast path faster. 2021-05-15 10:36:37 +02:00
Camille GILLOT
91444af87a Refactor try_mark_previous_green. 2021-05-15 10:27:27 +02:00
Camille GILLOT
c2c59ae304 Move key recovering into force_query. 2021-05-15 10:20:56 +02:00
Aaron Hill
a4c0793551 Show nicer error when an 'unstable fingerprints' error occurs 2021-05-10 17:43:51 -04:00
bors
777bb2f612 Auto merge of #84806 - Mark-Simulacrum:try-start-entry, r=cjgillot
Streamline try_start code

This shifts some branches around and avoids interleaving parallel and
non-parallel versions of the function too much.
2021-05-06 22:35:06 +00:00
Mark Rousskov
981135ae8e Streamline try_start code
This shifts some branches around and avoids interleaving parallel and
non-parallel versions of the function too much.
2021-05-02 12:25:48 -04:00
Mark Rousskov
61fd56fdb9 Avoid generating QueryMap::extend for each key type 2021-05-01 20:13:18 -04:00
Mark Rousskov
a1d7367429 Move iter_results to dyn FnMut rather than a generic
This means that we're no longer generating the iteration/locking code for each
invocation site of iter_results, but rather just once per query.

This is a 15% win in instruction counts when compiling the rustc_query_impl crate.
2021-04-29 17:26:46 -04:00
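
A sketch of the pattern this commit describes, with an invented `Cache` type: taking `&mut dyn FnMut(..)` compiles the iteration body once, whereas a generic closure parameter is re-instantiated per call site:

```rust
struct Cache {
    entries: Vec<(u32, String)>,
}

impl Cache {
    // Generic version: the loop is monomorphized for every distinct
    // closure type passed at each call site.
    fn iter_results_generic<F: FnMut(&u32, &str)>(&self, mut f: F) {
        for (k, v) in &self.entries {
            f(k, v);
        }
    }

    // `dyn` version: the loop is compiled once; only a small indirect
    // call differs per call site.
    fn iter_results(&self, f: &mut dyn FnMut(&u32, &str)) {
        for (k, v) in &self.entries {
            f(k, v);
        }
    }
}

fn main() {
    let cache = Cache { entries: vec![(1, "one".into()), (2, "two".into())] };
    let mut count = 0;
    cache.iter_results(&mut |_, _| count += 1);
    cache.iter_results_generic(|k, v| println!("{k}: {v}"));
    assert_eq!(count, 2);
}
```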
Ralf Jung
bd9556956a fix feature use in rustc libs 2021-04-18 22:05:45 +02:00
pierwill
0019ca9141 Fix outdated crate names in compiler docs
Changes `librustc_X` to `rustc_X`, only in documentation comments.
Plain code comments are left unchanged.

Also fix incorrect file paths.
2021-04-08 11:12:14 -05:00
Camille GILLOT
f3dde45d2a Enable debugging the dep-graph without debug-assertions.
Debugging the dep-graph may also be useful in builds without debug
assertions, and some CI configurations test without them.
2021-03-31 17:12:06 +02:00
Camille GILLOT
8ee9322c10 Also profile finishing the encoding. 2021-03-30 18:10:08 +02:00
Camille GILLOT
df24315ddf Adjust profiling. 2021-03-30 18:10:08 +02:00
Camille GILLOT
fe89f3236c Address review. 2021-03-30 18:10:08 +02:00
Camille GILLOT
65a8681a17 Add documentation. 2021-03-30 18:10:07 +02:00
Camille GILLOT
c5c935af92 Simplify tracking the encoder state. 2021-03-30 18:10:07 +02:00
Camille GILLOT
e1c99e5fcc Remove the parallel version. 2021-03-30 18:10:06 +02:00
Camille GILLOT
8208872fa2 Fix parallel compiler. 2021-03-30 18:10:06 +02:00
Camille GILLOT
cfe786e5e0 Fix tests.
Avoid invoking queries inside `check_paths`, since we are holding a lock
to the reconstructed graph.
2021-03-30 18:10:06 +02:00
Camille GILLOT
39b306a53d Do not allocate in decoder. 2021-03-30 18:10:05 +02:00
Camille GILLOT
6bfaf3a9cb Stream the dep-graph to a file. 2021-03-30 18:09:59 +02:00
Joshua Nelson
441dc3640a Remove (lots of) dead code
Found with https://github.com/est31/warnalyzer.

Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
  x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
  wants to use it in the future?
- Don't change rustc_serialize

  I plan to scrap most of the json module in the near future (see
  https://github.com/rust-lang/compiler-team/issues/418) and fixing the
  tests needed more work than I expected.

TODO: check if any of the comments on the deleted code should be kept.
2021-03-27 22:16:33 -04:00
Josh Stone
72ebebe474 Use iter::zip in compiler/ 2021-03-26 09:32:31 -07:00
Aaron Hill
443cef5618
Debug-print result when an unstable fingerprint is detected 2021-03-19 21:47:57 -04:00
bors
2a55274e0c Auto merge of #82999 - cuviper:rustc-rayon-0.3.1, r=Mark-Simulacrum
Update to rustc-rayon 0.3.1

This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t `@ammaraskar`)

That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
2021-03-15 08:49:25 +00:00
bors
e7e1dc158c Auto merge of #83007 - Aaron1011:incr-verify-default, r=Mark-Simulacrum
Turn `-Z incremental-verify-ich` on by default

Issue #82920 showed that the kind of bugs caught by this flag have
soundness implications.
2021-03-13 17:52:22 +00:00
Aaron Hill
7d7c81a114
Always run incremental_verify_ich when re-computing query results
Issue #82920 showed that the kind of bugs caught by this flag have
soundness implications.

This causes performance regressions of up to 15.2% during incremental
compilation, but this is necessary to catch miscompilations caused by
bugs in query implementations.
2021-03-13 12:00:38 -05:00
Tyson Nottingham
adcbe49b16 rustc_query_system: simplify QueryCache::iter
Minor cleanup to reduce a small amount of complexity and code bloat.
Reduces the number of mono items in rustc_query_impl by 15%.
2021-03-12 17:34:14 -08:00
Josh Stone
f7e75a2124 Update to rustc-rayon 0.3.1
This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t @ammaraskar)

That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
2021-03-10 17:53:35 -08:00
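
A hedged sketch of the constraint described above, using an invented `ErasedDrop` type rather than the real `DropArena` internals: demanding `T: Send` *before* type erasure is what makes the later `unsafe impl Send` defensible.

```rust
// Capture the `Send` requirement before erasing the concrete type
// behind a raw pointer / function pointer pair.
struct ErasedDrop {
    ptr: *mut u8,
    drop_fn: unsafe fn(*mut u8),
}

impl ErasedDrop {
    fn new<T: Send + 'static>(value: Box<T>) -> Self {
        unsafe fn drop_impl<T>(ptr: *mut u8) {
            drop(Box::from_raw(ptr as *mut T));
        }
        ErasedDrop { ptr: Box::into_raw(value) as *mut u8, drop_fn: drop_impl::<T> }
    }
}

// Sound only because `new` demanded `T: Send` before the type was erased.
unsafe impl Send for ErasedDrop {}

impl Drop for ErasedDrop {
    fn drop(&mut self) {
        unsafe { (self.drop_fn)(self.ptr) }
    }
}

fn main() {
    let erased = ErasedDrop::new(Box::new(vec![1, 2, 3]));
    std::thread::spawn(move || drop(erased)).join().unwrap();
}
```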
Camille GILLOT
903f65f215 Simplify hashing. 2021-02-21 12:22:22 +01:00
Camille GILLOT
a87de890fd Move print_query_stack to rustc_query_system. 2021-02-20 23:40:56 +01:00
Camille GILLOT
c26d965714 Move report_cycle to rustc_query_system.
The call to `ty::print::with_forced_impl_filename_line`
is now made while constructing the description,
i.e. when the QueryStackFrame is built.
2021-02-20 23:36:31 +01:00
Camille GILLOT
3897395787 Move Query to rustc_query_system.
Rename it to QueryStackFrame and document a bit.
2021-02-20 22:53:47 +01:00
Camille GILLOT
0144d6a3b7 Do not hold query key in Query. 2021-02-20 22:53:46 +01:00
Camille GILLOT
f96e960ccf Access the session directly from DepContext. 2021-02-20 22:53:46 +01:00
Camille GILLOT
b27266fdb2 Use a QueryContext for try_mark_green. 2021-02-19 17:51:56 +01:00
Camille GILLOT
ea3d465c95 Move try_load_from_on_disk_cache to the QueryContext. 2021-02-19 17:51:55 +01:00
Camille GILLOT
49c1b07a9e Decouple QueryContext from DepContext. 2021-02-19 17:51:49 +01:00
Camille GILLOT
6f04883023 Remove QueryAccessors::to_dep_node. 2021-02-19 17:51:49 +01:00
Camille GILLOT
211b05aef3 Don't require a QueryContext to access the DepGraph. 2021-02-19 17:51:49 +01:00
Eduard-Mihai Burtescu
6165d1cc72 Print -Ztime-passes (and misc stats/logs) on stderr, not stdout. 2021-02-18 14:13:38 +02:00
Tomasz Miąsko
db36db2e81 Inline try_get_cached 2021-02-16 00:00:00 +00:00
bors
d1206f950f Auto merge of #81855 - cjgillot:ensure-cache, r=oli-obk
Check the result cache before the DepGraph when ensuring queries

Split out of https://github.com/rust-lang/rust/pull/70951

Calling `ensure` on already forced queries is a common operation.
Looking at the results cache first is faster than checking the DepGraph for a green node.
2021-02-15 12:11:59 +00:00
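
A minimal sketch of the ordering this PR describes, with hypothetical `QueryKey`, `DepGraph`, and `ensure` stand-ins: consult the in-memory result cache first, and only fall back to the green-node check when the result is absent.

```rust
use std::collections::{HashMap, HashSet};

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct QueryKey(u32);

struct DepGraph {
    green: HashSet<QueryKey>,
}

impl DepGraph {
    // Stand-in for the comparatively expensive green-node check.
    fn try_mark_green(&self, key: QueryKey) -> bool {
        self.green.contains(&key)
    }
}

fn ensure(
    cache: &HashMap<QueryKey, String>,
    dep_graph: &DepGraph,
    key: QueryKey,
    force: impl FnOnce(QueryKey),
) {
    // Fast path: the result is already cached, nothing to do.
    if cache.contains_key(&key) {
        return;
    }
    // Slow path: only now consult the dependency graph.
    if !dep_graph.try_mark_green(key) {
        force(key);
    }
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert(QueryKey(1), "cached".to_string());
    let graph = DepGraph { green: [QueryKey(2)].into_iter().collect() };

    ensure(&cache, &graph, QueryKey(1), |_| unreachable!("already cached"));
    ensure(&cache, &graph, QueryKey(2), |_| unreachable!("green node"));
    ensure(&cache, &graph, QueryKey(3), |k| println!("forcing {:?}", k));
}
```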
klensy
93c8ebe022 bumped smallvec deps 2021-02-14 18:03:11 +03:00
Camille GILLOT
3fc8ed68e9 Check query cache before calling into the query engine. 2021-02-13 21:14:58 +01:00
Camille GILLOT
280a2866d5 Drop the cache lock earlier. 2021-02-13 21:14:58 +01:00
Camille GILLOT
15b0bc6b83 Separate the query cache from the query state. 2021-02-13 21:14:58 +01:00
Camille GILLOT
9f46259a75 Return a Result for query cache. 2021-02-13 21:14:58 +01:00
Camille GILLOT
8684e9e47d Merge {get,ensure}_query. 2021-02-13 21:14:57 +01:00
bors
097bc6a84f Auto merge of #81892 - jyn514:no-inline, r=cjgillot
[experiment] remove `#[inline]` from rustc_query_system::plumbing

These functions have a ton of generic parameters and are instantiated
over and over again. Hopefully this will reduce binary bloat and speed
up bootstrapping times.

r? `@cjgillot`
2021-02-09 18:37:33 +00:00
Mark Rousskov
f564d7abba Switch query descriptions to just String
In practice we never used the borrowed variant anyway.
2021-02-08 17:20:41 -05:00
Joshua Nelson
4f77a1afc2 [experiment] remove #[inline] from rustc_query_system::plumbing
These functions have a ton of generic parameters and are instantiated
over and over again. Hopefully this will reduce binary bloat and speed
up bootstrapping times.
2021-02-08 14:57:15 -05:00
bors
a8f7075532 Auto merge of #80692 - Aaron1011:feature/query-result-debug, r=estebank
Enforce that query results implement Debug

Currently, we require that query keys implement `Debug`, but we do not do the same for query values. This can make incremental compilation bugs difficult to debug - there isn't a good place to print out the result loaded from disk.

This PR adds `Debug` bounds to several query-related functions, allowing us to debug-print the query value when an 'unstable fingerprint' error occurs. This required adding `#[derive(Debug)]` to a fairly large number of types - hopefully, this doesn't have much of an impact on compiler bootstrapping times.
2021-01-26 05:47:23 +00:00
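
A small illustration of what the `Debug` bound on query values buys, using an invented `check_fingerprint` helper: the mismatch path can now show the value that was loaded from disk.

```rust
use std::fmt::Debug;

// With a `Debug` bound on the value type, an "unstable fingerprint"
// error path can print the result it loaded.
fn check_fingerprint<K: Debug, V: Debug>(key: &K, value: &V, old: u64, new: u64) {
    if old != new {
        panic!(
            "unstable fingerprint for {key:?}: {old:#x} -> {new:#x}\nloaded value: {value:?}"
        );
    }
}

fn main() {
    // Same fingerprint: nothing happens.
    check_fingerprint(&"type_of(foo)", &vec![1, 2, 3], 0xabcd, 0xabcd);
}
```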
bors
c5a96fb797 Auto merge of #80957 - tgnottingham:direct_serialize_depgraph, r=michaelwoerister
Serialize dependency graph directly from DepGraph

Reduce memory usage by serializing dep graph directly from `DepGraph`,
rather than copying it into `SerializedDepGraph` and serializing that.
2021-01-19 19:36:41 +00:00
Aaron Hill
056fbbf7ee
Undo assertion change 2021-01-16 21:37:53 -05:00
Aaron Hill
6417760632
Run fmt 2021-01-16 17:53:02 -05:00
Aaron Hill
93ab705655
Print result on unstable fingerprint error 2021-01-16 17:53:02 -05:00
Aaron Hill
7afb32557d
Enforce that query results implement Debug 2021-01-16 17:53:02 -05:00
LingMan
a56bffb4f9 Use Option::map_or instead of .map(..).unwrap_or(..) 2021-01-14 19:23:59 +01:00
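
The rewrite in question, shown on a toy value:

```rust
fn main() {
    let len: Option<usize> = Some(3);

    // Before: two combinators.
    let a = len.map(|n| n * 2).unwrap_or(0);
    // After: one combinator, default first.
    let b = len.map_or(0, |n| n * 2);

    assert_eq!(a, b);
}
```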
Tyson Nottingham
09067db8a0 Serialize dependency graph directly from DepGraph
Reduce memory usage by serializing dep graph directly from `DepGraph`,
rather than copying it into `SerializedDepGraph` and serializing that.
2021-01-12 22:20:29 -08:00
Joshua Nelson
0215b3a456 Don't mark force_query_with_job as inline(always)
It's rather large, and using `inline(always)` forces it to be recompiled
in each calling crate.
2021-01-08 18:38:33 -05:00
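
A toy illustration of the trade-off: `#[inline(always)]` forces every downstream crate to codegen its own copy of the body, which is fine for tiny helpers but costly for large functions like the one mentioned above (function names here are invented):

```rust
// Small, hot helper: a reasonable candidate for `#[inline(always)]`.
#[inline(always)]
fn is_even(n: u64) -> bool {
    n % 2 == 0
}

// Larger function: a plain `#[inline]` hint (or nothing) lets the
// optimizer decide, instead of recompiling the body in every caller crate.
#[inline]
fn summarize(values: &[u64]) -> (u64, u64) {
    let evens = values.iter().filter(|&&v| is_even(v)).count() as u64;
    let sum: u64 = values.iter().sum();
    (evens, sum)
}

fn main() {
    assert_eq!(summarize(&[1, 2, 3, 4]), (2, 10));
}
```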
Camille GILLOT
016ea6b319 Use a side-table of consts instead of matching on the DepKind enum. 2021-01-08 17:48:02 +01:00
Camille GILLOT
d1220fdedf Simplify DepNodeParams. 2021-01-08 17:29:49 +01:00
Tyson Nottingham
03eb75f759 rustc_query_system: avoid race condition when using edge_count 2020-12-22 14:12:57 -08:00
Tyson Nottingham
22ed75158b rustc_query_system: add more comments for dependency graph 2020-12-22 14:12:57 -08:00
Tyson Nottingham
d6b2aaed7d rustc_query_system: rename intern_node to intern_new_node 2020-12-22 14:12:57 -08:00
Tyson Nottingham
712fcae13a rustc_query_system: remove inline annotation from edge_count
This isn't called frequently enough to justify inlining.
2020-12-22 14:12:57 -08:00
Tyson Nottingham
4f76266295 rustc_query_system: minor cleanup
Remove effectively unused parameter and delete out of date comment.
2020-12-22 14:12:57 -08:00
Tyson Nottingham
dd1ab840d2 rustc_query_system: use more space-efficient edges representation
Use single vector of edges rather than per-node vector. There is a small
hit to instruction counts (< 0.5%), but the memory savings make up for it.
2020-12-22 14:12:57 -08:00
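
A sketch of the space saving, with invented types: one flat edge vector plus a per-node range replaces a vector-of-vectors, eliminating one heap allocation (and pointer/len/capacity overhead) per node.

```rust
// Per-node vectors: one allocation per node.
#[allow(dead_code)]
struct PerNodeEdges {
    edges: Vec<Vec<u32>>,
}

// Flattened form: a single edge vector plus a range per node.
struct FlatEdges {
    edge_list: Vec<u32>,
    ranges: Vec<std::ops::Range<u32>>,
}

impl FlatEdges {
    fn edges_of(&self, node: usize) -> &[u32] {
        let r = self.ranges[node].clone();
        &self.edge_list[r.start as usize..r.end as usize]
    }
}

fn main() {
    let flat = FlatEdges {
        edge_list: vec![1, 2, 2, 0],
        ranges: vec![0..2, 2..3, 3..4],
    };
    assert_eq!(flat.edges_of(0), &[1u32, 2]);
    assert_eq!(flat.edges_of(2), &[0u32]);
}
```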
Tyson Nottingham
ea47269f5f rustc_query_system: share previous graph edges with current graph
Reduce memory consumption by sharing the previous dependency graph's
edges with the current graph when it is known to be valid to do so. It
is known to be valid whenever we mark a node green because all of its
dependencies were green. It is *not* known to be valid when we mark a
node green because we re-executed its query and its result was the same
as in the previous compilation session. In that case, the dependency set
might have changed (we don't try to determine whether or not it changed
and whether or not we can share).
2020-12-22 14:12:57 -08:00
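
A sketch of the sharing rule stated above, with hypothetical types (`Rc` standing in for whatever sharing mechanism is actually used): the previous edge list may be reused only when the node went green because all of its dependencies were green.

```rust
use std::rc::Rc;

#[allow(dead_code)]
enum GreenReason {
    AllDepsGreen,          // every dependency was green: old edges still exact
    ReexecutedSameResult,  // query re-ran; result matched, but edges may differ
}

fn edges_for_green_node(
    prev_edges: &Rc<Vec<u32>>,
    reason: GreenReason,
    freshly_recorded: Vec<u32>,
) -> Rc<Vec<u32>> {
    match reason {
        // Safe to share the previous session's edge list.
        GreenReason::AllDepsGreen => Rc::clone(prev_edges),
        // Dependency set may have changed; keep the newly recorded edges.
        GreenReason::ReexecutedSameResult => Rc::new(freshly_recorded),
    }
}

fn main() {
    let prev = Rc::new(vec![1, 2, 3]);
    let shared = edges_for_green_node(&prev, GreenReason::AllDepsGreen, vec![]);
    assert!(Rc::ptr_eq(&prev, &shared));
}
```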
Tyson Nottingham
f6d6b0c96d rustc_query_system: share previous graph data with current graph
Reduce memory consumption by taking advantage of red/green algorithm
properties to share the previous dependency graph's node data with the
current graph instead of storing node data redundantly. Red nodes can
share the `DepNode`, and green nodes can share the `DepNode` and
`Fingerprint`. Edges will be shared when possible in a later change.
2020-12-22 14:12:57 -08:00
Tyson Nottingham
7795801902 rustc_query_system: explicitly register reused dep nodes
Register nodes that we've reused from the previous session explicitly
with `OnDiskCache`. Previously, we relied on this happening as a side
effect of accessing the nodes in the `PreviousDepGraph`. For the sake of
performance and avoiding unintended side effects, register explicitly.
2020-12-18 18:53:12 -08:00
Aaron Hill
3918b82993
Use def_path_hash_to_def_id when re-using a RawDefId
Fixes #79890

Previously, we just copied a `RawDefId` from the 'old' map to the 'new'
map. However, the `RawDefId` for a given `DefPathHash` may be different
in the current compilation session. Using `def_path_hash_to_def_id`
ensures that the `RawDefId` we use is valid in the current session.
2020-12-10 16:04:19 -05:00
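
A simplified sketch of the fix's idea, with invented table types: rather than copying a session-local `RawDefId`, translate through the stable `DefPathHash`.

```rust
use std::collections::HashMap;

// Stable across compilation sessions.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct DefPathHash(u64);

// Only meaningful within a single session.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct RawDefId(u32);

// Instead of copying the old session's RawDefId into the new map,
// go through the stable DefPathHash and look it up in the current
// session's table.
fn remap(
    old_id: RawDefId,
    old_id_to_hash: &HashMap<RawDefId, DefPathHash>,
    current_hash_to_id: &HashMap<DefPathHash, RawDefId>,
) -> Option<RawDefId> {
    let hash = old_id_to_hash.get(&old_id)?;
    current_hash_to_id.get(hash).copied()
}

fn main() {
    let old: HashMap<_, _> = [(RawDefId(7), DefPathHash(0xfeed))].into_iter().collect();
    let current: HashMap<_, _> = [(DefPathHash(0xfeed), RawDefId(3))].into_iter().collect();
    assert_eq!(remap(RawDefId(7), &old, &current), Some(RawDefId(3)));
}
```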
Aaron Hill
c2946402ff
Properly re-use def path hash in incremental mode
Fixes #79661

In incremental compilation mode, we update a `DefPathHash -> DefId`
mapping every time we create a `DepNode` for a foreign `DefId`.
This mapping is written out to the on-disk incremental cache, and is
read by the next compilation session to allow us to lazily decode
`DefId`s.

When we decode a `DepNode` from the current incremental cache, we need
to ensure that any previously-recorded `DefPathHash -> DefId` mapping
gets recorded in the new mapping that we write out. However, PR #74967
didn't do this in all cases, leading to us being unable to decode a
`DefPathHash` in certain circumstances.

This PR refactors some of the code around `DepNode` deserialization to
prevent this kind of mistake from happening again.
2020-12-04 22:16:40 -05:00
Aaron Hill
7a9aa4f980
Fix rebase fallout 2020-11-25 15:08:51 -05:00
Aaron Hill
e935d3832c
Lazy DefPath decoding for incremental compilation 2020-11-25 14:49:15 -05:00
Dániel Buga
db8b86b2df Fix typos 2020-11-21 09:06:45 +01:00
Tyson Nottingham
05dde137ca Make PackedFingerprint's Fingerprint private 2020-11-18 15:10:43 -08:00
Tyson Nottingham
f09d474836 Use PackedFingerprint in DepNode to reduce memory consumption 2020-11-18 12:49:09 -08:00
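
A rough illustration of why a packed fingerprint can shrink `DepNode` (the layout numbers assume the simplified field types shown here, not the real ones):

```rust
use std::mem::{align_of, size_of};

#[allow(dead_code)]
struct Fingerprint(u64, u64);

// Dropping the alignment requirement of the hash lets the containing
// node pack tightly after a small `kind` field.
#[allow(dead_code)]
#[repr(packed)]
struct PackedFingerprint(Fingerprint);

#[allow(dead_code)]
struct DepNode {
    kind: u16,
    hash: PackedFingerprint,
}

fn main() {
    assert_eq!(align_of::<PackedFingerprint>(), 1);
    // 18 bytes here, versus 24 with an align-8 Fingerprint field.
    println!("DepNode size: {} bytes", size_of::<DepNode>());
}
```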
Joshua Nelson
57c6ed0c07 Fix even more clippy warnings 2020-10-30 10:13:39 -04:00
Camille GILLOT
0a4d948b4a Remove unused ProfileCategory. 2020-10-22 22:35:32 +02:00
bors
500ddc5efd Auto merge of #77871 - Julian-Wollersberger:less-query-context, r=oli-obk
Make fewer types generic over QueryContext

While trying to refactor `rustc_query_system::query::QueryContext` to make it dyn-safe, I noticed some smaller things:
* QueryConfig doesn't need to be generic over QueryContext
* ~~The `kind` field on QueryJobId is unused~~
* Some unnecessary where clauses
* Many types in `job.rs` were generic over `QueryContext` but only needed `QueryContext::Query`.
  If handle_cycle_error() could be refactored to not take `error: CycleError<CTX::Query>`, all those bounds could be removed as well.

Changing `find_cycle_in_stack()` in job.rs to not take a `tcx` argument is the only functional change here. Everything else is just updating type signatures. (aka compile-error driven development ^^)

~~Currently there is a weird bug where memory usage suddenly skyrockets when running UI tests. I'll investigate that tomorrow.
A perf run probably won't make sense before that is fixed.~~

EDIT: `kind` actually is used by `Eq`, and re-adding it fixed the memory issue.
2020-10-22 12:24:55 +00:00
Julian Wollersberger
52cedcab92 Remove <CTX: QueryContext> in a bunch of places.
It was only needed by `find_cycle_in_stack()` in job.rs, but needed to be forwarded through dozens of types.
2020-10-19 11:11:09 +02:00
est31
338fad2162 Remove unused code from rustc_query_system 2020-10-14 04:14:32 +02:00
Julian Wollersberger
39b0e79285 Remove generic argument from QueryConfig. 2020-10-12 16:04:49 +02:00
Andreas Jonson
b8752fff19 update the version of itertools and parking_lot
this is to avoid compiling multiple versions of the crates in rustc
2020-09-12 08:26:53 +02:00
mark
9e5f7d5631 mv compiler to compiler/ 2020-08-30 18:45:07 +03:00