Commit Graph

695 Commits

Author SHA1 Message Date
Josh Stone
ab57e36268 Update to rebased rustc-rayon 0.4 2022-05-27 20:20:41 -07:00
Vadim Petrochenkov
5bf23f64cc libcore: Add iter::from_generator which is like iter::from_fn, but for coroutines instead of functions 2022-05-27 01:51:31 +03:00
bors
574830f573 Auto merge of #96094 - Elliot-Roberts:fix_doctests, r=compiler-errors
Begin fixing all the broken doctests in `compiler/`

Begins to fix #95994.
All of them pass now, but I've marked 24 of them with `ignore HELP (<explanation>)` (asking for help), as I'm unsure how to get them to work or whether we should leave them as they are.
There are also a few that I marked `ignore` that could maybe be made to work but seem less important.
Each `ignore` has a rough "reason" for ignoring after it in parentheses, with

- `(pseudo-rust)` meaning "mostly rust-like but contains foreign syntax"
- `(illustrative)` a somewhat catchall for either a fragment of rust that doesn't stand on its own (like a lone type), or abbreviated rust with ellipses and undeclared types that would get too cluttered if made compile-worthy.
- `(not-rust)` stuff that isn't rust but benefits from the syntax highlighting, like MIR.
- `(internal)` uses `rustc_*` code which would be difficult to make work with the testing setup.

Those reason notes are applied a bit inconsistently and messily, though. If that's important, I can go through them again and try a more principled approach. When I run `rg '```ignore \(' .` on the repo, there look to be lots of different conventions other people have used for this sort of thing. I could try unifying them all if that would be helpful.

I'm not sure if there was a better existing way to do this, but I wrote my own script to help me run all the doctests and wade through the output. If that would be useful to anyone else, I put it here: https://github.com/Elliot-Roberts/rust_doctest_fixing_tool
2022-05-07 06:30:29 +00:00
bors
d60b4f52c9 Auto merge of #95454 - randomicon00:fix95444, r=wesleywiser
Fixing #95444 by only displaying passes that take more than 5 milliseconds

As discussed in #95444, I have added code to check pass durations and only display passes that take longer than 5 milliseconds.

r? `@jyn514`
2022-05-06 17:52:47 +00:00
bors
8c4fc9d9a4 Auto merge of #94598 - scottmcm:prefix-free-hasher-methods, r=Amanieu
Add a dedicated length-prefixing method to `Hasher`

This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths

This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.

Fixes #94026

r? rust-lang/libs

---

The core of this change is the following two new methods on `Hasher`:

```rust
pub trait Hasher {
    /// Writes a length prefix into this hasher, as part of being prefix-free.
    ///
    /// If you're implementing [`Hash`] for a custom collection, call this before
    /// writing its contents to this `Hasher`.  That way
    /// `(collection![1, 2, 3], collection![4, 5])` and
    /// `(collection![1, 2], collection![3, 4, 5])` will provide different
    /// sequences of values to the `Hasher`
    ///
    /// The `impl<T> Hash for [T]` includes a call to this method, so if you're
    /// hashing a slice (or array or vector) via its `Hash::hash` method,
    /// you should **not** call this yourself.
    ///
    /// This method is only for providing domain separation.  If you want to
    /// hash a `usize` that represents part of the *data*, then it's important
    /// that you pass it to [`Hasher::write_usize`] instead of to this method.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(hasher_prefixfree_extras)]
    /// # // Stubs to make the `impl` below pass the compiler
    /// # struct MyCollection<T>(Option<T>);
    /// # impl<T> MyCollection<T> {
    /// #     fn len(&self) -> usize { todo!() }
    /// # }
    /// # impl<'a, T> IntoIterator for &'a MyCollection<T> {
    /// #     type Item = T;
    /// #     type IntoIter = std::iter::Empty<T>;
    /// #     fn into_iter(self) -> Self::IntoIter { todo!() }
    /// # }
    ///
    /// use std::hash::{Hash, Hasher};
    /// impl<T: Hash> Hash for MyCollection<T> {
    ///     fn hash<H: Hasher>(&self, state: &mut H) {
    ///         state.write_length_prefix(self.len());
    ///         for elt in self {
    ///             elt.hash(state);
    ///         }
    ///     }
    /// }
    /// ```
    ///
    /// # Note to Implementers
    ///
    /// If you've decided that your `Hasher` is willing to be susceptible to
    /// Hash-DoS attacks, then you might consider skipping hashing some or all
    /// of the `len` provided in the name of increased performance.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_length_prefix(&mut self, len: usize) {
        self.write_usize(len);
    }

    /// Writes a single `str` into this hasher.
    ///
    /// If you're implementing [`Hash`], you generally do not need to call this,
    /// as the `impl Hash for str` does, so you can just use that.
    ///
    /// This includes the domain separator for prefix-freedom, so you should
    /// **not** call `Self::write_length_prefix` before calling this.
    ///
    /// # Note to Implementers
    ///
    /// The default implementation of this method includes a call to
    /// [`Self::write_length_prefix`], so if your implementation of `Hasher`
    /// doesn't care about prefix-freedom and you've thus overridden
    /// that method to do nothing, there's no need to override this one.
    ///
    /// This method is available to be overridden separately from the others
    /// as `str` being UTF-8 means that it never contains `0xFF` bytes, which
    /// can be used to provide prefix-freedom cheaper than hashing a length.
    ///
    /// For example, if your `Hasher` works byte-by-byte (perhaps by accumulating
    /// them into a buffer), then you can hash the bytes of the `str` followed
    /// by a single `0xFF` byte.
    ///
    /// If your `Hasher` works in chunks, you can also do this by being careful
    /// about how you pad partial chunks.  If the chunks are padded with `0x00`
    /// bytes then just hashing an extra `0xFF` byte doesn't necessarily
    /// provide prefix-freedom, as `"ab"` and `"ab\u{0}"` would likely hash
    /// the same sequence of chunks.  But if you pad with `0xFF` bytes instead,
    /// ensuring at least one padding byte, then it can often provide
    /// prefix-freedom cheaper than hashing the length would.
    #[inline]
    #[unstable(feature = "hasher_prefixfree_extras", issue = "88888888")]
    fn write_str(&mut self, s: &str) {
        self.write_length_prefix(s.len());
        self.write(s.as_bytes());
    }
}
```

With updates to the `Hash` implementations for slices and containers to call `write_length_prefix` instead of `write_usize`.

`write_str` defaults to using `write_length_prefix` since, as was pointed out in the issue, the `write_u8(0xFF)` approach is insufficient for hashers that work in chunks, as those would hash `"a\u{0}"` and `"a"` to the same thing.  But since `SipHash` works byte-wise (there's an internal buffer to accumulate bytes until a full chunk is available), it overrides `write_str` to continue to use the add-non-UTF-8-byte approach.
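
A minimal sketch of that byte-wise strategy (a toy recorder, not SipHash; nightly-only since `write_str` is gated behind `hasher_prefixfree_extras`): terminate each string with `0xFF`, a byte UTF-8 can never contain, instead of hashing its length.

```rust
#![feature(hasher_prefixfree_extras)] // `write_str` is unstable
use std::hash::Hasher;

/// Toy hasher that records the exact byte stream it is fed, to make
/// the prefix-freedom visible; real code would mix the bytes instead.
struct StreamRecorder {
    stream: Vec<u8>,
}

impl Hasher for StreamRecorder {
    fn write(&mut self, bytes: &[u8]) {
        self.stream.extend_from_slice(bytes);
    }

    // `str` is UTF-8, so it never contains 0xFF: a 0xFF terminator
    // separates strings without hashing their lengths.
    fn write_str(&mut self, s: &str) {
        self.write(s.as_bytes());
        self.write_u8(0xFF);
    }

    fn finish(&self) -> u64 {
        unimplemented!("demonstration only")
    }
}

fn stream_of(parts: &[&str]) -> Vec<u8> {
    let mut h = StreamRecorder { stream: Vec::new() };
    for p in parts {
        h.write_str(p);
    }
    h.stream
}

fn main() {
    // ("ab", "c") feeds `ab\xFFc\xFF`; ("a", "bc") feeds `a\xFFbc\xFF`.
    assert_ne!(stream_of(&["ab", "c"]), stream_of(&["a", "bc"]));
    println!("streams differ: prefix-free");
}
```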

---

Compatibility:

Because the default implementation of `write_length_prefix` calls `write_usize`, the changed hash implementation for slices will do the same thing the old one did on existing `Hasher`s.
2022-05-06 09:43:57 +00:00
Scott McMurray
98054377ee Add a dedicated length-prefixing method to Hasher
This accomplishes two main goals:
- Make it clear who is responsible for prefix-freedom, including how they should do it
- Make it feasible for a `Hasher` that *doesn't* care about Hash-DoS resistance to get better performance by not hashing lengths

This does not change rustc-hash, since that's in an external crate, but that could potentially use it in future.
2022-05-06 00:03:38 -07:00
Peh
e79ba76ec4 Fixing #95444 by only displaying passes that take more than 5 milliseconds
95444: Adding passes that include memory increase

Fix95444: Replace the subtraction with the abs_diff() method
2022-05-05 23:56:40 +00:00
Josh Triplett
0fc5c524f5 Stabilize bool::then_some 2022-05-04 13:22:08 +02:00
Elliot Roberts
7907385999 fix most compiler/ doctests 2022-05-02 17:40:30 -07:00
Michael Woerister
c0be619724 incr. comp.: Don't export impl_stable_hash_via_hash!() and warn about using it. 2022-04-19 10:43:20 +02:00
bors
563ef23529 Auto merge of #95899 - petrochenkov:modchild2, r=cjgillot
rustc_metadata: Do not encode unnecessary module children

This should remove the syntax context shift and the special case for `ExternCrate` in the decoder in https://github.com/rust-lang/rust/pull/95880.

This PR also shifts some work from decoding to encoding, which is typically useful for performance (but probably not much in this case).
r? `@cjgillot`
2022-04-16 22:04:10 +00:00
Dylan DPC
9905774762
Rollup merge of #96058 - euclio:flock-impls, r=nagisa
separate flock implementations into separate modules

The main benefit of doing this is that rustfmt will now format each of these modules.
2022-04-16 19:42:05 +02:00
bors
febce1fc31 Auto merge of #95689 - lqd:self-profiler, r=wesleywiser
Allow self-profiler to only record potentially costly arguments when argument recording is turned on

As discussed [on zulip](https://rust-lang.zulipchat.com/#narrow/stream/247081-t-compiler.2Fperformance/topic/Identifying.20proc-macro.20slowdowns/near/277304909) with `@wesleywiser,` I'd like to record proc-macro expansions in the self-profiler, with some detailed data (per-expansion spans for example, to follow #95473).

At the same time, I'd also like to avoid doing expensive things when tracking a generic activity's arguments, if they were not specifically opted into the event filter mask, to allow the self-profiler to be used in hotter contexts.

This PR tries to offer:
- a way to ensure that a closure recording arguments will only be called in that situation, so that potentially costly arguments can still be recorded when needed, with the additional requirement that, if possible, it would offer a way to record non-owned data without adding many `generic_activity_with_arg_{...}`-style methods. This led to the single `generic_activity_with_arg_recorder` entry point, whose closure parameter offers the new methods and runs in a context where costly arguments can be created without disturbing the profiled piece of code (a standalone sketch of the resulting pattern follows this list).
- some facilities/patterns allowing more rustc-specific data to be recorded in this situation, without making `rustc_data_structures`, where the self-profiler is defined, depend on other rustc crates (causing circular dependencies): in particular, spans. They are quite tricky to turn into strings (if the default `Debug` impl output does not match the context one needs them for), and since I'd also like to avoid the allocation there when arg recording is turned off today, that has turned into another flexibility requirement for the API in this PR (separating the span-specific recording into an extension trait). **edit**: I've removed this from the PR so that it's easier to review, and opened https://github.com/rust-lang/rust/pull/95739.
- allow for extensibility in the future: other ways to record arguments, or additional data attached to them, could be added later (e.g. recording the argument's name as well as its data).
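
A standalone sketch of that entry point's shape (simplified stand-in types, not the actual rustc API): the argument-building closure runs only when argument recording is enabled, so the hot path never pays for formatting or allocation.

```rust
struct EventArgRecorder {
    args: Vec<String>,
}

impl EventArgRecorder {
    fn record_arg(&mut self, arg: impl Into<String>) {
        self.args.push(arg.into());
    }
}

struct SelfProfilerRef {
    args_enabled: bool, // stands in for -Zself-profile-events=args
}

impl SelfProfilerRef {
    fn generic_activity_with_arg_recorder<F>(&self, event_label: &str, record: F)
    where
        F: FnOnce(&mut EventArgRecorder),
    {
        if self.args_enabled {
            let mut recorder = EventArgRecorder { args: Vec::new() };
            record(&mut recorder); // costly formatting happens only here
            println!("{event_label}: {:?}", recorder.args);
        } else {
            // hot path: the closure (and its allocations) never runs
            println!("{event_label}: args not recorded");
        }
    }
}

fn main() {
    let profiler = SelfProfilerRef { args_enabled: false };
    profiler.generic_activity_with_arg_recorder("expand_proc_macro", |r| {
        r.record_arg(format!("expansion at {}..{}", 10, 42))
    });
}
```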

Some areas where I'd love feedback:
- the API and names: the `EventArgRecorder` and its method for example. As well as the verbosity that comes from the increased flexibility.
- if I should convert the existing `generic_activity_with_arg{s}` to just forward to `generic_activity_with_arg_recorder` + `recorder.record_arg` (or remove them altogether? Probably not): I've used the new API in the one simple case I could find of allocating for an arg that may not be recorded, and the rest don't seem costly.
- [x] whether this API should panic if no arguments were recorded by the user-provided closure (like this PR currently does: it seems like an error to use an API dedicated to recording arguments but then not call the methods to do so), or whether it should just record a generic activity without arguments?
- whether the `record_arg` function should be `#[inline(always)]`, like the `generic_activity_*` functions?

As mentioned, r? `@wesleywiser` following our recent discussion.
2022-04-16 11:43:28 +00:00
Dylan DPC
49a31cdc1d
Rollup merge of #95372 - RalfJung:unaligned_references, r=oli-obk
make unaligned_references lint deny-by-default

This lint has been warn-by-default for a year now (since https://github.com/rust-lang/rust/pull/82525), so I think it is time to crank it up a bit. Code that triggers the lint causes UB (without `unsafe`) when executed, so we really don't want people to write code like this.
2022-04-16 07:12:43 +02:00
Ralf Jung
e30d6d9096 make unaligned_references lint deny-by-default 2022-04-14 21:16:42 -04:00
Andy Russell
219d81f19b
separate flock implementations into separate modules 2022-04-14 18:30:53 -04:00
Vadim Petrochenkov
233fa659f4 rustc_metadata: Do not encode unnecessary module children 2022-04-13 01:35:27 +03:00
Camille GILLOT
443333dc1f Remove NodeIdHashingMode. 2022-04-12 19:59:32 +02:00
bors
e980c62955 Auto merge of #95524 - oli-obk:cached_stable_hash_cleanups, r=nnethercote
Cached stable hash cleanups

r? `@nnethercote`

Add a sanity assertion in debug mode to check that the cached hashes are actually the ones we get if we compute the hash each time.

Add a new data structure that bundles all the hash-caching work to make it easier to re-use it for different interned data structures
2022-04-09 02:31:24 +00:00
bors
f4a7ce997a Auto merge of #95519 - oli-obk:tait_ub2, r=compiler-errors
Enforce well formedness for type alias impl trait's hidden type

fixes #84657

This was not an issue with return-position-impl-trait because the generic bounds of the function are the same as those of the opaque type, and the hidden type must already be well formed within the function.

With type-alias-impl-trait the hidden type could be defined in a function that has *more* lifetime bounds than the type alias. This is fine, but the hidden type must still be well formed without those additional bounds.
2022-04-08 20:45:16 +00:00
Rémy Rakic
7585269673 add generic_activity_with_arg_recorder to the self-profiler
This allows profiling costly arguments to be recorded only when `-Zself-profile-events=args` is on: using a closure that takes an `EventArgRecorder` and call its `record_arg` or `record_args` methods.
2022-04-07 15:47:20 +02:00
Rémy Rakic
1c4ae7aa4a turn exec comment into doc comment 2022-04-07 15:47:19 +02:00
Oli Scherer
2e0ef701c2 Document and rename the new wrapper type 2022-04-07 13:01:48 +00:00
Oli Scherer
b30bcfae38 Fix some fallout around type alias impl trait in associated types 2022-04-06 12:56:22 +00:00
bors
168a020900 Auto merge of #92686 - saethlin:unsafe-debug-asserts, r=Amanieu
Add debug assertions to some unsafe functions

As suggested by https://github.com/rust-lang/rust/issues/51713

~~Some similar code calls `abort()` instead of `panic!()` but aborting doesn't work in a `const fn`, and the intrinsic for doing dispatch based on whether execution is in a const is unstable.~~

This picked up some invalid uses of `get_unchecked` in the compiler, and fixes them.

I can confirm that they do in fact pick up invalid uses of `get_unchecked` in the wild, though the user experience is less-than-awesome:
```
     Running unittests (target/x86_64-unknown-linux-gnu/debug/deps/rle_decode_fast-04b7918da2001b50)

running 6 tests
error: test failed, to rerun pass '--lib'

Caused by:
  process didn't exit successfully: `/home/ben/rle-decode-helper/target/x86_64-unknown-linux-gnu/debug/deps/rle_decode_fast-04b7918da2001b50` (signal: 4, SIGILL: illegal instruction)
```

~~As best I can tell these changes produce a 6% regression in the runtime of `./x.py test` when `[rust] debug = true` is set.~~
Latest commit (6894d559bd) brings the additional overhead from this PR down to 0.5%, while also adding a few more assertions. I think this actually covers all the places in `core` that it is reasonable to check for safety requirements at runtime.

Thoughts?
2022-04-03 16:04:47 +00:00
Oli Scherer
6ffd654683 Check that the cached stable hash is the right one if debug assertions are enabled 2022-03-31 14:54:04 +00:00
Oli Scherer
00c24dd8ce Move stable hash from TyS into a datastructure that can be shared with other interned types. 2022-03-31 14:54:04 +00:00
bors
05142a7e44 Auto merge of #95466 - Dylan-DPC:rollup-g7ddr8y, r=Dylan-DPC
Rollup of 5 pull requests

Successful merges:

 - #95294 (Document Linux kernel handoff in std::io::copy and std::fs::copy)
 - #95443 (Clarify how `src/tools/x` searches for python)
 - #95452 (fix since field version for termination stabilization)
 - #95460 (Spellchecking compiler code)
 - #95461 (Spellchecking some comments)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-03-30 07:45:42 +00:00
Dylan DPC
03b3993ae8
Rollup merge of #95461 - nyurik:spelling, r=lcnr
Spellchecking some comments

This PR attempts to clean up some minor spelling mistakes in comments
2022-03-30 09:10:07 +02:00
Yuri Astrakhan
7e8201ae0a Spellchecking some comments
This PR attempts to clean up some minor spelling mistakes in comments
2022-03-30 01:39:38 -04:00
bors
f132bcf3bd Auto merge of #94081 - oli-obk:lazy_tait_take_two, r=nikomatsakis
Lazy type-alias-impl-trait take two

### user visible change 1: RPIT inference from recursive call sites

Lazy TAIT has an insta-stable change. The following snippet now compiles, because opaque types can now have their hidden type set from wherever the opaque type is mentioned.

```rust
fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return 42
    }
    let x: u32 = bar(false); // this errors on stable
    99
}
```

The return type of `bar` stays opaque: you can't do `bar(false) + 42`; you need to actually mention the hidden type.

### user visible change 2: divergence between RPIT and TAIT in return statements

Note that `return` statements and the trailing return expression are special with RPIT (but not TAIT). So

```rust
#![feature(type_alias_impl_trait)]
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![42];
    }
    std::iter::empty().collect() //~ ERROR `Foo` cannot be built from an iterator
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![42]
    }
    std::iter::empty().collect() // Works, magic (accidentally stabilized, not intended)
}
```

But when we are working with the return value of a recursive call, the behavior of RPIT and TAIT is the same:

```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        return vec![];
    }
    let mut x = foo(false);
    x = std::iter::empty().collect(); //~ ERROR `Foo` cannot be built from an iterator
    vec![]
}

fn bar(b: bool) -> impl std::fmt::Debug {
    if b {
        return vec![];
    }
    let mut x = bar(false);
    x = std::iter::empty().collect(); //~ ERROR `impl Debug` cannot be built from an iterator
    vec![]
}
```

### user visible change 3: TAIT does not merge types across branches

In contrast to RPIT, TAIT does not merge types across branches, so the following does not compile.

```rust
type Foo = impl std::fmt::Debug;

fn foo(b: bool) -> Foo {
    if b {
        vec![42_i32]
    } else {
        std::iter::empty().collect()
        //~^ ERROR `Foo` cannot be built from an iterator over elements of type `_`
    }
}
```

It is easy to support, but we should make an explicit decision to include the additional complexity in the implementation (it's not much, see a721052457cf513487fb4266e3ade65c29b272d2 which needs to be reverted to enable this).

### PR formalities

previous attempt: #92007

This PR also includes #92306 and #93783, as they were reverted along with #92007 in #93893

fixes #93411
fixes #88236
fixes #89312
fixes #87340
fixes #86800
fixes #86719
fixes #84073
fixes #83919
fixes #82139
fixes #77987
fixes #74282
fixes #67830
fixes #62742
fixes #54895
2022-03-30 05:04:45 +00:00
Ben Kimock
6e6d0cbf83 Add debug assertions to some unsafe functions
These debug assertions are all implemented only at runtime using
`const_eval_select`, and in the error path they execute
`intrinsics::abort` instead of a normal debug assertion, to
minimize the impact of these assertions on code size when enabled.

Of all these changes, the bounds checks for unchecked indexing are
expected to be most impactful (case in point, they found a problem in
rustc).
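
A simplified sketch of the idea, using `cfg!(debug_assertions)` as a stand-in for the unstable `const_eval_select` machinery the real change relies on:

```rust
/// Reads `slice[index]` without bounds checking in release builds,
/// but aborts on an out-of-bounds index when debug assertions are on.
///
/// # Safety
/// `index` must be in bounds for `slice`.
unsafe fn get_unchecked_checked<T>(slice: &[T], index: usize) -> &T {
    if cfg!(debug_assertions) && index >= slice.len() {
        // The real code calls `intrinsics::abort` to keep code size down.
        std::process::abort();
    }
    slice.get_unchecked(index)
}

fn main() {
    let v = [10, 20, 30];
    // SAFETY: index 1 is in bounds.
    println!("{}", unsafe { *get_unchecked_checked(&v, 1) });
}
```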
2022-03-29 11:05:24 -04:00
Oli Scherer
264cd05b16 Revert "Auto merge of #93893 - oli-obk:sad_revert, r=oli-obk"
This reverts commit 6499c5e7fc, reversing
changes made to 78450d2d60.
2022-03-28 16:27:14 +00:00
klensy
008fc79dcd Propagate parallel_compiler feature through rustc crates. With the feature turned off, the number of crates built drops from 238 to 224. 2022-03-28 08:41:12 +03:00
lcnr
b8135fd5c8 add #[rustc_pass_by_value] to more types 2022-03-08 15:39:52 +01:00
Nicholas Nethercote
4f008e06c3 Clarify Layout interning.
`Layout` is another type that is sometimes interned, sometimes not, and
we always use references to refer to it so we can't take any advantage
of the uniqueness properties for hashing or equality checks.

This commit renames `Layout` as `LayoutS`, and then introduces a new
`Layout` that is a newtype around an `Interned<LayoutS>`. It also
interns more layouts than before. Previously layouts within layouts
(via the `variants` field) were never interned, but now they are. Hence
the lifetime on the new `Layout` type.

Unlike other interned types, these ones are in `rustc_target` instead of
`rustc_middle`. This reflects the existing structure of the code, which
does layout-specific stuff in `rustc_target` while `TyAndLayout` is
generic over the `Ty`, allowing the type-specific stuff to occur in
`rustc_middle`.

The commit also adds a `HashStable` impl for `Interned`, which was
needed. It hashes the contents, unlike the `Hash` impl which hashes the
pointer.
2022-03-07 13:41:47 +11:00
Nicholas Nethercote
4852291417 Introduce ConstAllocation.
Currently some `Allocation`s are interned, some are not, and it's very
hard to tell at a use point which is which.

This commit introduces `ConstAllocation` for the known-interned ones,
which makes the division much clearer. `ConstAllocation::inner()` is
used to get the underlying `Allocation`.

In some places it's natural to use an `Allocation`, in some it's natural
to use a `ConstAllocation`, and in some places there's no clear choice.
I've tried to make things look as nice as possible, while generally
favouring `ConstAllocation`, which is the type that embodies more
information. This does require quite a few calls to `inner()`.

The commit also tweaks how `PartialOrd` works for `Interned`. The
previous code was too clever by half, building on `T: Ord` to make the
code shorter. That caused problems with deriving `PartialOrd` and `Ord`
for `ConstAllocation`, so I changed it to build on `T: PartialOrd`,
which is slightly more verbose but much more standard and avoided the
problems.
2022-03-07 08:25:50 +11:00
bors
c38b8a8c62 Auto merge of #94579 - tmiasko:target-features, r=nagisa
Always include global target features in function attributes

This ensures that information about target features configured with
`-C target-feature=...` or detected with `-C target-cpu=native` is
retained for subsequent consumers of LLVM bitcode.

This is crucial for linker plugin LTO, since this information is not
conveyed to the plugin otherwise.

<details><summary>Additional test case demonstrating the issue</summary>

```rust
extern crate core;

#[inline]
#[target_feature(enable = "aes")]
unsafe fn f(a: u128, b: u128) -> u128 {
    use core::arch::x86_64::*;
    use core::mem::transmute;
    transmute(_mm_aesenc_si128(transmute(a), transmute(b)))
}

pub fn g(a: u128, b: u128) -> u128 {
    unsafe { f(a, b) }
}

fn main() {
    let mut args = std::env::args();
    let _ = args.next().unwrap();
    let a: u128 = args.next().unwrap().parse().unwrap();
    let b: u128 = args.next().unwrap().parse().unwrap();
    println!("{}", g(a, b));
}
```

```console
$ rustc --edition=2021 a.rs -Clinker-plugin-lto -Clink-arg=-fuse-ld=lld  -Ctarget-feature=+aes -O
...
  = note: LLVM ERROR: Cannot select: intrinsic %llvm.x86.aesni.aesenc
```

</details>

r? `@nagisa`
2022-03-06 18:07:11 +00:00
Tomasz Miąsko
725c11ef3c Add SmallStr 2022-03-04 16:57:34 +01:00
Tomasz Miąsko
ea0a31ff0c Inline SmallCStr::deref 2022-03-04 16:57:34 +01:00
Loïc BRANSTETT
08e1e67b49 Remove invalid #[cfg(tests)] in index_map 2022-03-04 11:34:50 +01:00
bors
2a280de64f Auto merge of #94514 - matthiaskrgr:rollup-pdzn82h, r=matthiaskrgr
Rollup of 9 pull requests

Successful merges:

 - #94464 (Suggest adding a new lifetime parameter when two elided lifetimes should match up for traits and impls.)
 - #94476 (7 - Make more use of `let_chains`)
 - #94478 (Fix panic when handling intra doc links generated from macro)
 - #94482 (compiler: fix some typos)
 - #94490 (Update books)
 - #94496 (tests: accept llvm intrinsic in align-checking test)
 - #94498 (9 - Make more use of `let_chains`)
 - #94503 (Provide C FFI types via core::ffi, not just in std)
 - #94513 (update Miri)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2022-03-02 05:32:00 +00:00
cuishuang
eb2b9441e7 compiler: fix some typos 2022-03-01 20:02:47 +08:00
Simonas Kazlauskas
df701a292c Querify global_backend_features
At the very least this serves to deduplicate the diagnostics that are
output about unknown target features provided via CLI.
2022-03-01 01:57:25 +02:00
bors
3b1fe7e7c9 Auto merge of #94084 - Mark-Simulacrum:drop-sharded, r=cjgillot
Avoid query cache sharding code in single-threaded mode

In non-parallel compilers, this is just adding needless overhead at compilation time (since there is only one shard statically anyway). This amounts to roughly ~10 seconds reduction in bootstrap time, with overall neutral (some wins, some losses) performance results.

Parallel compiler performance should be largely unaffected by this PR; sharding is kept there.
2022-02-27 14:04:07 +00:00
Matthias Krüger
648a8e314a
Rollup merge of #94306 - Mark-Simulacrum:dom-fixups, r=jackh726
Avoid exhausting stack space in dominator compression

Doesn't add a test case -- I ended up running into this while playing with the generated example from #43578, which we could do with a run-make test (to avoid checking a large code snippet into tree), but I suspect we don't want to wait for it to compile (locally it takes ~14s -- not terrible, but doesn't seem worth it to me). In practice stack space exhaustion is difficult to test for, too, since if we set the bound too low a different call structure above us (e.g., a nearer ensure_sufficient_stack call) would let the test pass even with the old impl, most likely.

Locally it seems like this manages to perform approximately equivalently to the recursion, but will run perf to confirm.
2022-02-26 07:52:44 +01:00
Mark Rousskov
22c3a71de1 Switch bootstrap cfgs 2022-02-25 08:00:52 -05:00
Matthias Krüger
ae27c4ab1f
Rollup merge of #94288 - Mark-Simulacrum:ser-opt, r=nnethercote
Cleanup a few Decoder methods

This is just some simple follow up to #93839.

r? `@nnethercote`
2022-02-24 07:48:09 +01:00
Mark Rousskov
4d89292785 Avoid exhausting stack space in dominator compression 2022-02-23 16:07:56 -05:00
bors
bafe8d06e0 Auto merge of #93984 - nnethercote:ChunkedBitSet, r=Mark-Simulacrum
Introduce `ChunkedBitSet` and use it for some dataflow analyses.

This reduces peak memory usage significantly for some programs with very
large functions.

r? `@ghost`
2022-02-23 01:26:07 +00:00
Nicholas Nethercote
36b495f3cf Introduce ChunkedBitSet and use it for some dataflow analyses.
This reduces peak memory usage significantly for some programs with very
large functions, such as:
- `keccak`, `unicode_normalization`, and `match-stress-enum`, from
  the `rustc-perf` benchmark suite;
- `http-0.2.6` from crates.io.

The new type is used in the analyses where the bitsets can get huge
(e.g. 10s of thousands of bits): `MaybeInitializedPlaces`,
`MaybeUninitializedPlaces`, and `EverInitializedPlaces`.

Some refactoring was required in `rustc_mir_dataflow`. All existing
analysis domains are either `BitSet` or a trivial wrapper around
`BitSet`, and access in a few places is done via `Borrow<BitSet>` or
`BorrowMut<BitSet>`. Now that some of these domains are `ChunkedBitSet`,
that no longer works. So this commit replaces the `Borrow`/`BorrowMut`
usage with a new trait `BitSetExt` containing the needed bitset
operations. The impls just forward these to the underlying bitset type.
This required fiddling with trait bounds in a few places.
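
A compact sketch of that trait-based arrangement with toy types (not the rustc implementations): both representations expose the operations the dataflow framework needs, so domains no longer have to be `Borrow<BitSet>`.

```rust
trait BitSetExt {
    fn contains(&self, bit: usize) -> bool;
}

struct BitSet {
    words: Vec<u64>, // dense: one bit per element of the domain
}

// Chunks that are all-zero or all-one store no words at all, which is
// what makes huge, mostly-uniform sets cheap.
enum Chunk {
    Zeros,
    Ones,
    Mixed(Box<[u64; 32]>), // 32 words * 64 bits = 2048 bits per chunk
}

struct ChunkedBitSet {
    chunks: Vec<Chunk>,
}

impl BitSetExt for BitSet {
    fn contains(&self, bit: usize) -> bool {
        self.words[bit / 64] & (1u64 << (bit % 64)) != 0
    }
}

impl BitSetExt for ChunkedBitSet {
    fn contains(&self, bit: usize) -> bool {
        match &self.chunks[bit / 2048] {
            Chunk::Zeros => false,
            Chunk::Ones => true,
            Chunk::Mixed(words) => {
                words[(bit % 2048) / 64] & (1u64 << (bit % 64)) != 0
            }
        }
    }
}

// An analysis generic over its domain only needs the trait bound:
fn is_set<D: BitSetExt>(domain: &D, bit: usize) -> bool {
    domain.contains(bit)
}

fn main() {
    let dense = BitSet { words: vec![0b100] };
    let chunked = ChunkedBitSet { chunks: vec![Chunk::Ones] };
    println!("{} {}", is_set(&dense, 2), is_set(&chunked, 1000));
}
```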

The commit also:
- Moves `static_assert_size` from `rustc_data_structures` to
  `rustc_index` so it can be used in the latter; the former now
  re-exports it so existing users are unaffected.
- Factors out some common "clear excess bits in the final word"
  functionality in `bit_set.rs`.
- Uses `fill` in a few places instead of loops.
2022-02-23 10:18:49 +11:00
Mark Rousskov
2098ea6eba Provide copy-free access to raw Decoder bytes 2022-02-22 18:11:59 -05:00
lcnr
6a1f5eab83 obligation forest docs 2022-02-21 12:00:26 +01:00
Mark Rousskov
9deed6f74e Move Sharded maps into each QueryCache impl 2022-02-20 12:10:46 -05:00
bors
c1aa85475c Auto merge of #93934 - rusticstuff:inline_ensure_sufficient_stack, r=estebank
Allow inlining of `ensure_sufficient_stack()`

This functions is monomorphized a lot and allowing the compiler to inline it improves instructions count and max RSS significantly in my local tests.
2022-02-20 15:10:19 +00:00
est31
2ef8af6619 Adopt let else in more places 2022-02-19 17:27:43 +01:00
Nicholas Nethercote
80632de4a1 Address review comments. 2022-02-15 16:20:01 +11:00
Nicholas Nethercote
0c2ebbd412 Rename PtrKey as Interned and improve it.
In particular, there's now more protection against incorrect usage,
because you can only create one via `Interned::new_unchecked`, which
makes it more obvious that you must be careful.

There are also some tests.
2022-02-15 15:50:29 +11:00
Santiago Pastorino
f4bb4500dd
Call the method fork instead of clone and add proper comments 2022-02-14 12:57:20 -03:00
Hans Kratz
59536fb023 Allow inlining of ensure_sufficient_stack() 2022-02-12 11:30:04 +01:00
Jakub Beránek
5fc2e5623b
Use const generics in SipHasher128's short_write 2022-02-05 19:55:44 +01:00
Jakub Beránek
c21b8e12a4
Fix isize optimization in StableHasher for big-endian architectures 2022-02-03 11:47:41 +01:00
bors
1be5c8f909 Auto merge of #93432 - Kobzol:stable-hash-isize-hash-compression, r=the8472
Compress amount of hashed bytes for `isize` values in StableHasher

This is another attempt to land https://github.com/rust-lang/rust/pull/92103, this time hopefully with a correct implementation w.r.t. stable hashing guarantees. The previous PR was [reverted](https://github.com/rust-lang/rust/pull/93014) because it could produce the [same hash](https://github.com/rust-lang/rust/pull/92103#issuecomment-1014625442) for different values even in quite simple situations. I have since added a basic [test](https://github.com/rust-lang/rust/pull/93193) that should guard against that situation, I also added a new test in this PR, specialised for this optimization.

## Why this optimization helps
Since the original PR, I have tried to analyze why this optimization even helps (and why it especially helps for `clap`). I found that the vast majority of stable-hashing `i64` actually comes from hashing `isize` (which is converted to `i64` in the stable hasher). I only found a single place where is this datatype used directly in the compiler, and this place has also been showing up in traces that I used to find out when is `isize` being hashed. This place is `rustc_span::FileName::DocTest`, however, I suppose that isizes also come from other places, but they might not be so easy to find (there were some other entries in the trace). `clap` hashes about 8.5 million `isize`s, and all of them fit into a single byte, which is why this optimization has helped it [quite a lot](https://github.com/rust-lang/rust/pull/92103#issuecomment-1005711861).

Now, I'm not sure if special casing `isize` is the correct solution here, maybe something could be done with that `isize` inside `DocTest` or in other places, but that's for another discussion I suppose. In this PR, instead of hardcoding a special case inside `SipHasher128`, I instead put it into `StableHasher`, and only used it for `isize` (I tested that for `i64` it doesn't help, or at least not for `clap` and other few benchmarks that I was testing).

## New approach
Since the most common case is a single byte, I added a fast path for hashing `isize` values whose positive value fits within a single byte, and a cold path for the rest of the values.

To avoid the previous correctness problem, we need to make sure that each unique `isize` value will produce a unique hash stream to the hasher. By hash stream I mean a sequence of bytes that will be hashed (a different sequence should produce a different hash, but that is of course not guaranteed).

We have to distinguish different values that produce the same bit pattern when we combine them. For example, if we simply skipped the leading zero bytes for values that fit within a single byte, `(0xFF, 0xFFFFFFFFFFFFFFFF)` and `(0xFFFFFFFFFFFFFFFF, 0xFF)` would send the same hash stream to the hasher, which must not happen.

To avoid this situation, values `[0, 0xFE]` are hashed as a single byte. When we hash a larger (treating `isize` as `u64`) value, we first hash an additional byte `0xFF`. Since `0xFF` cannot occur when we apply the single byte optimization, we guarantee that the hash streams will be unique when hashing two values `(a, b)` and `(b, a)` if `a != b` (a code sketch of this encoding follows the list):
1) When both `a` and `b` are within `[0, 0xFE]`, their hash streams will be different.
2) When neither `a` nor `b` is within `[0, 0xFE]`, their hash streams will be different.
3) When `a` is within `[0, 0xFE]` and `b` isn't, when we hash `(a, b)`, the hash stream will definitely not begin with `0xFF`. When we hash `(b, a)`, the hash stream will definitely begin with `0xFF`. Therefore the hash streams will be different.
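
A minimal sketch of the encoding described above (the stream is collected into a `Vec` for clarity; the real code feeds SipHash instead): values in `[0, 0xFE]` take one byte, everything else takes a `0xFF` marker plus eight bytes, so no short encoding is a prefix of a long one.

```rust
fn write_isize(stream: &mut Vec<u8>, value: isize) {
    let value = value as u64;
    if value < 0xFF {
        stream.push(value as u8); // fast path: one byte, never 0xFF
    } else {
        stream.push(0xFF); // marker byte announcing the long form
        stream.extend_from_slice(&value.to_le_bytes());
    }
}

fn main() {
    let encode = |values: &[isize]| {
        let mut stream = Vec::new();
        for &v in values {
            write_isize(&mut stream, v);
        }
        stream
    };
    // Case 3 above: (small, large) vs (large, small) differ at byte 0.
    assert_ne!(encode(&[1, isize::MAX]), encode(&[isize::MAX, 1]));
    println!("streams differ");
}
```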

r? `@the8472`
2022-02-03 01:08:45 +00:00
Matthias Krüger
21ffe45631
Rollup merge of #92528 - tmiasko:combine-commutative, r=michaelwoerister
Make `Fingerprint::combine_commutative` associative

The previous implementation swapped lower and upper 64-bits of a result
of modular addition, so the function was non-associative.
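
A sketch of the fix described here: treat the two 64-bit halves as one 128-bit integer and combine with plain wrapping addition, which is both commutative and associative, instead of adding and then swapping the halves.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Fingerprint(u64, u64);

impl Fingerprint {
    fn combine_commutative(self, other: Fingerprint) -> Fingerprint {
        let a = (self.1 as u128) << 64 | self.0 as u128;
        let b = (other.1 as u128) << 64 | other.0 as u128;
        let c = a.wrapping_add(b); // one 128-bit modular addition
        Fingerprint(c as u64, (c >> 64) as u64)
    }
}

fn main() {
    let (x, y, z) = (Fingerprint(1, 2), Fingerprint(3, 4), Fingerprint(5, 6));
    // Associativity now holds: (x + y) + z == x + (y + z).
    assert_eq!(
        x.combine_commutative(y).combine_commutative(z),
        x.combine_commutative(y.combine_commutative(z)),
    );
    println!("associative");
}
```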

r? `@Aaron1011`
2022-02-02 19:34:01 +01:00
lcnr
a1a30f7548 add a rustc::query_stability lint 2022-02-01 10:15:59 +01:00
Jakub Beránek
8de59be933
Compress amount of hashed bytes for isize values in StableHasher 2022-01-30 09:52:44 +01:00
Matthias Krüger
bc26f97394
Rollup merge of #93193 - Kobzol:stable-hash-permutation-test, r=the8472
Add test for stable hash uniqueness of adjacent field values

This PR adds a simple test to check that stable hash will produce a different hash if the order of two values that have the same combined bit pattern changes.

r? `@the8472`
2022-01-27 22:32:24 +01:00
bors
e7825f2b69 Auto merge of #90842 - pierwill:localdefid-indexmap, r=wesleywiser
Use `indexmap` to avoid sorting `LocalDefId`s

See discussion in https://github.com/rust-lang/rust/pull/90408#discussion_r745935459.

Related to work on https://github.com/rust-lang/rust/issues/90317.
2022-01-24 22:04:55 +00:00
Jakub Beránek
1ffd043caf
Add test stable hash uniqueness of adjacent field values 2022-01-24 15:35:52 +01:00
Jakub Beránek
50f8062316
Revert "Do not hash leading zero bytes of i64 numbers in Sip128 hasher" 2022-01-24 09:07:47 +01:00
pierwill
4f89224f7f Use an indexmap to avoid sorting LocalDefIds
Update `indexmap` to 1.8.0.

Bless test
2022-01-22 22:34:16 -06:00
Nicholas Nethercote
416399dc10 Make Decodable and Decoder infallible.
`Decoder` has two impls:
- opaque: this impl is already partly infallible, i.e. in some places it
  currently panics on failure (e.g. if the input is too short, or on a
  bad `Result` discriminant), and in some places it returns an error
  (e.g. on a bad `Option` discriminant). The number of places where
  either happens is surprisingly small, just because the binary
  representation has very little redundancy and a lot of input reading
  can occur even on malformed data.
- json: this impl is fully fallible, but it's only used (a) for the
  `.rlink` file production, and there's a `FIXME` comment suggesting it
  should change to a binary format, and (b) in a few tests in
  non-fundamental ways. Indeed #85993 is open to remove it entirely.

And the top-level places in the compiler that call into decoding just
abort on error anyway. So the fallibility is providing little value, and
getting rid of it leads to some non-trivial performance improvements.

Much of this commit is pretty boring and mechanical. Some notes about
a few interesting parts:
- The commit removes `Decoder::{Error,error}`.
- `InternIteratorElement::intern_with`: the impl for `T` now has the same
  optimization for small counts that the impl for `Result<T, E>` has,
  because it's now much hotter.
- Decodable impls for SmallVec, LinkedList, VecDeque now all use
  `collect`, which is nice; the one for `Vec` uses unsafe code, because
  that gave better perf on some benchmarks.
2022-01-22 10:38:31 +11:00
bors
42852d7857 Auto merge of #92740 - cuviper:update-rayons, r=Mark-Simulacrum
Update rayon and rustc-rayon

This updates rayon for various tools and rustc-rayon for the compiler's parallel mode.

- rayon v1.3.1 -> v1.5.1
- rayon-core v1.7.1 -> v1.9.1
- rustc-rayon v0.3.1 -> v0.3.2
- rustc-rayon-core v0.3.1 -> v0.3.2

... and indirectly, this updates all of crossbeam-* to their latest versions.

Fixes #92677 by removing crossbeam-queue, but there's still a lingering question about how tidy discovers "runtime" dependencies. None of this is truly in the standard library's dependency tree at all.
2022-01-16 08:12:23 +00:00
bors
2e2c86eba2 Auto merge of #92070 - rukai:replace_vec_into_iter_with_array_into_iter, r=Mark-Simulacrum
Replace usages of vec![].into_iter with [].into_iter

`[].into_iter` is idiomatic over `vec![].into_iter` because it's simpler and faster (unless the vec is optimized away, in which case it would be the same).

So we should change the implementation, documentation, and tests to use it.

I skipped:
* `src/tools` - Those are copied in from upstream
* `src/test/ui` - Hard to tell if `vec![].into_iter` was used intentionally or not here and not much benefit to changing it.
*  any case where `vec![].into_iter` was used because we specifically needed a `Vec::IntoIter<T>`
*  any case where it looked like we were intentionally using `vec![].into_iter` to test it.
2022-01-11 14:23:24 +00:00
Josh Stone
f3b8812f24 Update rayon and rustc-rayon 2022-01-10 11:34:07 -08:00
Lucas Kent
08829853d3 Replace usages of vec![].into_iter with [].into_iter 2022-01-09 14:09:25 +11:00
Aaron Hill
5580e5e1dd
Ensure that Fingerprint caching respects hashing configuration
Fixes #92266

In some `HashStable` impls, we use a cache to avoid re-computing
the same `Fingerprint` from the same structure (e.g. an `AdtDef`).
However, the `StableHashingContext` used can be configured to
perform hashing in different ways (e.g. skipping `Span`s). This
configuration information is not included in the cache key,
which will cause an incorrect `Fingerprint` to be used if
we hash the same structure with different `StableHashingContext`
settings.

To fix this, the configuration settings of `StableHashingContext`
are split out into a separate `HashingControls` struct. This
struct is used as part of the cache key, ensuring that our caches
always produce the correct result for the given settings.

With this in place, we now turn off `Span` hashing during the
entire process of computing the hash included in legacy symbols.
This currently has no effect, but will matter when a future PR
starts hashing more `Span`s that we currently skip.
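
A standalone sketch of the fix's shape (toy types, not the rustc ones): the hashing configuration becomes part of the cache key, so contexts with different settings can never share a cached fingerprint.

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct HashingControls {
    hash_spans: bool,
}

#[derive(Default)]
struct FingerprintCache {
    // The key pairs the item with the controls, not the item alone.
    map: HashMap<(u32, HashingControls), u64>,
}

impl FingerprintCache {
    fn fingerprint(&mut self, item: u32, ctl: HashingControls) -> u64 {
        *self.map.entry((item, ctl)).or_insert_with(|| {
            // Stand-in for the real stable-hash computation.
            u64::from(item) ^ if ctl.hash_spans { 0xAAAA } else { 0x5555 }
        })
    }
}

fn main() {
    let mut cache = FingerprintCache::default();
    let with_spans = cache.fingerprint(7, HashingControls { hash_spans: true });
    let without = cache.fingerprint(7, HashingControls { hash_spans: false });
    assert_ne!(with_spans, without); // same item, different settings
    println!("ok");
}
```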
2022-01-05 10:13:28 -05:00
Jakub Beránek
65a3279f4a
Do not hash zero bytes of i64 and u32 in Sip128 hasher 2022-01-04 19:12:10 +01:00
Tomasz Miąsko
1d64b59664 Make Fingerprint::combine_commutative associative
The previous implementation swapped lower and upper 64-bits of a result
of modular addition, so the function was non-associative.
2022-01-03 19:07:29 +01:00
Jakub Beránek
3d8d3f1435
Rustdoc: use ThinVec for GenericArgs bindings 2022-01-01 11:29:14 +01:00
bors
41ce641a40 Auto merge of #92130 - Kobzol:stable-hash-str, r=cjgillot
Use hash_stable for hashing str

This seemed like an oversight. With this change the hash can go through the `HashStable` machinery directly.
2021-12-28 01:04:33 +00:00
pierwill
e6ff0bac1e rustc VecGraph: require the index type to implement Ord 2021-12-22 10:50:57 -06:00
pierwill
8df9248591 Remove PartialOrd and Ord from LocalDefId
Implement `Ord`, `PartialOrd` for SpanData
2021-12-22 10:50:57 -06:00
bors
87e8639d8d Auto merge of #91903 - tmiasko:bit-set-hash, r=jackh726
Implement StableHash for BitSet and BitMatrix via Hash

This fixes an issue where bit sets / bit matrices with the same word
content but a different domain size would receive the same hash.
2021-12-21 05:42:10 +00:00
Jakub Beránek
c695b6026c
Use hash_stable for hashing str 2021-12-20 18:46:34 +01:00
bors
daf2204aa4 Auto merge of #91837 - Kobzol:stable-hash-map-avoid-sort, r=the8472
Avoid sorting in hash map stable hashing

Suggested by `@the8472` [here](https://github.com/rust-lang/rust/pull/89404#issuecomment-991813333). I hope that I understood it right: I replaced the sort with modular multiplication, which should be commutative.

Can I ask for a perf. run? However, locally it didn't help at all. Creating the `StableHasher` all over again is probably slowing it down quite a lot. And using `FxHasher` is not straightforward, because the keys and values only implement `HashStable` (and probably they shouldn't just be hashed via `Hash` anyway for it to actually be stable).

Maybe the `StableHash` interface could be changed somehow to better support these scenarios where the hasher is short-lived. Or the `StableHasher` implementation could have variants with e.g. a shorter buffer for these scenarios.
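
A standalone sketch of the order-independent scheme (using `DefaultHasher` where rustc uses `StableHasher`): each entry gets a fresh, short-lived hasher, and the per-entry hashes are combined with a commutative operation so iteration order stops mattering.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

fn order_independent_hash<K: Hash, V: Hash>(map: &HashMap<K, V>) -> u64 {
    let mut combined: u64 = 1; // multiplicative identity
    for (k, v) in map {
        let mut h = DefaultHasher::new(); // short-lived per-entry hasher
        k.hash(&mut h);
        v.hash(&mut h);
        // `| 1` keeps every factor odd so the product can't collapse to 0.
        combined = combined.wrapping_mul(h.finish() | 1);
    }
    combined
}

fn main() {
    let mut m1 = HashMap::new();
    m1.insert("a", 1);
    m1.insert("b", 2);
    let mut m2 = HashMap::new();
    m2.insert("b", 2); // same contents, different insertion order
    m2.insert("a", 1);
    assert_eq!(order_independent_hash(&m1), order_independent_hash(&m2));
    println!("order-independent");
}
```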
2021-12-18 21:23:37 +00:00
Tomasz Miąsko
d0281bcb25 Implement StableHash for BitSet and BitMatrix via Hash
This fixes an issue where bit sets / bit matrices with the same word
content but a different domain size would receive the same hash.
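
A minimal sketch of the fix with a toy type: hash the domain size along with the words, so sets with identical word content but different sizes no longer collide.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct BitSet {
    domain_size: usize,
    words: Vec<u64>,
}

impl Hash for BitSet {
    fn hash<H: Hasher>(&self, state: &mut H) {
        self.domain_size.hash(state); // distinguishes equal word content
        self.words.hash(state);
    }
}

fn main() {
    let hash = |set: &BitSet| {
        let mut h = DefaultHasher::new();
        set.hash(&mut h);
        h.finish()
    };
    // Two empty sets: same single zero word, different domain sizes.
    let a = BitSet { domain_size: 3, words: vec![0] };
    let b = BitSet { domain_size: 60, words: vec![0] };
    assert_ne!(hash(&a), hash(&b)); // these collided before the fix
    println!("ok");
}
```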
2021-12-18 00:00:00 +00:00
Jakub Beránek
1f284b07ed
Add special case for length 1 2021-12-13 21:34:54 +01:00
Jakub Beránek
ac08f13948
Remove sort from hashing hashset, treeset and treemap 2021-12-13 16:11:28 +01:00
Jakub Beránek
6e33d3ecc2
Use modular arithmetic 2021-12-12 23:48:11 +01:00
bors
22f8bde876 Auto merge of #91549 - fee1-dead:const_env, r=spastorino
Eliminate ConstnessAnd again

Closes #91489.
Closes #89432.

Reverts #91491.
Reverts #89450.

r? `@spastorino`
2021-12-12 22:15:32 +00:00
Jakub Beránek
a5f5f6b689
Avoid sorting in hash map stable hashing 2021-12-12 20:57:24 +01:00
Deadbeef
83587e8d30
Small performance tweaks 2021-12-12 12:35:01 +08:00
bors
58457bbfd3 Auto merge of #89404 - Kobzol:hash-stable-sort, r=Mark-Simulacrum
Slightly optimize hash map stable hashing

I was profiling some of the `rustc-perf` benchmarks locally and noticed that quite some time is spent inside the stable hash of hashmaps. I tried to use a `SmallVec` instead of a `Vec` there, which helped very slightly.

Then I tried to remove the sorting, which was a bottleneck, and replaced it with insertion into a binary heap. Locally, it yielded nice improvements in instruction counts and RSS in several benchmarks for incremental builds. The implementation could probably be much nicer and possibly extended to other stable hashes, but first I wanted to test the perf impact properly.

Can I ask someone to do a perf run? Thank you!
2021-12-12 03:50:30 +00:00
Matthias Krüger
7e18b79c77
Rollup merge of #91426 - eggyal:idfunctor-panic-safety, r=lcnr
Make IdFunctor::try_map_id panic-safe

Addresses FIXME comment created in #78313

r? ```@lcnr```
2021-12-11 08:22:32 +01:00
est31
15de4cbc4b Remove redundant [..]s 2021-12-09 00:01:29 +01:00
Alan Egerton
acd39ff0fe
Make IdFunctor::try_map_id panic-safe 2021-12-07 11:11:23 +00:00
Mark Rousskov
15483ccf9d Annotate comments onto the LT algorithm 2021-12-06 20:30:15 -05:00
Mark Rousskov
3187480070 Avoid using Option where values are always Some 2021-12-06 15:05:22 -05:00
Mark Rousskov
2b63059772 Create newtype around the pre order index 2021-12-06 15:05:22 -05:00
Mark Rousskov
cc63ec32fb Use variables rather than lengths directly 2021-12-06 15:05:22 -05:00
Mark Rousskov
345ada0e1b Optimize: reuse the real-to-preorder mapping as the visited set 2021-12-06 15:05:22 -05:00
Mark Rousskov
8991002644 Remove separate RPO traversal
This integrates the preorder and postorder traversals into one.
2021-12-06 15:05:22 -05:00
Mark Rousskov
7d12767dc5 Use preorder indices for data structures
This largely avoids remapping from and to the 'real' indices, with the exception
of predecessor lookup and the final merge back, and is conceptually better.
2021-12-06 15:05:22 -05:00
Mark Rousskov
92186cb5c9 Avoid inserting into buckets if not necessary 2021-12-06 15:05:22 -05:00
Mark Rousskov
7379d24ebc Optimization: process buckets only once 2021-12-06 15:05:22 -05:00
Mark Rousskov
c82fe0efb4 Optimization: Merge parent and ancestor arrays
As the paper indicates, the unprocessed vertices in the DFS tree and processed
vertices are disjoint, and we can use them in the same space, tracking only the index
of the split.
2021-12-06 15:05:22 -05:00
Mark Rousskov
e8d7248093 Implement the simple Lengauer-Tarjan algorithm
This replaces the previous implementation with the simple variant of
Lengauer-Tarjan, which performs better in the general case. Performance on the
keccak benchmark is about equivalent between the two, but we don't see
regressions (and indeed see improvements) on other benchmarks, even on a
partially optimized implementation.

The implementation here follows that of the pseudocode in "Linear-Time
Algorithms for Dominators and Related Problems" thesis by Loukas Georgiadis. The
next few commits will optimize the implementation as suggested in the thesis.
Several related works are cited in the comments within the implementation, as
well.

Implement the simple Lengauer-Tarjan algorithm

This replaces the previous implementation (from #34169), which has not been
optimized since, with the simple variant of Lengauer-Tarjan which performs
better in the general case. A previous attempt -- not kept in commit history --
attempted a replacement with a bitset-based implementation, but this led to
regressions on perf.rust-lang.org benchmarks and equivalent wins for the keccak
benchmark, so was rejected.

The implementation here follows that of the pseudocode in "Linear-Time
Algorithms for Dominators and Related Problems" thesis by Loukas Georgiadis. The
next few commits will optimize the implementation as suggested in the thesis.
Several related works are cited in the comments within the implementation, as
well.

On the keccak benchmark, we were previously spending 15% of our cycles computing
the NCA / intersect function; this function is quite expensive, especially on
modern CPUs, as it chases pointers on every iteration in a tight loop. With this
commit, we spend ~0.05% of our time in dominator computation.
2021-12-06 15:03:09 -05:00
Scott McMurray
308fd59f42 Stop enabling in_band_lifetimes in rustc_data_structures
There's a conversation in the tracking issue about possibly unaccepting `in_band_lifetimes`, but it's used heavily in the compiler, and thus there'd need to be a bunch of PRs like this if that were to happen.

So here's one to see how much of an impact it has.

(Oh, and I removed `nll` while I was here too, since it didn't seem needed.  Let me know if I should put that back.)
2021-12-05 20:17:35 -08:00
Matthias Krüger
31003a3089
Rollup merge of #88906 - Kixunil:box-maybe-uninit-write, r=dtolnay
Implement write() method for Box<MaybeUninit<T>>

This adds a method similar to `MaybeUninit::write`, the main difference
being that it returns an owned `Box`. This can be used to safely elide
a copy from the stack; however, it's not currently tested that the
optimization actually occurs.

Analogous methods are not provided for `Rc` and `Arc` as those need to
handle the possibility of sharing. Some version of them may be added in
the future.

This was discussed in #63291 which this change extends.
2021-12-03 06:24:11 +01:00
Martin Habovstiak
41e21aa1c2 Implement write() method for Box<MaybeUninit<T>>
This adds a method similar to `MaybeUninit::write`, the main difference
being that it returns an owned `Box`. This can be used to safely elide
a copy from the stack; however, it's not currently tested that the
optimization actually occurs.

Analogous methods are not provided for `Rc` and `Arc` as those need to
handle the possibility of sharing. Some version of them may be added in
the future.

This was discussed in #63291 which this change extends.
2021-12-02 17:18:34 +01:00
Alan Egerton
db7295fa96
Remove no-longer used IdFunctor::map_id 2021-12-02 15:45:40 +00:00
Alan Egerton
afa6f92c46
Use intrinsic pointer methods 2021-11-27 16:59:18 +00:00
Alan Egerton
9f714ef035
Delegate from map_id to try_map_id 2021-11-27 16:55:21 +00:00
Alan Egerton
04f1c09f90
Avoid UB when short-circuiting try_map_id for Vec 2021-11-27 15:06:06 +00:00
LeSeulArtichaut
6e3fa20b00
Make TypeFoldable implementors short-circuit on error
Co-authored-by: Alan Egerton <eggyal@gmail.com>
2021-11-26 07:17:59 +00:00
Yuki Okushi
8d4fbc9a73
Add #[inline]s to SortedIndexMultiMap 2021-11-11 08:35:59 +09:00
Matthias Krüger
5c454551da more clippy fixes 2021-11-07 16:59:05 +01:00
bors
88a5a984fe Auto merge of #90380 - Mark-Simulacrum:revert-89558-query-stable-lint, r=lcnr
Revert "Add rustc lint, warning when iterating over hashmaps"

Fixes perf regressions introduced in https://github.com/rust-lang/rust/pull/90235 by temporarily reverting the relevant PR.
2021-10-29 04:55:51 +00:00
Mark Rousskov
3215eeb99f
Revert "Add rustc lint, warning when iterating over hashmaps" 2021-10-28 11:01:42 -04:00
bors
c4ff03f689 Auto merge of #90145 - cjgillot:sorted-map, r=michaelwoerister
Use SortedMap in HIR.

Closes https://github.com/rust-lang/rust/issues/89788
r? `@ghost`
2021-10-28 13:04:40 +00:00
bors
235d9853d8 Auto merge of #90042 - pietroalbini:1.56-master, r=Mark-Simulacrum
Bump bootstrap compiler to 1.57

Fixes https://github.com/rust-lang/rust/issues/90152

r? `@Mark-Simulacrum`
2021-10-25 11:31:47 +00:00
Jakub Beránek
e4b4d18f58
Use SmallVec in Hash map stable hashing 2021-10-25 08:26:00 +02:00
Matthias Krüger
87822b27ee
Rollup merge of #89558 - lcnr:query-stable-lint, r=estebank
Add rustc lint, warning when iterating over hashmaps

r? rust-lang/wg-incr-comp
2021-10-24 15:48:42 +02:00
Pietro Albini
b63ab8005a update cfg(bootstrap) 2021-10-23 21:55:57 -04:00
Mark Rousskov
3cd5c95ab0 Specialize HashStable for [u8] slices
Particularly for ctfe-stress-4, the hashing of byte slices as part of the
MIR Allocation is quite hot. Previously, we were falling back on byte-by-byte
copying of the slice into the SipHash buffer (64 bytes long) before hashing a 64
byte chunk, and then doing that again and again.

This should hopefully be an improvement for that code.
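
A sketch of the difference (simplified free functions, not rustc's actual `HashStable` impls): the specialized byte-slice path hands the hasher the whole slice at once instead of one element at a time.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Generic slice hashing: length prefix, then element by element; for
// `[u8]` this meant one tiny `write` call per byte.
fn hash_slice_generic<H: Hasher>(bytes: &[u8], h: &mut H) {
    h.write_usize(bytes.len());
    for &b in bytes {
        h.write(&[b]);
    }
}

// The specialized path: one bulk `write`, letting the hasher copy
// whole 64-byte chunks into its internal buffer.
fn hash_slice_bytes<H: Hasher>(bytes: &[u8], h: &mut H) {
    h.write_usize(bytes.len());
    h.write(bytes);
}

fn main() {
    let data = vec![0xABu8; 4096]; // e.g. a MIR Allocation's bytes
    let mut slow = DefaultHasher::new();
    hash_slice_generic(&data, &mut slow);
    let mut fast = DefaultHasher::new();
    hash_slice_bytes(&data, &mut fast);
    println!("{:#x} {:#x}", slow.finish(), fast.finish());
}
```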
2021-10-23 12:11:05 -04:00
Camille GILLOT
6f6fa8b954 Use SortedMap in HIR. 2021-10-21 23:08:57 +02:00
lcnr
00e5abe9b6 allow potential_query_instability everywhere 2021-10-15 10:58:18 +02:00
Clemens Wasser
b8c151c9d2 Remove for loop range 2021-10-10 16:32:16 +02:00
Clemens Wasser
71dd0b928b Apply clippy suggestions 2021-10-10 15:38:19 +02:00
Ryan Levick
757f76ef73 Update to measureme v10 2021-10-07 15:08:44 +02:00
Ryan Levick
947a33bf20 Add support for artifact size profiling 2021-10-07 14:22:29 +02:00
Jubilee
99e6e3ff07
Rollup merge of #87993 - kornelski:try_reserve_stable, r=joshtriplett
Stabilize try_reserve

Stabilization PR for the [`try_reserve` feature](https://github.com/rust-lang/rust/issues/48043#issuecomment-898040475).
2021-10-04 21:12:33 -07:00
Jubilee
9866b090f4
Rollup merge of #89508 - jhpratt:stabilize-const_panic, r=joshtriplett
Stabilize `const_panic`

Closes #51999

FCP completed in #89006

```@rustbot``` label +A-const-eval +A-const-fn +T-lang

cc ```@oli-obk``` for review (not `r?`'ing as not on lang team)
2021-10-04 13:58:17 -07:00
Kornel
00152d8977 Stabilize try_reserve 2021-10-04 10:29:46 +01:00
Jacob Pratt
bce8621983
Stabilize const_panic 2021-10-04 02:33:33 -04:00
bjorn3
83ddedf170 Remove various unused feature gates 2021-10-02 19:09:18 +02:00
Oli Scherer
9b5aa063d8 More tracing instrumentation 2021-09-28 12:28:22 +00:00
Manish Goregaokar
653dcaac2b
Rollup merge of #89216 - r00ster91:bigo, r=dtolnay
Consistent big O notation

This makes the big O time complexity notation in places with markdown support more consistent.
Inspired by #89210
2021-09-25 18:22:20 -07:00
Ellen
f1e71a5b41 arrr caught ya caller
awd
2021-09-25 00:34:02 +01:00
r00ster91
956f87fb04 consistent big O notation 2021-09-24 12:44:28 +02:00
bors
d8d1d1059a Auto merge of #89158 - the8472:rollup-3e4ijth, r=the8472
Rollup of 12 pull requests

Successful merges:

 - #88795 (Print a note if a character literal contains a variation selector)
 - #89015 (core::ascii::escape_default: reduce struct size)
 - #89078 (Cleanup: Remove needless reference in ParentHirIterator)
 - #89086 (Stabilize `Iterator::map_while`)
 - #89096 ([bootstrap] Improve the error message when `ninja` is not found to link to installation instructions)
 - #89113 (dont `.ensure()` the `thir_abstract_const` query call in `mir_build`)
 - #89114 (Fixes a technicality regarding the size of C's `char` type)
 - #89115 (⬆️ rust-analyzer)
 - #89126 (Fix ICE when `indirect_structural_match` is allowed)
 - #89141 (Impl `Error` for `FromSecsError` without foreign type)
 - #89142 (Fix match for placeholder region)
 - #89147 (add case for checking const refs in check_const_value_eq)

Failed merges:

r? `@ghost`
`@rustbot` modify labels: rollup
2021-09-21 22:07:32 +00:00
the8472
d7de8d2b53
Rollup merge of #89086 - WaffleLapkin:stabilize_iter_map_while, r=kennytm
Stabilize `Iterator::map_while`

Per the FCP: https://github.com/rust-lang/rust/issues/68537#issuecomment-922385035

This PR stabilizes `Iterator::map_while` and `iter::MapWhile` in Rust 1.57.
2021-09-21 22:54:01 +02:00
bors
ac2d9fc509 Auto merge of #89103 - Mark-Simulacrum:migrate-2021, r=estebank
Migrate in-tree crates to 2021

This replaces #89075 (cherry picking some of the commits from there), and closes #88637 and fixes #89074.

It excludes a migration of the library crates for now (see tidy diff) because we have some pending bugs around macro spans to fix there.

I instrumented bootstrap during the migration to make sure all crates moved from 2018 to 2021 had the compatibility warnings applied first.

Originally, the intent was to support cargo fix --edition within bootstrap, but this proved fairly difficult to pull off. We'd need to architect the check functionality to support running cargo check and cargo fix within the same x.py invocation, and only resetting sysroots on check. Further, it was found that cargo fix doesn't behave too well with "not quite workspaces", such as Clippy which has several crates. Bootstrap runs with --manifest-path ... for all the tools, and this makes cargo fix only attempt migration for that crate. We can't use e.g. --workspace due to needing to maintain sysroots for different phases of compilation appropriately.

It is recommended to skip the mass migration of Cargo.toml's to 2021 for review purposes; you can also use `git diff d6cd2c6c87 -I'^edition = .20...$'` to ignore the edition = 2018/21 lines in the diff.
2021-09-21 19:25:49 +00:00
Mark Rousskov
c746be2219 Migrate to 2021 2021-09-20 22:21:42 -04:00
bjorn3
7b50fd5450 Use <[T; N]>::map in Sharded instead of SmallVec and unsafe code
This results in a lot less assembly
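
A sketch of the pattern with a toy `Sharded` (not the rustc type): `<[T; N]>::map` builds every shard in place, where the previous version collected into a `SmallVec` and converted to an array with unsafe code.

```rust
use std::sync::Mutex;

const SHARDS: usize = 4;

struct Sharded<T> {
    shards: [Mutex<T>; SHARDS],
}

impl<T: Default> Sharded<T> {
    fn new() -> Self {
        // `[(); SHARDS].map(...)` constructs each shard directly; no
        // intermediate buffer and no `unsafe`, hence less assembly.
        Sharded {
            shards: [(); SHARDS].map(|()| Mutex::new(T::default())),
        }
    }
}

fn main() {
    let sharded: Sharded<Vec<u32>> = Sharded::new();
    sharded.shards[0].lock().unwrap().push(7);
    println!("{:?}", sharded.shards[0].lock().unwrap());
}
```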
2021-09-18 15:07:24 +02:00
Maybe Waffle
71e2eacc7b Stabilize Iterator::map_while 2021-09-17 19:42:46 +03:00
Manish Goregaokar
b87d0d0d94
Rollup merge of #88711 - Mark-Simulacrum:fix-dfs-bug, r=jackh726
Rework DepthFirstSearch API

This expands the API to be more flexible, allowing for more visitation patterns
on graphs. This will be useful to avoid extra datasets (and allocations) in
cases where the expanded DFS API is sufficient.

This also fixes a bug with the previous DFS constructor, which left the start
node not marked as visited (even though it was immediately returned).

Commit written by ```@nikomatsakis``` originally, cherry picked from several commits in work on never type stabilization, but stands alone.
2021-09-12 03:44:57 -07:00
Vadim Petrochenkov
294510e1bb rustc: Remove local variable IDs from Exports
Local variables can never be exported.
2021-09-10 23:41:48 +03:00
Mark Rousskov
b4e7649d6d Bump stage0 compiler to 1.56 2021-09-08 20:51:05 -04:00
Niko Matsakis
c9d46eb224 Rework DepthFirstSearch API
This expands the API to be more flexible, allowing for more visitation patterns
on graphs. This will be useful to avoid extra datasets (and allocations) in
cases where the expanded DFS API is sufficient.

This also fixes a bug with the previous DFS constructor, which left the start
node not marked as visited (even though it was immediately returned).
2021-09-08 12:23:37 -04:00
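A minimal sketch of the bug class being fixed (a toy DFS over an adjacency list, not the `rustc_data_structures` API): the start node must be marked visited at construction, or a cycle back to it yields it twice:

```rust
use std::collections::HashSet;

fn dfs(adj: &[Vec<usize>], start: usize) -> Vec<usize> {
    let mut visited = HashSet::new();
    let mut stack = vec![start];
    visited.insert(start); // the line the old constructor effectively forgot
    let mut order = Vec::new();
    while let Some(node) = stack.pop() {
        order.push(node);
        for &next in &adj[node] {
            if visited.insert(next) {
                stack.push(next);
            }
        }
    }
    order
}

fn main() {
    // 0 -> 1 -> 2 -> 0: a cycle back to the start node.
    let adj = vec![vec![1], vec![2], vec![0]];
    assert_eq!(dfs(&adj, 0), vec![0, 1, 2]);
}
```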
Ryan Levick
5f09e93318
Rollup merge of #88659 - est31:update_smallvec_name, r=matthewjasper
Remove SmallVector mention

SmallVector is long gone: it was first replaced
by OneVector in commit e5e6375352,
which was then removed entirely in favour of SmallVec in
commit 130a32fa72.
2021-09-06 12:38:55 +02:00
est31
d84a39b35a Remove SmallVector mention
SmallVector is long gone: it was first replaced
by OneVector in commit e5e6375352,
which was then removed entirely in favour of SmallVec in
commit 130a32fa72.
2021-09-05 06:10:21 +02:00
Mara Bos
8fd53e3085
Rollup merge of #88053 - bjorn3:fix_flock_fallback_impl, r=cjgillot
Fix the flock fallback implementation
2021-09-01 09:23:25 +02:00
Mara Bos
4adacfd43e
Rollup merge of #88492 - est31:maybe_uninit_write, r=wesleywiser
Use MaybeUninit::write in functor.rs

MaybeUninit::write is stable as of 1.55.0.
2021-08-31 10:41:26 +02:00
est31
403b80d32e Use MaybeUninit::write in functor.rs
MaybeUninit::write is stable as of 1.55.0.
2021-08-30 17:23:49 +02:00
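In isolation, the stabilized method replaces a raw `ptr::write` through `as_mut_ptr`:

```rust
use std::mem::MaybeUninit;

fn main() {
    let mut slot: MaybeUninit<String> = MaybeUninit::uninit();
    // `write` initializes the slot and returns &mut String; the write
    // itself needs no raw pointers and no unsafe.
    slot.write(String::from("initialized"));
    // Reading it back out still requires unsafe, since the compiler
    // cannot track initialization through MaybeUninit.
    let value = unsafe { slot.assume_init() };
    assert_eq!(value, "initialized");
}
```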
Mateusz Mikuła
f58289cc51 Update stacker and psm crates 2021-08-28 00:40:49 +02:00
Frank Steffahn
6248dbcf70 Also fix “a OwningRef” 2021-08-24 02:28:38 +02:00
Frank Steffahn
b823dc1bbd Also fix “a RwLock*” 2021-08-24 02:24:35 +02:00
Frank Steffahn
04fa1d81dd Fix typo “a Rc” → “an Rc” 2021-08-24 02:23:16 +02:00
Frank Steffahn
be9d2699ca Fix more “a”/“an” typos 2021-08-22 16:35:29 +02:00
Frank Steffahn
bf88b113ea Fix typos “a”→“an” 2021-08-22 15:35:11 +02:00
bjorn3
21f07b55df
Fix the flock fallback implementation 2021-08-15 18:44:06 +02:00
pierwill
3e123e4150 Remove duplicate trait bounds in rustc_data_structures::graph 2021-08-09 08:52:04 -05:00
Jade
3cf820e17d rfc3052: Remove authors field from Cargo manifests
Since RFC 3052 soft deprecated the authors field anyway, hiding it from
crates.io, docs.rs, and making Cargo not add it by default, and it is
not generally up to date/useful information, we should remove it from
crates in this repo.
2021-07-29 14:56:05 -07:00
Santiago Pastorino
5bff8429a0
Use type_alias_impl_trait instead of min in compiler and lib 2021-07-27 12:27:08 -03:00
bors
3b4a0dfc13 Auto merge of #86429 - JohnTitor:get-by-key-enum-part-2, r=oli-obk
Improve `get_by_key_enumerated` more

Follow-up of #86392, this applies the suggestions by `@m-ou-se.`

r? `@m-ou-se`
2021-07-23 23:17:38 +00:00
Santiago Pastorino
d71410757d
Add VecMap::get_value_matching and assert if > 1 element
Otherwise it is a bug that we want to uncover.
2021-07-23 08:44:23 -03:00
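A sketch of the pattern (a standalone stand-in over a slice of pairs, not the rustc `VecMap`): return at most one match and treat a second match as a logic bug:

```rust
fn get_value_matching<K, V>(
    entries: &[(K, V)],
    mut pred: impl FnMut(&(K, V)) -> bool,
) -> Option<&V> {
    let mut matches = entries.iter().filter(|&e| pred(e));
    let found = matches.next().map(|(_, v)| v);
    // More than one match indicates a bug; fail loudly.
    assert!(matches.next().is_none(), "expected at most one matching element");
    found
}

fn main() {
    let entries = [("a", 1), ("b", 2)];
    assert_eq!(get_value_matching(&entries, |(k, _)| *k == "b"), Some(&2));
}
```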
Yuki Okushi
6761826b1b
Sort features alphabetically 2021-07-23 18:08:26 +09:00
Yuki Okushi
8d00be9980
Use map_while instead of take_while + map 2021-07-23 18:04:28 +09:00
Yuki Okushi
cb3b3cf6ab
Improve get_by_key_enumerated more 2021-07-23 18:04:21 +09:00
bors
b2b7c859c1 Auto merge of #87287 - oli-obk:fixup_fixup_fixup_opaque_types, r=spastorino
Make mir borrowck's use of opaque types independent of the typeck query's result

fixes #87218
fixes #86465

we used to use the typeck results only to generate an obligation for the mir borrowck type to be equal to the typeck result.

When I removed the `fixup_opaque_types` function in #87200, I exposed a bug showing that mir borrowck doesn't get enough information from typeck to build the correct lifetime mapping from opaque type usage to the actual concrete type. We therefore now fully compute the information within mir borrowck (we already did that, but we only used it to verify the typeck result) and stop using the typeck information.

We will likely be able to remove most opaque type information from the borrowck results in the future and just have all current callers use the mir borrowck result instead.

r? `@spastorino`
2021-07-23 03:40:26 +00:00
Oli Scherer
6d76002baf Make mir borrowck's use of opaque types independent of the typeck query's result 2021-07-22 11:20:29 +00:00
Oli Scherer
d693a98f4e Fix VecMap::iter_mut
It used to allow you to mutate the key, even though that can invalidate the map by creating duplicate keys.
2021-07-22 11:20:29 +00:00
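The invariant-preserving shape of the fix, sketched over a plain `Vec` of pairs (not the rustc type): yield the key read-only and only the value mutably:

```rust
struct VecMap<K, V>(Vec<(K, V)>);

impl<K, V> VecMap<K, V> {
    // Keys stay immutable, so duplicate keys cannot be created.
    fn iter_mut(&mut self) -> impl Iterator<Item = (&K, &mut V)> {
        self.0.iter_mut().map(|(k, v)| (&*k, v))
    }
}

fn main() {
    let mut m = VecMap(vec![("a", 1), ("b", 2)]);
    for (_k, v) in m.iter_mut() {
        *v += 10;
    }
    assert_eq!(m.0, vec![("a", 11), ("b", 12)]);
}
```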
bors
f913a4fe90 Auto merge of #86619 - rylev:incr-hashing-profiling, r=wesleywiser
Profile incremental compilation hashing fingerprints

Adds profiling instrumentation for the hashing of incremental compilation fingerprints per query.

This will eventually feed into the `measureme` and `rustc-perf` infrastructure for tracking if computing hashes changes over time.

TODOs:
* [x] Address the FIXME where we are including node interning in the hash timing.
* [ ] Update measureme/summarize to handle this new data: https://github.com/rust-lang/measureme/pull/166
* [ ] ~Update rustc-perf to handle the new data from measureme~ (will be done at a later time)

r? `@ghost`

cc `@michaelwoerister`
2021-07-22 10:04:44 +00:00
jackh726
d954a8ee8e Some perf optimizations and logging 2021-07-17 16:09:17 -04:00
Oli Scherer
692f638036 Fix VecMap Extend impl 2021-07-13 15:05:29 +00:00
Ryan Levick
b5bec17184 Add docs to new methods 2021-07-07 11:14:14 +02:00
Ryan Levick
6e33dce9c2 Profile incremental hashing 2021-07-07 10:43:30 +02:00
Josh Triplett
6e75aae355 rustc_data_structures: Drop unused dependency on crossbeam-utils
rustc_data_structures has a dependency on crossbeam-utils but never uses
it. It appears to have originally had this dependency in order to set
the "nightly" feature; however, its other dependencies use a different
version of crossbeam-utils, so this doesn't actually affect anything.
Furthermore, in current crossbeam-utils, the "nightly" feature has
become a no-op.
2021-06-25 01:03:16 -07:00
Yuki Okushi
d2852354dc
Rollup merge of #86387 - JohnTitor:now-no-unused-lifetimes, r=Mark-Simulacrum
Remove `#[allow(unused_lifetimes)]` which is now unnecessary

Seems FP has been fixed, it doesn't need `#[allow(unused_lifetimes)]` anymore.
2021-06-22 07:37:53 +09:00
Yuki Okushi
c0efd2a15b
Prefer partition_point to look up assoc items 2021-06-17 11:40:37 +09:00
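For reference, `partition_point` expresses the same binary search as a hand-rolled `binary_search_by` without the `Ordering` bookkeeping:

```rust
fn main() {
    let sorted = [1, 3, 5, 7, 9];
    // Index of the first element for which the predicate is false,
    // i.e. where 6 would be inserted.
    let idx = sorted.partition_point(|&x| x < 6);
    assert_eq!(idx, 3);
    assert_eq!(&sorted[idx..], &[7, 9]);
}
```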
Yuki Okushi
636170a1e2
Remove #[allow(unused_lifetimes)] which is now unnecessary 2021-06-17 08:56:54 +09:00
bors
0a8629bff6 Auto merge of #85885 - bjorn3:remove_box_region, r=cjgillot
Don't use a generator for BoxedResolver

The generator is non-trivial and requires unsafe code anyway. Using regular unsafe code without a generator is much easier to follow.

Based on #85810 as it touches rustc_interface too.
2021-06-11 16:11:20 +00:00
Santiago Pastorino
7b1e1c7333
add VecMap docs 2021-06-08 17:17:48 -03:00
Santiago Pastorino
cad762b1e2
Use impl FnMut directly as predicate type 2021-06-08 17:17:48 -03:00
Santiago Pastorino
ed94da14ed
Explicitly pass find arguments down the predicate so coercions can apply 2021-06-08 17:17:47 -03:00
bjorn3
99e112d282 Inline the rest of box_region 2021-06-08 19:24:16 +02:00
bjorn3
86b3ebe2da Inline box_region macro calls 2021-06-08 19:24:15 +02:00
bjorn3
8f397bc8a0 Simplify box_region macros 2021-06-08 19:24:15 +02:00
Santiago Pastorino
dd56ec653c
Add VecMap::get_by(FnMut -> bool) 2021-06-08 09:40:58 -03:00
Santiago Pastorino
aa7024b0c7
Add VecMap to rustc_data_structures 2021-06-07 19:03:51 -03:00
Yuki Okushi
b3bcf4af74
Rollup merge of #85436 - tamird:save-clone, r=estebank
Avoid cloning cache key

r? `@estebank`
2021-06-06 19:11:16 +09:00
Joshua Nelson
3412957e7f Unify parallel and non-parallel APIs
It's confusing for these to be different, even if some of the methods
are unused.
2021-06-04 15:26:08 -04:00
Joshua Nelson
1a5cc25525 Remove unused code from rustc_data_structures::sync
Found using https://github.com/est31/warnalyzer.
2021-06-04 15:25:05 -04:00
Yuki Okushi
36f1ed6de2
Rollup merge of #85850 - bjorn3:less_feature_gates, r=jyn514
Remove unused feature gates

The first commit removes a usage of a feature gate, but I don't expect it to be controversial, as the feature gate was only used to work around a limitation of rust in the past (closures never being `Clone`).

The second commit uses `#[allow_internal_unstable]` to avoid leaking the `trusted_step` feature gate usage from inside the index newtype macro. It didn't work for the `min_specialization` feature gate though.

The third commit removes (almost) all feature gates from the compiler that weren't used anyway.
2021-06-04 13:42:54 +09:00
bors
1e13a9bb33 Auto merge of #85892 - tmiasko:i, r=oli-obk
Miscellaneous inlining improvements
2021-06-02 10:47:58 +00:00
Tomasz Miąsko
c1f6495b8e Miscellaneous inlining improvements 2021-06-02 08:49:58 +02:00
Camille GILLOT
273778086c Remove StableVec. 2021-06-01 20:53:04 +02:00
Camille Gillot
0f0f3138cb
Revert "Reduce the amount of untracked state in TyCtxt" 2021-06-01 09:05:22 +02:00
bjorn3
312f964478 Remove unused feature gates 2021-05-31 13:55:43 +02:00
bjorn3
b4ed7114bd Remove unnecessary unboxed_closures feature usage
It has been possible to clone closures for a while now
2021-05-31 12:13:47 +02:00
Camille GILLOT
ee567fe1b1 Remove StableVec. 2021-05-30 19:54:21 +02:00
Eric Huss
074d667cf5 Don't panic when failing to initialize incremental directory. 2021-05-25 14:40:33 -07:00
Tamir Duberstein
42b9d3fb72
Simplify map | unwrap_or to map_or 2021-05-18 10:37:42 -04:00
Tamir Duberstein
a36c636199
Avoid cloning cache key 2021-05-18 07:48:00 -04:00
Esteban Küber
4bd5505718 Only compute Obligation cache_key once in register_obligation_at 2021-05-04 11:57:53 -07:00
bors
ada102456d Auto merge of #84614 - RalfJung:daily, r=Mark-Simulacrum
don't enable parking_lot nightly features

Having the compiler itself depend on external libraries that use nightly features can lead to "fun" bootstrap situations. Within the rustc repo we use `cfg(bootstrap)` to resolve those, but that is not a reasonable option for external dependencies.

So I propose we stop enabling the "nightly" feature of `parking_lot` here. In my experiments, this then indeed leads to the feature not being enabled (i.e., nothing else enables it), and everything still builds. However, this means parking_lot's `RwLock` will no longer have hardware lock elision for readers -- I hope that is okay to lose in exchange for less bootstrap brain twisting. ;)

Cc `@Amanieu`
2021-04-29 02:53:52 +00:00
Ralf Jung
170a10b2dd don't enable parking_lot nightly features 2021-04-27 15:05:55 +02:00
Jubilee Young
1f5e1c5efa Use latest crossbeam 2021-04-23 16:27:08 -07:00
Jubilee Young
b2c1dbbd33 Use tempfile 2021-04-23 15:33:57 -07:00
Jubilee Young
e8eb691c1f Use arrayvec 0.7, drop smallvec 0.6
With the arrival of min const generics, many alt-vec libraries have
updated to use it in some way and arrayvec is no exception. Use the
latest with minor refactoring.

Also, rustc_workspace_hack is the only user of smallvec 0.6 in the
entire tree, so drop it.
2021-04-21 22:39:08 -07:00
pierwill
0019ca9141 Fix outdated crate names in compiler docs
Changes `librustc_X` to `rustc_X`, only in documentation comments.
Plain code comments are left unchanged.

Also fix incorrect file paths.
2021-04-08 11:12:14 -05:00
bors
e1d49aaad4 Auto merge of #83821 - camelid:improve-thinvec, r=petrochenkov
Add `FromIterator` and `IntoIterator` impls for `ThinVec`

These should make using `ThinVec` feel much more like using `Vec`.
They will allow users of `Vec` to switch to `ThinVec` while continuing
to use `collect()`, `for` loops, and other parts of the iterator API.

I don't know if there were use cases before for using the iterator API
with `ThinVec`, but I would like to start using `ThinVec` in rustdoc,
and having it conform to the iterator API would make the transition
*a lot* easier.

I added a `FromIterator` impl, an `IntoIterator` impl that yields owned
elements, and `IntoIterator` impls that yield immutable or mutable
references to elements. I also added some unit tests for `ThinVec`.
2021-04-06 09:57:12 +00:00
Camelid
09ff88b600 Add FromIterator and IntoIterator impls for ThinVec
These should make using `ThinVec` feel much more like using `Vec`.
They will allow users of `Vec` to switch to `ThinVec` while continuing
to use `collect()`, `for` loops, and other parts of the iterator API.

I don't know if there were use cases before for using the iterator API
with `ThinVec`, but I would like to start using `ThinVec` in rustdoc,
and having it conform to the iterator API would make the transition
*a lot* easier.

I added a `FromIterator` impl, an `IntoIterator` impl that yields owned
elements, and `IntoIterator` impls that yield immutable or mutable
references to elements. I also added some unit tests for `ThinVec`.
2021-04-05 19:09:51 -07:00
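What the new impls unlock, sketched on a trivial wrapper type standing in for `ThinVec`:

```rust
use std::iter::FromIterator;

struct ThinVecLike<T>(Vec<T>); // stand-in; not the real ThinVec

impl<T> FromIterator<T> for ThinVecLike<T> {
    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> Self {
        ThinVecLike(iter.into_iter().collect())
    }
}

impl<T> IntoIterator for ThinVecLike<T> {
    type Item = T;
    type IntoIter = std::vec::IntoIter<T>;
    fn into_iter(self) -> Self::IntoIter {
        self.0.into_iter()
    }
}

fn main() {
    // FromIterator enables collect(); IntoIterator enables `for` loops.
    let v: ThinVecLike<i32> = (1..=3).collect();
    let mut sum = 0;
    for x in v {
        sum += x;
    }
    assert_eq!(sum, 6);
}
```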
bors
97717a5618 Auto merge of #83682 - bjorn3:mmap_wrapper, r=cjgillot
Add an Mmap wrapper to rustc_data_structures

This wrapper implements StableAddress and falls back to directly reading the file on wasm32.

Taken from #83640, which I will close due to the perf regression.
2021-04-03 13:23:42 +00:00
bjorn3
bda6d1f158 Add safety comment to StableAddress impl for Mmap 2021-04-03 14:51:05 +02:00
bjorn3
5773e51678 Inline a few methods 2021-03-31 09:19:29 +02:00
bjorn3
8331dbe6d0 Add an Mmap wrapper to rustc_data_structures
This wrapper implements StableAddress and falls back to directly reading
the file on wasm32
2021-03-30 18:57:03 +02:00
Joshua Nelson
526bb10701 Revert changes to sync data structures
There isn't currently a good reviewer for these, and I don't want to
remove things that will just be added again. I plan to make a separate
PR for these changes so the rest of the cleanup can land.
2021-03-29 13:50:40 -04:00
Joshua Nelson
de0fda9558 Address review comments
- Add back `HirIdVec`, with a comment that it will soon be used.
- Add back `*_region` functions, with a comment they may soon be used.
- Remove `-Z borrowck_stats` completely. It didn't do anything.
- Remove `make_nop` completely.
- Add back `current_loc`, which is used by an out-of-tree tool.
- Fix style nits
- Remove `AtomicCell` with `cfg(parallel_compiler)` for consistency.
2021-03-27 22:16:34 -04:00
Joshua Nelson
441dc3640a Remove (lots of) dead code
Found with https://github.com/est31/warnalyzer.

Dubious changes:
- Is anyone else using rustc_apfloat? I feel weird completely deleting
  x87 support.
- Maybe some of the dead code in rustc_data_structures, in case someone
  wants to use it in the future?
- Don't change rustc_serialize

  I plan to scrap most of the json module in the near future (see
  https://github.com/rust-lang/compiler-team/issues/418) and fixing the
  tests needed more work than I expected.

TODO: check if any of the comments on the deleted code should be kept.
2021-03-27 22:16:33 -04:00
bors
0ced530534 Auto merge of #83465 - michaelwoerister:safe-read_raw_bytes, r=cjgillot
Allow for reading raw bytes from rustc_serialize::Decoder without unsafe code

The current `read_raw_bytes` method requires using `MaybeUninit` and `unsafe`. I don't think this is necessary. Let's see if a safe interface has any performance drawbacks.

This is a followup to #83273 and will make it easier to rebase #82183.

r? `@cjgillot`
2021-03-26 01:28:59 +00:00
Michael Woerister
517d5ac230 Allow for reading raw bytes from rustc_serialize::Decoder without unsafe code. 2021-03-25 14:05:00 +01:00
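A sketch of the interface direction (hypothetical decoder, not the actual `rustc_serialize` code): hand back an initialized slice of the decoder's buffer instead of filling caller-provided `MaybeUninit` memory:

```rust
struct Decoder<'a> {
    data: &'a [u8],
    pos: usize,
}

impl<'a> Decoder<'a> {
    // Safe replacement for a MaybeUninit-based read_raw_bytes: the
    // returned slice is always initialized, so callers need no unsafe.
    fn read_raw_bytes(&mut self, len: usize) -> &'a [u8] {
        let bytes = &self.data[self.pos..self.pos + len];
        self.pos += len;
        bytes
    }
}

fn main() {
    let mut d = Decoder { data: &[1, 2, 3, 4], pos: 0 };
    assert_eq!(d.read_raw_bytes(2), &[1, 2]);
    assert_eq!(d.read_raw_bytes(2), &[3, 4]);
}
```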
Mara Bos
81932be5e7 Revert "Revert stabilizing integer::BITS." 2021-03-24 22:34:36 +01:00
bors
d04c3aa865 Auto merge of #83273 - cjgillot:endecode, r=michaelwoerister
Simplify encoder and decoder

Extracted from https://github.com/rust-lang/rust/pull/83036 and https://github.com/rust-lang/rust/pull/82780.
2021-03-22 12:18:57 +00:00
Camille GILLOT
11b3409b5d Remove FingerprintEncoder/Decoder. 2021-03-19 19:36:05 +01:00
Camille GILLOT
09a638820e Move raw bytes handling to Encoder/Decoder. 2021-03-19 19:35:22 +01:00
Dylan DPC
37b7031078
Rollup merge of #83197 - jyn514:cfg-test-dead-code, r=joshtriplett
Move some test-only code to test files

Split out from https://github.com/rust-lang/rust/pull/83185.
2021-03-19 15:03:24 +01:00
Joshua Nelson
620ecc01a2 Move some test-only code to test files
This also relaxes the bounds on some structs and moves them to the impl
block instead.
2021-03-17 10:31:30 -04:00
bors
2a55274e0c Auto merge of #82999 - cuviper:rustc-rayon-0.3.1, r=Mark-Simulacrum
Update to rustc-rayon 0.3.1

This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t `@ammaraskar)`

That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
2021-03-15 08:49:25 +00:00
bors
acca818928 Auto merge of #83064 - cjgillot:fhash, r=jackh726
Tweaks to stable hashing
2021-03-13 20:21:40 +00:00
Camille GILLOT
84bf599bac Add inlining. 2021-03-11 12:24:43 +01:00
bors
04fce73196 Auto merge of #82641 - camelid:lang-item-docs, r=jyn514
Improve lang item generated docs

cc https://rust-lang.zulipchat.com/#narrow/stream/146229-wg-secure-code/topic/Is.20.60core.60.20part.20of.20the.20compiler.3F/near/226738260

r? `@jyn514`
2021-03-11 06:38:22 +00:00
Josh Stone
f7e75a2124 Update to rustc-rayon 0.3.1
This pulls in rust-lang/rustc-rayon#8 to fix #81425. (h/t @ammaraskar)

That revealed weak constraints on `rustc_arena::DropArena`, because its
`DropType` was holding type-erased raw pointers to generic `T`. We can
implement `Send` for `DropType` (under `cfg(parallel_compiler)`) by
requiring all `T: Send` before they're type-erased.
2021-03-10 17:53:35 -08:00
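The constraint in miniature (hypothetical types, not the real `DropArena`): demand `Send` while the concrete type is still known, so the type-erased value stays sound to move across threads:

```rust
use std::thread;

struct Erased(Box<dyn FnOnce() + Send>);

// Without `T: Send` here, an Erased built from e.g. an Rc<T> could be
// sent to another thread even though Rc is not thread-safe.
fn erase<T: Send + 'static>(value: T) -> Erased {
    Erased(Box::new(move || drop(value)))
}

fn main() {
    let erased = erase(String::from("hello"));
    thread::spawn(move || (erased.0)()).join().unwrap();
}
```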
bors
76c500ec6c Auto merge of #81635 - michaelwoerister:structured_def_path_hash, r=pnkfelix
Let a portion of DefPathHash uniquely identify the DefPath's crate.

This makes it possible to map directly from a `DefPathHash` to the crate it originates from, without constructing side tables to do that mapping -- something that is useful for incremental compilation, where we deal with `DefPathHash` instead of `DefId` a lot.

It also makes it possible to reliably and cheaply check for `DefPathHash` collisions, which lets the compiler gracefully abort compilation instead of running into a subsequent ICE at some random place in the code.

The following new piece of documentation describes the most interesting aspects of the changes:

```rust
/// A `DefPathHash` is a fixed-size representation of a `DefPath` that is
/// stable across crate and compilation session boundaries. It consists of two
/// separate 64-bit hashes. The first uniquely identifies the crate this
/// `DefPathHash` originates from (see [StableCrateId]), and the second
/// uniquely identifies the corresponding `DefPath` within that crate. Together
/// they form a unique identifier within an entire crate graph.
///
/// There is a very small chance of hash collisions, which would mean that two
/// different `DefPath`s map to the same `DefPathHash`. Proceeding compilation
/// with such a hash collision would very probably lead to an ICE and, in the
/// worst case, to a silent mis-compilation. The compiler therefore actively
/// and exhaustively checks for such hash collisions and aborts compilation if
/// it finds one.
///
/// `DefPathHash` uses 64-bit hashes for both the crate-id part and the
/// crate-internal part, even though it is likely that there are many more
/// `LocalDefId`s in a single crate than there are individual crates in a crate
/// graph. Since we use the same number of bits in both cases, the collision
/// probability for the crate-local part will be quite a bit higher (though
/// still very small).
///
/// This imbalance is not by accident: A hash collision in the
/// crate-local part of a `DefPathHash` will be detected and reported while
/// compiling the crate in question. Such a collision does not depend on
/// outside factors and can be easily fixed by the crate maintainer (e.g. by
/// renaming the item in question or by bumping the crate version in a harmless
/// way).
///
/// A collision between crate-id hashes on the other hand is harder to fix
/// because it depends on the set of crates in the entire crate graph of a
/// compilation session. Again, using the same crate with a different version
/// number would fix the issue with a high probability -- but that might be
/// easier said than done if the crates in question are dependencies of
/// third-party crates.
///
/// That being said, given a high quality hash function, the collision
/// probabilities in question are very small. For example, for a big crate like
/// `rustc_middle` (with ~50000 `LocalDefId`s as of the time of writing) there
/// is a probability of roughly 1 in 14,750,000,000 of a crate-internal
/// collision occurring. For a big crate graph with 1000 crates in it, there is
/// a probability of 1 in 36,890,000,000,000 of a `StableCrateId` collision.
```

Given the probabilities involved I hope that no one will ever actually see the error messages. Nonetheless, I'd be glad about some feedback on how to improve them. Should we create a GH issue describing the problem and possible solutions to point to? Or a page in the rustc book?

r? `@pnkfelix` (feel free to re-assign)
2021-03-07 23:45:57 +00:00
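The quoted odds follow from the standard birthday approximation, p ≈ k(k-1)/2^65 for k items hashed into 64 bits; a quick check reproduces both figures:

```rust
fn main() {
    // Birthday-bound approximation for k items and a 64-bit hash:
    // p ~= k(k-1)/2 / 2^64, reported here as "1 in N".
    let one_in = |k: f64| 2f64.powi(64) / (k * (k - 1.0) / 2.0);
    println!("crate-internal, k = 50_000: 1 in {:.4e}", one_in(50_000.0)); // ~1.475e10
    println!("crate graph,    k = 1_000:  1 in {:.4e}", one_in(1_000.0)); // ~3.689e13
}
```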
Camelid
58758f0275 Allow variant attributes in enum_from_u32! 2021-02-28 11:53:55 -08:00
Dylan DPC
6d288c65df
Rollup merge of #82537 - wesleywiser:update_measureme, r=oli-obk
Update measureme dependency to the latest version

This version adds the ability to use `rdpmc` hardware-based performance
counters instead of wall-clock time for measuring duration. This also
introduces a dependency on the `perf-event-open-sys` crate on Linux
which is used when using hardware counters.

r? ```@oli-obk```
2021-02-27 21:56:20 +01:00
Dylan DPC
cabe97272d
Rollup merge of #82057 - upsuper-forks:cstr, r=davidtwco,wesleywiser
Replace const_cstr with cstr crate

This PR replaces the `const_cstr` macro inside `rustc_data_structures` with `cstr` macro from [cstr](https://crates.io/crates/cstr) crate.

The two macros basically serve the same purpose, which is to generate `&'static CStr` from a string literal. `cstr` is better because it validates the literal at compile time, while the existing `const_cstr` does it at runtime when `debug_assertions` is enabled. In addition, the value `cstr` generates can be used in constant context (which is seemingly not needed anywhere currently, though).
2021-02-27 02:34:21 +01:00
Wesley Wiser
e130e9cf77 Update measureme dependency to the latest version
This version adds the ability to use `rdpmc` hardware-based performance
counters instead of wall-clock time for measuring duration. This also
introduces a dependency on the `perf-event-open-sys` crate on Linux
which is used when using hardware counters.
2021-02-25 18:25:38 -05:00
Joshua Nelson
3733275854 Update the bootstrap compiler
Note this does not change `core::derive` since it was merged after the
beta bump.
2021-02-20 17:19:30 -05:00
Eduard-Mihai Burtescu
6165d1cc72 Print -Ztime-passes (and misc stats/logs) on stderr, not stdout. 2021-02-18 14:13:38 +02:00
bors
d1206f950f Auto merge of #81855 - cjgillot:ensure-cache, r=oli-obk
Check the result cache before the DepGraph when ensuring queries

Split out of https://github.com/rust-lang/rust/pull/70951

Calling `ensure` on already forced queries is a common operation.
Looking at the results cache first is faster than checking the DepGraph for a green node.
2021-02-15 12:11:59 +00:00
klensy
93c8ebe022 bumped smallvec deps 2021-02-14 18:03:11 +03:00
Xidorn Quan
38e4233a32 Replace const_cstr with cstr crate 2021-02-14 09:45:35 +11:00
Camille GILLOT
15b0bc6b83 Separate the query cache from the query state. 2021-02-13 21:14:58 +01:00
Dániel Buga
3c1d792f49 Only initialize what is used 2021-02-10 09:20:41 +01:00
Mara Bos
08d8fc14be
Rollup merge of #81771 - tgnottingham:time-passes-rss-delta, r=oli-obk
Indicate change in RSS from start to end of pass in time-passes output

Previously, this was omitted because it could be misleading, but the
functionality seems too useful not to include.

r? ``@oli-obk``
2021-02-05 12:26:08 +01:00
Tyson Nottingham
4253919f1d Indicate change in RSS from start to end of pass in time-passes output
Previously, this was omitted because it could be misleading, but the
functionality seems too useful not to include.
2021-02-05 01:11:52 -08:00
Michael Woerister
d4d8bdf52b Add documentation to Unhasher impl for Fingerprint. 2021-02-04 10:37:11 +01:00
Mara Bos
89882388d9 Revert stabilizing integer::BITS. 2021-02-03 22:23:58 +01:00
Michael Woerister
22d489be76 Let a portion of DefPathHash uniquely identify the DefPath's crate.
This makes it possible to map directly from a DefPathHash to the crate it
originates from, without constructing side tables to do that mapping.

It also makes it possible to reliably and cheaply check for DefPathHash collisions.
2021-02-02 17:40:29 +01:00
Jonas Schievink
82b00ec606
Rollup merge of #81536 - tgnottingham:time-passes-rss, r=oli-obk
Indicate both start and end of pass RSS in time-passes output

Previously, only the end of pass RSS was indicated. This could easily
lead one to believe that the change in RSS from one pass to the next was
attributable to the second pass, when in fact it occurred between the
end of the first pass and the start of the second.

Also, improve alignment of columns.

Sample of output:

```
time:   0.739; rss:   607MB ->   637MB	item_types_checking
time:   8.429; rss:   637MB ->   775MB	item_bodies_checking
time:  11.063; rss:   470MB ->   775MB	type_check_crate
time:   0.232; rss:   775MB ->   777MB	match_checking
time:   0.139; rss:   777MB ->   779MB	liveness_and_intrinsic_checking
time:   0.372; rss:   775MB ->   779MB	misc_checking_2
time:   8.188; rss:   779MB ->  1019MB	MIR_borrow_checking
time:   0.062; rss:  1019MB ->  1021MB	MIR_effect_checking
```
2021-02-01 14:29:40 +01:00
Ashley Mannix
8940a2652e stabilize int_bits_const 2021-01-31 21:50:47 +10:00
Tyson Nottingham
849dc1a20c Indicate both start and end of pass RSS in time-passes output
Previously, only the end of pass RSS was indicated. This could easily
lead one to believe that the change in RSS from one pass to the next was
attributable to the second pass, when in fact it occurred between the
end of the first pass and the start of the second.

Also, improve alignment of columns.
2021-01-29 12:46:29 -08:00
bors
a8f7075532 Auto merge of #80692 - Aaron1011:feature/query-result-debug, r=estebank
Enforce that query results implement Debug

Currently, we require that query keys implement `Debug`, but we do not do the same for query values. This can make incremental compilation bugs difficult to debug - there isn't a good place to print out the result loaded from disk.

This PR adds `Debug` bounds to several query-related functions, allowing us to debug-print the query value when an 'unstable fingerprint' error occurs. This required adding `#[derive(Debug)]` to a fairly large number of types - hopefully, this doesn't have much of an impact on compiler bootstrapping times.
2021-01-26 05:47:23 +00:00
Dániel Buga
f8416faaaf Clean up dominators_given_rpo 2021-01-24 13:32:18 +01:00
Joshua Nelson
394d7018b9 Add track_caller to .steal()
Before:

```
thread 'rustc' panicked at 'attempt to read from stolen value', /home/joshua/rustc/compiler/rustc_data_structures/src/steal.rs:43:15
```

After:

```
thread 'rustc' panicked at 'attempt to steal from stolen value', compiler/rustc_mir/src/transform/mod.rs:423:25
```
2021-01-17 12:27:20 -05:00
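The mechanism in miniature: `#[track_caller]` makes panics (and `Location::caller()`) report the annotated function's call site rather than a location inside it:

```rust
#[track_caller]
fn steal(value: &mut Option<String>) -> String {
    // Because of #[track_caller], the panic location reported here is
    // the caller's line, not this expect's line.
    value.take().expect("attempt to steal from stolen value")
}

fn main() {
    let mut slot = Some(String::from("x"));
    let _first = steal(&mut slot);
    let _second = steal(&mut slot); // the panic message points at this line
}
```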
Aaron Hill
7afb32557d
Enforce that query results implement Debug 2021-01-16 17:53:02 -05:00
LingMan
a56bffb4f9 Use Option::map_or instead of .map(..).unwrap_or(..) 2021-01-14 19:23:59 +01:00
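The refactor in one line: `x.map(f).unwrap_or(default)` becomes `x.map_or(default, f)`:

```rust
fn main() {
    let len = |s: Option<&str>| s.map_or(0, |s| s.len());
    assert_eq!(len(Some("hello")), 5);
    assert_eq!(len(None), 0);
}
```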
Tyson Nottingham
52f21791fb Serialize incr comp structures to file via fixed-size buffer
Reduce a large memory spike that happens during serialization by writing
the incr comp structures to file by way of a fixed-size buffer, rather
than an unbounded vector.

Effort was made to keep the instruction count close to that of the
previous implementation. However, buffered writing to a file inherently
has more overhead than writing to a vector, because each write may
result in a handleable error. To reduce this overhead, arrangements are
made so that each LEB128-encoded integer can be written to the buffer
with only one capacity and error check. Higher-level optimizations in
which entire composite structures can be written with one capacity and
error check are possible, but would require much more work.

The performance is mostly on par with the previous implementation, with
small to moderate instruction count regressions. The memory reduction is
significant, however, so it seems like a worthwhile trade-off.
2021-01-11 12:13:22 -08:00
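A sketch of the one-check-per-integer idea described above (hypothetical writer; the real encoder differs in detail): reserve the worst case for a single `u64` up front, then encode with no further checks:

```rust
use std::io::{self, Write};

// ULEB128 needs at most 10 bytes for a u64.
const MAX_LEB128_U64: usize = 10;

struct BufFileWriter<W: Write> {
    buf: [u8; 8192],
    used: usize,
    out: W,
}

impl<W: Write> BufFileWriter<W> {
    fn write_u64_leb128(&mut self, mut value: u64) -> io::Result<()> {
        // One capacity/error check per integer, not per byte:
        if self.buf.len() - self.used < MAX_LEB128_U64 {
            self.out.write_all(&self.buf[..self.used])?;
            self.used = 0;
        }
        loop {
            let byte = (value & 0x7f) as u8;
            value >>= 7;
            let done = value == 0;
            self.buf[self.used] = byte | if done { 0 } else { 0x80 };
            self.used += 1;
            if done {
                return Ok(());
            }
        }
    }
}

fn main() -> io::Result<()> {
    let mut w = BufFileWriter { buf: [0; 8192], used: 0, out: io::sink() };
    for i in 0..100_000u64 {
        w.write_u64_leb128(i * 12345)?;
    }
    w.out.write_all(&w.buf[..w.used]) // flush the tail
}
```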
Tyson Nottingham
7c6274d464 rustc_serialize: have read_raw_bytes take MaybeUninit<u8> slice 2021-01-01 22:49:16 -08:00
Mark Rousskov
fe031180d0 Bump bootstrap compiler to 1.50 beta 2020-12-30 09:27:19 -05:00
Matthias Krüger
0c3af22e08 don't redundantly repeat field names 2020-12-29 22:26:58 +01:00
Bastian Kauschke
06cc9c26da stabilize min_const_generics 2020-12-26 18:24:10 +01:00
Yuki Okushi
c111404cb5
Rollup merge of #79612 - jyn514:compiler-links, r=Aaron1011
Switch some links in compiler/ to intra-doc links
2020-12-19 15:16:03 +09:00
Yuki Okushi
0765536c0b
Rollup merge of #78083 - ChaiTRex:master, r=m-ou-se
Stabilize or_insert_with_key

Stabilizes the `or_insert_with_key` feature from https://github.com/rust-lang/rust/issues/71024. This allows inserting key-derived values when a `HashMap`/`BTreeMap` entry is vacant.

The difference between this and  `.or_insert_with(|| ... )` is that this provides a reference to the key to the closure after it is moved with `.entry(key_being_moved)`, avoiding the need to copy or clone the key.
2020-12-19 15:15:57 +09:00
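The stabilized entry-API method in isolation:

```rust
use std::collections::HashMap;

fn main() {
    let mut lengths: HashMap<String, usize> = HashMap::new();
    let key = String::from("hello");
    // The closure borrows the key that was moved into .entry(...),
    // so no clone is needed just to compute the value.
    let len = lengths.entry(key).or_insert_with_key(|k| k.len());
    assert_eq!(*len, 5);
}
```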
Joshua Nelson
35f16c60e7 Switch compiler/ to intra-doc links
rustc_lint and rustc_lint_defs weren't switched because they're included
in the compiler book and so can't use intra-doc links.
2020-12-18 15:22:51 -05:00
Camelid
810324d1f3 Rename optin_builtin_traits to auto_traits
They were originally called "opt-in, built-in traits" (OIBITs), but
people realized that the name was too confusing and a mouthful, and so
they were renamed to just "auto traits". The feature flag's name wasn't
updated, though, so that's what this PR does.

There are some other spots in the compiler that still refer to OIBITs,
but I don't think changing those now is worth it since they are internal
and not particularly relevant to this PR.

Also see <https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/opt-in.2C.20built-in.20traits.20(auto.20traits).20feature.20name>.
2020-11-23 14:14:06 -08:00
bors
8cfa7b4ec9 Auto merge of #78588 - HeroicKatora:sccc, r=nikomatsakis
Reworks Sccc computation to iteration instead of recursion

Linear graphs, producing as many scc's as nodes, would recurse once for every node when entered from the start of the list. This adds a test that exhausted the stack, at least on my machine, with the error:

```
thread 'graph::scc::tests::test_deep_linear' has overflowed its stack
fatal runtime error: stack overflow
```

This may or may not be connected to #78567; it merely reminded me that I had started this rework some time ago. It is plausible, as borrow checking a long function with many borrow regions nested inside each other—((((((…))))))—may produce exactly the linear list setup that triggers this stack overflow. I don't know enough about borrow check to say for sure.

This is best read in two separate commits. The first addresses only `find_state` internally. This is the classical union phase from union-find. There's also a common solution of using the parent pointers in the (virtual) linked list to track the backreferences while traversing upwards and then following them backwards in a second path compression phase.

The second is more involved, as it rewrites the mutually recursive `walk_node` and `walk_unvisited_node`. Firstly, the caller is required to handle the unvisited case of `walk_node`, so a new `start_walk_from` method is added to handle that by walking the unvisited node if necessary. Then `walk_unvisited_node`, which we would previously recurse into in the missing case, is rewritten to construct a manual stack of its frames. The state fields consist of the previous stack slots.
2020-11-21 01:30:26 +00:00
Tyson Nottingham
142932ab19 Set unaligned_references lint to deny in rustc_data_structures
To detect misuse of private packed field in `PackedFingerprint`.
2020-11-20 01:13:15 -08:00
Tyson Nottingham
05dde137ca Make PackedFingerprint's Fingerprint private 2020-11-18 15:10:43 -08:00
Tyson Nottingham
f09d474836 Use PackedFingerprint in DepNode to reduce memory consumption 2020-11-18 12:49:09 -08:00
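Why the field wants to stay private, in miniature: taking a reference to a field of a `#[repr(packed)]` struct can be undefined behavior on alignment-sensitive targets, so access should go through a copying getter:

```rust
#[repr(packed)]
struct PackedPair {
    a: u8,
    b: u64, // stored unaligned, right after `a`
}

impl PackedPair {
    fn b(&self) -> u64 {
        self.b // a by-value copy is fine; `&self.b` would not be
    }
}

fn main() {
    let p = PackedPair { a: 1, b: 0xAABB };
    assert_eq!(p.a, 1);
    assert_eq!(p.b(), 0xAABB);
}
```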
Mara Bos
fa45fce0d3
Rollup merge of #78702 - wesleywiser:self_profile_cgu_sizes, r=Mark-Simulacrum
[self-profiling] Include the estimated size of each cgu in the profile

This is helpful when looking for CGUs where the size estimate isn't a
good indicator of compilation time.

I verified that moving the profiling timer call doesn't affect the
results.

Results:

<img width="297" alt="Screen Shot 2020-11-03 at 7 25 04 AM" src="https://user-images.githubusercontent.com/831192/97985503-5901d100-1da6-11eb-9f10-f3e399702952.png">

`measureme` doesn't have support for custom arg names yet so `arg0` is the CGU name and `arg1` is the estimated size.
2020-11-17 16:13:49 +01:00
lcnr
a6cbd64dae words 2020-11-16 22:42:09 +01:00
Bastian Kauschke
2bf93bd852 compiler: fold by value 2020-11-16 22:34:57 +01:00
Bastian Kauschke
3ec6720bf1 add IdFunctor to rustc_data_structures 2020-11-16 22:27:20 +01:00
Jonas Schievink
ae1916b3b4
Rollup merge of #79058 - dtolnay:likelymacro, r=Mark-Simulacrum
Move likely/unlikely argument outside of invisible unsafe block

The previous `likely!`/`unlikely!` macros were unsound because it permits the caller's expr to contain arbitrary unsafe code.

```rust
pub fn huh() -> bool {
    likely!(std::ptr::read(&() as *const () as *const bool))
}
```

**Before:** compiles cleanly.
**After:**

```console
error[E0133]: call to unsafe function is unsafe and requires unsafe function or block
   |
70 |     likely!(std::ptr::read(&() as *const () as *const bool))
   |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ call to unsafe function
   |
   = note: consult the function's documentation for information on how to avoid undefined behavior
```
2020-11-15 13:40:03 +01:00
David Tolnay
afb817054c
Move likely/unlikely argument outside of invisible unsafe block
The previous `likely!`/`unlikely!` macros were unsound because it
permits the caller's expr to contain arbitrary unsafe code.

    pub fn huh() -> bool {
        likely!(std::ptr::read(&() as *const () as *const bool))
    }

Before: compiles cleanly.
After:

    error[E0133]: call to unsafe function is unsafe and requires unsafe function or block
       |
    70 |     likely!(std::ptr::read(&() as *const () as *const bool))
       |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ call to unsafe function
       |
       = note: consult the function's documentation for information on how to avoid undefined behavior
2020-11-14 14:03:57 -08:00
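The shape of the fix, sketched with a safe placeholder for the unstable `std::intrinsics::likely`: `match` binds the caller's expression in safe code, so only the already-evaluated `bool` enters the invisible unsafe block:

```rust
unsafe fn likely_intrinsic(b: bool) -> bool {
    b // placeholder standing in for std::intrinsics::likely
}

macro_rules! likely {
    ($e:expr) => {
        // $e is evaluated out here, in the caller's (safe) context:
        match $e {
            b => unsafe { likely_intrinsic(b) },
        }
    };
}

fn main() {
    assert!(likely!(1 + 1 == 2));
    // likely!(std::ptr::read(&true as *const bool)) now fails to
    // compile: "call to unsafe function is unsafe", as shown above.
}
```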
Camille GILLOT
41c44b498f Move Steal to rustc_data_structures. 2020-11-14 01:30:56 +01:00
Andreas Molzer
eb597f5c4e Remove recursion from sccc walking
This allows constructing the sccc for large graphs that visit many nodes
before finding a single cycle, for example lists. When used to find
dependencies in borrow checking, the list case is what occurs in very
long functions.
2020-11-08 18:07:45 +01:00
Andreas Molzer
355904dca0 Add test for sccc of a long list 2020-11-05 19:24:49 +01:00
Andreas Molzer
a41e2fd963 Convert the recursive find_state to a loop
The basic conversion is a straightforward rewrite of the linear
recursion into a loop, with forward and backward propagation of the result.
But this uses an optimization to avoid the need for extra space that
would otherwise be necessary to store the stack of unfinished states as
the function is not tail recursive.

Observe that only non-root-nodes in cycles have a recursive call and
that every such call overwrites their own node state. Thus we reuse the
node state itself as temporary storage for the stack of unfinished
states by inverting the links to a chain back to the previous state
update. When we hit the root or the end of the fully explored chain, we
propagate the node state update backwards by following the chain until
we reach a node that links to itself.
2020-11-05 19:24:49 +01:00
Wesley Wiser
efe703a01a [self-profiling] Include the estimated size of each cgu in the profile
This is helpful when looking for CGUs where the size estimate isn't a
good indicator of compilation time.

I verified that moving the profiling timer call doesn't affect the
results.
2020-11-03 07:55:17 -05:00
Andreas Molzer
af72a70ee2 Move post order walk to iterative approach
The previous recursive approach might overflow the stack when walking a
particularly deep, list-like graph. In particular, dominator
calculation for borrow checking does such a traversal, and very long
functions might lead to a region dependency graph with exactly this
problematic structure.
2020-10-31 18:52:00 +01:00
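The same idea as a compact sketch: an explicit stack of (node, next-successor-index) frames replaces the recursion, so traversal depth is bounded by heap, not the call stack:

```rust
fn post_order(adj: &[Vec<usize>], root: usize) -> Vec<usize> {
    let mut visited = vec![false; adj.len()];
    visited[root] = true;
    let mut order = Vec::new();
    // Each frame records which successor to visit next.
    let mut stack = vec![(root, 0usize)];
    while let Some(&(node, next)) = stack.last() {
        if let Some(&succ) = adj[node].get(next) {
            stack.last_mut().unwrap().1 = next + 1;
            if !visited[succ] {
                visited[succ] = true;
                stack.push((succ, 0));
            }
        } else {
            // All successors handled: emit in post-order and pop.
            order.push(node);
            stack.pop();
        }
    }
    order
}

fn main() {
    // A deep, list-like graph: 0 -> 1 -> ... -> 9999.
    let n = 10_000usize;
    let adj: Vec<Vec<usize>> = (0..n)
        .map(|i| if i + 1 < n { vec![i + 1] } else { vec![] })
        .collect();
    let order = post_order(&adj, 0);
    assert_eq!(order.first(), Some(&(n - 1))); // deepest node first
    assert_eq!(order.last(), Some(&0));
}
```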
Andreas Molzer
4fdf8a5630 Add a benchmark test for sccc finding
While a bit primitive, it should get us at least a better number than
nothing.
2020-10-31 01:05:15 +01:00
Joshua Nelson
57c6ed0c07 Fix even more clippy warnings 2020-10-30 10:13:39 -04:00
Yuki Okushi
f8539221d0
Rollup merge of #78524 - tmiasko:source-files-borrow, r=Aaron1011
Avoid BorrowMutError with RUSTC_LOG=debug

```console
$ touch empty.rs
$ env RUSTC_LOG=debug rustc +stage1 --crate-type=lib empty.rs
```

Fails with a `BorrowMutError` because source map files are already
borrowed while `features_query` attempts to format a log message
containing a span.

Release the borrow before the query to avoid the issue.
2020-10-30 18:00:54 +09:00
Dániel Buga
3fba948510 Fix typos 2020-10-29 16:51:46 +01:00
Tomasz Miąsko
79cc5099b1 Use RwLock instead of Lock for SourceMap::files 2020-10-29 18:09:53 +01:00
Dániel Buga
b01c74b73c Fix typo in vec_graph 2020-10-27 18:37:43 +01:00
bors
5171cc76c2 Auto merge of #77476 - tgnottingham:buffered_siphasher128, r=nnethercote
perf: buffer SipHasher128

This is an attempt to improve SipHasher128 performance by buffering input. Although it reduces instruction count, I'm not confident the effect on wall times, or lack thereof, is worth the change.

---

Additional notes not reflected in source comments:

* Implementation choices were guided by a combination of results from rustc-perf and micro-benchmarks, mostly the former.
* ~~I tried a couple of different struct layouts that might be more cache friendly with no obvious effect.~~ Update: a particular struct layout was chosen, but it's not critical to performance. See comments in source and discussion below.
* I suspect that buffering would be important to a SIMD-accelerated algorithm, but from what I've read and my own tests, SipHash does not seem very amenable to SIMD acceleration, at least by SSE.
2020-10-25 09:23:45 +00:00
Wesley Wiser
5ac5556d63 Upgrade to measureme 9.0.0 2020-10-24 22:39:42 -04:00
Jonas Schievink
4d72939af1
Rollup merge of #77830 - cjgillot:remacro, r=oli-obk
Simplify query proc-macros

The query code generation is split between proc-macros and regular macros in `rustc_middle::ty::query`.

This PR removes unused capabilities of the proc-macros, and tend to use regular macros for the logic.
2020-10-24 22:39:46 +02:00
Leonora Tindall
bc2317915f Don't re-export std::ops::ControlFlow in the compiler. 2020-10-22 17:26:55 -07:00
Leonora Tindall
84daccc559 change the order of type arguments on ControlFlow
This allows ControlFlow<BreakType> which is much more ergonomic for
common iterator combinator use cases.
2020-10-22 17:26:48 -07:00
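What the reordering enables, shown in today's std form: with the continue type defaulted to `()`, the common short-circuit shape is just `ControlFlow<B>`:

```rust
use std::ops::ControlFlow;

fn first_long<'a>(words: &[&'a str]) -> ControlFlow<&'a str> {
    for &w in words {
        if w.len() > 5 {
            return ControlFlow::Break(w); // short-circuit with a value
        }
    }
    ControlFlow::Continue(())
}

fn main() {
    assert_eq!(first_long(&["hi", "elephant"]), ControlFlow::Break("elephant"));
}
```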
Camille GILLOT
0a4d948b4a Remove unused ProfileCategory. 2020-10-22 22:35:32 +02:00
bors
f90e617305 Auto merge of #77908 - bugadani:obl-forest, r=nnethercote
Try to make ObligationForest more efficient

This PR tries to decrease the number of allocations in ObligationForest, as well as moves some cold path code to an uninlined function.
2020-10-19 15:14:15 +00:00
Chai T. Rex
c2de8fe294 Stabilize or_insert_with_key 2020-10-18 15:45:09 -04:00
Dániel Buga
8c7a8a62dd Turn Outcome into an opaque type to remove some runtime checks 2020-10-15 08:32:41 +02:00
Dániel Buga
5f11e71721 Reuse memory for process_cycles 2020-10-15 00:41:09 +02:00
Dániel Buga
86e030391b Make sure cold code is as small as possible 2020-10-15 00:41:08 +02:00
est31
215cd36e1c Remove unused code from remaining compiler crates 2020-10-14 04:14:32 +02:00
est31
a0fc455d30 Replace absolute paths with relative ones
Modern compilers allow reaching external crates
like std or core via relative paths in modules
outside of lib.rs and main.rs.
2020-10-13 14:16:45 +02:00
Tyson Nottingham
a602d155f0 SipHasher128: improve constant names and add more comments 2020-10-11 23:48:35 -07:00
bors
a1dfd2490a Auto merge of #77080 - richkadel:llvm-coverage-counters-2, r=tmandry
Working branch-level code coverage

Add a generalized implementation for computing branch-level coverage spans.

This iteration resolves some of the challenges I had identified a few weeks ago.

I've tried to implement a solution that is general enough to work for a lot of different graphs/patterns. It's encouraging to see the results on fairly large and complex crates seem to meet my expectations. This may be a "functionally complete" implementation.

Except for bug fixes or edge cases I haven't run into yet, the next and essentially final step, I think, is to replace some Counters with CounterExpressions (where their counter values can be computed by adding or subtracting other counters/expressions).

Examples of branch-level coverage support enabled in this PR:

* https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-cov-reports-base/expected_show_coverage.coverage_of_drop_trait.txt
* https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-cov-reports-base/expected_show_coverage.coverage_of_if.txt
* https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-cov-reports-base/expected_show_coverage.coverage_of_if_else.txt
* https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-cov-reports-base/expected_show_coverage.coverage_of_simple_loop.txt
* https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-cov-reports-base/expected_show_coverage.coverage_of_simple_match.txt
* ... _and others in the same directory_

Examples of coverage analysis results (MIR spanview files) used to inject counters in the right `BasicBlocks`:

* https://htmlpreview.github.io/?https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-mir-cov-html-base/expected_mir_dump.coverage_of_drop_trait/coverage_of_drop_trait.main.-------.InstrumentCoverage.0.html
* https://htmlpreview.github.io/?https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-mir-cov-html-base/expected_mir_dump.coverage_of_if/coverage_of_if.main.-------.InstrumentCoverage.0.html
* https://htmlpreview.github.io/?https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-mir-cov-html-base/expected_mir_dump.coverage_of_if_else/coverage_of_if_else.main.-------.InstrumentCoverage.0.html
* https://htmlpreview.github.io/?https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-mir-cov-html-base/expected_mir_dump.coverage_of_simple_loop/coverage_of_simple_loop.main.-------.InstrumentCoverage.0.html
* https://htmlpreview.github.io/?https://github.com/richkadel/rust/blob/llvm-coverage-counters-2/src/test/run-make-fulldeps/instrument-coverage-mir-cov-html-base/expected_mir_dump.coverage_of_simple_match/coverage_of_simple_match.main.-------.InstrumentCoverage.0.html
* ... _and others in the same directory_

Here is some sample coverage output after compiling a few real-world crates with the new branch-level coverage features:

<img width="801" alt="Screen Shot 2020-09-25 at 1 03 11 PM" src="https://user-images.githubusercontent.com/3827298/94316848-fd882c00-ff39-11ea-9cff-0402d3abd1e7.png">
<img width="721" alt="Screen Shot 2020-09-25 at 1 00 36 PM" src="https://user-images.githubusercontent.com/3827298/94316886-11cc2900-ff3a-11ea-9d03-80b26c8a5173.png">
<img width="889" alt="Screen Shot 2020-09-25 at 12 54 57 PM" src="https://user-images.githubusercontent.com/3827298/94316900-18f33700-ff3a-11ea-8a80-58f67d84b8de.png">

r? `@tmandry`
FYI: `@wesleywiser`
2020-10-05 19:34:44 +00:00
bors
ea7e131435 Auto merge of #77171 - VFLashM:better_sso_structures, r=oli-obk
Better sso structures

This change greatly expands the interface of MiniSet/MiniMap and renames them because they are no longer "Mini".
2020-10-05 17:18:01 +00:00
Rich Kadel
f5aebad28f Updates to experimental coverage counter injection
This is a combination of 18 commits.

Commit #2:

Additional examples and some small improvements.

Commit #3:

fixed mir-opt non-mir extensions and spanview title elements

Corrected a fairly recent assumption in runtest.rs that all MIR dump
files end in .mir. (It was appending .mir to the graphviz .dot and
spanview .html file names when generating blessed output files. That
also left outdated files in the baseline alongside the files with the
incorrect names, which I've now removed.)

Updated spanview HTML title elements to match their content, replacing a
hardcoded and incorrect name that was left in accidentally when
originally submitted.

Commit #4:

added more test examples

Also improved Makefiles with support for non-zero exit status, and to
force validation of tests unless a specific test overrides it with a
specific comment.

Commit #5:

Fixed rare issues after testing on real-world crate

Commit #6:

Addressed PR feedback, and removed temporary -Zexperimental-coverage

-Zinstrument-coverage once again supports the latest capabilities of
LLVM instrprof coverage instrumentation.

Also fixed a bug in spanview.

Commit #7:

Fix closure handling, add tests for closures and inner items

And cleaned up other tests for consistency, and to make it more clear
where spans start/end by breaking up lines.

Commit #8:

renamed "typical" test results "expected"

Now that the `llvm-cov show` tests are improved to normally expect
matching actuals, and to allow individual tests to override that
expectation.

Commit #9:

test coverage of inline generic struct function

Commit #10:

Addressed review feedback

* Removed unnecessary Unreachable filter.
* Replaced a match wildcard with the remaining variants.
* Added more comments to help clarify the role of successors() in the
CFG traversal

Commit #11:

refactoring based on feedback

* refactored `fn coverage_spans()`.
* changed the way I expand an empty coverage span to improve performance
* fixed a typo that I had accidentally left in, in visit.rs

Commit #12:

Optimized use of SourceMap and SourceFile

Commit #13:

Fixed a regression, and synched with upstream

Some generated test file names changed due to some new change upstream.

Commit #14:

Stripping out crate disambiguators from demangled names

These can vary depending on the test platform.

Commit #15:

Ignore llvm-cov show diff on test with generics, expand IO error message

Tests with generics produce llvm-cov show results with demangled names
that can include an unstable "crate disambiguator" (hex value). The
value changes when run in the Rust CI Windows environment. I added a sed
filter to strip them out (in a prior commit), but sed also appears to
fail in the same environment. Until I can figure out a workaround, I'm
just going to ignore this specific test result. I added a FIXME to
follow up later, but it's not that critical.

I also saw an error with Windows GNU, but the IO error did not
specify a path for the directory or file that triggered the error. I
updated the error messages to provide more info for next time, but also
noticed some other tests with similar steps did not fail. Looks
spurious.

Commit #16:

Modify rust-demangler to strip disambiguators by default

Commit #17:

Remove std::process::exit from coverage tests

Due to Issue #77553, programs that call std::process::exit() do not
generate coverage results on Windows MSVC.

Commit #18:

fix: test file paths exceeding Windows max path len
2020-10-05 08:02:58 -07:00
Tyson Nottingham
581cc4abf5 SipHasher128: use specific struct layout 2020-10-05 00:47:44 -07:00
Tyson Nottingham
b86161ad9c SipHasher128: use more named constants, update comments 2020-10-05 00:47:26 -07:00
Tyson Nottingham
f6f96e2a87 perf: buffer SipHasher128 2020-10-03 10:03:30 -07:00
Valerii Lashmanov
d1d2184db4 SsoHashSet/Map - genericity over Q removed
Due to performance regression, see SsoHashMap comment.
2020-10-02 20:13:23 -05:00
Tyson Nottingham
d061fee177 Stable hashing: add comments and tests concerning platform-independence
SipHasher128 implements short_write in an endian-independent way, yet
its write_xxx Hasher trait methods undo this endian-independence by byte
swapping the integer inputs on big-endian hardware. StableHasher then
adds endian-independence back by also byte-swapping on big-endian
hardware prior to invoking SipHasher128.

This double swap may have the appearance of being a no-op, but is in
fact by design. In particular, we really do want SipHasher128 to be
platform-dependent, in order to be consistent with the libstd SipHasher.
Try to clarify this intent. Also, add and update a couple of unit tests.
2020-09-30 00:57:35 -07:00
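The convention, in code form (a sketch; `DefaultHasher` stands in for `SipHasher128`): the stable layer byte-swaps to a fixed endianness before handing integers to the platform-flavored hasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Mirrors the StableHasher convention described above: swap to a fixed
// (little-endian) representation before the platform-flavored write.
// On little-endian hosts to_le() is a no-op; on big-endian hosts it
// swaps, cancelling the inner hasher's swap.
fn write_u32_stable<H: Hasher>(h: &mut H, x: u32) {
    h.write_u32(x.to_le());
}

fn main() {
    let mut h = DefaultHasher::new();
    write_u32_stable(&mut h, 0xDEAD_BEEF);
    println!("{:x}", h.finish());
}
```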
Valerii Lashmanov
92a0668c20 SsoHashMap minor refactoring, SSO_ARRAY_SIZE introduced 2020-09-27 23:48:19 -05:00
Valerii Lashmanov
41942fac7d SsoHashSet reimplemented as a wrapper on top of SsoHashMap
SsoHashSet::replace had to be removed because
it requires missing API from SsoHashMap.
It's not a widely used function, so I think it's ok
to omit it for now.

EitherIter moved into its own file.

Also sprinkled code with #[inline] attributes where appropriate.
2020-09-26 18:42:26 -05:00
Valerii Lashmanov
0600b178aa SsoHashSet/SsoHashMap API greatly expanded
Now both provide almost complete API of their non-SSO counterparts.
2020-09-26 14:30:05 -05:00
Valerii Lashmanov
5c224a484d MiniSet/MiniMap moved and renamed into SsoHashSet/SsoHashMap
It is a more descriptive name and with upcoming changes
there will be nothing "mini" about them.
2020-09-26 14:30:05 -05:00
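The idea behind the structure, as a sketch (not the rustc implementation): linear-scan an inline vector up to a threshold, then spill into a real `HashMap`:

```rust
use std::collections::HashMap;
use std::hash::Hash;

const SSO_ARRAY_SIZE: usize = 8; // illustrative threshold

enum SsoMap<K, V> {
    Array(Vec<(K, V)>), // small: linear scan beats hashing
    Map(HashMap<K, V>), // large: switch to a real hash map
}

impl<K: Eq + Hash, V> SsoMap<K, V> {
    fn new() -> Self {
        SsoMap::Array(Vec::new())
    }

    fn insert(&mut self, key: K, value: V) {
        match self {
            SsoMap::Array(vec) => {
                if let Some(i) = vec.iter().position(|(k, _)| *k == key) {
                    vec[i].1 = value;
                } else if vec.len() < SSO_ARRAY_SIZE {
                    vec.push((key, value));
                } else {
                    // Threshold exceeded: migrate to a HashMap.
                    let mut map: HashMap<K, V> = vec.drain(..).collect();
                    map.insert(key, value);
                    *self = SsoMap::Map(map);
                }
            }
            SsoMap::Map(map) => {
                map.insert(key, value);
            }
        }
    }

    fn get(&self, key: &K) -> Option<&V> {
        match self {
            SsoMap::Array(vec) => vec.iter().find(|(k, _)| k == key).map(|(_, v)| v),
            SsoMap::Map(map) => map.get(key),
        }
    }
}

fn main() {
    let mut m = SsoMap::new();
    for i in 0..20 {
        m.insert(i, i * i); // spills from Array to Map past the threshold
    }
    assert_eq!(m.get(&12), Some(&144));
}
```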
est31
12187b7f86 Remove unused #[allow(...)] statements from compiler/ 2020-09-26 01:25:55 +02:00
Jonas Schievink
6f3da3d53f
Rollup merge of #77121 - duckymirror:html-root-url, r=jyn514
Updated html_root_url for compiler crates

Closes #77103

r? @jyn514
2020-09-25 02:29:45 +02:00
Erik Hofmayer
138a2e5eaa /nightly/nightly-rustc 2020-09-23 21:51:56 +02:00
Erik Hofmayer
dd66ea2d3d Updated html_root_url for compiler crates 2020-09-23 21:14:43 +02:00
Andreas Jonson
6586c37bec Move MiniSet to data_structures
remove the need for T to be copy from MiniSet as was done for MiniMap
2020-09-23 08:09:16 +02:00
bors
6d3acf5129 Auto merge of #76928 - lcnr:opaque-types-cache, r=tmandry
cache types during normalization

partially fixes #75992

reduces the following test from 14 to 3 seconds locally.

cc `@Mark-Simulacrum` would it make sense to add that test to `perf`?
```rust
#![recursion_limit="2048"]
#![type_length_limit="112457564"]

pub async fn h0(v: &String, x: &u64) { println!("{} {}", v, x) }
pub async fn h1(v: &String, x: &u64) { h0(v, x).await }
pub async fn h2(v: &String, x: &u64) { h1(v, x).await }
pub async fn h3(v: &String, x: &u64) { h2(v, x).await }
pub async fn h4(v: &String, x: &u64) { h3(v, x).await }
pub async fn h5(v: &String, x: &u64) { h4(v, x).await }
pub async fn h6(v: &String, x: &u64) { h5(v, x).await }
pub async fn h7(v: &String, x: &u64) { h6(v, x).await }
pub async fn h8(v: &String, x: &u64) { h7(v, x).await }
pub async fn h9(v: &String, x: &u64) { h8(v, x).await }

pub async fn h10(v: &String, x: &u64) { h9(v, x).await }
pub async fn h11(v: &String, x: &u64) { h10(v, x).await }
pub async fn h12(v: &String, x: &u64) { h11(v, x).await }
pub async fn h13(v: &String, x: &u64) { h12(v, x).await }
pub async fn h14(v: &String, x: &u64) { h13(v, x).await }
pub async fn h15(v: &String, x: &u64) { h14(v, x).await }
pub async fn h16(v: &String, x: &u64) { h15(v, x).await }
pub async fn h17(v: &String, x: &u64) { h16(v, x).await }
pub async fn h18(v: &String, x: &u64) { h17(v, x).await }
pub async fn h19(v: &String, x: &u64) { h18(v, x).await }

macro_rules! async_recursive {
    (29, $inner:expr) => { async { async_recursive!(28, $inner) }.await };
    (28, $inner:expr) => { async { async_recursive!(27, $inner) }.await };
    (27, $inner:expr) => { async { async_recursive!(26, $inner) }.await };
    (26, $inner:expr) => { async { async_recursive!(25, $inner) }.await };
    (25, $inner:expr) => { async { async_recursive!(24, $inner) }.await };
    (24, $inner:expr) => { async { async_recursive!(23, $inner) }.await };
    (23, $inner:expr) => { async { async_recursive!(22, $inner) }.await };
    (22, $inner:expr) => { async { async_recursive!(21, $inner) }.await };
    (21, $inner:expr) => { async { async_recursive!(20, $inner) }.await };
    (20, $inner:expr) => { async { async_recursive!(19, $inner) }.await };

    (19, $inner:expr) => { async { async_recursive!(18, $inner) }.await };
    (18, $inner:expr) => { async { async_recursive!(17, $inner) }.await };
    (17, $inner:expr) => { async { async_recursive!(16, $inner) }.await };
    (16, $inner:expr) => { async { async_recursive!(15, $inner) }.await };
    (15, $inner:expr) => { async { async_recursive!(14, $inner) }.await };
    (14, $inner:expr) => { async { async_recursive!(13, $inner) }.await };
    (13, $inner:expr) => { async { async_recursive!(12, $inner) }.await };
    (12, $inner:expr) => { async { async_recursive!(11, $inner) }.await };
    (11, $inner:expr) => { async { async_recursive!(10, $inner) }.await };
    (10, $inner:expr) => { async { async_recursive!(9, $inner) }.await };

    (9, $inner:expr) => { async { async_recursive!(8, $inner) }.await };
    (8, $inner:expr) => { async { async_recursive!(7, $inner) }.await };
    (7, $inner:expr) => { async { async_recursive!(6, $inner) }.await };
    (6, $inner:expr) => { async { async_recursive!(5, $inner) }.await };
    (5, $inner:expr) => { async { async_recursive!(4, $inner) }.await };
    (4, $inner:expr) => { async { async_recursive!(3, $inner) }.await };
    (3, $inner:expr) => { async { async_recursive!(2, $inner) }.await };
    (2, $inner:expr) => { async { async_recursive!(1, $inner) }.await };
    (1, $inner:expr) => { async { async_recursive!(0, $inner) }.await };
    (0, $inner:expr) => { async { h19(&String::from("owo"), &0).await; $inner }.await };
}

async fn f() {
    async_recursive!(14, println!("hello"));
}

fn main() {
    let _ = f();
}
```
r? `@eddyb` requires a perf run.
2020-09-22 22:52:07 +00:00
bors
b01326ab03 Auto merge of #76680 - Julian-Wollersberger:nongeneric_ensure_sufficient_stack, r=jyn514
Make `ensure_sufficient_stack()` non-generic, using cargo-llvm-lines

Inspired by [this blog post](https://blog.mozilla.org/nnethercote/2020/08/05/how-to-speed-up-the-rust-compiler-some-more-in-2020/) from `@nnethercote,` I used [cargo-llvm-lines](https://github.com/dtolnay/cargo-llvm-lines/) on the rust compiler itself, to improve its compile time. This PR contains only one low-hanging fruit, but I also want to share some measurements.

The function `ensure_sufficient_stack()` was monomorphized 1500 times, and with it the `stacker` and `psm` crates, for a total of 1.5% of all LLVM IR lines. With some trickery I converted the generic closure into a dynamic one, so all that code is monomorphized only once.
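The trick, sketched under stated assumptions (hypothetical `grow_stack_and_call`; the real code delegates to the `stacker` crate): keep the public entry point generic but immediately erase the closure to `&mut dyn FnMut()`, so everything underneath is compiled once:

```rust
// Non-generic core: instantiated exactly once for the whole crate.
fn grow_stack_and_call(f: &mut dyn FnMut()) {
    // Imagine stack-probing / stack-growing logic here.
    f();
}

pub fn ensure_sufficient_stack<R>(f: impl FnOnce() -> R) -> R {
    // Thin generic shim: smuggle the FnOnce and its result through
    // Options so the dynamic closure can be an FnMut.
    let mut f = Some(f);
    let mut result = None;
    let mut call = || result = Some((f.take().unwrap())());
    grow_stack_and_call(&mut call);
    result.unwrap()
}

fn main() {
    assert_eq!(ensure_sufficient_stack(|| 21 * 2), 42);
}
```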

# Measurements
Getting these numbers took some fiddling with CLI flags, and I [modified](https://github.com/Julian-Wollersberger/cargo-llvm-lines/blob/master/src/main.rs#L115) cargo-llvm-lines to read from a folder instead of invoking cargo. Commands I used:
```
./x.py clean
RUSTFLAGS="--emit=llvm-ir -C link-args=-fuse-ld=lld -Z self-profile=profile" CARGOFLAGS_BOOTSTRAP="-Ztimings" RUSTC_BOOTSTRAP=1 ./x.py build -i --stage 1 library/std

# Then manually copy all .ll files into a folder I hardcoded in cargo-llvm-lines in main.rs#L115
cd ../cargo-llvm-lines
cargo run llvm-lines
```

The result is this list (see [first 500 lines](https://github.com/Julian-Wollersberger/cargo-llvm-lines/blob/master/llvm-lines-rustc-before.txt) ), before the change:
```
  Lines            Copies        Function name
  -----            ------        -------------
  16894211 (100%)  58417 (100%)  (TOTAL)
   2223855 (13.2%)   502 (0.9%)  rustc_query_system::query::plumbing::get_query_impl::{{closure}}
   1331918 (7.9%)   1287 (2.2%)  hashbrown::raw::RawTable<T>::reserve_rehash
    774434 (4.6%)  12043 (20.6%) core::ptr::drop_in_place
    294170 (1.7%)    499 (0.9%)  rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl
    245410 (1.5%)   1552 (2.7%)  psm::on_stack::with_on_stack
    210311 (1.2%)      1 (0.0%)  rustc_target::spec::load_specific
    200962 (1.2%)    513 (0.9%)  rustc_query_system::query::plumbing::get_query_impl
    190704 (1.1%)      1 (0.0%)  rustc_middle::ty::query::<impl rustc_middle::ty::context::TyCtxt>::alloc_self_profile_query_strings
    180272 (1.1%)    468 (0.8%)  rustc_query_system::query::plumbing::load_from_disk_and_cache_in_memory
    177396 (1.1%)    114 (0.2%)  rustc_query_system::query::plumbing::force_query_impl
    161134 (1.0%)    445 (0.8%)  rustc_query_system::dep_graph::graph::DepGraph<K>::with_anon_task
    141551 (0.8%)    186 (0.3%)  rustc_query_system::query::plumbing::incremental_verify_ich
    110191 (0.7%)      7 (0.0%)  rustc_middle::ty::context::_DERIVE_rustc_serialize_Decodable_D_FOR_TypeckResults::<impl rustc_serialize::serialize::Decodable<__D> for rustc_middle::ty::context::TypeckResults>::decode::{{closure}}
    108590 (0.6%)    420 (0.7%)  core::ops::function::FnOnce::call_once
     88488 (0.5%)     21 (0.0%)  rustc_query_system::dep_graph::graph::DepGraph<K>::try_mark_previous_green
     86368 (0.5%)      1 (0.0%)  rustc_middle::ty::query::stats::query_stats
     85654 (0.5%)   3973 (6.8%)  <&T as core::fmt::Debug>::fmt
     84475 (0.5%)      1 (0.0%)  rustc_middle::ty::query::Queries::try_collect_active_jobs
     81220 (0.5%)    862 (1.5%)  <hashbrown::raw::RawIterHash<T> as core::iter::traits::iterator::Iterator>::next
     77636 (0.5%)     54 (0.1%)  core::slice::sort::recurse
     66484 (0.4%)    461 (0.8%)  <hashbrown::raw::RawIter<T> as core::iter::traits::iterator::Iterator>::next
```

All `.ll` files together were 4.4 GB; after my change they were 4.2 GB, so LLVM has a few percent less code to process. Hurray!
Sadly, I couldn't measure an actual wall-time improvement. Watching YouTube while compiling added too much noise...

Here is the top of the list after the change:
```
  16460866 (100%)  58341 (100%)  (TOTAL)
   1903085 (11.6%)   504 (0.9%)  rustc_query_system::query::plumbing::get_query_impl::{{closure}}
   1331918 (8.1%)   1287 (2.2%)  hashbrown::raw::RawTable<T>::reserve_rehash
    777796 (4.7%)  12031 (20.6%) core::ptr::drop_in_place
    551462 (3.4%)   1519 (2.6%)  rustc_data_structures::stack::ensure_sufficient_stack::{{closure}}
```
Note that the total was reduced by 430,000 lines and `psm::on_stack::with_on_stack` has disappeared; instead, `rustc_data_structures::stack::ensure_sufficient_stack::{{closure}}` appeared. I'm confused about that one, but it seems to consist of inlined calls to `rustc_query_system::*` stuff.

Further note the other two big culprits in this list: `rustc_query_system` and `hashbrown`. These two are monomorphized many times; the query system alone sums to more than 20% of all lines, not even counting code that is probably inlined elsewhere.
Assuming compile times scale linearly with llvm-lines, that suggests a possible 20% compile-time reduction.

Reducing e.g. `get_query_impl` would probably need a major refactoring of the query system, though. _Everything_ in there is generic over multiple types, has associated types, and passes generic `Self` arguments by value, which means you can't simply make things `dyn`.
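A toy illustration of the obstacle (invented types, not the real query traits):

```rust
// A made-up trait with the same shape as the query plumbing. It cannot
// be used as `dyn QueryLike`: the by-value `self` and the generic method
// both violate object safety, and the associated type would have to be
// spelled out at every `dyn` use site.
trait QueryLike {
    type Value;

    // `self` by value requires `Self: Sized`, which `dyn Trait` is not.
    fn execute(self, key: u64) -> Self::Value;

    // Generic methods cannot be dispatched through a vtable.
    fn with_result<F: FnOnce(&Self::Value)>(&self, key: u64, f: F);
}
```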

---------------------------------------
This PR is a small step to make rustc compile faster and thus make contributing to rustc less painful. Nonetheless I love Rust and I find the work around rustc fascinating :)
2020-09-21 17:32:57 +00:00
Ralf Jung
e177757a04
Rollup merge of #76963 - est31:remove_static_assert, r=oli-obk
Remove unused static_assert macro
2020-09-21 10:40:47 +02:00
Ralf Jung
048866bd6b
Rollup merge of #76958 - est31:ns, r=oli-obk
Replace manual as_nanos and as_secs_f64 reimplementations
2020-09-21 10:40:39 +02:00
Julian Wollersberger
53aaa1e532 To avoid monomorphizing psm::on_stack::with_on_stack 1500 times, I made a change in stacker to wrap the callback in dyn. 2020-09-20 19:07:52 +02:00
Ralf Jung
50d56bc774
Rollup merge of #76825 - lcnr:array-windows-apply, r=varkor
use `array_windows` instead of `windows` in the compiler

I do think these changes are beautiful, but I do have to admit that using type inference for the window length
can easily be confusing. This seems like a general issue with const generics, where inferring constants adds additional
complexity that users have to learn and keep in mind.
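For illustration, a minimal sketch of that inference (`array_windows` was nightly-only at the time, and `is_sorted` here is an invented example):

```rust
#![feature(array_windows)]

// The window length is a const generic: here it is inferred as 2 from
// the `&[a, b]` pattern rather than written out, which is concise but
// easy to miss when reading.
fn is_sorted(xs: &[i32]) -> bool {
    xs.array_windows().all(|&[a, b]| a <= b)
}

fn main() {
    assert!(is_sorted(&[1, 2, 3]));
    assert!(!is_sorted(&[3, 1, 2]));
}
```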
2020-09-20 12:08:26 +02:00
Ralf Jung
4322e1b92d
Rollup merge of #76821 - est31:remove_redundant_nightly_features, r=oli-obk,Mark-Simulacrum
Remove redundant nightly features

Removes a bunch of redundant/outdated nightly features. The first commit removes a `core_intrinsics` use for which a stable wrapper has since been provided. The second commit replaces the `const_generics` feature with `min_const_generics`, which might get stabilized this year. The third commit is the result of a trial-and-error run of removing every single feature and adding it back if the compile failed; it removes the unused features found that way.
2020-09-20 12:08:22 +02:00
est31
c2dad1c6b9 Remove unused static_assert macro 2020-09-20 11:40:51 +02:00
est31
43193dcb88 Use as_secs_f64 in profiling.rs 2020-09-20 10:27:14 +02:00
Bastian Kauschke
3435683fd5 use array_windows instead of windows in the compiler 2020-09-20 08:11:05 +02:00
Bastian Kauschke
1146c39da7 cache types during normalization 2020-09-19 17:27:13 +02:00
Mara Bos
1e2dba1e7c Use T::BITS instead of size_of::<T> * 8. 2020-09-19 06:54:42 +02:00
est31
ebdea01143 Remove redundant #![feature(...)] 's from compiler/ 2020-09-17 07:58:45 +02:00
est31
4fe6ca3789 Replace const_generics feature gate with min_const_generics
The latter is on the path to stabilization.
2020-09-17 07:08:53 +02:00
Andreas Jonson
b8752fff19 update the version of itertools and parking_lot
this is to avoid compiling multiple versions of these crates in rustc
2020-09-12 08:26:53 +02:00
Flying-Toast
2799aec6ab Capitalize safety comments 2020-09-08 22:37:18 -04:00
Scott McMurray
59e37332b0 Add BREAK too, and improve the comments 2020-09-04 16:28:23 -07:00
Scott McMurray
fac272688e Use ops::ControlFlow in graph::iterate 2020-09-04 01:45:10 -07:00
bors
51f79b618d Auto merge of #76233 - cuviper:unhasher, r=Mark-Simulacrum
Avoid rehashing Fingerprint as a map key

This introduces a no-op `Unhasher` for map keys that are already hash-
like, for example `Fingerprint` and its wrapper `DefPathHash`. For these
we can directly produce the `u64` hash for maps. The first use of this
is `def_path_hash_to_def_id: Option<UnhashMap<DefPathHash, DefId>>`.

cc #56308
r? @eddyb
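
As a sketch of the idea (simplified; the real types live in `rustc_data_structures`):

```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// A hasher that does no mixing at all: the key is already a good hash,
// so `finish` just returns the single u64 that was written.
#[derive(Default)]
struct Unhasher {
    value: u64,
}

impl Hasher for Unhasher {
    fn finish(&self) -> u64 {
        self.value
    }

    fn write(&mut self, _bytes: &[u8]) {
        unimplemented!("Unhasher is only for keys that hash to one u64");
    }

    fn write_u64(&mut self, value: u64) {
        self.value = value;
    }
}

type UnhashMap<K, V> = HashMap<K, V, BuildHasherDefault<Unhasher>>;
```

Any key type whose `Hash` impl writes exactly one `u64` (as `Fingerprint` does) can then use `UnhashMap` without paying for rehashing.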
2020-09-02 22:16:22 +00:00
Josh Stone
469ca379d6 Avoid rehashing Fingerprint as a map key
This introduces a no-op `Unhasher` for map keys that are already hash-
like, for example `Fingerprint` and its wrapper `DefPathHash`. For these
we can directly produce the `u64` hash for maps. The first use of this
is `def_path_hash_to_def_id: Option<UnhashMap<DefPathHash, DefId>>`.
2020-09-01 18:27:02 -07:00
marmeladema
1b650d0fea datastructures: replace lazy_static by SyncLazy from std 2020-09-01 22:06:47 +01:00
marmeladema
68500ffacb datastructures: replace once_cell crate with an impl from std 2020-08-30 20:06:14 +01:00
mark
9e5f7d5631 mv compiler to compiler/ 2020-08-30 18:45:07 +03:00