Commit Graph

126598 Commits

Author SHA1 Message Date
Mateusz Mikuła
ad69e66517 Ignore failing PGO/coverage tests on MinGW 2020-09-05 12:31:16 +02:00
Mateusz Mikuła
1ea121c74f Fix warning when building profiler_builtins crate 2020-09-04 15:10:29 +02:00
Mateusz Mikuła
ed3950ba5a Enable profiler tests on Windows-gnu 2020-09-04 15:10:16 +02:00
bors
4ffb5c5954 Auto merge of #76004 - richkadel:llvm-coverage-map-gen-6b.5, r=tmandry
Tools, tests, and experimenting with MIR-derived coverage counters

Leverages the new mir_dump output file in HTML+CSS (from #76074) to visualize coverage code regions
and the MIR features that they came from (including overlapping spans).

See example below.

The `run-make-fulldeps/instrument-coverage` test has been refactored to maximize test coverage and reduce code duplication. The new tests support testing with and without `-Clink-dead-code`, so Rust coverage can be tested on MSVC (which, currently, only works with `link-dead-code` _disabled_).

New tests validate coverage region generation and coverage reports with multiple counters per function. Starting with a simple `if-else` branch test, coverage tests for each additional syntax type can be added by simply dropping in a new Rust sample program.
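
For illustration, here is a minimal, hypothetical sample of the kind such a test could drop in (not one of the samples added by this PR):

```rust
// Hypothetical coverage sample: a simple if-else branch whose two arms
// should show up as separate coverage regions with separate counters.
fn main() {
    let is_true = std::env::args().len() == 1;
    let value = if is_true {
        // expected to be counted as executed when run without arguments
        1
    } else {
        // expected to be counted as not executed in that case
        0
    };
    println!("value = {}", value);
}
```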

Includes a basic, MIR-block-based implementation of coverage injection,
available via `-Zexperimental-coverage`. This implementation has known
flaws and omissions, but is simple enough to validate the new tools and
tests.

The existing `-Zinstrument-coverage` option currently enables
function-level coverage only, which at least appears to generate
accurate coverage reports at that level.

Experimental coverage is not accurate at this time. When branch coverage
works as intended, the `-Zexperimental-coverage` option should be
removed.

This PR replaces the bulk of PR #75828, with the remaining parts of
that PR distributed among other separate and independent PRs.

This PR depends on three of those other PRs: #76002, #76003, and #76074

Rust compiler MCP rust-lang/compiler-team#278

Relevant issue: #34701 - Implement support for LLVM's code coverage instrumentation

![Screen-Recording-2020-08-21-at-2](https://user-images.githubusercontent.com/3827298/90972923-ff417880-e4d1-11ea-92bb-8713c6198f6d.gif)

r? @tmandry
FYI: @wesleywiser
2020-09-04 01:31:07 +00:00
bors
af3c6e733a Auto merge of #73996 - da-x:short-unique-paths, r=petrochenkov
diagnostics: shorten paths of unique symbols

This is a step towards implementing a fix for #50310, and a continuation of the discussion in [Pre-RFC: Nicer Types In Diagnostics - compiler - Rust Internals](https://internals.rust-lang.org/t/pre-rfc-nicer-types-in-diagnostics/11139). It was impressed upon me in the previous discussion in #21934 that an RFC for this is not needed, and that I should just come up with code.

The recent improvements to `use` suggestions that I've contributed have given rise to this implementation. Contrary to previous suggestions, it's rather simple logic, and I believe it only reduces the amount of cognitive load that a developer would need when reading type errors.

-----

If a symbol name can only be imported from one place, and as long as it was not glob-imported anywhere in the current crate, we can trim its printed path to the last component.

This has wide implications on error messages with types, for example, shortening `std::vec::Vec` to just `Vec`, as long as there is no other `Vec` importable from anywhere.
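
As a rough illustration (the diagnostic wording in the comments is approximate, not copied from the compiler):

```rust
// Illustrative only: with this change, a mismatch diagnostic that previously
// read something like
//   expected struct `std::vec::Vec<u32>`, found `u32`
// can be shortened to
//   expected struct `Vec<u32>`, found `u32`
// as long as no other `Vec` is importable and none is glob-imported here.
fn takes_vec(v: Vec<u32>) -> usize {
    v.len()
}

fn main() {
    // Passing e.g. `5u32` instead of a Vec would trigger the error above.
    println!("{}", takes_vec(vec![1, 2, 3]));
}
```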
2020-09-03 23:27:45 +00:00
bors
0d0f6b1130 Auto merge of #70793 - the8472:in-place-iter-collect, r=Amanieu
specialize some collection and iterator operations to run in-place

This is a rebase and update of #66383, which was closed due to inactivity.

Recent rustc changes made the compile time regressions disappear, at least for webrender-wrench. Running a stage2 compile and the rustc-perf suite takes hours on the hardware I have at the moment, so I can't do much more than that.

![Screenshot_2020-04-05 rustc performance data](https://user-images.githubusercontent.com/1065730/78462657-5d60f100-76d4-11ea-8a0b-4f3962707c38.png)

In the best case of the `vec::bench_in_place_recycle` synthetic microbenchmark these optimizations can provide a 15x speedup over the regular implementation which allocates a new vec for every benchmark iteration. [Benchmark results](https://gist.github.com/the8472/6d999b2d08a2bedf3b93f12112f96e2f). In real code the speedups are tiny, but they also depend on the allocator used: a system allocator that uses a process-wide mutex will benefit more than one with thread-local pools.
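
To make the pattern concrete, here is a small, hedged example (the pointer comparison only illustrates the effect; allocation reuse is an optimization, not a guarantee):

```rust
fn main() {
    let src: Vec<u32> = (0..1024).collect();
    let src_ptr = src.as_ptr();

    // Source and target element types have the same size and alignment and
    // are not ZSTs, so this collect is a candidate for in-place collection.
    let dst: Vec<i32> = src.into_iter().map(|x| x as i32 - 512).collect();

    // With the specialization the destination typically reuses the source
    // allocation; without it a fresh allocation is made.
    println!("allocation reused: {}", dst.as_ptr() as *const u32 == src_ptr);
}
```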

## What was changed

* `SpecExtend` which covered `from_iter` and `extend` specializations was split into separate traits
* `extend` and `from_iter` now reuse `append_elements` if the passed iterators are backed by slices.
* A preexisting `vec.into_iter().collect::<Vec<_>>()` optimization that passed through the original vec has been generalized further to also cover cases where the original has been partially drained.
* A chain of *Vec<T> / BinaryHeap<T> / Box<[T]>* `IntoIter`s  through various iterator adapters collected into *Vec<U>* and *BinaryHeap<U>* will be performed in place as long as `T` and `U` have the same alignment and size and aren't ZSTs.
* To enable the above specialization, the unsafe, unstable `SourceIter` and `InPlaceIterable` traits have been added (see the sketch after this list). The first allows reaching through the iterator pipeline to grab a pointer to the source memory. The latter is a marker that promises that the read pointer will advance as fast as or faster than the write pointer, and thus that in-place operation is possible in the first place.
* `vec::IntoIter` implements `TrustedRandomAccess` for `T: Copy` to allow in-place collection when there is a `Zip` adapter in the iterator. TRA had to be made an unstable public trait to support this.
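
A rough sketch of the shape of these two traits, based only on the description above; the real, unstable definitions in the standard library differ in details:

```rust
/// Sketch only -- not the actual standard library definitions.
#[allow(dead_code)]
mod sketch {
    /// Unsafe marker: implementors promise that the read (source) pointer
    /// advances at least as fast as the write pointer, so writing results
    /// back into the source buffer never overtakes unread elements.
    pub unsafe trait InPlaceIterable: Iterator {}

    /// Lets a collect implementation reach through a pipeline of adapters
    /// and borrow the iterator that owns the source allocation.
    pub unsafe trait SourceIter {
        type Source;
        /// Unsafe because the caller must not invalidate the source's
        /// invariants (e.g. by reading elements out from under it).
        unsafe fn as_inner(&mut self) -> &mut Self::Source;
    }
}
```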

## In-place collectible adapters

* `Map`
* `MapWhile`
* `Filter`
* `FilterMap`
* `Fuse`
* `Skip`
* `SkipWhile`
* `Take`
* `TakeWhile`
* `Enumerate`
* `Zip` (left hand side only, `Copy` types only)
* `Peekable`
* `Scan`
* `Inspect`

## Concerns

`vec.into_iter().filter(|_| false).collect()` will no longer return a vec with 0 capacity; instead it will return its original allocation. This avoids the cost of doing any allocation or deallocation but could lead to large allocations living longer than expected.
If that's not acceptable, some resizing policy at the end of the attempted in-place collect would be necessary, which in the worst case could result in one more memcopy than the non-specialized case.
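
A hedged illustration of that concern (the capacity values depend on the implementation and allocator and should not be relied upon):

```rust
fn main() {
    let big: Vec<u64> = (0..1_000_000).collect();
    let cap_before = big.capacity();

    // With in-place collection, the empty result keeps the original
    // allocation instead of coming back with zero capacity.
    let empty: Vec<u64> = big.into_iter().filter(|_| false).collect();

    println!("capacity before: {}, after: {}", cap_before, empty.capacity());
}
```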

## Possible followup work

* split liballoc/vec.rs to remove `ignore-tidy-filelength`
* try to get trivial chains such as `vec.into_iter().skip(1).collect::<Vec<_>>()` to compile to a `memmove` (currently compiles to a pile of SIMD, see #69187)
* improve the traits so they can be reused by other crates, e.g. itertools; I think they're currently only good enough for internal use
* allow iterators sourced from a `HashSet` to be in-place collected into a `Vec`
2020-09-03 21:20:21 +00:00
The8472
2f23a0fcca fix debug assertion
The InPlaceIterable debug assert checks that the write pointer
did not advance beyond the read pointer. But TrustedRandomAccess
never advances the read pointer, thus triggering the assert.
Skip the assert if the source pointer did not change during iteration.
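
A minimal sketch of the adjusted check, using hypothetical pointer parameters rather than the iterator's real state:

```rust
// Hypothetical illustration of the fixed invariant check described above.
fn in_place_invariant_holds(src_start: *const u8, src_now: *const u8, dst_now: *const u8) -> bool {
    // TrustedRandomAccess never advances the read pointer, so only enforce
    // "writes never overtake reads" when the read pointer actually moved.
    src_now == src_start || dst_now <= src_now
}

fn main() {
    let buf = [0u8; 4];
    let start = buf.as_ptr();
    // Read pointer unchanged (TrustedRandomAccess-style iteration): check is skipped.
    assert!(in_place_invariant_holds(start, start, unsafe { start.add(2) }));
}
```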
2020-09-03 22:15:47 +02:00
bors
62dad457bc Auto merge of #73819 - euclio:rustdoc-summaries, r=jyn514,GuillaumeGomez
rustdoc: do not use plain summary for trait impls

Fixes #38386.
Fixes #48332.
Fixes #49430.
Fixes #62741.
Fixes #73474.

Unfortunately this is not quite ready to go because the newly-working links trigger a bunch of linkcheck failures. The failures are tough to fix because the links are resolved relative to the implementor, which could be anywhere in the module hierarchy.

(In the current docs, these links end up rendering as uninterpreted markdown syntax, so I don't think these failures are any worse than the status quo. It might be acceptable to just add them to the linkchecker whitelist.)

Ideally this could be fixed with intra-doc links ~~but it isn't working for me: I am currently investigating if it's possible to solve it this way.~~ Opened #73829.

EDIT: This is now ready!
2020-09-03 19:07:38 +00:00
The8472
8e5fe5569b improve comments and naming 2020-09-03 20:59:37 +02:00
The8472
6464586542 add explanation to specialization marker 2020-09-03 20:59:36 +02:00
The8472
acdd441cc3 remove separate no-drop code path since it resulted in more LLVM IR 2020-09-03 20:59:36 +02:00
The8472
435219dd82 remove empty Vec extend optimization
The optimization meant that every extend code path had to emit LLVM
IR for both the from_iter and extend spec_extend implementations, which
likely impacts compile times while only improving a few edge cases
2020-09-03 20:59:35 +02:00
The8472
7492f76f77 please tidy 2020-09-03 20:59:34 +02:00
The8472
9aeea00222 get things to work under min_specialization by leaning more heavily on #[rustc_unsafe_specialization_marker] 2020-09-03 20:59:34 +02:00
The8472
7badb7a7f8 avoid applying in-place collect specialization in type-length test
The test was sized to match external iteration only; since
vec's in-place collect now uses internal iteration, we collect into
a different type now.
Note that any other try-fold based operation such as count() would also
have exceeded the type length limit for that iterator.
2020-09-03 20:59:33 +02:00
The8472
a62cd1b44c fix benchmark compile errors 2020-09-03 20:59:33 +02:00
The8472
bec9f9223c apply required min_specialization attributes 2020-09-03 20:59:32 +02:00
The8472
80638330f2 support in-place collect for MapWhile adapters 2020-09-03 20:59:32 +02:00
The8472
55d1296a55 pacify tidy 2020-09-03 20:59:31 +02:00
The8472
5530858a08 generalize in-place collect to types of same size and alignment 2020-09-03 20:59:31 +02:00
The8472
fa34b39cd6 increase comment verbosity 2020-09-03 20:59:30 +02:00
The8472
872ab780c0 work around compiler overhead around lambdas in generics by extracting them into free functions 2020-09-03 20:59:29 +02:00
The8472
771b8ecc83 extract IntoIter drop/forget used by specialization into separate methods 2020-09-03 20:59:29 +02:00
The8472
6ad133443a add benchmark to cover in-place extend 2020-09-03 20:59:28 +02:00
The8472
a7a8b52e91 remove redundant cast 2020-09-03 20:59:28 +02:00
The8472
470bf54f94 test drops during in-place iteration 2020-09-03 20:59:27 +02:00
The8472
fe350dd82d move unsafety into method, not relevant to caller 2020-09-03 20:59:27 +02:00
The8472
0d2d033415 replace unsafe ptr::write with deref-write, benchmarks show no difference 2020-09-03 20:59:26 +02:00
The8472
9596e5a2f2 pacify tidy 2020-09-03 20:59:26 +02:00
The8472
6ed05fd995 replace drop flag with ManuallyDrop 2020-09-03 20:59:25 +02:00
The8472
ab382b7661 mark as_inner as unsafe and update comments 2020-09-03 20:59:24 +02:00
The8472
2a51e579f5 avoid exposing that binary heap's IntoIter is backed by vec::IntoIter, use a private trait instead 2020-09-03 20:59:24 +02:00
The8472
c731648e77 fix: bench didn't black_box its results 2020-09-03 20:59:23 +02:00
The8472
0856771248 fix build issue due to stabilized feature 2020-09-03 20:59:23 +02:00
The8472
e85cfa4f22 impl TrustedRandomAccess for vec::IntoIter 2020-09-03 20:59:22 +02:00
The8472
e1151844fa bench larger allocations 2020-09-03 20:59:22 +02:00
The8472
fd16202e36 include in-place .zip() in test 2020-09-03 20:59:21 +02:00
The8472
fbb3371e5b remove unnecessary feature flag
# Conflicts:
#	library/alloc/src/lib.rs
2020-09-03 20:59:21 +02:00
The8472
70293c658f make tidy happy 2020-09-03 20:59:20 +02:00
The8472
21a17d105c support in-place iteration for most adapters
`Take` is not included since users probably call it with small constants
and it doesn't make sense to hold onto huge allocations in that case
2020-09-03 20:59:20 +02:00
The8472
085eb20a61 move free-standing method into trait impl 2020-09-03 20:59:19 +02:00
The8472
0f122e1119 add in-place iteration for Zip
This picks the left-hand side as the source since it might be more natural to
consume that as the IntoIter source.
2020-09-03 20:59:19 +02:00
The8472
3d5e9f1904 bench in-place zip 2020-09-03 20:59:18 +02:00
The8472
2b0b2ae9f6 additional specializations tests 2020-09-03 20:59:17 +02:00
The8472
00a32eb54f fix some in-place-collect edge-cases
- it's an allocation optimization, so don't attempt to do it on ZSTs
- drop the tail of partially exhausted iters
2020-09-03 20:59:17 +02:00
The8472
8c816b96dd remove redundant code 2020-09-03 20:59:16 +02:00
The8472
cc67c8eb91 improve comments 2020-09-03 20:59:16 +02:00
The8472
290fe895ba specialize creating a Vec from a slice iterator where T: Copy
this was already implemented for Extend but not for FromIterator
2020-09-03 20:59:15 +02:00
The8472
dac0edfaaa restore SpecFrom<T, TrustedLen<Item=T>> specialization by nesting
specializations
2020-09-03 20:59:15 +02:00
The8472
582fbb1d62 use From specializations on extend if extended Vec is empty
this enables in-place iteration and allocation reuse in additional cases
2020-09-03 20:59:14 +02:00