Use Metadata::modified instead of FileTime::from_last_modification_time in run_cargo
Metadata::modified works on all platforms supported by the filetime
crate. This change brings rustbuild a tiny bit closer to dropping
the filetime dependency.
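A minimal sketch of the substitution, assuming a simple helper (the path and function name here are illustrative, not the actual rustbuild code):

```rust
use std::fs;
use std::io;
use std::time::SystemTime;

/// Reads a file's mtime via std instead of the filetime crate;
/// `Metadata::modified` is available on every platform filetime supports.
fn mtime(path: &str) -> io::Result<SystemTime> {
    fs::metadata(path)?.modified()
}

fn main() -> io::Result<()> {
    println!("{:?}", mtime("Cargo.toml")?);
    Ok(())
}
```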
rustc_const_eval: adopt let else in more places
Continuation of #89933, #91018, #91481, #93046, #93590, #94011.
I have extended my clippy lint to also recognize tuple passing and match statements. The diff caused by fixing it is well above a thousand lines, so I split it up into multiple pull requests to make reviewing easier. This PR handles rustc_const_eval.
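For reference, a before/after sketch of the transformation being applied (the function and types are illustrative, not rustc_const_eval code):

```rust
fn first_char(s: &str) -> char {
    // before:
    // let c = match s.chars().next() {
    //     Some(c) => c,
    //     None => return '\0',
    // };
    // after: `let else` binds on success and diverges otherwise
    let Some(c) = s.chars().next() else {
        return '\0';
    };
    c
}

fn main() {
    assert_eq!(first_char("abc"), 'a');
}
```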
Some improvements to the async docs
The goal here is to make the docs overall a little more comprehensive and to add more links between related items.
One thing that's not working yet is the links to the keywords. Somehow I couldn't get them to work.
r? `@GuillaumeGomez` do you know how I could get the keyword links to work?
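For illustration, a small doc-comment sketch of the two kinds of links involved; the item link uses standard intra-doc syntax, while the keyword link falls back to a plain URL (an assumption, since the intra-doc form is exactly what I couldn't get to work):

```rust
/// Returns a value to be polled; see the [`Future`] trait.
///
/// Plain-URL fallback for a keyword page:
/// [`async`](https://doc.rust-lang.org/std/keyword.async.html)
///
/// [`Future`]: core::future::Future
pub fn example() {}
```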
The previous approach of checking for the reserve-r9 target feature
didn't actually work because LLVM only sets this feature very late when
initializing the per-function subtarget.
safely `transmute<&List<Ty<'tcx>>, &List<GenericArg<'tcx>>>`
This PR has 3 relevant steps, which are split into distinct commits.
The first commit now interns `List<Ty<'tcx>>` and `List<GenericArg<'tcx>>` together, potentially reusing memory while allowing free conversions between these two using `List<Ty<'tcx>>::as_substs()` and `SubstsRef<'tcx>::try_as_type_list()`.
Using this, we then use `&'tcx List<Ty<'tcx>>` instead of a `SubstsRef<'tcx>` for tuple fields, simplifying a bunch of code.
Finally, as tuple fields and other generic arguments now use different `TypeFoldable<'tcx>` impls, we optimize the impl for `List<Ty<'tcx>>`, improving perf by slightly less than 1% in tuple-heavy benchmarks.
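A self-contained toy sketch of the underlying idea (these are stand-in types, not rustc's interned lists): once the two element types are guaranteed to have identical layout, a list of one can be reinterpreted as a list of the other without copying, which is what interning the two list kinds together makes sound.

```rust
// Toy stand-ins for rustc's interned `Ty<'tcx>` and `GenericArg<'tcx>`;
// `repr(transparent)` gives both the same layout as `u32`.
#[repr(transparent)]
struct Ty(u32);

#[repr(transparent)]
struct GenericArg(u32);

// Analogue of `List<Ty<'tcx>>::as_substs()`: reinterpret without copying.
fn as_substs(types: &[Ty]) -> &[GenericArg] {
    // SAFETY: `Ty` and `GenericArg` are both `repr(transparent)` over
    // `u32`, so `&[Ty]` and `&[GenericArg]` have identical layout.
    unsafe {
        std::slice::from_raw_parts(types.as_ptr().cast::<GenericArg>(), types.len())
    }
}

fn main() {
    let tys = [Ty(1), Ty(2)];
    let substs = as_substs(&tys);
    assert_eq!(substs.len(), 2);
    assert_eq!(substs[1].0, 2);
}
```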
Revert #93800, fixing CI time regression
This reverts commit a240ccd81c (merge commit of #93800), reversing
changes made to 393fdc1048.
This PR was likely responsible for a relatively large regression in
dist-x86_64-msvc-alt builder times, from approximately 1.7 to 2.8 hours,
bringing that builder into the pool of the slowest builders we currently have.
This seems to be limited to the alt builder due to needing parallel-compiler
enabled, likely leading to slow LLVM compilation for some reason. See some
investigation in [this Zulip stream](https://rust-lang.zulipchat.com/#narrow/stream/242791-t-infra/topic/msvc.28.3F.29.20builders.20running.20much.20slower).
cc `@lcnr` `@oli-obk` `@b-naber` (per original PRs review/author)
We can re-apply this PR once the regression is fixed, but it is sufficiently large that I don't think keeping this on master is viable in the meantime unless there's a very strong case to be made for it. Alternatively, we can disable that builder (it's not critical since it's an alt build), but that obviously carries its own costs.
The previous implementation was written before types were properly
normalized for code generation and had to assume a more complicated
relationship between types and their debuginfo -- generating separate
identifiers for debuginfo nodes that were based on normalized types.
Since types are now already normalized, we can use them as identifiers
for debuginfo nodes.
Normalize obligation and expected trait_refs in confirm_poly_trait_refs
Consolidate normalization of the obligation and expected trait refs in `confirm_poly_trait_refs`. Also, _always_ normalize these trait refs -- we were already normalizing the obligation trait ref when confirming closure and generator candidates, but this does it for fn pointer confirmation as well.
This presumably does more work in the case that the obligation's trait ref is already normalized, but as we can see from the perf runs in #94070, it actually (paradoxically, perhaps) improves performance when paired with logic that normalizes projections in the fulfillment loop.
Like I previously did for `reverse`, this leaves it to LLVM to pick how to vectorize it, since it can know better the chunk size to use, compared to the "32 bytes always" approach we currently have.
It does still need logic to type-erase where appropriate, though, as while LLVM is now smart enough to vectorize over slices of things like `[u8; 4]`, it fails to do so over slices of `[u8; 3]`.
As a bonus, this also means one no longer gets the spurious `memcpy`(s?) at the end when swapping a slice of `__m256`s: <https://rust.godbolt.org/z/joofr4v8Y>
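A minimal sketch of the element-wise formulation that LLVM can auto-vectorize with a chunk width of its own choosing (illustrative, not the actual `swap_with_slice` implementation):

```rust
// Per-element swap; LLVM vectorizes the loop when `T`'s layout allows it.
fn swap_slices<T>(a: &mut [T], b: &mut [T]) {
    assert_eq!(a.len(), b.len());
    for (x, y) in a.iter_mut().zip(b.iter_mut()) {
        core::mem::swap(x, y);
    }
}

fn main() {
    let mut a = [1u8, 2, 3, 4];
    let mut b = [5u8, 6, 7, 8];
    swap_slices(&mut a, &mut b);
    assert_eq!(a, [5, 6, 7, 8]);
}
```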
Don't lint `match` expressions with `cfg`ed arms
Somehow there are no open issues related to this for any of the affected lints. At least none that I could find from a quick search.
changelog: Don't lint `match` expressions with `cfg`ed arms in many cases
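An illustrative shape of code the lints now skip (the feature name is made up): with the `cfg`ed arm compiled out, collapsing or rewriting this `match` would be wrong for configurations where the arm is present.

```rust
fn describe(n: u32) -> &'static str {
    match n {
        0 => "zero",
        // This arm only exists under the (hypothetical) feature flag.
        #[cfg(feature = "verbose")]
        1 => "one",
        _ => "many",
    }
}

fn main() {
    assert_eq!(describe(0), "zero");
}
```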
The #[allow(...)] directive was tested on the body and the pattern,
but its non-presence wasn't tested. Furthermore, it wasn't tested
on the expression. We add expression tests as well as ones checking
the non-presence of the directive.
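A sketch of the new test shapes (the lint is a placeholder; `unused_variables` stands in for whichever lint the tests exercise):

```rust
fn main() {
    let value = Some(1);

    // directive on the expression: the lint is suppressed here
    #[allow(unused_variables)]
    let _with_allow = match value {
        Some(x) => 0,
        None => 1,
    };

    // non-presence: no directive, so the lint should still fire here
    let _without_allow = match value {
        Some(x) => 0,
        None => 1,
    };
}
```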