Commit Graph

2321 Commits

Author SHA1 Message Date
Ralf Jung
921a5ef6d7 try to get rid of mir::Const::normalize 2024-09-28 21:15:18 +02:00
Michael Goulet
3209943604 Add a debug assertion in codegen that unsize casts of the same principal trait def id are truly NOPs 2024-09-25 11:13:59 -04:00
Michael Goulet
8fc8e03150 Validate unsize coercion in MIR validation 2024-09-25 11:10:38 -04:00
bors
4c62024cd5 Auto merge of #130803 - cuviper:file-buffered, r=joshtriplett
Add `File` constructors that return files wrapped with a buffer

In addition to the light convenience, these are intended to raise visibility that buffering is something you should consider when opening a file, since unbuffered I/O is a common performance footgun for Rust newcomers.

ACP: https://github.com/rust-lang/libs-team/issues/446
Tracking Issue: #130804
2024-09-25 04:57:12 +00:00
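A minimal sketch of the convenience in use, assuming the new constructor is `File::open_buffered` behind `feature(file_buffered)` (the gate name dogfooded in the commit below; tracking issue #130804). The stable equivalent wraps the handle in a `BufReader` by hand:

```rust
#![feature(file_buffered)] // nightly-only; gate name taken from the commits here

use std::fs::File;
use std::io::{self, BufRead};

fn count_lines(path: &str) -> io::Result<usize> {
    // Open the file already wrapped in a BufReader, avoiding the
    // unbuffered-I/O footgun called out in the PR description.
    let reader = File::open_buffered(path)?;
    // Stable equivalent: let reader = io::BufReader::new(File::open(path)?);
    Ok(reader.lines().count())
}
```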
Josh Stone
0999b019f8 Dogfood feature(file_buffered) 2024-09-24 14:25:16 -07:00
Lukas Markeffsky
bd31e3ed70 be even more precise about "cast" vs "coercion" 2024-09-24 23:12:02 +02:00
Lukas Markeffsky
46ecb23198 unify dyn* coercions with other pointer coercions 2024-09-24 22:17:55 +02:00
许杰友 Jieyou Xu (Joe)
16a02664e6 Revert "Auto merge of #129047 - DianQK:early_otherwise_branch_scalar, r=cjgillot"
This reverts commit a772336fb3, reversing
changes made to 702987f75b.

It seems Apply EarlyOtherwiseBranch to scalar value #129047 may have
led to several nightly regressions:

- https://github.com/rust-lang/rust/issues/130769
- https://github.com/rust-lang/rust/issues/130774
- https://github.com/rust-lang/rust/issues/130771

And since this is a mir-opt ICE that seems quite easy to trigger, with
real-world crates being affected, let's revert for now and reland the
mir-opt later.
2024-09-24 08:44:26 +00:00
bors
a772336fb3 Auto merge of #129047 - DianQK:early_otherwise_branch_scalar, r=cjgillot
Apply `EarlyOtherwiseBranch` to scalar value

In the future, I'm thinking of hoisting the discriminant via GVN so that we only need to write very little code here.

r? `@cjgillot`
2024-09-23 07:22:29 +00:00
Michael Goulet
c682aa162b Reformat using the new identifier sorting from rustfmt 2024-09-22 19:11:29 -04:00
DianQK
e3a9eaf928
Apply EarlyOtherwiseBranch to scalar value 2024-09-18 21:42:07 +08:00
Matthias Krüger
ddcb9c132a
Rollup merge of #130201 - compiler-errors:foreign-synthetic-body, r=lcnr
Encode `coroutine_by_move_body_def_id` in crate metadata

We synthesize the MIR for a by-move body for the `FnOnce` implementation of async closures. It can be accessed with the `coroutine_by_move_body_def_id` query. We weren't encoding this query in the metadata though, nor were we properly recording that synthetic MIR in `mir_keys`, so the `optimized_mir` wasn't getting encoded either!

Stacked on top is a fix to make `DefKind::SyntheticCoroutineBody` return true in several places I missed. Specifically, we should consider the def-kind in `fn DefKind::is_fn_like()`, since that's what we were using to make sure `query mir_inliner_callees` is ensured before the MIR gets stolen for the body. This led to some CI failures that were caught by Miri, but I have added a test for them.
2024-09-17 17:28:32 +02:00
bors
46b0f8bafc Auto merge of #130455 - compiler-errors:inline-ordering, r=saethlin
Remove semi-nondeterminism of `DefPathHash` ordering from inliner

Déjà vu or something because I kinda thought I had put this PR up before. I recall a discussion somewhere where I think it was `@saethlin` mentioning that this check was no longer needed since we have "proper" cycle detection. Putting that up as a PR now.

This may slightly negatively affect inlining, since the cycle breaking here meant that we still inlined some cycles when the def path hashes were ordered in certain ways; that led to really bad nondeterminism which makes minimizing ICEs and putting up inliner bugfixes difficult.

r? `@cjgillot` or `@saethlin` or someone else idk
2024-09-17 09:35:10 +00:00
Michael Goulet
4beb1cf9e5 Fix a couple more DefKind discrepancies between DefKind::Closure and DefKind::SyntheticCoroutineBody 2024-09-16 22:09:42 -04:00
Matthias Krüger
1a7fdc7d5d
Rollup merge of #130380 - Zalathar:counters, r=jieyouxu
coverage: Clarify some parts of coverage counter creation

This is a series of semi-related changes that are trying to make the `counters` module easier to read, understand, and modify.

For example, the existing code happens to avoid ever using the count for a `TerminatorKind::Yield` node as the count for its sole out-edge (since doing so would be incorrect), but doesn't do so explicitly, so seemingly-innocent changes can result in baffling test failures.

This PR also takes the opportunity to simplify some debug-logging code that was making its surrounding code disproportionately hard to read.

There should be no changes to the resulting coverage instrumentation/mappings, as demonstrated by the absence of changes to the coverage test suite.
2024-09-17 03:58:46 +02:00
Michael Goulet
8f97231d34 Remove semi-nondeterminism of DefPathHash ordering from inliner 2024-09-16 21:41:15 -04:00
Michael Goulet
af1ca7794a Record synthetic MIR bodies in mir_keys 2024-09-16 20:06:16 -04:00
Michael Goulet
57a7e514a4 Don't ICE when generating Fn shim for async closure with borrowck error 2024-09-16 10:57:58 -04:00
Zalathar
2a3e17c6d5 coverage: Remove unnecessary bcb_successors
Given that we directly access the graph predecessors/successors in so many
other places, and sometimes must do so to satisfy the borrow checker, there is
little value in having this trivial helper method.
2024-09-15 12:51:57 +10:00
Zalathar
669327f575 coverage: Replace bcb_has_multiple_in_edges with sole_predecessor
This does a better job of expressing the special cases that occur when a node
in the coverage graph has exactly one in-edge.
2024-09-15 12:51:57 +10:00
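A hypothetical graph type (not the real coverage-graph API) illustrating the shift described above: instead of the yes/no `bcb_has_multiple_in_edges` check, the interesting special case is surfaced directly as `sole_predecessor`:

```rust
struct Graph {
    // predecessors[node] lists the in-edges of `node`.
    predecessors: Vec<Vec<usize>>,
}

impl Graph {
    // Old shape: a boolean check, after which callers still have to look
    // up the single predecessor themselves.
    fn has_multiple_in_edges(&self, node: usize) -> bool {
        self.predecessors[node].len() > 1
    }

    // New shape: the "exactly one in-edge" case is explicit and carries
    // the predecessor with it.
    fn sole_predecessor(&self, node: usize) -> Option<usize> {
        match self.predecessors[node].as_slice() {
            &[pred] => Some(pred),
            _ => None,
        }
    }
}
```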
Zalathar
e24310b07c coverage: Simplify choosing an out-edge to be given a counter expression
By building the list of candidate edges up-front, and making the
candidate-selection method fallible, we can remove a few pieces of awkward
code.
2024-09-15 12:51:57 +10:00
Zalathar
771659d264 coverage: Track whether a node's count is the sum of its out-edges 2024-09-15 12:51:57 +10:00
Zalathar
6251d16e4c coverage: Streamline creation of physical edge counters 2024-09-15 12:51:57 +10:00
Zalathar
7cb85862b9 coverage: Streamline creation of physical node counters
- Look up the node's predecessors only once
- Get rid of some overly verbose logging
- Explain why some nodes need physical counters
- Extract a helper method to create and set a physical node counter
2024-09-15 12:51:56 +10:00
Zalathar
13b814fc17 coverage: Tweak comments in graph 2024-09-15 12:25:12 +10:00
bors
5fe0e40e05 Auto merge of #130357 - fmease:rollup-j3ej4q0, r=fmease
Rollup of 6 pull requests

Successful merges:

 - #130017 (coverage: Extract `executor::block_on` from several async coverage tests)
 - #130268 (simd_shuffle: require index argument to be a vector)
 - #130290 (Stabilize entry_insert)
 - #130294 (Lifetime cleanups)
 - #130343 (docs: Enable required feature for 'closure_returning_async_block' lint)
 - #130349 (Fix `Parser::break_up_float`'s right span)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-09-14 16:18:12 +00:00
León Orell Valerian Liehr
03e8b6bbfa
Rollup merge of #130294 - nnethercote:more-lifetimes, r=lcnr
Lifetime cleanups

The last commit is very opinionated; let's see how we go.

r? `@oli-obk`
2024-09-14 18:12:13 +02:00
bors
e7386b361d Auto merge of #128299 - DianQK:clone-copy, r=cjgillot
Simplify the canonical clone method and the copy-like forms to copy

Fixes #128081.

The optimized clone method ends up as the following MIR:

```
_2 = copy ((*_1).0: i32);
_3 = copy ((*_1).1: u64);
_4 = copy ((*_1).2: [i8; 3]);
_0 = Foo { a: move _2, b: move _3, c: move _4 };
```

We can transform this to:

```
_0 = copy (*_1);
```

r? `@cjgillot`
2024-09-14 13:30:30 +00:00
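As an illustration, here is a hypothetical source shape that could lower to the MIR quoted above: every field of `Foo` is `Copy`, so the derived `Clone` body is a sequence of field copies that the optimization can collapse into a single whole-struct copy:

```rust
#[derive(Clone)]
struct Foo {
    a: i32,
    b: u64,
    c: [i8; 3],
}

fn duplicate(foo: &Foo) -> Foo {
    // Before the transform: copy each field and rebuild the struct.
    // After: a single `_0 = copy (*_1)`.
    foo.clone()
}

fn main() {
    let x = Foo { a: 1, b: 2, c: [3, 4, 5] };
    let _y = duplicate(&x);
}
```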
DianQK
c16c22cc9c
Simplify the canonical clone method to copy
The optimized clone method ends up as the following MIR:

```
_2 = copy ((*_1).0: i32);
_3 = copy ((*_1).1: u64);
_4 = copy ((*_1).2: [i8; 3]);
_0 = Foo { a: move _2, b: move _3, c: move _4 };
```

We can transform this to:

```
_0 = copy (*_1);
```
2024-09-14 13:30:35 +08:00
Stuart Cook
04e744e77d
Rollup merge of #130199 - compiler-errors:by-move, r=cjgillot
Don't call closure_by_move_body_def_id on FnOnce async closures in MIR validation

Refactors the check in #129847 to not unnecessarily call the `closure_by_move_body_def_id` query for async closures that don't *need* a by-move body.

Fixes #130167
2024-09-14 11:53:12 +10:00
Matthias Krüger
c5e0f9ae42
Rollup merge of #130297 - nnethercote:dataflow-cleanups, r=cjgillot
Dataflow cleanups

r? `@cjgillot`
2024-09-13 18:25:45 +02:00
Nicholas Nethercote
bb943f93ff Rename FlowState as Domain.
Because that's what it is; no point having a different name for it.
2024-09-13 16:27:24 +10:00
Nicholas Nethercote
8d32578fe1 Rename and reorder lots of lifetimes.
- Replace non-standard names like 's, 'p, 'rg, 'ck, 'parent, 'this, and
  'me with vanilla 'a. These are cases where the original name isn't
  really any more informative than 'a.
- Replace names like 'cx, 'mir, and 'body with vanilla 'a when the lifetime
  applies to multiple fields and so the original lifetime name isn't
  really accurate.
- Put 'tcx last in lifetime lists, and 'a before 'b.
2024-09-13 15:46:20 +10:00
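A hypothetical struct (not taken from the compiler) showing the conventions listed above: a lifetime that carries no extra information is just 'a, 'a precedes 'b, and 'tcx goes last:

```rust
struct TyCtxt; // stand-in for the real type context

struct Checker<'a, 'b, 'tcx> {
    body: &'a str,     // previously something like 'body or 'mir
    parent: &'b str,   // previously something like 'parent or 'this
    tcx: &'tcx TyCtxt, // 'tcx keeps its name but goes last in the list
}
```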
Matthias Krüger
7cae463bda
Rollup merge of #130263 - Zalathar:sums, r=compiler-errors
coverage: Simplify creation of sum counters

A small and self-contained improvement, extracted from some larger changes that I'm still working on.

Ultimately I want to avoid creating these sum counter-expressions in some cases (in favour of just adding physical counters directly to the nodes we care about), so a good incremental move towards that is splitting the “gather edge counters” step out from the “build a sum of those counters” step.

Creating an extra intermediate vector should have negligible cost (and coverage isn't exercised by the benchmark suite anyway). The removed logging is redundant with the `#[instrument(..)]` logging we already have on the underlying method calls.
2024-09-12 19:03:42 +02:00
Matthias Krüger
4428d6f363
Rollup merge of #130101 - RalfJung:const-cleanup, r=fee1-dead
some const cleanup: remove unnecessary attributes, add const-hack indications

I learned that we use `FIXME(const-hack)` on top of the "const-hack" label. That seems much better since it marks the right place in the code and moves around with the code. So I went through the PRs with that label and added appropriate FIXMEs in the code. IMO this means we can then remove the label -- Cc ``@rust-lang/wg-const-eval.``

I also noticed some const stability attributes that don't do anything useful, and removed them.

r? ``@fee1-dead``
2024-09-12 19:03:41 +02:00
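A small sketch of the convention described above (the function and its comment are hypothetical): the workaround is marked with `FIXME(const-hack)` at the exact place in the code, so the note moves around with the code instead of living only on a PR label:

```rust
// FIXME(const-hack): rewrite with `iter().sum()` once iterators can be used
// in const fn; the manual loop below only exists to stay const-compatible.
const fn sum(xs: &[u32]) -> u32 {
    let mut total = 0;
    let mut i = 0;
    while i < xs.len() {
        total += xs[i];
        i += 1;
    }
    total
}
```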
Zalathar
2344133ba6 coverage: Simplify creation of sum counters 2024-09-12 15:50:32 +10:00
Zalathar
a7a3595668 coverage: Separate creation of edge counters from building their sum 2024-09-12 15:50:32 +10:00
Michael Goulet
954419aab0 Simplify some nested if statements 2024-09-11 13:45:23 -04:00
Matthias Krüger
98222a524d
Rollup merge of #130184 - Zalathar:counters, r=compiler-errors
coverage: Clean up terminology in counter creation

Some of the terminology in this module is confusing, or has drifted out of sync with other parts of the coverage code.

This PR therefore renames some variables and methods, and adjusts comments and debug logging statements, to make things clearer and more consistent.

No functional changes, other than some small tweaks to debug logging.
2024-09-10 17:35:15 +02:00
Michael Goulet
5cf117ed05 Don't call closure_by_move_body_def_id on FnOnce async closures in MIR validation 2024-09-10 10:55:05 -04:00
Zalathar
10cd5e8386 coverage: Avoid referring to "operands" in counter creation 2024-09-10 21:05:29 +10:00
Zalathar
8be70c7b2c coverage: Avoid referring to out-edges as "branches"
This makes the graph terminology a bit more consistent, and avoids potential
confusion with branch coverage.
2024-09-10 21:05:28 +10:00
Zalathar
96d545a33b coverage: Avoid referring to "coverage spans" in counter creation
The counter-creation code needs to know which BCB nodes require counters, but
isn't interested in why, so treat that as an opaque detail.
2024-09-10 21:00:50 +10:00
Nicholas Nethercote
8949b443d5 Make check_live_drops into a MirLint.
It's a thin wrapper around `check_live_drops`, but it's enough to fix
the FIXME comment.
2024-09-10 09:11:17 +10:00
Nicholas Nethercote
7023402691 Remove references from some structs.
In all cases the struct can own the relevant thing instead of having a
reference to it. This makes the code simpler, and in some cases removes
a struct lifetime.
2024-09-10 09:11:17 +10:00
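A hypothetical before/after for the refactor described above: once the struct owns the value instead of borrowing it, the struct lifetime disappears and callers no longer need to keep the original alive:

```rust
// Before: the struct borrows, so it carries a lifetime parameter.
struct BorrowedPass<'a> {
    name: &'a str,
}

// After: the struct owns the data; the lifetime (and the borrow bookkeeping
// at every use site) goes away.
struct OwnedPass {
    name: String,
}
```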
Nicholas Nethercote
d1c55a305e Use IndexVec::from_raw to construct a const IndexVec. 2024-09-10 09:11:17 +10:00
Nicholas Nethercote
51e1c3958d Add a useful comment about PromoteTemps.
This was non-obvious to me.
2024-09-10 08:54:45 +10:00
Nicholas Nethercote
e26692d559 Add a useful comment. 2024-09-10 08:54:22 +10:00
Nicholas Nethercote
baa16d2471 Clarify a comment.
The "as" is equivalent to "because", but I originally read it more like
"when", which was confusing.
2024-09-10 08:54:22 +10:00
Nicholas Nethercote
48064d4498 Inline and remove some functions.
These are all functions with a single callsite, where having a separate
function does nothing to help with readability. These changes make the
code a little shorter and easier to read.
2024-09-10 08:54:17 +10:00