Don't ICE when computing coverage of synthetic async closure body
I'm not totally certain if this is *right*, but at least it doesn't ICE.
The issue is that we end up generating two MIR bodies for each async closure, since the `FnOnce` and `Fn`/`FnMut` implementations have different borrowing behavior of their captured variables. They should ideally both contribute to the coverage, since those MIR bodies are (*to the user*) the same code and should have no behavioral differences.
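For illustration, here's a minimal sketch of the shape of code involved (my own example, not the reproducer from #131190, and assuming a nightly toolchain since async closures were still feature-gated at the time): one closure in the source, two synthesized MIR bodies pointing at the same spans.

```rust
#![feature(async_closure)]

async fn demo() {
    let s = String::from("captured");
    // One async closure in the source, but two MIR bodies get synthesized for
    // it: a by-ref body backing `Fn`/`FnMut` and a by-move body backing
    // `FnOnce`. Both refer to the same source spans for coverage purposes.
    let print_it = async move || println!("{s}");
    print_it().await;
}

fn main() {
    // Not polled here; any executor would do, omitted to keep the sketch std-only.
    let _ = demo();
}
```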
This PR at least suppresses the ICEs, and then I guess worst case we can fix this the right way later.
r? Zalathar or re-roll
Fixes #131190
Make destructors on `extern "C"` frames be executed
This would make the example in #123231 print "Noisy Drop". I didn't mark this as fixing the issue because the behaviour is yet to be spec'ed.
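For context, here's a hedged reconstruction of the kind of program involved (not necessarily the exact example from #123231): a panic unwinds out of an `extern "C"` frame, which aborts the process, and the question is whether that frame's destructors run first. With this change they do, so "Noisy Drop" is printed before the abort.

```rust
struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("Noisy Drop");
    }
}

extern "C" fn unwinds_with_local() {
    let _noisy = Noisy;
    // Unwinding out of an `extern "C"` function aborts; this change makes
    // `_noisy`'s destructor run before that abort happens.
    panic!("unwind across the `extern \"C\"` boundary");
}

fn main() {
    unwinds_with_local();
}
```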
Tracking:
- https://github.com/rust-lang/rust/issues/74990
Don't check unsize goal in MIR validation when opaques remain
As with `mir_assign_valid_types`, let's just skip the check when there are opaques. Fixes #130921.
coverage: Multiple small tweaks to counter creation
I've been experimenting with some larger changes to how coverage counters are assigned to parts of the control-flow graph, and while none of that is ready yet, along the way I've repeatedly found myself wanting these smaller tweaks as a base.
There are no changes to compiler output.
Don't use Immediate::offset to transmute pointers to integers
This applies the relatively new `assert_matches_abi` check in the `offset` operation on immediates. That check makes sure that when offsets are used to alter the layout (which is possible because the field layout is picked arbitrarily by the caller), this is not done in a way that breaks the invariant of the `Immediate` type.
This leads to ICEs in a GVN mir-opt test, so the second commit fixes GVN.
Fixes https://github.com/rust-lang/rust/issues/131064.
This makes it possible for other parts of counter-assignment to check whether a
node is guaranteed to end up with some kind of counter.
Switching from `impl Fn` to a concrete `&BitSet` just avoids the hassle of
trying to store a closure in a struct field, and currently there's no
foreseeable need for this information to be anything other than a bitset.
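A hypothetical sketch of that kind of refactor, using standard-library types rather than the actual rustc `BitSet` (all names here are invented for illustration): storing the set directly is much easier than naming a closure type in a struct field, and callers can still ask the same question.

```rust
use std::collections::BTreeSet;

// Before: the predicate has to be carried around as a generic closure, which
// is awkward to store in a struct field.
#[allow(dead_code)]
struct CountersFromClosure<F: Fn(usize) -> bool> {
    bcb_needs_counter: F,
}

// After: keep a reference to the concrete set of nodes that need a counter.
struct CountersFromSet<'a> {
    bcb_needs_counter: &'a BTreeSet<usize>,
}

impl CountersFromSet<'_> {
    fn node_needs_counter(&self, node: usize) -> bool {
        self.bcb_needs_counter.contains(&node)
    }
}

fn main() {
    let needs_counter = BTreeSet::from([0, 2, 5]);
    let counters = CountersFromSet { bcb_needs_counter: &needs_counter };
    assert!(counters.node_needs_counter(2));
    assert!(!counters.node_needs_counter(3));
}
```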
Disable jump threading `UnOp::Not` for non-bool
Fixes #131195, where jump threading was optimizing `!a == b` into `a != b` for non-bool operands, where that rewrite is definitely not valid.
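A standalone illustration (my own example, not from the issue) of why that rewrite is only sound for `bool`: on integers, `!` is bitwise NOT, so `!a == b` and `a != b` can disagree.

```rust
fn main() {
    let a: u8 = 1;
    let b: u8 = 2;

    // For `bool` operands these two expressions are equivalent; for integers
    // they are not, because `!` is bitwise NOT.
    let lhs = !a == b; // parses as `(!a) == b`, i.e. `254 == 2`, which is false
    let rhs = a != b;  // `1 != 2`, which is true

    assert_ne!(lhs, rhs);
}
```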
This code can sometimes witness malformed coverage attributes in builds that
are going to fail, so use `span_delayed_bug` to avoid an inappropriate ICE in
that case.
Add `File` constructors that return files wrapped with a buffer
In addition to the light convenience, these are intended to raise visibility that buffering is something you should consider when opening a file, since unbuffered I/O is a common performance footgun for Rust newcomers.
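For context, a sketch of the footgun and today's workaround, using only stable APIs (the exact names of the new constructors are whatever the ACP and tracking issue below settle on):

```rust
use std::fs::File;
use std::io::{self, BufRead, BufReader, Read};

fn main() -> io::Result<()> {
    // Footgun: every read on a bare `File` goes straight to the OS, so many
    // small reads get slow.
    let mut raw = File::open("Cargo.toml")?;
    let mut byte = [0u8; 1];
    raw.read_exact(&mut byte)?;

    // Workaround today: remember to wrap the file in a buffer. The new
    // constructors bundle this step so it's harder to forget.
    let buffered = BufReader::new(File::open("Cargo.toml")?);
    for line in buffered.lines() {
        let _ = line?;
    }
    Ok(())
}
```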
ACP: https://github.com/rust-lang/libs-team/issues/446
Tracking Issue: #130804
Apply `EarlyOtherwiseBranch` to scalar values
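My own hedged sketch (not code from the PR or its tests) of the kind of nested branching this pass targets, here with plain scalar scrutinees rather than enum discriminants:

```rust
fn classify(x: u8, y: u8) -> u8 {
    // Two scrutinees where every mismatched pair funnels into the same
    // otherwise target; the PR extends the pass to consider shapes like this
    // when the switched-on values are scalars, not just enum discriminants.
    match (x, y) {
        (1, 1) => 10,
        (2, 2) => 20,
        _ => 0,
    }
}

fn main() {
    assert_eq!(classify(1, 1), 10);
    assert_eq!(classify(1, 2), 0);
}
```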
In the future, I'm thinking of hoisting the discriminant via GVN so that we only need to write very little code here.
r? `@cjgillot`
Encode `coroutine_by_move_body_def_id` in crate metadata
We synthesize the MIR for a by-move body for the `FnOnce` implementation of async closures. It can be accessed with the `coroutine_by_move_body_def_id` query. We weren't encoding this query in the metadata though, nor were we properly recording that synthetic MIR in `mir_keys`, so the `optimized_mir` wasn't getting encoded either!
Stacked on top is a fix to handle `DefKind::SyntheticCoroutineBody` in several places I missed. Specifically, `DefKind::is_fn_like()` should return true for it, since that's what we were using to make sure we ensure `query mir_inliner_callees` before the MIR gets stolen for the body. This led to some CI failures that Miri caught, and I added a test for them.
Remove semi-nondeterminism of `DefPathHash` ordering from inliner
Déjà vu or something, because I kinda thought I had put this PR up before. I recall a discussion somewhere where I think `@saethlin` mentioned that this check was no longer needed since we have "proper" cycle detection. Putting that up as a PR now.
This may slightly hurt inlining, since the cycle breaking here meant that we would still inline some cycles when the def path hashes happened to be ordered in certain ways. But that ordering dependence leads to really bad nondeterminism, which makes minimizing ICEs and putting up inliner bugfixes difficult.
r? `@cjgillot` or `@saethlin` or someone else idk
coverage: Clarify some parts of coverage counter creation
This is a series of semi-related changes that are trying to make the `counters` module easier to read, understand, and modify.
For example, the existing code happens to avoid ever using the count for a `TerminatorKind::Yield` node as the count for its sole out-edge (since doing so would be incorrect), but doesn't do so explicitly, so seemingly-innocent changes can result in baffling test failures.
This PR also takes the opportunity to simplify some debug-logging code that was making its surrounding code disproportionately hard to read.
There should be no changes to the resulting coverage instrumentation/mappings, as demonstrated by the absence of changes to the coverage test suite.
Given that we directly access the graph predecessors/successors in so many
other places, and sometimes must do so to satisfy the borrow checker, there is
little value in having this trivial helper method.