Commit Graph

1145 Commits

Author SHA1 Message Date
bors
5c6a7e71cd Auto merge of #114993 - RalfJung:panic-nounwind, r=fee1-dead
interpret/miri: call the panic_nounwind machinery the same way codegen does
2023-08-20 22:01:18 +00:00
Ralf Jung
ac3bca24b7 interpret: have assert_* intrinsics call the panic machinery instead of a direct abort 2023-08-20 15:52:40 +02:00
Ralf Jung
818ec8e23a give some unwind-related terminators a more clear name 2023-08-20 15:52:38 +02:00
bors
0510a1526d Auto merge of #114791 - Zalathar:bcb-counter, r=cjgillot
coverage: Give the instrumentor its own counter type, separate from MIR

Within the MIR representation of coverage data, `CoverageKind` is an important part of `StatementKind::Coverage`, but the `InstrumentCoverage` pass also uses it heavily as an internal data structure. This means that any change to `CoverageKind` also needs to update all of the internal parts of `InstrumentCoverage` that manipulate it directly, making the MIR representation difficult to modify.

---

This change fixes that by giving the instrumentor its own `BcbCounter` type for internal use, which is then converted to a `CoverageKind` when injecting coverage information into MIR.

The main change is mostly mechanical, because the initial `BcbCounter` is drop-in compatible with `CoverageKind`, minus the unnecessary `CoverageKind::Unreachable` variant.

I've then removed the `function_source_hash` field from `BcbCounter::Counter`, as a small example of how the two types can now usefully differ from each other. Every counter in a MIR-level function should have the same source hash, so we can supply the hash during the conversion to `CoverageKind::Counter` instead.

---

*Background:* BCB stands for “basic coverage block”, which is a node in the simplified control-flow graph used by coverage instrumentation. The instrumentor pass uses the function's actual MIR control-flow graph to build a simplified BCB graph, then assigns coverage counters and counter expressions to various nodes/edges in that simplified graph, and then finally injects corresponding coverage information into the underlying MIR.
2023-08-20 13:37:47 +00:00
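
A minimal sketch of the two-type split described in the PR above, using hypothetical, simplified stand-ins for `BcbCounter` and `CoverageKind` (the real definitions live in the coverage instrumentor and `rustc_middle`); the function source hash is attached only during the final conversion, as in the follow-up commit further down:

```rust
// Hypothetical, simplified stand-ins for the real types; the actual
// definitions live in the coverage instrumentor and in rustc_middle.
#[allow(dead_code)]
#[derive(Debug)]
enum BcbCounter {
    // Instrumentor-internal: no function_source_hash, no Unreachable variant.
    Counter { id: u32 },
    Expression { id: u32, lhs: u32, rhs: u32 },
}

#[allow(dead_code)]
#[derive(Debug)]
enum CoverageKind {
    // MIR-level form injected via StatementKind::Coverage.
    Counter { function_source_hash: u64, id: u32 },
    Expression { id: u32, lhs: u32, rhs: u32 },
    Unreachable,
}

// The source hash is the same for every counter in a function, so it is
// supplied once during the conversion instead of stored in each BcbCounter.
fn into_coverage_kind(counter: BcbCounter, function_source_hash: u64) -> CoverageKind {
    match counter {
        BcbCounter::Counter { id } => CoverageKind::Counter { function_source_hash, id },
        BcbCounter::Expression { id, lhs, rhs } => CoverageKind::Expression { id, lhs, rhs },
    }
}

fn main() {
    let bcb = BcbCounter::Counter { id: 0 };
    println!("{:?}", into_coverage_kind(bcb, 0xDEAD_BEEF));
}
```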
bors
ff55fa3026 Auto merge of #113124 - nbdd0121:eh_frame, r=cjgillot
Add MIR validation for unwind out from nounwind functions + fixes to make validation pass

`@Nilstrieb` This is the MIR validation you asked for in https://github.com/rust-lang/rust/pull/112403#discussion_r1222739722.

Two passes need to be fixed to get the validation to pass:
* `RemoveNoopLandingPads` currently unconditionally introduces a resume block (even if there is none to begin with!); it is changed to not do that
* The generator state transform introduces an `assert` which may unwind, and its drop elaboration also introduces many new `UnwindAction`s, so in this case `AbortUnwindingCalls` is run after the transformation.

I believe this PR should also fix Rust-for-Linux/linux#1016, cc `@ojeda`

r? `@Nilstrieb`
2023-08-20 09:58:52 +00:00
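
For illustration only (this is not the validator itself), the user-visible property the new check enforces is that unwinding cannot escape a non-unwinding ABI; a sketch, assuming 2023-era Rust semantics where a panic reaching the boundary of a plain `extern "C"` function aborts the process:

```rust
// Panics may not unwind out of a non-unwinding ABI such as plain `extern "C"`;
// at runtime the panic is turned into an abort at the boundary. The new MIR
// validation checks that no unwinding edge escapes such functions.
extern "C" fn no_unwind_allowed(x: i32) -> i32 {
    if x < 0 {
        panic!("negative input"); // reaching this aborts the whole process
    }
    x * 2
}

fn main() {
    println!("{}", no_unwind_allowed(21));
}
```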
Zalathar
72f4c78dc6 coverage: Don't store function_source_hash in BcbCounter::Counter
This shows one small benefit of separating `BcbCounter` from `CoverageKind`.
The function source hash will be the same for all counters within a function,
so instead of passing it through `CoverageCounters` and storing it in every
counter, we can just supply it during the final conversion to `CoverageKind`.
2023-08-20 12:02:40 +10:00
Zalathar
fbab055e77 coverage: Give the instrumentor its own counter type, separate from MIR
This splits off `BcbCounter` from MIR's `CoverageKind`, allowing the two types
to evolve in different directions as necessary.
2023-08-20 12:02:40 +10:00
Zalathar
629437eec7 coverage: Move a debug print into make_code_region 2023-08-20 12:02:40 +10:00
Zalathar
cad50f40e5 coverage: Remove a useless let () = 2023-08-20 12:02:40 +10:00
Matthias Krüger
4cd3b0b71c use static arrays instead of vectors 2023-08-19 18:49:58 +02:00
Gary Guo
0a7202d476 Change generator_drop's instance to that of generator for dump_mir
Otherwise the file name generated for generator_drop will become

core.ptr-drop_in_place.[generator@<FILEPATH>_<NUMBERS>].generator_drop.0.mir

instead of main-{closure#0}.generator_drop.0.mir, which breaks a mir-opt
test.
2023-08-18 16:40:18 +01:00
Gary Guo
907e431f93 Perform MIR validation on drop glue of generator 2023-08-18 13:51:42 +01:00
Gary Guo
cec8e09edf Run AbortUnwindingCalls after generator transform 2023-08-18 13:51:42 +01:00
Gary Guo
cfbf1bf7cd Do not create new resume block if there isn't one already 2023-08-18 13:51:42 +01:00
bors
0f7f6b7061 Auto merge of #114948 - compiler-errors:normalize-before-freeze, r=lcnr
Normalize before checking if local is freeze in `deduced_param_attrs`

Not normalizing the local type eagerly results in possibly exponential amounts of normalization happening downstream in `is_freeze_raw`.

Fixes #113372
2023-08-18 08:15:57 +00:00
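
As a rough illustration of why normalization matters here (hypothetical user-level types, not compiler code): whether a parameter type has interior mutability, and can therefore be treated as read-only, may only become visible once an associated type is normalized:

```rust
use std::cell::Cell;

trait Storage {
    type Slot;
}

struct Direct;
impl Storage for Direct {
    type Slot = u32; // freeze: no interior mutability
}

struct Shared;
impl Storage for Shared {
    type Slot = Cell<u32>; // not freeze: interior mutability behind &
}

// Whether `&S::Slot` can safely be treated as read-only depends on what the
// associated type normalizes to; normalizing eagerly means that question is
// answered once instead of repeatedly further down the pipeline.
fn read<S: Storage>(slot: &S::Slot) -> &S::Slot {
    slot
}

fn main() {
    let plain = 1u32;
    let cell = Cell::new(2u32);
    read::<Direct>(&plain);
    read::<Shared>(&cell);
}
```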
Michael Goulet
20c648c582 Normalize before checking if local is freeze in deduced_param_attrs 2023-08-17 14:33:24 -07:00
Camille GILLOT
933b618360 Revert "Implement references VarDebugInfo."
This reverts commit 2ec0071913.
2023-08-17 17:02:04 +00:00
Camille GILLOT
5b2524eb03 Do not pre-compute reachable blocks. 2023-08-16 19:40:46 +00:00
Camille GILLOT
94c5ea350f Update doc comment. 2023-08-16 18:15:49 +00:00
Camille GILLOT
b8fed2f21c Make dataflow const-prop handle_switch_int monotonic. 2023-08-16 18:12:18 +00:00
Camille GILLOT
388f6a6413 Make TerminatorEdge plural. 2023-08-16 18:12:18 +00:00
Camille GILLOT
6cf15d4cb5 Rename MaybeUnreachable. 2023-08-16 18:12:18 +00:00
Camille GILLOT
f19cd3f2e1 Use TerminatorEdge for dataflow-const-prop. 2023-08-16 18:12:18 +00:00
Camille GILLOT
3acfa092db Only run MaybeInitializedPlaces once for drop elaboration. 2023-08-16 18:12:18 +00:00
Zalathar
5ca30c4646 Store BCB counters externally, not directly in the BCB graph
Storing coverage counter information in `CoverageCounters` has a few advantages
over storing it directly inside BCB graph nodes:

- The graph doesn't need to be mutable when making the counters, making it
easier to see that the graph itself is not modified during this step.

- All of the counter data is clearly visible in one place.

- It becomes possible to use a representation that doesn't correspond 1:1 to
graph nodes, e.g. storing all the edge counters in a single hashmap instead of
several.
2023-08-13 12:18:06 +10:00
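
A minimal sketch of the side-table approach described above, with hypothetical, simplified types (the real code uses `CoverageCounters` and ID-indexed maps rather than a plain `HashMap`):

```rust
use std::collections::HashMap;

type NodeIdx = usize;

// Hypothetical, simplified graph: node -> successor nodes.
struct Graph {
    successors: Vec<Vec<NodeIdx>>,
}

#[derive(Debug)]
enum Counter {
    Physical { id: u32 },
    Expression { id: u32 },
}

// Counters live in a side table keyed by node/edge, not inside graph nodes,
// so building them only needs `&Graph` and all counter data sits in one place.
#[derive(Default)]
struct Counters {
    node_counters: HashMap<NodeIdx, Counter>,
    edge_counters: HashMap<(NodeIdx, NodeIdx), Counter>,
}

fn make_counters(graph: &Graph) -> Counters {
    let mut counters = Counters::default();
    let mut next_id = 0;
    for (node, succs) in graph.successors.iter().enumerate() {
        counters.node_counters.insert(node, Counter::Physical { id: next_id });
        next_id += 1;
        for &succ in succs {
            counters
                .edge_counters
                .insert((node, succ), Counter::Expression { id: next_id });
            next_id += 1;
        }
    }
    counters
}

fn main() {
    let graph = Graph { successors: vec![vec![1, 2], vec![2], vec![]] };
    let counters = make_counters(&graph);
    println!(
        "{} node counters, {} edge counters",
        counters.node_counters.len(),
        counters.edge_counters.len()
    );
}
```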
Zalathar
5302c9d451 Accumulate intermediate expressions into CoverageCounters
This avoids the need to pass around a separate vector to accumulate into, and
avoids the need to create a fake empty vector when failure occurs.
2023-08-13 12:18:06 +10:00
Zalathar
c74db79c3b Rename helper struct BcbCounters to MakeBcbCounters
This avoids confusion with data structures that actually hold BCB counter
information.
2023-08-13 12:18:06 +10:00
Matthias Krüger
7d78885a8e
Rollup merge of #111891 - rustbox:feat/riscv-isr-cconv, r=jackh726
feat: `riscv-interrupt-{m,s}` calling conventions

Similar to the prior support added for the msp430, avr, and x86 targets, this change implements the rough equivalent of clang's [`__attribute__((interrupt))`][clang-attr] for riscv targets, enabling e.g.

```rust
static mut CNT: usize = 0;

pub extern "riscv-interrupt-m" fn isr_m() {
    unsafe {
        CNT += 1;
    }
}
```

to produce efficient assembly like:

```asm
pub extern "riscv-interrupt-m" fn isr_m() {
420003a0:       1141                    addi    sp,sp,-16
    unsafe {
        CNT += 1;
420003a2:       c62a                    sw      a0,12(sp)
420003a4:       c42e                    sw      a1,8(sp)
420003a6:       3fc80537                lui     a0,0x3fc80
420003aa:       63c52583                lw      a1,1596(a0) # 3fc8063c <_ZN12esp_riscv_rt3CNT17hcec3e3a214887d53E.0>
420003ae:       0585                    addi    a1,a1,1
420003b0:       62b52e23                sw      a1,1596(a0)
    }
}
420003b4:       4532                    lw      a0,12(sp)
420003b6:       45a2                    lw      a1,8(sp)
420003b8:       0141                    addi    sp,sp,16
420003ba:       30200073                mret
```

(disassembly via `riscv64-unknown-elf-objdump -C -S --disassemble ./esp32c3-hal/target/riscv32imc-unknown-none-elf/release/examples/gpio_interrupt`)

This outcome is superior to hand-coded interrupt routines which, lacking visibility into any non-assembly body of the interrupt handler, have to be very conservative and save the [entire CPU state to the stack frame][full-frame-save]. By instead asking LLVM to only save the registers that it uses, we defer the decision to the tool with the best context: it can more accurately account for the cost of spills if it knows that every additional register used is already at the cost of an implicit spill.

At the LLVM level, this is apparently [implemented by] marking every register as "[callee-save]," matching the semantics of an interrupt handler nicely (it has to leave the CPU state just as it found it after its `{m|s}ret`).

This approach is not suitable for every interrupt handler, as it makes no attempt to e.g. save the state in a user-accessible stack frame. For a full discussion of those challenges and tradeoffs, please refer to [the interrupt calling conventions RFC][rfc].

Inside rustc, this implementation differs from prior art because LLVM does not expose the “all-saved” function flavor as a calling convention directly, instead preferring to use an attribute that allows differentiating between “machine-mode” and “supervisor-mode” interrupts.

Finally, some effort has been made to guide those who may not yet be aware of the differences between machine-mode and supervisor-mode interrupts as to why no `riscv-interrupt` calling convention is exposed through rustc, and similarly for why `riscv-interrupt-u` makes no appearance (as it would complicate future LLVM upgrades).

[clang-attr]: https://clang.llvm.org/docs/AttributeReference.html#interrupt-risc-v
[full-frame-save]: 9281af2ecf/src/lib.rs (L440-L469)
[implemented by]: b7fb2a3fec/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp (L61-L67)
[callee-save]: 973f1fe7a8/llvm/lib/Target/RISCV/RISCVCallingConv.td (L30-L37)
[rfc]: https://github.com/rust-lang/rfcs/pull/3246
2023-08-09 22:59:58 +02:00
Seth Pellegrino
897c7bb23b feat: riscv-interrupt-{m,s} calling conventions
Similar to the prior support added for the msp430, avr, and x86 targets,
this change implements the rough equivalent of clang's
[`__attribute__((interrupt))`][clang-attr] for riscv targets, enabling
e.g.

```rust
static mut CNT: usize = 0;

pub extern "riscv-interrupt-m" fn isr_m() {
    unsafe {
        CNT += 1;
    }
}
```

to produce efficient assembly like:

```asm
pub extern "riscv-interrupt-m" fn isr_m() {
420003a0:       1141                    addi    sp,sp,-16
    unsafe {
        CNT += 1;
420003a2:       c62a                    sw      a0,12(sp)
420003a4:       c42e                    sw      a1,8(sp)
420003a6:       3fc80537                lui     a0,0x3fc80
420003aa:       63c52583                lw      a1,1596(a0) # 3fc8063c <_ZN12esp_riscv_rt3CNT17hcec3e3a214887d53E.0>
420003ae:       0585                    addi    a1,a1,1
420003b0:       62b52e23                sw      a1,1596(a0)
    }
}
420003b4:       4532                    lw      a0,12(sp)
420003b6:       45a2                    lw      a1,8(sp)
420003b8:       0141                    addi    sp,sp,16
420003ba:       30200073                mret
```

(disassembly via `riscv64-unknown-elf-objdump -C -S --disassemble ./esp32c3-hal/target/riscv32imc-unknown-none-elf/release/examples/gpio_interrupt`)

This outcome is superior to hand-coded interrupt routines which, lacking
visibility into any non-assembly body of the interrupt handler, have to
be very conservative and save the [entire CPU state to the stack
frame][full-frame-save]. By instead asking LLVM to only save the
registers that it uses, we defer the decision to the tool with the best
context: it can more accurately account for the cost of spills if it
knows that every additional register used is already at the cost of an
implicit spill.

At the LLVM level, this is apparently [implemented by] marking every
register as "[callee-save]," matching the semantics of an interrupt
handler nicely (it has to leave the CPU state just as it found it after
its `{m|s}ret`).

This approach is not suitable for every interrupt handler, as it makes
no attempt to e.g. save the state in a user-accessible stack frame. For
a full discussion of those challenges and tradeoffs, please refer to
[the interrupt calling conventions RFC][rfc].

Inside rustc, this implementation differs from prior art because LLVM
does not expose the "all-saved" function flavor as a calling convention
directly, instead preferring to use an attribute that allows
differentiating between “machine-mode” and “supervisor-mode” interrupts.

Finally, some effort has been made to guide those who may not yet be
aware of the differences between machine-mode and supervisor-mode
interrupts as to why no `riscv-interrupt` calling convention is exposed
through rustc, and similarly for why `riscv-interrupt-u` makes no
appearance (as it would complicate future LLVM upgrades).

[clang-attr]: https://clang.llvm.org/docs/AttributeReference.html#interrupt-risc-v
[full-frame-save]: 9281af2ecf/src/lib.rs (L440-L469)
[implemented by]: b7fb2a3fec/llvm/lib/Target/RISCV/RISCVRegisterInfo.cpp (L61-L67)
[callee-save]: 973f1fe7a8/llvm/lib/Target/RISCV/RISCVCallingConv.td (L30-L37)
[rfc]: https://github.com/rust-lang/rfcs/pull/3246
2023-08-08 18:09:56 -07:00
cedihegi
0166092f87 Added comment on reason for method being public 2023-08-08 17:36:30 +02:00
cedihegi
15d408c6b0 Allow reimplementation of drops_elaborated query
Make the module `inner` and the function `run_analysis_to_runtime_passes`
in `rustc_mir_transform` public, so that the query can be re-implemented
from the Rust compiler interface.
2023-08-08 16:30:43 +02:00
bors
84ec2633de Auto merge of #113902 - Enselic:lint-recursive-drop, r=oli-obk
Make `unconditional_recursion` warning detect recursive drops

Closes #55388

Also closes #50049 unless we want to keep it for the second example which this PR does not solve, but I think it is better to track that work in #57965.

r? `@oli-obk` since you are the mentor for #55388

Unresolved questions:
- [x] There are two false positives that must be fixed before merging (see diff). I suspect the best way to solve them is to perform the analysis after drop elaboration instead of before, as is done now, but I have not explored that further yet. Could that be an option? **Answer:** Yes, that solved the problem.

`@rustbot` label +T-compiler +C-enhancement +A-lint
2023-08-07 13:39:28 +00:00
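
For reference, a self-contained example of the pattern the extended lint is meant to catch (hypothetical type; it compiles, but dropping a value recurses at runtime):

```rust
struct S;

impl Drop for S {
    fn drop(&mut self) {
        // The temporary `S` created here is dropped at the end of this
        // function, which re-enters `drop` with no base case.
        let _another = S;
    }
}

fn main() {
    // Dropping this value would recurse until the stack overflows; the
    // extended `unconditional_recursion` warning aims to flag it at compile time.
    let _s = S;
}
```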
bors
f3623871cf Auto merge of #114502 - cjgillot:steal-ctfe, r=oli-obk
Steal MIR for CTFE when possible.

Some bodies, like constants, have CTFE MIR but no optimized MIR.
In that case, have `mir_for_ctfe` steal the MIR instead of cloning it.
2023-08-06 22:02:12 +00:00
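
A rough sketch of the steal-instead-of-clone idea, using a hypothetical, simplified stand-in for the compiler's `Steal<T>` wrapper:

```rust
use std::cell::RefCell;

// Hypothetical, simplified stand-in for the compiler's `Steal<T>`: the value
// can be taken by exactly one consumer instead of being cloned.
struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    fn steal(&self) -> T {
        self.value
            .borrow_mut()
            .take()
            .expect("MIR was already stolen")
    }
}

fn main() {
    // A constant has CTFE MIR but no optimized MIR, so nothing else will
    // read this body later and it can be handed over by value.
    let ctfe_mir = Steal::new(vec!["bb0", "bb1"]);
    let owned = ctfe_mir.steal();
    println!("{owned:?}");
}
```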
Matthias Krüger
13de583583
Rollup merge of #114505 - ouz-a:cleanup_mir, r=RalfJung
Add documentation to has_deref

Documentation of `has_deref` needed some polish to be clearer about where it should be used and what its purpose is.

cc https://github.com/rust-lang/rust/issues/114401

r? `@RalfJung`
2023-08-06 17:26:29 +02:00
ouz-a
6df546281b cleanup misinformation regarding has_deref 2023-08-06 17:29:09 +03:00
Camille GILLOT
02e10a054e Steal MIR for CTFE when possible. 2023-08-05 21:16:55 +00:00
bors
1cabb8ed23 Auto merge of #114459 - cjgillot:simplify-ctfe, r=oli-obk
Do not run ConstProp on mir_for_ctfe.

This pass does not seem to be useful any more. The const-prop lints are now run by `tcx.mir_drops_elaborated_and_const_checked`, and the const-prop opt should never emit any diagnostic.
2023-08-05 09:08:34 +00:00
Camille GILLOT
e2230985b3 Do not run ConstProp on mir_for_ctfe. 2023-08-05 06:21:33 +00:00
Matthias Krüger
f36a9b5e18
Rollup merge of #113534 - oli-obk:simd_shuffle_dehackify, r=workingjubilee
Forbid old-style `simd_shuffleN` intrinsics

Don't merge before https://github.com/rust-lang/packed_simd/pull/350 has made its way to crates.io

We used to support specifying the lane length of simd_shuffle ops by attaching the lane length to the name of the intrinsic (like `simd_shuffle16`). After this PR, you cannot do that anymore, and need to instead either rely on inference of the `idx` argument type or specify it as `simd_shuffle::<_, [u32; 16], _>`.

r? `@workingjubilee`
2023-08-04 07:25:45 +02:00
Michael Goulet
3c9549b349 Explicitly don't inline user-written rust-call fns 2023-08-03 18:35:56 +00:00
Michael Goulet
0391af0e1f Only unpack tupled args in inliner if we expect args to be unpacked 2023-08-03 18:35:56 +00:00
Oli Scherer
4457ef2c6d Forbid old-style simd_shuffleN intrinsics 2023-08-03 09:29:00 +00:00
Michael Goulet
99969d282b Use upvar_tys in more places, make it a list 2023-08-01 23:19:31 +00:00
Zalathar
3920e07f0b Make coverage counter IDs count up from 0, not 1
Operand types are now tracked explicitly, so there is no need to reserve ID 0
for the special always-zero counter.

As part of the renumbering, this change fixes an off-by-one error in the way
counters were counted by the `coverageinfo` query. As a result, functions
should now have exactly the number of counters they actually need, instead of
always having an extra counter that is never used.
2023-08-01 11:29:55 +10:00
Zalathar
f103db894f Make coverage expression IDs count up from 0, not down from u32::MAX
Operand types are now tracked explicitly, so there is no need for expression
IDs to avoid counter IDs by descending from `u32::MAX`. Instead they can just
count up from 0, and can be used directly as indices when necessary.
2023-08-01 11:29:55 +10:00
Zalathar
1a014d42f4 Replace ExpressionOperandId with enum Operand
Because the three kinds of operand are now distinguished explicitly, we no
longer need fiddly code to disambiguate counter IDs and expression IDs based on
the total number of counters/expressions in a function.

This does increase the size of operands from 4 bytes to 8 bytes, but that
shouldn't be a big deal since they are mostly stored inside boxed structures,
and the current coverage code is not particularly size-optimized anyway.
2023-08-01 11:29:55 +10:00
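
A minimal sketch of the operand representation described in the three commits above, with hypothetical simplified IDs (the real code uses dedicated counter/expression index types):

```rust
// Hypothetical, simplified version of the explicit operand enum: the kind is
// carried in the type instead of being inferred from which numeric range an
// ID falls into, and both ID spaces count up from 0.
#[derive(Copy, Clone, Debug)]
enum Operand {
    Zero,
    Counter(u32),
    Expression(u32),
}

fn main() {
    // The commit notes the size grows from 4 bytes (a bare ID) to 8 bytes
    // (tag plus 4-byte payload, padded to 4-byte alignment).
    println!("size of Operand: {} bytes", std::mem::size_of::<Operand>());

    let operands = [Operand::Zero, Operand::Counter(0), Operand::Expression(0)];
    for op in operands {
        println!("{op:?}");
    }
}
```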
Matthias Krüger
23815467a2 inline format!() args up to and including rustc_middle 2023-07-30 13:18:33 +02:00
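
For reference, the mechanical change these "inline format!() args" commits apply throughout the compiler, shown on a stand-alone example:

```rust
fn main() {
    let krate = "rustc_middle";
    // Before: positional argument passed separately.
    println!("compiling {}", krate);
    // After: the argument is named inline in the format string.
    println!("compiling {krate}");
}
```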
Zalathar
8745fdc448 Replace a lazy RefCell<Option<T>> with OnceCell<T> 2023-07-28 12:55:13 +10:00
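
A small stand-alone sketch of this refactor pattern (hypothetical struct, not the instrumentor code): `OnceCell<T>` expresses lazy one-time initialization directly, without a runtime borrow flag or the `Option` dance:

```rust
use std::cell::{OnceCell, RefCell};

// Before: lazily-initialized value behind RefCell<Option<T>>.
struct Before {
    cached: RefCell<Option<String>>,
}

impl Before {
    fn get(&self) -> String {
        self.cached
            .borrow_mut()
            .get_or_insert_with(|| "expensive".to_string())
            .clone()
    }
}

// After: OnceCell expresses "initialized at most once" directly and hands
// out a shared reference without any runtime borrow tracking.
struct After {
    cached: OnceCell<String>,
}

impl After {
    fn get(&self) -> &str {
        self.cached.get_or_init(|| "expensive".to_string())
    }
}

fn main() {
    let before = Before { cached: RefCell::new(None) };
    let after = After { cached: OnceCell::new() };
    println!("{} {}", before.get(), after.get());
}
```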
Matthias Krüger
fa21a8c6f8
Rollup merge of #114075 - matthiaskrgr:fmt_args_rustc_3, r=wesleywiser
inline format!() args from rustc_codegen_llvm to the end (4)

r? `@WaffleLapkin`
2023-07-27 06:04:13 +02:00
Matthias Krüger
c64ef5e070 inline format!() args from rustc_codegen_llvm to the end (4)
r? @WaffleLapkin
2023-07-25 23:20:28 +02:00