Insert alignment checks for pointer dereferences when debug assertions are enabled
Closes https://github.com/rust-lang/rust/issues/54915
- [x] Jake tells me this sounds like a place to use `MirPatch`, but I can't figure out how to insert a new basic block with a new terminator in the middle of an existing basic block, using `MirPatch`. (if nobody else backs up this point I'm checking this as "not actually a good idea" because the code looks pretty clean to me after rearranging it a bit)
- [x] Using `CastKind::PointerExposeAddress` is definitely wrong, we don't want to expose. Calling a function to get the pointer address seems quite excessive. ~I'll see if I can add a new `CastKind`.~ `CastKind::Transmute` to the rescue!
- [x] Implement a more helpful panic message like slice bounds checking.
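For reference, here is a source-level sketch of what the inserted check amounts to for a dereference of `ptr: *const T` (illustrative only: the real pass builds the equivalent MIR directly, and the exact panic message may differ):

```rust
use core::mem::{align_of, transmute};

// Sketch of the check inserted before a pointer dereference.
fn assert_aligned<T>(ptr: *const T) {
    // Transmute rather than expose: only the address bits are
    // needed, provenance is irrelevant here.
    let addr: usize = unsafe { transmute(ptr) };
    let align = align_of::<T>();
    if addr & (align - 1) != 0 {
        panic!("misaligned pointer dereference: address must be a multiple of {align} but is {addr:#x}");
    }
}
```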
r? `@oli-obk`
While doing the updates for that, this PR also uses `FieldIdx` in `ProjectionKind::Field` and `TypeckResults::field_indices`.
There are more places that could use it (like `rustc_const_eval` and `LayoutS`), but I tried to keep this PR from exploding to *even more* places.
Part 2/? of https://github.com/rust-lang/compiler-team/issues/606
Move `mir::Field` → `abi::FieldIdx`
The first PR for https://github.com/rust-lang/compiler-team/issues/606
This is just the move-and-rename, because it's plenty big already. Future PRs will start using `FieldIdx` more broadly, and concomitantly removing `FieldIdx::new`s.
Support TLS access into dylibs on Windows
This allows access to `#[thread_local]` statics in upstream dylibs on Windows by introducing a MIR shim that returns the address of the thread local; accesses that go into an upstream dylib call that shim.
`convert_tls_rvalues` is introduced in `rustc_codegen_ssa`; it rewrites MIR TLS accesses into dummy calls, which are then replaced with calls to the MIR shims when the dummy calls are lowered to backend calls.
A new `dll_tls_export` target option enables this behavior when set to `false`, which is the value used for Windows platforms.
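Conceptually (illustrative source with made-up names; the actual shim is generated as MIR), the crate defining the thread local ends up exporting something like:

```rust
#![feature(thread_local)]

#[thread_local]
static COUNTER: u32 = 0;

// The shim a downstream dylib calls instead of touching the TLS
// slot directly: it returns the address of this thread's value.
pub fn counter_tls_shim() -> *const u32 {
    &COUNTER as *const u32
}
```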
This fixes https://github.com/rust-lang/rust/issues/84933.
Make init mask lazy for fully initialized/uninitialized const allocations
There are a few optimization opportunities in the `InitMask` and the related const `Allocation`s (e.g. taking advantage of the fact that it's a bitset representing initialization, and that an allocation is often entirely initialized or uninitialized in a single call, or gradually built up, etc.).
There are overwrites to the same state, multiple writes in a row to the same indices, cases where the RLE scheme for `memcpy` doesn't compress, etc.
Here, we start with:
- avoiding materializing the bitset's blocks if the allocation is fully initialized/uninitialized
- deallocating the blocks when the mask is fully overwritten, including when participating in `memcpy`s
- taking care of the FIXME about allocating blocks of 0s before overwriting them to the expected value
- expanding unit test coverage of the init mask
This should be most visible on benchmarks and crates where const allocations dominate the runtime (like `ctfe-stress-5` of course), but I was especially looking at the worst cases from #93215.
This first change allows the majority of `set_range` calls to stay with a lazy init mask when bootstrapping rustc (not that the init mask is a big part of the process in cpu time or memory usage).
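For reference, a minimal sketch of the representation (illustrative names, not the exact rustc types): block storage is only materialized once the mask actually holds mixed state:

```rust
/// Sketch: init mask storage that stays lazy while the whole
/// allocation shares a single initialization state.
enum InitMaskBlocks {
    /// Fully initialized (`true`) or fully uninitialized (`false`);
    /// no blocks are allocated.
    Lazy { state: bool },
    /// Mixed state: a materialized bitset, one bit per byte.
    Materialized { blocks: Vec<u64> },
}

impl InitMaskBlocks {
    /// Overwriting the full range with one value deallocates any
    /// materialized blocks and goes back to the lazy form.
    fn set_all(&mut self, state: bool) {
        *self = InitMaskBlocks::Lazy { state };
    }
}
```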
r? `@oli-obk`
I have another in-progress branch where I'll switch the single initialized/uninitialized value to a watermark, recording the point after which everything is uninitialized. That will take care of cases where full initialization is monotonic and done in multiple steps (e.g. an array of a type without padding), which should then allow the vast majority of const allocations' init masks to stay lazy during bootstrapping. (Interestingly, I've seen such gradual initialization happen in both left-to-right and right-to-left directions, and I don't think a single watermark can handle both.)
Avoid materializing bits in the `InitMask` bitset when a single value would be enough: when the mask represents a fully initialized or fully uninitialized const allocation.
Updates `interpret`, `codegen_ssa`, and `codegen_cranelift` to consume the new cast instead of the intrinsic.
Includes `CastTransmute` for custom MIR building, to be able to test the extra UB.
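As a sketch of the custom-MIR side (hedged: this assumes the `mir!` macro from `core::intrinsics::mir`, whose exact dialect/phase requirements may differ from the version in tree):

```rust
#![feature(custom_mir, core_intrinsics)]
use core::intrinsics::mir::*;

// Transmuting an arbitrary u32 to char is UB for invalid values;
// building the cast via custom MIR lets tests exercise that UB.
#[custom_mir(dialect = "runtime", phase = "initial")]
pub fn u32_to_char(x: u32) -> char {
    mir! {
        {
            RET = CastTransmute(x);
            Return()
        }
    }
}
```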
not *all* retags might be explicit in Runtime MIR
In https://github.com/rust-lang/rust/pull/105317 I made Miri treat `Rvalue::Ref/AddrOf` as implicit retagging sites. This updates the MIR docs accordingly.
For `Rvalue::Ref` I think this makes a lot more sense: creating a new reference is its entire point, so we can avoid bloating the MIR with retags. This also seems to be the best way to handle cases like `*ptr = &[mut] ...`, where an explicit retag would be questionable: by the time it ran, `*ptr` might already point to another place.
For `Rvalue::AddrOf`, Stacked Borrows needs this because even raw pointers need some retagging, but Tree Borrows doesn't do any retagging here, and I hope we'll end up with a model where raw pointers don't get retagged.
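In surface-Rust terms, the two rvalues correspond to these constructs (a small illustration):

```rust
use core::ptr::addr_of_mut;

fn implicit_retag_sites() {
    let mut x = 0u32;
    let p = addr_of_mut!(x); // Rvalue::AddrOf: retagged under Stacked
                             // Borrows, left alone by Tree Borrows
    let r = &mut x;          // Rvalue::Ref: always an implicit retag
    let _ = r;
    let _ = p;
}
```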
Tweak implementation of overflow checking assertions
Extract and reuse logic controlling behaviour of overflow checking assertions instead of duplicating it three times.
r? `@cjgillot`
Wrap the whole `LocalInfo` in `ClearCrossCrate`.
MIR contains a lot of information about locals. The primary purpose of this information is the quality of borrowck diagnostics.
This PR aims to drop this information after MIR analyses are finished, i.e. starting from post-cleanup runtime MIR.
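For context, `ClearCrossCrate` is roughly the following shape (simplified; serialization impls omitted): the payload is available crate-locally but is not encoded in cross-crate metadata:

```rust
// Simplified sketch of the wrapper from rustc_middle.
pub enum ClearCrossCrate<T> {
    /// What downstream crates see: the data has been dropped.
    Clear,
    /// What the local crate sees: e.g. the full `LocalInfo`.
    Set(T),
}
```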
Implement checked Shl/Shr at MIR building.
This does not require any special handling by codegen backends, as the overflow behaviour is entirely determined by the rhs (shift amount).
This allows MIR ConstProp to remove the overflow check for constant shifts.
~There is an existing difference in behaviour between cg_llvm and cg_clif (cc `@bjorn3`). I took cg_llvm's as the reference: overflow if `rhs < 0 || rhs > number_of_bits_in_lhs_ty`.~
EDIT: `cg_llvm` and `cg_clif` implement the overflow check differently. This PR uses `cg_llvm`'s implementation based on a `BitAnd` instead of `cg_clif`'s one based on an unsigned comparison.
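In source terms, the `BitAnd`-based check looks roughly like this (illustrative; the real lowering emits MIR, and the same trick works for any power-of-two bit width):

```rust
fn checked_shl(lhs: u32, rhs: u32) -> (u32, bool) {
    // The high bits of rhs are nonzero exactly when rhs >= u32::BITS,
    // i.e. exactly when the shift would overflow.
    let overflow = (rhs & !(u32::BITS - 1)) != 0;
    // The shift itself uses only the low bits of rhs, as hardware does.
    (lhs.wrapping_shl(rhs), overflow)
}
```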
Do not implement HashStable for HashSet (MCP 533)
This PR removes all occurrences of `HashSet` in query results, replacing it either with `FxIndexSet` or with `UnordSet`, and then removes the `HashStable` implementation of `HashSet`. This is part of implementing [MCP 533](https://github.com/rust-lang/compiler-team/issues/533), that is, removing the `HashStable` implementations of all collection types with unstable iteration order.
The changes are mostly mechanical. The only place where additional sorting is happening is in Miri's override implementation of the `exported_symbols` query.
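For intuition on why these implementations are being removed (a toy illustration, not rustc code): hashing a `HashSet`'s elements in iteration order depends on the per-process hasher seed, so equal sets can fingerprint differently across compilation sessions:

```rust
use std::collections::{hash_map::DefaultHasher, HashSet};
use std::hash::{Hash, Hasher};

// Order-dependent hashing: the result varies with the set's internal
// layout, which is seeded randomly per process. `FxIndexSet` keeps a
// deterministic insertion order; `UnordSet` only exposes
// order-independent operations.
fn unstable_fingerprint(set: &HashSet<u32>) -> u64 {
    let mut h = DefaultHasher::new();
    for item in set {
        item.hash(&mut h);
    }
    h.finish()
}
```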
fix multiple issues when promoting type-test subject
Multiple interdependent fixes. See linked issues for a short description of each.
When promoting a type-test `T: 'a` from within a closure back to its parent function, there are a couple of pre-existing bugs and limitations. They were exposed by the recent changes to opaque types, because the type-test subject (`T`) is no longer a simple `ParamTy`.
Commit 1: Fixes #108635, fixes #107426
Commit 2: Fixes #108639
Commit 3: Fixes #107516
rustc_middle: Remove trait `DefIdTree`
This trait was a way to generalize over both `TyCtxt` and `Resolver`, but now `Resolver` has access to `TyCtxt`, so this trait is no longer necessary.
Support allocations with non-Box<[u8]> bytes
This is prep work for allowing Miri to support passing pointers to C code, which will require `Allocation`s to be correctly aligned. Currently, it just makes `Allocation` generic and plumbs the necessary changes through the right places.
The follow-up to this will be adding a type in the miri interpreter which correctly aligns the bytes, using that for the Miri engine, then allowing Miri to pass pointers into these allocations to C calls.
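A rough sketch of the direction (hypothetical simplification; the exact trait and bounds differ in tree): `Allocation` gains a `Bytes` parameter defaulting to `Box<[u8]>`, so Miri can later substitute correctly aligned storage:

```rust
use std::ops::Deref;

// Hypothetical storage abstraction: anything that can be built from
// raw bytes and dereferences to them.
pub trait AllocBytes: Deref<Target = [u8]> {
    fn from_bytes(slice: &[u8], align: u64) -> Self;
}

pub struct Allocation<Extra = (), Bytes = Box<[u8]>> {
    bytes: Bytes,
    extra: Extra,
    // ... init mask, provenance map, alignment, etc.
}
```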
Based off of #100467; credit to `@emarteca` for the code.