//! Mono Item Collection
//! ====================
//!
//! This module is responsible for discovering all items that will contribute
//! to code generation of the crate. The important part here is that it not only
//! needs to find syntax-level items (functions, structs, etc) but also all
//! their monomorphized instantiations. Every non-generic, non-const function
//! maps to one LLVM artifact. Every generic function can produce
//! from zero to N artifacts, depending on the sets of type arguments it
//! is instantiated with.
//! This also applies to generic items from other crates: A generic definition
//! in crate X might produce monomorphizations that are compiled into crate Y.
//! We also have to collect these here.
//!
//! The following kinds of "mono items" are handled here:
2015-11-02 13:46:39 +00:00
//!
//! - Functions
//! - Methods
//! - Closures
//! - Statics
//! - Drop glue
//!
//! The following things also result in LLVM artifacts, but are not collected
//! here, since we instantiate them locally on demand when needed in a given
//! codegen unit:
//!
//! - Constants
//! - Vtables
//! - Object Shims
//!
//!
//! General Algorithm
//! -----------------
//! Let's define some terms first:
//!
//! - A "mono item" is something that results in a function or global in
//! the LLVM IR of a codegen unit. Mono items do not stand on their
//! own, they can reference other mono items. For example, if function
//! `foo()` calls function `bar()` then the mono item for `foo()`
//! references the mono item for function `bar()`. In general, the
//! definition for mono item A referencing a mono item B is that
//! the LLVM artifact produced for A references the LLVM artifact produced
//! for B.
//!
//! - Mono items and the references between them form a directed graph,
//! where the mono items are the nodes and references form the edges.
//! Let's call this graph the "mono item graph".
//!
//! - The mono item graph for a program contains all mono items
//! that are needed in order to produce the complete LLVM IR of the program.
//!
//! The purpose of the algorithm implemented in this module is to build the
//! mono item graph for the current crate. It runs in two phases:
//!
//! 1. Discover the roots of the graph by traversing the HIR of the crate.
//! 2. Starting from the roots, find neighboring nodes by inspecting the MIR
//! representation of the item corresponding to a given node, until no more
//! new nodes are found.
//!
//! ### Discovering roots
//!
//! The roots of the mono item graph correspond to the public non-generic
//! syntactic items in the source code. We find them by walking the HIR of the
//! crate, and whenever we hit upon a public function, method, or static item,
//! we create a mono item consisting of the item's `DefId` and, since we only
//! consider non-generic items, an empty type-substitution set. (In eager
//! collection mode, during incremental compilation, all non-generic functions
//! are considered as roots, as well as when the `-Clink-dead-code` option is
//! specified. Functions marked `#[no_mangle]` and functions called by inlinable
//! functions also always act as roots.)
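//!
//! As an illustration (a sketch of the rules above, not their exact
//! implementation; names are made up):
//!
//! ```
//! pub fn root() {}            // public and non-generic: a root
//! pub fn generic<T>(_: T) {}  // generic: only its instantiations are collected
//! fn helper() {}              // private: reached only via the mono item graph
//! # fn main() { root(); generic(0u8); helper(); }
//! ```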
//!
//! ### Finding neighbor nodes
//! Given a mono item node, we can discover neighbors by inspecting its
//! MIR. We walk the MIR and any time we hit upon something that signifies a
//! reference to another mono item, we have found a neighbor. Since the
//! mono item we are currently at is always monomorphic, we also know the
//! concrete type arguments of its neighbors, and so all neighbors again will be
//! monomorphic. The specific forms a reference to a neighboring node can take
//! in MIR are quite diverse. Here is an overview:
//!
//! #### Calling Functions/Methods
//! The most obvious form of one mono item referencing another is a
//! function or method call (represented by a `Call` terminator in MIR). But
//! calls are not the only thing that might introduce a reference between two
//! function mono items, and as we will see below, they are just a
//! specialization of the form described next, and consequently will not get any
//! special treatment in the algorithm.
//!
//! #### Taking a reference to a function or method
//! A function does not need to actually be called in order to be a neighbor of
//! another function. It suffices to just take a reference in order to introduce
//! an edge. Consider the following example:
//!
//! ```
//! # use core::fmt::Display;
//! fn print_val<T: Display>(x: T) {
//!     println!("{}", x);
//! }
//!
//! fn call_fn(f: &dyn Fn(i32), x: i32) {
//!     f(x);
//! }
//!
//! fn main() {
//!     let print_i32 = print_val::<i32>;
//!     call_fn(&print_i32, 0);
//! }
//! ```
//! The MIR of none of these functions will contain an explicit call to
//! `print_val::<i32>`. Nonetheless, in order to monomorphize this program, we
//! need an instance of this function. Thus, whenever we encounter a function or
//! method in operand position, we treat it as a neighbor of the current
//! mono item. Calls are just a special case of that.
//!
//! #### Closures
//! In a way, closures are a simple case. Since every closure object needs to be
//! constructed somewhere, we can reliably discover them by observing
//! `RValue::Aggregate` expressions with `AggregateKind::Closure`. This is also
//! true for closures inlined from other crates.
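//!
//! For example (a sketch; names are illustrative), the closure below is
//! discovered through the aggregate rvalue that constructs it in `main`'s MIR:
//!
//! ```
//! fn apply<F: Fn() -> i32>(f: F) -> i32 {
//!     f()
//! }
//!
//! fn main() {
//!     let x = 41;
//!     // Constructing the closure shows up as an `Rvalue::Aggregate` with
//!     // `AggregateKind::Closure`, making the closure a neighbor of `main`.
//!     let add_one = || x + 1;
//!     assert_eq!(apply(add_one), 42);
//! }
//! ```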
//!
//! #### Drop glue
//! Drop glue mono items are introduced by MIR drop-statements. The
//! generated mono item will again have drop-glue item neighbors if the
//! type to be dropped contains nested values that also need to be dropped. It
//! might also have a function item neighbor for the explicit `Drop::drop`
//! implementation of its type.
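//!
//! A sketch of the nesting described above (types are illustrative):
//!
//! ```
//! struct Inner;
//!
//! impl Drop for Inner {
//!     fn drop(&mut self) {}
//! }
//!
//! struct Outer {
//!     inner: Inner,
//! }
//!
//! fn main() {
//!     // The drop-statement for `_outer` introduces drop glue for `Outer`,
//!     // which references drop glue for `Inner`, which in turn references
//!     // the function item `<Inner as Drop>::drop`.
//!     let _outer = Outer { inner: Inner };
//! }
//! ```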
//!
//! #### Unsizing Casts
//! A subtle way of introducing neighbor edges is by casting to a trait object.
//! Since the resulting fat-pointer contains a reference to a vtable, we need to
//! instantiate all object-safe methods of the trait, as we need to store
//! pointers to these functions even if they never get called anywhere. This can
//! be seen as a special case of taking a function reference.
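//!
//! For instance (a sketch with illustrative names):
//!
//! ```
//! trait Greet {
//!     fn hello(&self) {}
//!     fn bye(&self) {}
//! }
//!
//! struct Person;
//! impl Greet for Person {}
//!
//! fn main() {
//!     // The unsizing cast to `&dyn Greet` requires a vtable that stores
//!     // pointers to `<Person as Greet>::hello` and `<Person as Greet>::bye`,
//!     // so both become mono items even though only `hello` is ever called.
//!     let p: &dyn Greet = &Person;
//!     p.hello();
//! }
//! ```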
//!
//! #### Boxes
//! Since `Box` expressions have special compiler support, no explicit calls to
//! `exchange_malloc()` and `box_free()` may show up in MIR, even if the
//! compiler will generate them. We have to observe `Rvalue::Box` expressions
//! and Box-typed drop-statements for that purpose.
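//!
//! For example:
//!
//! ```
//! fn main() {
//!     // Allocation goes through `exchange_malloc` and deallocation through
//!     // `box_free`, but neither appears as an ordinary call in this
//!     // function's MIR, so the collector must recognize boxes specially.
//!     let _boxed: Box<[u8; 4]> = Box::new([0u8; 4]);
//! }
//! ```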
//!
//!
//! Interaction with Cross-Crate Inlining
//! -------------------------------------
//! The binary of a crate will not only contain machine code for the items
//! defined in the source code of that crate. It will also contain monomorphic
//! instantiations of any extern generic functions and of functions marked with
//! `#[inline]`.
//! The collection algorithm handles this more or less transparently. If it is
//! about to create a mono item for something with an external `DefId`,
//! it will take a look if the MIR for that item is available, and if so just
//! proceed normally. If the MIR is not available, it assumes that the item is
//! just linked to and no node is created; which is exactly what we want, since
//! no machine code should be generated in the current crate for such an item.
//!
//! Eager and Lazy Collection Mode
//! ------------------------------
//! Mono item collection can be performed in one of two modes:
//!
//! - Lazy mode means that items will only be instantiated when actually
//! referenced. The goal is to produce the least amount of machine code
//! possible.
//!
//! - Eager mode is meant to be used in conjunction with incremental compilation
//! where a stable set of mono items is more important than a minimal
//! one. Thus, eager mode will instantiate drop-glue for every drop-able type
//! in the crate, even if no drop call for that type exists (yet). It will
//! also instantiate default implementations of trait methods, something that
//! otherwise is only done on demand.
//!
//!
//! Open Issues
//! -----------
//! Some things are not yet fully implemented in the current version of this
//! module.
//!
//! ### Const Fns
//! Ideally, no mono item should be generated for const fns unless there
//! is a call to them that cannot be evaluated at compile time. At the moment
//! this is not implemented however: a mono item will be produced
//! regardless of whether it is actually needed or not.
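//!
//! For example (a sketch of the current behavior described above):
//!
//! ```
//! const fn square(x: i32) -> i32 {
//!     x * x
//! }
//!
//! // `N` is fully evaluated at compile time, so no runtime code for
//! // `square` is strictly needed, yet a mono item is produced for it anyway.
//! const N: i32 = square(3);
//! # fn main() { assert_eq!(N, 9); }
//! ```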
use rustc_data_structures::fx::{FxHashMap, FxHashSet};
use rustc_data_structures::sync::{par_iter, MTLock, MTRef, ParallelIterator};
use rustc_hir as hir;
use rustc_hir::def::DefKind;
use rustc_hir::def_id::{DefId, DefIdMap, LocalDefId};
use rustc_hir::lang_items::LangItem;
use rustc_index::bit_set::GrowableBitSet;
use rustc_middle::mir::interpret::{AllocId, ConstValue};
use rustc_middle::mir::interpret::{ErrorHandled, GlobalAlloc, Scalar};
use rustc_middle::mir::mono::{InstantiationMode, MonoItem};
use rustc_middle::mir::visit::Visitor as MirVisitor;
use rustc_middle::mir::{self, Local, Location};
use rustc_middle::ty::adjustment::{CustomCoerceUnsized, PointerCast};
use rustc_middle::ty::print::with_no_trimmed_paths;
use rustc_middle::ty::subst::{GenericArgKind, InternalSubsts};
use rustc_middle::ty::{self, GenericParamDefKind, Instance, Ty, TyCtxt, TypeFoldable, VtblEntry};
use rustc_middle::{middle::codegen_fn_attrs::CodegenFnAttrFlags, mir::visit::TyContext};
use rustc_session::config::EntryFnType;
use rustc_session::lint::builtin::LARGE_ASSIGNMENTS;
use rustc_session::Limit;
use rustc_span::source_map::{dummy_spanned, respan, Span, Spanned, DUMMY_SP};
use rustc_target::abi::Size;
use std::iter;
use std::ops::Range;
use std::path::PathBuf;
#[derive(PartialEq)]
pub enum MonoItemCollectionMode {
    Eager,
    Lazy,
}
/// Maps every mono item to all mono items it references in its
/// body.
pub struct InliningMap<'tcx> {
    // Maps a source mono item to the range of mono items
    // accessed by it.
    // The range selects elements within the `targets` vecs.
    index: FxHashMap<MonoItem<'tcx>, Range<usize>>,
    targets: Vec<MonoItem<'tcx>>,

    // Contains one bit per mono item in the `targets` field. That bit
    // is true if that mono item needs to be inlined into every CGU.
    inlines: GrowableBitSet<usize>,
}
/// Struct to store mono items during each collection step, along with whether
/// they should be inlined. We call `instantiation_mode` to get their inlining
/// status when inserting new elements, which avoids calling it in
/// `inlining_map.lock_mut()`. See the `collect_items_rec` implementation
/// below.
struct MonoItems<'tcx> {
    // If this is false, we do not need to compute whether items
    // will need to be inlined.
    compute_inlining: bool,

    // The TyCtxt used to determine whether an item should
    // be inlined.
    tcx: TyCtxt<'tcx>,

    // The collected mono items. The bool field in each element
    // indicates whether this element should be inlined.
    items: Vec<(Spanned<MonoItem<'tcx>>, bool /* inlined */)>,
}

impl<'tcx> MonoItems<'tcx> {
    #[inline]
    fn push(&mut self, item: Spanned<MonoItem<'tcx>>) {
        self.extend([item]);
    }

    #[inline]
    fn extend<T: IntoIterator<Item = Spanned<MonoItem<'tcx>>>>(&mut self, iter: T) {
        self.items.extend(iter.into_iter().map(|mono_item| {
            let inlined = if !self.compute_inlining {
                false
            } else {
                mono_item.node.instantiation_mode(self.tcx) == InstantiationMode::LocalCopy
            };
            (mono_item, inlined)
        }))
    }
}
impl<'tcx> InliningMap<'tcx> {
    fn new() -> InliningMap<'tcx> {
        InliningMap {
            index: FxHashMap::default(),
            targets: Vec::new(),
            inlines: GrowableBitSet::with_capacity(1024),
        }
    }
    fn record_accesses<'a>(
        &mut self,
        source: MonoItem<'tcx>,
        new_targets: &'a [(Spanned<MonoItem<'tcx>>, bool)],
    ) where
        'tcx: 'a,
    {
        let start_index = self.targets.len();
        let new_items_count = new_targets.len();
        let new_items_count_total = new_items_count + self.targets.len();

        self.targets.reserve(new_items_count);
        self.inlines.ensure(new_items_count_total);

        for (i, (Spanned { node: mono_item, .. }, inlined)) in new_targets.into_iter().enumerate() {
            self.targets.push(*mono_item);
            if *inlined {
                self.inlines.insert(i + start_index);
            }
        }

        let end_index = self.targets.len();
        assert!(self.index.insert(source, start_index..end_index).is_none());
    }
    // Internally iterate over all items referenced by `source` which will be
    // made available for inlining.
    pub fn with_inlining_candidates<F>(&self, source: MonoItem<'tcx>, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>),
    {
        if let Some(range) = self.index.get(&source) {
            for (i, candidate) in self.targets[range.clone()].iter().enumerate() {
                if self.inlines.contains(range.start + i) {
                    f(*candidate);
                }
            }
        }
    }

    // Internally iterate over all items and the things each accesses.
    pub fn iter_accesses<F>(&self, mut f: F)
    where
        F: FnMut(MonoItem<'tcx>, &[MonoItem<'tcx>]),
    {
        for (&accessor, range) in &self.index {
            f(accessor, &self.targets[range.clone()])
        }
    }
}
#[instrument(skip(tcx, mode), level = "debug")]
pub fn collect_crate_mono_items(
    tcx: TyCtxt<'_>,
    mode: MonoItemCollectionMode,
) -> (FxHashSet<MonoItem<'_>>, InliningMap<'_>) {
    let _prof_timer = tcx.prof.generic_activity("monomorphization_collector");

    let roots =
        tcx.sess.time("monomorphization_collector_root_collections", || collect_roots(tcx, mode));

    debug!("building mono item graph, beginning at roots");

    let mut visited = MTLock::new(FxHashSet::default());
    let mut inlining_map = MTLock::new(InliningMap::new());
    let recursion_limit = tcx.recursion_limit();

    {
        let visited: MTRef<'_, _> = &mut visited;
        let inlining_map: MTRef<'_, _> = &mut inlining_map;

        tcx.sess.time("monomorphization_collector_graph_walk", || {
            par_iter(roots).for_each(|root| {
                let mut recursion_depths = DefIdMap::default();
                collect_items_rec(
                    tcx,
                    dummy_spanned(root),
                    visited,
                    &mut recursion_depths,
                    recursion_limit,
                    inlining_map,
                );
            });
        });
    }

    (visited.into_inner(), inlining_map.into_inner())
}
// Find all non-generic items by walking the HIR. These items serve as roots to
// start monomorphizing from.
#[instrument(skip(tcx, mode), level = "debug")]
fn collect_roots(tcx: TyCtxt<'_>, mode: MonoItemCollectionMode) -> Vec<MonoItem<'_>> {
    debug!("collecting roots");
    let mut roots = MonoItems { compute_inlining: false, tcx, items: Vec::new() };

    {
        let entry_fn = tcx.entry_fn(());

        debug!("collect_roots: entry_fn = {:?}", entry_fn);

        let mut collector = RootCollector { tcx, mode, entry_fn, output: &mut roots };

        let crate_items = tcx.hir_crate_items(());

        for id in crate_items.items() {
            collector.process_item(id);
        }

        for id in crate_items.impl_items() {
            collector.process_impl_item(id);
        }

        collector.push_extra_entry_roots();
    }

    // We can only codegen items that are instantiable - items all of
    // whose predicates hold. Luckily, items that aren't instantiable
    // can't actually be used, so we can just skip codegenning them.
    roots
        .items
        .into_iter()
        .filter_map(|(Spanned { node: mono_item, .. }, _)| {
            mono_item.is_instantiable(tcx).then_some(mono_item)
        })
        .collect()
}
/// Collect all monomorphized items reachable from `starting_point`, and emit a note diagnostic if a
/// post-monomorphization error is encountered during a collection step.
#[instrument(skip(tcx, visited, recursion_depths, recursion_limit, inlining_map), level = "debug")]
fn collect_items_rec<'tcx>(
    tcx: TyCtxt<'tcx>,
    starting_point: Spanned<MonoItem<'tcx>>,
    visited: MTRef<'_, MTLock<FxHashSet<MonoItem<'tcx>>>>,
    recursion_depths: &mut DefIdMap<usize>,
    recursion_limit: Limit,
    inlining_map: MTRef<'_, MTLock<InliningMap<'tcx>>>,
) {
    if !visited.lock_mut().insert(starting_point.node) {
        // We've been here already, no need to search again.
        return;
    }
    debug!("BEGIN collect_items_rec({})", starting_point.node);

    let mut neighbors = MonoItems { compute_inlining: true, tcx, items: Vec::new() };
    let recursion_depth_reset;

    //
    // Post-monomorphization errors MVP
    //
    // We can encounter errors while monomorphizing an item, but we don't have a good way of
    // showing a complete stack of spans ultimately leading to collecting the erroneous one yet.
    // (It's also currently unclear exactly which diagnostics and information would be interesting
    // to report in such cases)
    //
    // This leads to suboptimal error reporting: a post-monomorphization error (PME) will be
    // shown with just a spanned piece of code causing the error, without information on where
    // it was called from. This is especially obscure if the erroneous mono item is in a
    // dependency. See for example issue #85155, where, before minimization, a PME happened two
    // crates downstream from libcore's stdarch, without a way to know which dependency was the
    // cause.
    //
    // If such an error occurs in the current crate, its span will be enough to locate the
    // source. If the cause is in another crate, the goal here is to quickly locate which mono
    // item in the current crate is ultimately responsible for causing the error.
    //
    // To give at least _some_ context to the user: while collecting mono items, we check the
    // error count. If it has changed, a PME occurred, and we trigger some diagnostics about the
    // current step of mono items collection.
    //
    // FIXME: don't rely on global state, instead bubble up errors. Note: this is very hard to do.
    let error_count = tcx.sess.diagnostic().err_count();
    match starting_point.node {
        MonoItem::Static(def_id) => {
            let instance = Instance::mono(tcx, def_id);

            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            let ty = instance.ty(tcx, ty::ParamEnv::reveal_all());
            visit_drop_use(tcx, ty, true, starting_point.span, &mut neighbors);

            recursion_depth_reset = None;

            if let Ok(alloc) = tcx.eval_static_initializer(def_id) {
                for &id in alloc.inner().relocations().values() {
                    collect_miri(tcx, id, &mut neighbors);
                }
            }
        }
        MonoItem::Fn(instance) => {
            // Sanity check whether this ended up being collected accidentally
            debug_assert!(should_codegen_locally(tcx, &instance));

            // Keep track of the monomorphization recursion depth
            recursion_depth_reset = Some(check_recursion_limit(
                tcx,
                instance,
                starting_point.span,
                recursion_depths,
                recursion_limit,
            ));
            check_type_length_limit(tcx, instance);

            rustc_data_structures::stack::ensure_sufficient_stack(|| {
                collect_neighbours(tcx, instance, &mut neighbors);
            });
        }
        MonoItem::GlobalAsm(item_id) => {
            recursion_depth_reset = None;

            let item = tcx.hir().item(item_id);
            if let hir::ItemKind::GlobalAsm(asm) = item.kind {
                for (op, op_sp) in asm.operands {
                    match op {
                        hir::InlineAsmOperand::Const { .. } => {
                            // Only constants which resolve to a plain integer
                            // are supported. Therefore the value should not
                            // depend on any other items.
                        }
                        hir::InlineAsmOperand::SymFn { anon_const } => {
                            let fn_ty =
                                tcx.typeck_body(anon_const.body).node_type(anon_const.hir_id);
                            visit_fn_use(tcx, fn_ty, false, *op_sp, &mut neighbors);
                        }
                        hir::InlineAsmOperand::SymStatic { path: _, def_id } => {
                            let instance = Instance::mono(tcx, *def_id);
                            if should_codegen_locally(tcx, &instance) {
                                trace!("collecting static {:?}", def_id);
                                neighbors.push(dummy_spanned(MonoItem::Static(*def_id)));
                            }
                        }
                        hir::InlineAsmOperand::In { .. }
                        | hir::InlineAsmOperand::Out { .. }
                        | hir::InlineAsmOperand::InOut { .. }
                        | hir::InlineAsmOperand::SplitInOut { .. } => {
                            span_bug!(*op_sp, "invalid operand type for global_asm!")
                        }
                    }
                }
            } else {
                span_bug!(item.span, "Mismatch between hir::Item type and MonoItem type")
            }
        }
    }
    // Check for PMEs and emit a diagnostic if one happened, to try to show
    // relevant edges of the mono item graph.
    if tcx.sess.diagnostic().err_count() > error_count
        && starting_point.node.is_generic_fn()
        && starting_point.node.is_user_defined()
    {
        let formatted_item = with_no_trimmed_paths!(starting_point.node.to_string());
        tcx.sess.span_note_without_error(
            starting_point.span,
            &format!("the above error was encountered while instantiating `{}`", formatted_item),
        );
    }
    inlining_map.lock_mut().record_accesses(starting_point.node, &neighbors.items);

    for (neighbour, _) in neighbors.items {
        collect_items_rec(tcx, neighbour, visited, recursion_depths, recursion_limit, inlining_map);
    }

    if let Some((def_id, depth)) = recursion_depth_reset {
        recursion_depths.insert(def_id, depth);
    }

    debug!("END collect_items_rec({})", starting_point.node);
}
/// Format instance name that is already known to be too long for rustc.
/// Show only the first and last 32 characters to avoid blasting
/// the user's terminal with thousands of lines of type-name.
///
/// If the type name is longer than before+after, it will be written to a file.
fn shrunk_instance_name<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: &Instance<'tcx>,
    before: usize,
    after: usize,
) -> (String, Option<PathBuf>) {
    let s = instance.to_string();

    // Only use the shrunk version if it's really shorter.
    // This also avoids the case where before and after slices overlap.
    if s.chars().nth(before + after + 1).is_some() {
        // An iterator of all byte positions including the end of the string.
        let positions = || s.char_indices().map(|(i, _)| i).chain(iter::once(s.len()));

        let shrunk = format!(
            "{before}...{after}",
            before = &s[..positions().nth(before).unwrap_or(s.len())],
            after = &s[positions().rev().nth(after).unwrap_or(0)..],
        );

        let path = tcx.output_filenames(()).temp_path_ext("long-type.txt", None);
        let written_to_path = std::fs::write(&path, s).ok().map(|_| path);

        (shrunk, written_to_path)
    } else {
        (s, None)
    }
}
fn check_recursion_limit<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    span: Span,
    recursion_depths: &mut DefIdMap<usize>,
    recursion_limit: Limit,
) -> (DefId, usize) {
    let def_id = instance.def_id();
    let recursion_depth = recursion_depths.get(&def_id).cloned().unwrap_or(0);
    debug!(" => recursion depth={}", recursion_depth);

    let adjusted_recursion_depth = if Some(def_id) == tcx.lang_items().drop_in_place_fn() {
        // HACK: drop_in_place creates tight monomorphization loops. Give
        // it more margin.
        recursion_depth / 4
    } else {
        recursion_depth
    };

    // Code that needs to instantiate the same function recursively
    // more than the recursion limit is assumed to be causing an
    // infinite expansion.
    if !recursion_limit.value_within_limit(adjusted_recursion_depth) {
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance, 32, 32);
        let error = format!("reached the recursion limit while instantiating `{}`", shrunk);
        let mut err = tcx.sess.struct_span_fatal(span, &error);
        err.span_note(
            tcx.def_span(def_id),
            &format!("`{}` defined here", tcx.def_path_str(def_id)),
        );
        if let Some(path) = written_to_path {
            err.note(&format!("the full type name has been written to '{}'", path.display()));
        }
        err.emit()
    }

    recursion_depths.insert(def_id, recursion_depth + 1);

    (def_id, recursion_depth)
}
fn check_type_length_limit<'tcx>(tcx: TyCtxt<'tcx>, instance: Instance<'tcx>) {
    let type_length = instance
        .substs
        .iter()
        .flat_map(|arg| arg.walk())
        .filter(|arg| match arg.unpack() {
            GenericArgKind::Type(_) | GenericArgKind::Const(_) => true,
            GenericArgKind::Lifetime(_) => false,
        })
        .count();
    debug!(" => type length={}", type_length);

    // Rust code can easily create exponentially-long types using only a
    // polynomial recursion depth. Even with the default recursion
    // depth, you can easily get cases that take >2^60 steps to run,
    // which means that rustc basically hangs.
    //
    // Bail out in these cases to avoid that bad user experience.
    if !tcx.type_length_limit().value_within_limit(type_length) {
        let (shrunk, written_to_path) = shrunk_instance_name(tcx, &instance, 32, 32);
        let msg = format!("reached the type-length limit while instantiating `{}`", shrunk);
        let mut diag = tcx.sess.struct_span_fatal(tcx.def_span(instance.def_id()), &msg);
        if let Some(path) = written_to_path {
            diag.note(&format!("the full type name has been written to '{}'", path.display()));
        }
        diag.help(&format!(
            "consider adding a `#![type_length_limit=\"{}\"]` attribute to your crate",
            type_length
        ));
        diag.emit()
    }
}
struct MirNeighborCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    body: &'a mir::Body<'tcx>,
    output: &'a mut MonoItems<'tcx>,
    instance: Instance<'tcx>,
}

impl<'a, 'tcx> MirNeighborCollector<'a, 'tcx> {
    pub fn monomorphize<T>(&self, value: T) -> T
    where
        T: TypeFoldable<'tcx>,
    {
        debug!("monomorphize: self.instance={:?}", self.instance);
        self.instance.subst_mir_and_normalize_erasing_regions(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            value,
        )
    }
}
impl<'a, 'tcx> MirVisitor<'tcx> for MirNeighborCollector<'a, 'tcx> {
    fn visit_rvalue(&mut self, rvalue: &mir::Rvalue<'tcx>, location: Location) {
        debug!("visiting rvalue {:?}", *rvalue);

        let span = self.body.source_info(location).span;

        match *rvalue {
            // When doing a cast from a regular pointer to a fat pointer, we
            // have to instantiate all methods of the trait being cast to, so we
            // can build the appropriate vtable.
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::Unsize),
                ref operand,
                target_ty,
            ) => {
                let target_ty = self.monomorphize(target_ty);
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                let (source_ty, target_ty) =
                    find_vtable_types_for_unsizing(self.tcx, source_ty, target_ty);
                // This could also be a different Unsize instruction, like
                // from a fixed sized array to a slice. But we are only
                // interested in things that produce a vtable.
                if target_ty.is_trait() && !source_ty.is_trait() {
                    create_mono_items_for_vtable_methods(
                        self.tcx,
                        target_ty,
                        source_ty,
                        span,
                        self.output,
                    );
                }
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ReifyFnPointer),
                ref operand,
                _,
            ) => {
                let fn_ty = operand.ty(self.body, self.tcx);
                let fn_ty = self.monomorphize(fn_ty);
                visit_fn_use(self.tcx, fn_ty, false, span, &mut self.output);
            }
            mir::Rvalue::Cast(
                mir::CastKind::Pointer(PointerCast::ClosureFnPointer(_)),
                ref operand,
                _,
            ) => {
                let source_ty = operand.ty(self.body, self.tcx);
                let source_ty = self.monomorphize(source_ty);
                match *source_ty.kind() {
                    ty::Closure(def_id, substs) => {
                        let instance = Instance::resolve_closure(
                            self.tcx,
                            def_id,
                            substs,
                            ty::ClosureKind::FnOnce,
                        )
                        .expect("failed to normalize and resolve closure during codegen");
                        if should_codegen_locally(self.tcx, &instance) {
                            self.output.push(create_fn_mono_item(self.tcx, instance, span));
                        }
                    }
                    _ => bug!(),
                }
            }
            mir::Rvalue::ThreadLocalRef(def_id) => {
                assert!(self.tcx.is_thread_local_static(def_id));
                let instance = Instance::mono(self.tcx, def_id);
                if should_codegen_locally(self.tcx, &instance) {
                    trace!("collecting thread-local static {:?}", def_id);
                    self.output.push(respan(span, MonoItem::Static(def_id)));
                }
            }
            _ => { /* not interesting */ }
        }

        self.super_rvalue(rvalue, location);
    }
    /// This does not walk the constant, as it has been handled entirely here and trying
    /// to walk it would attempt to evaluate the `ty::Const` inside, which doesn't necessarily
    /// work, as some constants cannot be represented in the type system.
    #[instrument(skip(self), level = "debug")]
    fn visit_constant(&mut self, constant: &mir::Constant<'tcx>, location: Location) {
        let literal = self.monomorphize(constant.literal);
        let val = match literal {
            mir::ConstantKind::Val(val, _) => val,
            mir::ConstantKind::Ty(ct) => match ct.kind() {
                ty::ConstKind::Value(val) => self.tcx.valtree_to_const_val((ct.ty(), val)),
                ty::ConstKind::Unevaluated(ct) => {
                    debug!(?ct);
                    let param_env = ty::ParamEnv::reveal_all();
                    match self.tcx.const_eval_resolve(param_env, ct, None) {
                        // The `monomorphize` call should have evaluated that constant already.
                        Ok(val) => val,
                        Err(ErrorHandled::Reported(_) | ErrorHandled::Linted) => return,
                        Err(ErrorHandled::TooGeneric) => span_bug!(
                            self.body.source_info(location).span,
                            "collection encountered polymorphic constant: {:?}",
                            literal
                        ),
                    }
                }
                _ => return,
            },
        };
        collect_const_value(self.tcx, val, self.output);
        self.visit_ty(literal.ty(), TyContext::Location(location));
    }
    #[instrument(skip(self), level = "debug")]
    fn visit_const(&mut self, constant: ty::Const<'tcx>, location: Location) {
        debug!("visiting const {:?} @ {:?}", constant, location);

        let substituted_constant = self.monomorphize(constant);
        let param_env = ty::ParamEnv::reveal_all();

        match substituted_constant.kind() {
            ty::ConstKind::Value(val) => {
                let const_val = self.tcx.valtree_to_const_val((constant.ty(), val));
                collect_const_value(self.tcx, const_val, self.output)
            }
            ty::ConstKind::Unevaluated(unevaluated) => {
                match self.tcx.const_eval_resolve(param_env, unevaluated, None) {
                    // The `monomorphize` call should have evaluated that constant already.
                    Ok(val) => span_bug!(
                        self.body.source_info(location).span,
                        "collection encountered the unevaluated constant {} which evaluated to {:?}",
                        substituted_constant,
                        val
                    ),
                    Err(ErrorHandled::Reported(_) | ErrorHandled::Linted) => {}
                    Err(ErrorHandled::TooGeneric) => span_bug!(
                        self.body.source_info(location).span,
                        "collection encountered polymorphic constant: {}",
                        substituted_constant
                    ),
                }
            }
            _ => {}
        }

        self.super_const(constant);
    }
    fn visit_terminator(&mut self, terminator: &mir::Terminator<'tcx>, location: Location) {
        debug!("visiting terminator {:?} @ {:?}", terminator, location);
        let source = self.body.source_info(location).span;

        let tcx = self.tcx;
        match terminator.kind {
            mir::TerminatorKind::Call { ref func, .. } => {
                let callee_ty = func.ty(self.body, tcx);
                let callee_ty = self.monomorphize(callee_ty);
                visit_fn_use(self.tcx, callee_ty, true, source, &mut self.output);
            }
            mir::TerminatorKind::Drop { ref place, .. }
            | mir::TerminatorKind::DropAndReplace { ref place, .. } => {
                let ty = place.ty(self.body, self.tcx).ty;
                let ty = self.monomorphize(ty);
                visit_drop_use(self.tcx, ty, true, source, self.output);
            }
            mir::TerminatorKind::InlineAsm { ref operands, .. } => {
                for op in operands {
                    match *op {
                        mir::InlineAsmOperand::SymFn { ref value } => {
                            let fn_ty = self.monomorphize(value.literal.ty());
                            visit_fn_use(self.tcx, fn_ty, false, source, &mut self.output);
                        }
                        mir::InlineAsmOperand::SymStatic { def_id } => {
                            let instance = Instance::mono(self.tcx, def_id);
                            if should_codegen_locally(self.tcx, &instance) {
                                trace!("collecting asm sym static {:?}", def_id);
                                self.output.push(respan(source, MonoItem::Static(def_id)));
                            }
                        }
                        _ => {}
                    }
                }
            }
            mir::TerminatorKind::Assert { ref msg, .. } => {
                let lang_item = match msg {
                    mir::AssertKind::BoundsCheck { .. } => LangItem::PanicBoundsCheck,
                    _ => LangItem::Panic,
                };
                let instance = Instance::mono(tcx, tcx.require_lang_item(lang_item, Some(source)));
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(tcx, instance, source));
                }
            }
            mir::TerminatorKind::Abort { .. } => {
                let instance = Instance::mono(
                    tcx,
                    tcx.require_lang_item(LangItem::PanicNoUnwind, Some(source)),
                );
                if should_codegen_locally(tcx, &instance) {
                    self.output.push(create_fn_mono_item(tcx, instance, source));
                }
            }
            mir::TerminatorKind::Goto { .. }
            | mir::TerminatorKind::SwitchInt { .. }
            | mir::TerminatorKind::Resume
            | mir::TerminatorKind::Return
            | mir::TerminatorKind::Unreachable => {}
            mir::TerminatorKind::GeneratorDrop
            | mir::TerminatorKind::Yield { .. }
            | mir::TerminatorKind::FalseEdge { .. }
            | mir::TerminatorKind::FalseUnwind { .. } => bug!(),
        }

        self.super_terminator(terminator, location);
    }
    fn visit_operand(&mut self, operand: &mir::Operand<'tcx>, location: Location) {
        self.super_operand(operand, location);
        let limit = self.tcx.move_size_limit().0;
        if limit == 0 {
            return;
        }
        let limit = Size::from_bytes(limit);
        let ty = operand.ty(self.body, self.tcx);
        let ty = self.monomorphize(ty);
        let layout = self.tcx.layout_of(ty::ParamEnv::reveal_all().and(ty));
        if let Ok(layout) = layout {
            if layout.size > limit {
                debug!(?layout);
                let source_info = self.body.source_info(location);
                debug!(?source_info);
                let lint_root = source_info.scope.lint_root(&self.body.source_scopes);
                debug!(?lint_root);
                let Some(lint_root) = lint_root else {
                    // This happens when the issue is in a function from a foreign crate that
                    // we monomorphized in the current crate. We can't get a `HirId` for things
                    // in other crates.
                    // FIXME: Find out where to report the lint on. Maybe simply crate-level lint root
                    // but correct span? This would make the lint at least accept crate-level lint attributes.
                    return;
                };
                self.tcx.struct_span_lint_hir(
                    LARGE_ASSIGNMENTS,
                    lint_root,
                    source_info.span,
                    |lint| {
                        let mut err = lint.build(&format!("moving {} bytes", layout.size.bytes()));
                        err.span_label(source_info.span, "value moved from here");
                        err.note(&format!(r#"The current maximum size is {}, but it can be customized with the move_size_limit attribute: `#![move_size_limit = "..."]`"#, limit.bytes()));
                        err.emit();
                    },
                );
            }
        }
    }

    fn visit_local(
        &mut self,
        _place_local: Local,
        _context: mir::visit::PlaceContext,
        _location: Location,
    ) {
    }
}
fn visit_drop_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    let instance = Instance::resolve_drop_in_place(tcx, ty);
    visit_instance_use(tcx, instance, is_direct_call, source, output);
}

fn visit_fn_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    ty: Ty<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    if let ty::FnDef(def_id, substs) = *ty.kind() {
        let instance = if is_direct_call {
            ty::Instance::resolve(tcx, ty::ParamEnv::reveal_all(), def_id, substs).unwrap().unwrap()
        } else {
            ty::Instance::resolve_for_fn_ptr(tcx, ty::ParamEnv::reveal_all(), def_id, substs)
                .unwrap()
        };
        visit_instance_use(tcx, instance, is_direct_call, source, output);
    }
}

fn visit_instance_use<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: ty::Instance<'tcx>,
    is_direct_call: bool,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    debug!("visit_item_use({:?}, is_direct_call={:?})", instance, is_direct_call);
    if !should_codegen_locally(tcx, &instance) {
        return;
    }

    match instance.def {
        ty::InstanceDef::Virtual(..) | ty::InstanceDef::Intrinsic(_) => {
            if !is_direct_call {
                bug!("{:?} being reified", instance);
            }
        }
        ty::InstanceDef::DropGlue(_, None) => {
            // Don't need to emit noop drop glue if we are calling directly.
            if !is_direct_call {
                output.push(create_fn_mono_item(tcx, instance, source));
            }
        }
        ty::InstanceDef::DropGlue(_, Some(_))
        | ty::InstanceDef::VtableShim(..)
        | ty::InstanceDef::ReifyShim(..)
        | ty::InstanceDef::ClosureOnceShim { .. }
        | ty::InstanceDef::Item(..)
        | ty::InstanceDef::FnPtrShim(..)
        | ty::InstanceDef::CloneShim(..) => {
            output.push(create_fn_mono_item(tcx, instance, source));
        }
    }
}
/// Returns `true` if we should codegen an instance in the local crate, or returns `false` if we
/// can just link to the upstream crate and therefore don't need a mono item.
fn should_codegen_locally<'tcx>(tcx: TyCtxt<'tcx>, instance: &Instance<'tcx>) -> bool {
    let Some(def_id) = instance.def.def_id_if_not_guaranteed_local_codegen() else {
        return true;
    };

    if tcx.is_foreign_item(def_id) {
        // Foreign items are always linked against, there's no way of instantiating them.
        return false;
    }

    if def_id.is_local() {
        // Local items cannot be referred to locally without monomorphizing them locally.
        return true;
    }

    if tcx.is_reachable_non_generic(def_id)
        || instance.polymorphize(tcx).upstream_monomorphization(tcx).is_some()
    {
        // We can link to the item in question, no instance needed in this crate.
        return false;
    }

    if !tcx.is_mir_available(def_id) {
        bug!("no MIR available for {:?}", def_id);
    }

    true
}
/// For a given pair of source and target type that occur in an unsizing coercion,
/// this function finds the pair of types that determines the vtable linking
/// them.
///
/// For example, the source type might be `&SomeStruct` and the target type
/// might be `&SomeTrait` in a cast like:
///
/// let src: &SomeStruct = ...;
/// let target = src as &SomeTrait;
///
/// Then the output of this function would be (SomeStruct, SomeTrait) since for
/// constructing the `target` fat-pointer we need the vtable for that pair.
///
/// Things can get more complicated though because there's also the case where
/// the unsized type occurs as a field:
///
/// ```rust
/// struct ComplexStruct<T: ?Sized> {
///     a: u32,
///     b: f64,
///     c: T
/// }
/// ```
///
/// In this case, if `T` is sized, `&ComplexStruct<T>` is a thin pointer. If `T`
/// is unsized, `&SomeStruct` is a fat pointer, and the vtable it points to is
/// for the pair of `T` (which is a trait) and the concrete type that `T` was
/// originally coerced from:
///
/// let src: &ComplexStruct<SomeStruct> = ...;
/// let target = src as &ComplexStruct<SomeTrait>;
///
/// Again, we want this `find_vtable_types_for_unsizing()` to provide the pair
/// `(SomeStruct, SomeTrait)`.
///
/// Finally, there is also the case of custom unsizing coercions, e.g., for
/// smart pointers such as `Rc` and `Arc`.
fn find_vtable_types_for_unsizing<'tcx>(
    tcx: TyCtxt<'tcx>,
    source_ty: Ty<'tcx>,
    target_ty: Ty<'tcx>,
) -> (Ty<'tcx>, Ty<'tcx>) {
    let ptr_vtable = |inner_source: Ty<'tcx>, inner_target: Ty<'tcx>| {
        let param_env = ty::ParamEnv::reveal_all();
        let type_has_metadata = |ty: Ty<'tcx>| -> bool {
            if ty.is_sized(tcx.at(DUMMY_SP), param_env) {
                return false;
            }
            let tail = tcx.struct_tail_erasing_lifetimes(ty, param_env);
            match tail.kind() {
                ty::Foreign(..) => false,
                ty::Str | ty::Slice(..) | ty::Dynamic(..) => true,
                _ => bug!("unexpected unsized tail: {:?}", tail),
            }
        };
        if type_has_metadata(inner_source) {
            (inner_source, inner_target)
        } else {
            tcx.struct_lockstep_tails_erasing_lifetimes(inner_source, inner_target, param_env)
        }
    };
    match (&source_ty.kind(), &target_ty.kind()) {
        (&ty::Ref(_, a, _), &ty::Ref(_, b, _) | &ty::RawPtr(ty::TypeAndMut { ty: b, .. }))
        | (&ty::RawPtr(ty::TypeAndMut { ty: a, .. }), &ty::RawPtr(ty::TypeAndMut { ty: b, .. })) => {
            ptr_vtable(*a, *b)
        }
        (&ty::Adt(def_a, _), &ty::Adt(def_b, _)) if def_a.is_box() && def_b.is_box() => {
            ptr_vtable(source_ty.boxed_ty(), target_ty.boxed_ty())
        }
        (&ty::Adt(source_adt_def, source_substs), &ty::Adt(target_adt_def, target_substs)) => {
            assert_eq!(source_adt_def, target_adt_def);

            let CustomCoerceUnsized::Struct(coerce_index) =
                crate::custom_coerce_unsize_info(tcx, source_ty, target_ty);

            let source_fields = &source_adt_def.non_enum_variant().fields;
            let target_fields = &target_adt_def.non_enum_variant().fields;

            assert!(
                coerce_index < source_fields.len() && source_fields.len() == target_fields.len()
            );

            find_vtable_types_for_unsizing(
                tcx,
                source_fields[coerce_index].ty(tcx, source_substs),
                target_fields[coerce_index].ty(tcx, target_substs),
            )
        }
        _ => bug!(
            "find_vtable_types_for_unsizing: invalid coercion {:?} -> {:?}",
            source_ty,
            target_ty
        ),
    }
}
#[instrument(skip(tcx), level = "debug")]
fn create_fn_mono_item<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    source: Span,
) -> Spanned<MonoItem<'tcx>> {
    debug!("create_fn_mono_item(instance={})", instance);

    let def_id = instance.def_id();
    if tcx.sess.opts.debugging_opts.profile_closures && def_id.is_local() && tcx.is_closure(def_id)
    {
        crate::util::dump_closure_profile(tcx, instance);
    }

    let respanned = respan(source, MonoItem::Fn(instance.polymorphize(tcx)));
    debug!(?respanned);
    respanned
}
/// Creates a `MonoItem` for each method that is referenced by the vtable for
/// the given trait/impl pair.
fn create_mono_items_for_vtable_methods<'tcx>(
    tcx: TyCtxt<'tcx>,
    trait_ty: Ty<'tcx>,
    impl_ty: Ty<'tcx>,
    source: Span,
    output: &mut MonoItems<'tcx>,
) {
    assert!(!trait_ty.has_escaping_bound_vars() && !impl_ty.has_escaping_bound_vars());

    if let ty::Dynamic(ref trait_ty, ..) = trait_ty.kind() {
        if let Some(principal) = trait_ty.principal() {
            let poly_trait_ref = principal.with_self_ty(tcx, impl_ty);
            assert!(!poly_trait_ref.has_escaping_bound_vars());

            // Walk all methods of the trait, including those of its supertraits.
            let entries = tcx.vtable_entries(poly_trait_ref);
            let methods = entries
                .iter()
                .filter_map(|entry| match entry {
                    VtblEntry::MetadataDropInPlace
                    | VtblEntry::MetadataSize
                    | VtblEntry::MetadataAlign
                    | VtblEntry::Vacant => None,
                    VtblEntry::TraitVPtr(_) => {
                        // All supertrait items are already covered, so skip them.
                        None
                    }
                    VtblEntry::Method(instance) => {
                        Some(*instance).filter(|instance| should_codegen_locally(tcx, instance))
                    }
                })
                .map(|item| create_fn_mono_item(tcx, item, source));
            output.extend(methods);
        }

        // Also add the destructor.
        visit_drop_use(tcx, impl_ty, false, source, output);
    }
}
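
// For reference: a vtable begins with the three metadata entries matched in
// `create_mono_items_for_vtable_methods` above (`drop_in_place`, size, and
// align), followed by one entry per object-safe method. Only the method
// entries give rise to function mono items, which is why the metadata
// entries are filtered out.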

//=-----------------------------------------------------------------------------
// Root Collection
//=-----------------------------------------------------------------------------

struct RootCollector<'a, 'tcx> {
    tcx: TyCtxt<'tcx>,
    mode: MonoItemCollectionMode,
    output: &'a mut MonoItems<'tcx>,
    entry_fn: Option<(DefId, EntryFnType)>,
}
impl<'v> RootCollector<'_, 'v> {
    fn process_item(&mut self, id: hir::ItemId) {
        match self.tcx.def_kind(id.def_id) {
            DefKind::Enum | DefKind::Struct | DefKind::Union => {
                let item = self.tcx.hir().item(id);
                match item.kind {
                    hir::ItemKind::Enum(_, ref generics)
                    | hir::ItemKind::Struct(_, ref generics)
                    | hir::ItemKind::Union(_, ref generics) => {
                        if generics.params.is_empty() {
                            if self.mode == MonoItemCollectionMode::Eager {
                                debug!(
                                    "RootCollector: ADT drop-glue for {}",
                                    self.tcx.def_path_str(item.def_id.to_def_id())
                                );

                                let ty =
                                    Instance::new(item.def_id.to_def_id(), InternalSubsts::empty())
                                        .ty(self.tcx, ty::ParamEnv::reveal_all());
                                visit_drop_use(self.tcx, ty, true, DUMMY_SP, self.output);
                            }
                        }
                    }
                    _ => bug!(),
                }
            }
            DefKind::GlobalAsm => {
                debug!(
                    "RootCollector: ItemKind::GlobalAsm({})",
                    self.tcx.def_path_str(id.def_id.to_def_id())
                );
                self.output.push(dummy_spanned(MonoItem::GlobalAsm(id)));
            }
            DefKind::Static(..) => {
                debug!(
                    "RootCollector: ItemKind::Static({})",
                    self.tcx.def_path_str(id.def_id.to_def_id())
                );
                self.output.push(dummy_spanned(MonoItem::Static(id.def_id.to_def_id())));
            }
            DefKind::Const => {
                // Const items only generate mono items if they are
                // actually used somewhere. Just declaring them is insufficient.

                // But even just declaring them must collect the items they refer to.
                if let Ok(val) = self.tcx.const_eval_poly(id.def_id.to_def_id()) {
                    collect_const_value(self.tcx, val, &mut self.output);
                }
            }
            DefKind::Impl => {
                if self.mode == MonoItemCollectionMode::Eager {
                    let item = self.tcx.hir().item(id);
                    create_mono_items_for_default_impls(self.tcx, item, self.output);
                }
            }
            DefKind::Fn => {
                self.push_if_root(id.def_id);
            }
            _ => {}
        }
    }

    fn process_impl_item(&mut self, id: hir::ImplItemId) {
        if matches!(self.tcx.def_kind(id.def_id), DefKind::AssocFn) {
            self.push_if_root(id.def_id);
        }
    }

    fn is_root(&self, def_id: LocalDefId) -> bool {
        !item_requires_monomorphization(self.tcx, def_id)
            && match self.mode {
                MonoItemCollectionMode::Eager => true,
                MonoItemCollectionMode::Lazy => {
                    self.entry_fn.and_then(|(id, _)| id.as_local()) == Some(def_id)
                        || self.tcx.is_reachable_non_generic(def_id)
                        || self
                            .tcx
                            .codegen_fn_attrs(def_id)
                            .flags
                            .contains(CodegenFnAttrFlags::RUSTC_STD_INTERNAL_SYMBOL)
                }
            }
    }

    /// If `def_id` represents a root, pushes it onto the list of
    /// outputs. (Note that all roots must be monomorphic.)
    #[instrument(skip(self), level = "debug")]
    fn push_if_root(&mut self, def_id: LocalDefId) {
        if self.is_root(def_id) {
            debug!("RootCollector::push_if_root: found root def_id={:?}", def_id);

            let instance = Instance::mono(self.tcx, def_id.to_def_id());
            self.output.push(create_fn_mono_item(self.tcx, instance, DUMMY_SP));
        }
    }
/// As a special case, when/if we encounter the
/// `main()` function, we also have to generate a
/// monomorphized copy of the start lang item based on
/// the return type of `main`. This is not needed when
/// the user writes their own `start` manually.
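    ///
    /// For example (illustrative only), given
    ///
    /// ```ignore (illustrative)
    /// fn main() -> Result<(), std::io::Error> { Ok(()) }
    /// ```
    ///
    /// the collector must also instantiate the start lang item with
    /// `Result<(), std::io::Error>` as the type argument.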
    fn push_extra_entry_roots(&mut self) {
        let Some((main_def_id, EntryFnType::Main)) = self.entry_fn else {
            return;
        };

        let start_def_id = match self.tcx.lang_items().require(LangItem::Start) {
            Ok(s) => s,
            Err(err) => self.tcx.sess.fatal(&err),
        };

        let main_ret_ty = self.tcx.fn_sig(main_def_id).output();

        // Given that `main()` has no arguments,
        // then its return type cannot have
        // late-bound regions, since late-bound
        // regions must appear in the argument
        // listing.
        let main_ret_ty = self.tcx.normalize_erasing_regions(
            ty::ParamEnv::reveal_all(),
            main_ret_ty.no_bound_vars().unwrap(),
        );

        let start_instance = Instance::resolve(
            self.tcx,
            ty::ParamEnv::reveal_all(),
            start_def_id,
            self.tcx.intern_substs(&[main_ret_ty.into()]),
        )
        .unwrap()
        .unwrap();

        self.output.push(create_fn_mono_item(self.tcx, start_instance, DUMMY_SP));
    }
}

fn item_requires_monomorphization(tcx: TyCtxt<'_>, def_id: LocalDefId) -> bool {
    let generics = tcx.generics_of(def_id);
    generics.requires_monomorphization(tcx)
}

fn create_mono_items_for_default_impls<'tcx>(
    tcx: TyCtxt<'tcx>,
    item: &'tcx hir::Item<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    match item.kind {
        hir::ItemKind::Impl(ref impl_) => {
            for param in impl_.generics.params {
                match param.kind {
                    hir::GenericParamKind::Lifetime { .. } => {}
                    hir::GenericParamKind::Type { .. } | hir::GenericParamKind::Const { .. } => {
                        return;
                    }
                }
            }

            debug!(
                "create_mono_items_for_default_impls(item={})",
                tcx.def_path_str(item.def_id.to_def_id())
            );

            if let Some(trait_ref) = tcx.impl_trait_ref(item.def_id) {
                let param_env = ty::ParamEnv::reveal_all();
                let trait_ref = tcx.normalize_erasing_regions(param_env, trait_ref);
                let overridden_methods = tcx.impl_item_implementor_ids(item.def_id);
                for method in tcx.provided_trait_methods(trait_ref.def_id) {
                    if overridden_methods.contains_key(&method.def_id) {
                        continue;
                    }

                    if tcx.generics_of(method.def_id).own_requires_monomorphization() {
                        continue;
                    }

                    let substs =
                        InternalSubsts::for_item(tcx, method.def_id, |param, _| match param.kind {
                            GenericParamDefKind::Lifetime => tcx.lifetimes.re_erased.into(),
                            GenericParamDefKind::Type { .. }
                            | GenericParamDefKind::Const { .. } => {
                                trait_ref.substs[param.index as usize]
                            }
                        });

                    let instance = ty::Instance::resolve(tcx, param_env, method.def_id, substs)
                        .unwrap()
                        .unwrap();

                    let mono_item = create_fn_mono_item(tcx, instance, DUMMY_SP);
                    if mono_item.node.is_instantiable(tcx) && should_codegen_locally(tcx, &instance)
                    {
                        output.push(mono_item);
                    }
                }
            }
        }
        _ => bug!(),
    }
}

/// Scans the miri alloc in order to find function calls, closures, and drop-glue.
fn collect_miri<'tcx>(tcx: TyCtxt<'tcx>, alloc_id: AllocId, output: &mut MonoItems<'tcx>) {
    match tcx.global_alloc(alloc_id) {
        GlobalAlloc::Static(def_id) => {
            assert!(!tcx.is_thread_local_static(def_id));
            let instance = Instance::mono(tcx, def_id);
            if should_codegen_locally(tcx, &instance) {
                trace!("collecting static {:?}", def_id);
                output.push(dummy_spanned(MonoItem::Static(def_id)));
            }
        }
        GlobalAlloc::Memory(alloc) => {
            trace!("collecting {:?} with {:#?}", alloc_id, alloc);
            for &inner in alloc.inner().relocations().values() {
                rustc_data_structures::stack::ensure_sufficient_stack(|| {
                    collect_miri(tcx, inner, output);
                });
            }
        }
        GlobalAlloc::Function(fn_instance) => {
            if should_codegen_locally(tcx, &fn_instance) {
                trace!("collecting {:?} with {:#?}", alloc_id, fn_instance);
                output.push(create_fn_mono_item(tcx, fn_instance, DUMMY_SP));
            }
        }
    }
}

/// Scans the MIR in order to find function calls, closures, and drop-glue.
#[instrument(skip(tcx, output), level = "debug")]
fn collect_neighbours<'tcx>(
    tcx: TyCtxt<'tcx>,
    instance: Instance<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    let body = tcx.instance_mir(instance.def);
    MirNeighborCollector { tcx, body: &body, output, instance }.visit_body(&body);
}

#[instrument(skip(tcx, output), level = "debug")]
fn collect_const_value<'tcx>(
    tcx: TyCtxt<'tcx>,
    value: ConstValue<'tcx>,
    output: &mut MonoItems<'tcx>,
) {
    match value {
        ConstValue::Scalar(Scalar::Ptr(ptr, _size)) => collect_miri(tcx, ptr.provenance, output),
        ConstValue::Slice { data: alloc, start: _, end: _ } | ConstValue::ByRef { alloc, .. } => {
            for &id in alloc.inner().relocations().values() {
                collect_miri(tcx, id, output);
            }
        }
        _ => {}
    }
}