Auto merge of #36030 - Manishearth:rollup, r=Manishearth

Rollup of 7 pull requests

- Successful merges: #35124, #35877, #35953, #36002, #36004, #36005, #36014
- Failed merges:
This commit is contained in:
bors 2016-08-27 03:07:48 -07:00 committed by GitHub
commit 099b9fdb1a
192 changed files with 1484 additions and 3854 deletions

@@ -66,7 +66,7 @@ ERR_IDX_GEN_MD = $(RPATH_VAR2_T_$(CFG_BUILD)_H_$(CFG_BUILD)) $(ERR_IDX_GEN_EXE)
D := $(S)src/doc
DOC_TARGETS := book nomicon style error-index
DOC_TARGETS := book nomicon error-index
COMPILER_DOC_TARGETS :=
DOC_L10N_TARGETS :=
@@ -209,13 +209,6 @@ doc/nomicon/index.html: $(RUSTBOOK_EXE) $(wildcard $(S)/src/doc/nomicon/*.md) |
$(Q)rm -rf doc/nomicon
$(Q)$(RUSTBOOK) build $(S)src/doc/nomicon doc/nomicon
style: doc/style/index.html
doc/style/index.html: $(RUSTBOOK_EXE) $(wildcard $(S)/src/doc/style/*.md) | doc/
@$(call E, rustbook: $@)
$(Q)rm -rf doc/style
$(Q)$(RUSTBOOK) build $(S)src/doc/style doc/style
error-index: doc/error-index.html
# Metadata used to generate the index is created as a side effect of

@@ -308,10 +308,6 @@ impl Build {
doc::rustbook(self, stage, target.target, "nomicon",
&doc_out);
}
DocStyle { stage } => {
doc::rustbook(self, stage, target.target, "style",
&doc_out);
}
DocStandalone { stage } => {
doc::standalone(self, stage, target.target, &doc_out);
}

@@ -92,7 +92,6 @@ macro_rules! targets {
(doc, Doc { stage: u32 }),
(doc_book, DocBook { stage: u32 }),
(doc_nomicon, DocNomicon { stage: u32 }),
(doc_style, DocStyle { stage: u32 }),
(doc_standalone, DocStandalone { stage: u32 }),
(doc_std, DocStd { stage: u32 }),
(doc_test, DocTest { stage: u32 }),
@@ -366,8 +365,7 @@ impl<'a> Step<'a> {
vec![self.libtest(compiler)]
}
Source::DocBook { stage } |
Source::DocNomicon { stage } |
Source::DocStyle { stage } => {
Source::DocNomicon { stage } => {
vec![self.target(&build.config.build).tool_rustbook(stage)]
}
Source::DocErrorIndex { stage } => {
@@ -382,8 +380,7 @@ impl<'a> Step<'a> {
Source::Doc { stage } => {
let mut deps = vec![
self.doc_book(stage), self.doc_nomicon(stage),
self.doc_style(stage), self.doc_standalone(stage),
self.doc_std(stage),
self.doc_standalone(stage), self.doc_std(stage),
self.doc_error_index(stage),
];

@@ -1,64 +0,0 @@
% Style Guidelines
This document collects the emerging principles, conventions, abstractions, and
best practices for writing Rust code.
Since Rust is evolving at a rapid pace, these guidelines are
preliminary. The hope is that writing them down explicitly will help
drive discussion, consensus and adoption.
Whenever feasible, guidelines provide specific examples from Rust's standard
libraries.
### Guideline statuses
Every guideline has a status:
* **[FIXME]**: Marks places where there is more work to be done. In
some cases, that just means going through the RFC process.
* **[FIXME #NNNNN]**: Like **[FIXME]**, but links to the issue tracker.
* **[RFC #NNNN]**: Marks accepted guidelines, linking to the rust-lang
RFC establishing them.
### Guideline stabilization
One purpose of these guidelines is to reach decisions on a number of
cross-cutting API and stylistic choices. Discussion and development of
the guidelines will happen primarily on https://internals.rust-lang.org/,
using the Guidelines category. Discussion can also occur on the
[guidelines issue tracker](https://github.com/rust-lang/rust-guidelines).
Guidelines that are under development or discussion will be marked with the
status **[FIXME]**, with a link to the issue tracker when appropriate.
Once a concrete guideline is ready to be proposed, it should be filed
as an [FIXME: needs RFC](https://github.com/rust-lang/rfcs). If the RFC is
accepted, the official guidelines will be updated to match, and will
include the tag **[RFC #NNNN]** linking to the RFC document.
### What's in this document
This document is broken into four parts:
* **[Style](style/README.md)** provides a set of rules governing naming conventions,
whitespace, and other stylistic issues.
* **[Guidelines by Rust feature](features/README.md)** places the focus on each of
Rust's features, starting from expressions and working the way out toward
crates, dispensing guidelines relevant to each.
* **Topical guidelines and patterns**. The rest of the document proceeds by
cross-cutting topic, starting with
[Ownership and resources](ownership/README.md).
* **APIs for a changing Rust**
discusses the forward-compatibility hazards, especially those that interact
with the pre-1.0 library stabilization process.
> **[FIXME]** Add cross-references throughout this document to the tutorial,
> reference manual, and other guides.
> **[FIXME]** What are some _non_-goals, _non_-principles, or _anti_-patterns that
> we should document?

@@ -1,50 +0,0 @@
# Summary
* [Style](style/README.md)
* [Whitespace](style/whitespace.md)
* [Comments](style/comments.md)
* [Braces, semicolons, commas](style/braces.md)
* [Naming](style/naming/README.md)
* [Ownership variants](style/naming/ownership.md)
* [Containers/wrappers](style/naming/containers.md)
* [Conversions](style/naming/conversions.md)
* [Iterators](style/naming/iterators.md)
* [Imports](style/imports.md)
* [Organization](style/organization.md)
* [Guidelines by Rust feature](features/README.md)
* [Let binding](features/let.md)
* [Pattern matching](features/match.md)
* [Loops](features/loops.md)
* [Functions and methods](features/functions-and-methods/README.md)
* [Input](features/functions-and-methods/input.md)
* [Output](features/functions-and-methods/output.md)
* [For convenience](features/functions-and-methods/convenience.md)
* [Types](features/types/README.md)
* [Conversions](features/types/conversions.md)
* [The newtype pattern](features/types/newtype.md)
* [Traits](features/traits/README.md)
* [For generics](features/traits/generics.md)
* [For objects](features/traits/objects.md)
* [For overloading](features/traits/overloading.md)
* [For extensions](features/traits/extensions.md)
* [For reuse](features/traits/reuse.md)
* [Common traits](features/traits/common.md)
* [Modules](features/modules.md)
* [Crates](features/crates.md)
* [Ownership and resources](ownership/README.md)
* [Constructors](ownership/constructors.md)
* [Builders](ownership/builders.md)
* [Destructors](ownership/destructors.md)
* [RAII](ownership/raii.md)
* [Cells and smart pointers](ownership/cell-smart.md)
* [Errors](errors/README.md)
* [Signaling](errors/signaling.md)
* [Handling](errors/handling.md)
* [Propagation](errors/propagation.md)
* [Ergonomics](errors/ergonomics.md)
* [Safety and guarantees](safety/README.md)
* [Using unsafe](safety/unsafe.md)
* [Library guarantees](safety/lib-guarantees.md)
* [Testing](testing/README.md)
* [Unit testing](testing/unit.md)
* [FFI, platform-specific code](platform.md)

@@ -1,3 +0,0 @@
% Errors
> **[FIXME]** Add some general text here.

@@ -1,66 +0,0 @@
% Ergonomic error handling
Error propagation with raw `Result`s can require tedious matching and
repackaging. This tedium is largely alleviated by the `try!` macro,
and can be completely removed (in some cases) by the "`Result`-`impl`"
pattern.
### The `try!` macro
Prefer
```rust,ignore
use std::io::{File, Open, Write, IoError};
struct Info {
name: String,
age: i32,
rating: i32
}
fn write_info(info: &Info) -> Result<(), IoError> {
let mut file = File::open_mode(&Path::new("my_best_friends.txt"),
Open, Write);
// Early return on error
try!(file.write_line(&format!("name: {}", info.name)));
try!(file.write_line(&format!("age: {}", info.age)));
try!(file.write_line(&format!("rating: {}", info.rating)));
return Ok(());
}
```
over
```rust,ignore
use std::io::{File, Open, Write, IoError};
struct Info {
name: String,
age: i32,
rating: i32
}
fn write_info(info: &Info) -> Result<(), IoError> {
let mut file = File::open_mode(&Path::new("my_best_friends.txt"),
Open, Write);
// Early return on error
match file.write_line(&format!("name: {}", info.name)) {
Ok(_) => (),
Err(e) => return Err(e)
}
match file.write_line(&format!("age: {}", info.age)) {
Ok(_) => (),
Err(e) => return Err(e)
}
return file.write_line(&format!("rating: {}", info.rating));
}
```
See
[the `result` module documentation](https://doc.rust-lang.org/stable/std/result/index.html#the-try-macro)
for more details.
### The `Result`-`impl` pattern [FIXME]
> **[FIXME]** Document the way that the `io` module uses trait impls
> on `std::io::Result` to painlessly propagate errors.

@@ -1,7 +0,0 @@
% Handling errors
### Use thread isolation to cope with failure. [FIXME]
> **[FIXME]** Explain how to isolate threads and detect thread failure for recovery.
### Consuming `Result` [FIXME]

@@ -1,8 +0,0 @@
% Propagation
> **[FIXME]** We need guidelines on how to layer error information up a stack of
> abstractions.
### Error interoperation [FIXME]
> **[FIXME]** Document the `FromError` infrastructure.

@@ -1,125 +0,0 @@
% Signaling errors [RFC #236]
> The guidelines below were approved by [RFC #236](https://github.com/rust-lang/rfcs/pull/236).
Errors fall into one of three categories:
* Catastrophic errors, e.g. out-of-memory.
* Contract violations, e.g. wrong input encoding, index out of bounds.
* Obstructions, e.g. file not found, parse error.
The basic principle of the convention is that:
* Catastrophic errors and programming errors (bugs) can and should only be
recovered at a *coarse grain*, i.e. a thread boundary.
* Obstructions preventing an operation should be reported at a maximally *fine
grain* -- to the immediate invoker of the operation.
## Catastrophic errors
An error is _catastrophic_ if there is no meaningful way for the current thread to
continue after the error occurs.
Catastrophic errors are _extremely_ rare, especially outside of `libstd`.
**Canonical examples**: out of memory, stack overflow.
### For catastrophic errors, panic
For errors like stack overflow, Rust currently aborts the process, but
could in principle panic, which (in the best case) would allow
reporting and recovery from a supervisory thread.
## Contract violations
An API may define a contract that goes beyond the type checking enforced by the
compiler. For example, slices support an indexing operation, with the contract
that the supplied index must be in bounds.
Contracts can be complex and involve more than a single function invocation. For
example, the `RefCell` type requires that `borrow_mut` not be called until all
existing borrows have been relinquished.
### For contract violations, panic
A contract violation is always a bug, and for bugs we follow the Erlang
philosophy of "let it crash": we assume that software *will* have bugs, and we
design coarse-grained thread boundaries to report, and perhaps recover from,
these bugs.
### Contract design
One subtle aspect of these guidelines is that the contract for a function is
chosen by an API designer -- and so the designer also determines what counts as
a violation.
This RFC does not attempt to give hard-and-fast rules for designing
contracts. However, here are some rough guidelines:
* Prefer expressing contracts through static types whenever possible.
* It *must* be possible to write code that uses the API without violating the
contract.
* Contracts are most justified when violations are *inarguably* bugs -- but this
is surprisingly rare.
* Consider whether the API client could benefit from the contract-checking
logic. The checks may be expensive. Or there may be useful programming
patterns where the client does not want to check inputs beforehand, but would
rather attempt the operation and then find out whether the inputs were invalid.
* When a contract violation is the *only* kind of error a function may encounter
-- i.e., there are no obstructions to its success other than "bad" inputs --
using `Result` or `Option` instead is especially warranted. Clients can then use
`unwrap` to assert that they have passed valid input, or re-use the error
checking done by the API for their own purposes.
* When in doubt, use loose contracts and instead return a `Result` or `Option`.
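As a sketch of the last two guidelines, a loosely-contracted lookup can return an `Option` and let callers assert validity themselves (the `Registry` type below is hypothetical):

```rust
// Hypothetical example: a loose contract returns Option instead of panicking.
struct Registry {
    names: Vec<String>,
}

impl Registry {
    // Loose contract: any index is acceptable; out-of-range yields None.
    fn name_at(&self, idx: usize) -> Option<&str> {
        self.names.get(idx).map(|s| s.as_str())
    }
}

fn main() {
    let r = Registry { names: vec!["a".to_string(), "b".to_string()] };
    // A caller confident in its input asserts validity with unwrap:
    assert_eq!(r.name_at(0).unwrap(), "a");
    // A caller with untrusted input checks instead of crashing:
    assert!(r.name_at(10).is_none());
}
```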
## Obstructions
An operation is *obstructed* if it cannot be completed for some reason, even
though the operation's contract has been satisfied. Obstructed operations may
have (documented!) side effects -- they are not required to roll back after
encountering an obstruction. However, they should leave the data structures in
a "coherent" state (satisfying their invariants, continuing to guarantee safety,
etc.).
Obstructions may involve external conditions (e.g., I/O), or they may involve
aspects of the input that are not covered by the contract.
**Canonical examples**: file not found, parse error.
### For obstructions, use `Result`
The
[`Result<T,E>` type](https://doc.rust-lang.org/stable/std/result/index.html)
represents either a success (yielding `T`) or failure (yielding `E`). By
returning a `Result`, a function allows its clients to discover and react to
obstructions in a fine-grained way.
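A minimal sketch of this fine-grained reporting (the `parse_port` helper is illustrative, built on `str::parse`):

```rust
use std::num::ParseIntError;

// An obstruction (malformed input text) is reported to the immediate
// caller via Result rather than by panicking.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.trim().parse::<u16>()
}

fn main() {
    assert_eq!(parse_port("8080").unwrap(), 8080);
    assert!(parse_port("not a port").is_err());
}
```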
#### What about `Option`?
The `Option` type should not be used for "obstructed" operations; it
should only be used when a `None` return value could be considered a
"successful" execution of the operation.
This is of course a somewhat subjective question, but a good litmus
test is: would a reasonable client ever ignore the result? The
`Result` type provides a lint that ensures the result is actually
inspected, while `Option` does not, and this difference of behavior
can help when deciding between the two types.
Another litmus test: can the operation be understood as asking a
question (possibly with side effects)? Operations like `pop` on a
vector can be viewed as asking for the contents of the last element,
with the side effect of removing it if it exists -- with an `Option`
return value.
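The `pop` litmus test can be seen directly in today's `Vec` API:

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    // pop asks for the last element, removing it if present.
    assert_eq!(v.pop(), Some(3));
    v.clear();
    // An empty vector is a "successful" answer of None, not an error.
    assert_eq!(v.pop(), None);
}
```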
## Do not provide both `Result` and `panic!` variants.
An API should not provide both `Result`-producing and `panic`king versions of an
operation. It should provide just the `Result` version, allowing clients to use
`try!` or `unwrap` instead as needed. This is part of the general pattern of
cutting down on redundant variants by instead using method chaining.
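For example, a single `Result`-returning API such as `str::parse` serves both kinds of clients:

```rust
// One Result-returning API serves both styles; no separate
// panicking variant is needed.
fn main() {
    // A client that wants a panic on failure opts in with unwrap:
    let n: i32 = "42".parse().unwrap();
    assert_eq!(n, 42);
    // A client that wants to handle failure inspects the Result:
    let bad = "forty-two".parse::<i32>();
    assert!(bad.is_err());
}
```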

@@ -1,9 +0,0 @@
% Guidelines by language feature
Rust provides a unique combination of language features, some new and some
old. This section gives guidance on when and how to use Rust's features, and
brings attention to some of the tradeoffs between different features.
Notably missing from this section is an in-depth discussion of Rust's pointer
types (both built-in and in the library). The topic of pointers is discussed at
length in a [separate section on ownership](../ownership/README.md).

@@ -1,6 +0,0 @@
% Crates
> **[FIXME]** What general guidelines should we provide for crate design?
> Possible topics: facades; per-crate preludes (to be imported as globs);
> "lib.rs"

@@ -1,44 +0,0 @@
% Functions and methods
### Prefer methods to functions if there is a clear receiver. **[FIXME: needs RFC]**
Prefer
```rust,ignore
impl Foo {
pub fn frob(&self, w: Widget) { ... }
}
```
over
```rust,ignore
pub fn frob(foo: &Foo, w: Widget) { ... }
```
for any operation that is clearly associated with a particular
type.
Methods have numerous advantages over functions:
* They do not need to be imported or qualified to be used: all you
need is a value of the appropriate type.
* Their invocation performs autoborrowing (including mutable borrows).
* They make it easy to answer the question "what can I do with a value
of type `T`" (especially when using rustdoc).
* They provide `self` notation, which is more concise and often more
clearly conveys ownership distinctions.
> **[FIXME]** Revisit these guidelines with
> [UFCS](https://github.com/nick29581/rfcs/blob/ufcs/0000-ufcs.md) and
> conventions developing around it.
### Guidelines for inherent methods. **[FIXME]**
> **[FIXME]** We need guidelines for when to provide inherent methods on a type,
> versus methods through a trait or functions.
> **NOTE**: Rules for method resolution around inherent methods are in flux,
> which may impact the guidelines.

@@ -1,43 +0,0 @@
% Convenience methods
### Provide small, coherent sets of convenience methods. **[FIXME: needs RFC]**
_Convenience methods_ wrap up existing functionality in a more convenient
way. The work done by a convenience method varies widely:
* _Re-providing functions as methods_. For example, the `std::path::Path` type
provides methods like `stat` on `Path`s that simply invoke the corresponding
function in `std::io::fs`.
* _Skipping through conversions_. For example, the `str` type provides a
`.len()` convenience method which is also expressible as `.as_bytes().len()`.
Sometimes the conversion is more complex: the `str` module also provides
`from_chars`, which encapsulates a simple use of iterators.
* _Encapsulating common arguments_. For example, vectors of `&str`s
provide a `connect` as well as a special case, `concat`, that is expressible
using `connect` with a fixed separator of `""`.
* _Providing more efficient special cases_. The `connect` and `concat` example
also applies here: singling out `concat` as a special case allows for a more
efficient implementation.
Note, however, that the `connect` method actually detects the special case
internally and invokes `concat`. Usually, it is not necessary to add a public
convenience method just for efficiency gains; there should also be a
_conceptual_ reason to add it, e.g. because it is such a common special case.
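In current `std` the separator-joining method is named `join` rather than `connect`, but the relationship sketched above still holds:

```rust
fn main() {
    let parts = vec!["a", "b", "c"];
    // concat is the special case of joining with an empty separator.
    assert_eq!(parts.concat(), parts.join(""));
    assert_eq!(parts.join(", "), "a, b, c");
}
```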
It is tempting to add convenience methods in a one-off, haphazard way as
common use patterns emerge. Avoid this temptation, and instead _design_ small,
coherent sets of convenience methods that are easy to remember:
* _Small_: Avoid combinatorial explosions of convenience methods. For example,
instead of adding `_str` variants of methods that provide a `str` output,
instead ensure that the normal output type of methods is easily convertible to
`str`.
* _Coherent_: Look for small groups of convenience methods that make sense to
include together. For example, the `Path` API mentioned above includes a small
selection of the most common filesystem operations that take a `Path`
argument. If one convenience method strongly suggests the existence of others,
consider adding the whole group.
* _Memorable_: It is not worth saving a few characters of typing if you have to
look up the name of a convenience method every time you use it. Add
convenience methods with names that are obvious and easy to remember, and add
them for the most common or painful use cases.

@@ -1,203 +0,0 @@
% Input to functions and methods
### Let the client decide when to copy and where to place data. [FIXME: needs RFC]
#### Copying:
Prefer
```rust,ignore
fn foo(b: Bar) {
// use b as owned, directly
}
```
over
```rust,ignore
fn foo(b: &Bar) {
let b = b.clone();
// use b as owned after cloning
}
```
If a function requires ownership of a value of unknown type `T`, but does not
otherwise need to make copies, the function should take ownership of the
argument (pass by value `T`) rather than using `.clone()`. That way, the caller
can decide whether to relinquish ownership or to `clone`.
Similarly, the `Copy` trait bound should only be demanded when absolutely
needed, not as a way of signaling that copies should be cheap to make.
#### Placement:
Prefer
```rust,ignore
fn foo(b: Bar) -> Bar { ... }
```
over
```rust,ignore
fn foo(b: Box<Bar>) -> Box<Bar> { ... }
```
for concrete types `Bar` (as opposed to trait objects). This way, the caller can
decide whether to place data on the stack or heap. No overhead is imposed by
letting the caller determine the placement.
### Minimize assumptions about parameters. [FIXME: needs RFC]
The fewer assumptions a function makes about its inputs, the more widely usable
it becomes.
#### Minimizing assumptions through generics:
Prefer
```rust,ignore
fn foo<T: Iterator<Item = i32>>(c: T) { ... }
```
over any of
```rust,ignore
fn foo(c: &[i32]) { ... }
fn foo(c: &Vec<i32>) { ... }
fn foo(c: &SomeOtherCollection<i32>) { ... }
```
if the function only needs to iterate over the data.
More generally, consider using generics to pinpoint the assumptions a function
needs to make about its arguments.
On the other hand, generics can make it more difficult to read and understand a
function's signature. Aim for "natural" parameter types that are neither overly
concrete nor overly abstract. See the discussion on
[traits](../traits/README.md) for more guidance.
#### Minimizing ownership assumptions:
Prefer either of
```rust,ignore
fn foo(b: &Bar) { ... }
fn foo(b: &mut Bar) { ... }
```
over
```rust,ignore
fn foo(b: Bar) { ... }
```
That is, prefer borrowing arguments rather than transferring ownership, unless
ownership is actually needed.
### Prefer compound return types to out-parameters. [FIXME: needs RFC]
Prefer
```rust,ignore
fn foo() -> (Bar, Bar)
```
over
```rust,ignore
fn foo(output: &mut Bar) -> Bar
```
for returning multiple `Bar` values.
Compound return types like tuples and structs are efficiently compiled
and do not require heap allocation. If a function needs to return
multiple values, it should do so via one of these types.
The primary exception: sometimes a function is meant to modify data
that the caller already owns, for example to re-use a buffer:
```rust,ignore
fn read(&mut self, buf: &mut [u8]) -> std::io::Result<usize>
```
(From the [Read trait](https://doc.rust-lang.org/stable/std/io/trait.Read.html#tymethod.read).)
### Consider validating arguments, statically or dynamically. [FIXME: needs RFC]
_Note: this material is closely related to
[library-level guarantees](../../safety/lib-guarantees.md)._
Rust APIs do _not_ generally follow the
[robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): "be
conservative in what you send; be liberal in what you accept".
Instead, Rust code should _enforce_ the validity of input whenever practical.
Enforcement can be achieved through the following mechanisms (listed
in order of preference).
#### Static enforcement:
Choose an argument type that rules out bad inputs.
For example, prefer
```rust,ignore
enum FooMode {
Mode1,
Mode2,
Mode3,
}
fn foo(mode: FooMode) { ... }
```
over
```rust,ignore
fn foo(mode2: bool, mode3: bool) {
assert!(!mode2 || !mode3);
...
}
```
Static enforcement usually comes at little run-time cost: it pushes the
costs to the boundaries. It also catches bugs early, during compilation,
rather than through run-time failures.
On the other hand, some properties are difficult or impossible to
express using types.
#### Dynamic enforcement:
Validate the input as it is processed (or ahead of time, if necessary). Dynamic
checking is often easier to implement than static checking, but has several
downsides:
1. Runtime overhead (unless checking can be done as part of processing the input).
2. Delayed detection of bugs.
3. Introduces failure cases, either via `panic!` or `Result`/`Option` types (see
the [error handling guidelines](../../errors/README.md)), which must then be
dealt with by client code.
#### Dynamic enforcement with `debug_assert!`:
Same as dynamic enforcement, but with the possibility of easily turning off
expensive checks for production builds.
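A sketch of this pattern (the `normalize` function is hypothetical):

```rust
// debug_assert! runs only in debug builds; release builds skip the check.
fn normalize(scores: &mut [f64], max: f64) {
    debug_assert!(max > 0.0, "max must be positive");
    for s in scores.iter_mut() {
        *s /= max;
    }
}

fn main() {
    let mut v = vec![5.0, 10.0];
    normalize(&mut v, 10.0);
    assert_eq!(v, vec![0.5, 1.0]);
}
```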
#### Dynamic enforcement with opt-out:
Same as dynamic enforcement, but adds sibling functions that opt out of the
checking.
The convention is to mark these opt-out functions with a suffix like
`_unchecked` or by placing them in a `raw` submodule.
The unchecked functions can be used judiciously in cases where (1) performance
dictates avoiding checks and (2) the client is otherwise confident that the
inputs are valid.
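A sketch of the naming convention (both functions are hypothetical; note that `std`'s own unchecked siblings, such as `slice::get_unchecked`, are `unsafe fn`):

```rust
// Checked version: validates the index on every call.
fn lookup(table: &[u32], i: usize) -> u32 {
    assert!(i < table.len(), "index out of bounds");
    table[i]
}

// Opt-out sibling: the caller promises the index is valid.
unsafe fn lookup_unchecked(table: &[u32], i: usize) -> u32 {
    *table.get_unchecked(i)
}

fn main() {
    let t = [10, 20, 30];
    assert_eq!(lookup(&t, 1), 20);
    assert_eq!(unsafe { lookup_unchecked(&t, 2) }, 30);
}
```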
> **[FIXME]** Should opt-out functions be marked `unsafe`?

@@ -1,56 +0,0 @@
% Output from functions and methods
### Don't overpromise. [FIXME]
> **[FIXME]** Add discussion of overly-specific return types,
> e.g. returning a compound iterator type rather than hiding it behind
> a use of newtype.
### Let clients choose what to throw away. [FIXME: needs RFC]
#### Return useful intermediate results:
Many functions that answer a question also compute interesting related data. If
this data is potentially of interest to the client, consider exposing it in the
API.
Prefer
```rust,ignore
struct SearchResult {
found: bool, // item in container?
expected_index: usize // what would the item's index be?
}
fn binary_search(&self, k: Key) -> SearchResult
```
or
```rust,ignore
fn binary_search(&self, k: Key) -> (bool, usize)
```
over
```rust,ignore
fn binary_search(&self, k: Key) -> bool
```
#### Yield back ownership:
Prefer
```rust,ignore
fn from_utf8_owned(vv: Vec<u8>) -> Result<String, Vec<u8>>
```
over
```rust,ignore
fn from_utf8_owned(vv: Vec<u8>) -> Option<String>
```
The `from_utf8_owned` function gains ownership of a vector. In the successful
case, the function consumes its input, returning an owned string without
allocating or copying. In the unsuccessful case, however, the function returns
back ownership of the original vector.

@@ -1,103 +0,0 @@
% Let binding
### Always separately bind RAII guards. [FIXME: needs RFC]
Prefer
```rust,ignore
fn use_mutex(m: sync::mutex::Mutex<i32>) {
let guard = m.lock();
do_work(guard);
drop(guard); // unlock the lock
// do other work
}
```
over
```rust,ignore
fn use_mutex(m: sync::mutex::Mutex<i32>) {
do_work(m.lock());
// do other work
}
```
As explained in the [RAII guide](../ownership/raii.md), RAII guards are values
that represent ownership of some resource and whose destructor releases the
resource. Because the lifetimes of guards are significant, they should always be
explicitly `let`-bound to make the lifetime clear. Consider using an explicit
`drop` to release the resource early.
### Prefer conditional expressions to deferred initialization. [FIXME: needs RFC]
Prefer
```rust,ignore
let foo = match bar {
Baz => 0,
Quux => 1
};
```
over
```rust,ignore
let foo;
match bar {
Baz => {
foo = 0;
}
Quux => {
foo = 1;
}
}
```
unless the conditions for initialization are too complex to fit into a simple
conditional expression.
### Use type annotations for clarification; prefer explicit generics when inference fails. [FIXME: needs RFC]
Prefer
```rust,ignore
let v = s.iter().map(|x| x * 2)
.collect::<Vec<_>>();
```
over
```rust,ignore
let v: Vec<_> = s.iter().map(|x| x * 2)
.collect();
```
When the type of a value might be unclear to the _reader_ of the code, consider
explicitly annotating it in a `let`.
On the other hand, when the type is unclear to the _compiler_, prefer to specify
the type by explicit generics instantiation, which is usually more clear.
### Shadowing [FIXME]
> **[FIXME]** Repeatedly shadowing a binding is somewhat common in Rust code. We
> need to articulate a guideline on when it is appropriate/useful and when not.
### Prefer immutable bindings. [FIXME: needs RFC]
Use `mut` bindings to signal the span during which a value is mutated:
```rust,ignore
let mut v = Vec::new();
// push things onto v
let v = v;
// use v immutably henceforth
```
### Prefer to bind all `struct` or tuple fields. [FIXME: needs RFC]
When consuming a `struct` or tuple via a `let`, bind all of the fields rather
than using `..` to elide the ones you don't need. The benefit is that when
fields are added, the compiler will pinpoint all of the places where that type
of value was consumed, which will often need to be adjusted to take the new
field properly into account.
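For example (the `Config` type is illustrative):

```rust
struct Config {
    host: String,
    port: u16,
}

fn main() {
    let c = Config { host: "localhost".to_string(), port: 8080 };
    // Bind every field; if Config grows a field, this line stops
    // compiling, pinpointing code that may need to account for it.
    let Config { host, port } = c;
    assert_eq!(host, "localhost");
    assert_eq!(port, 8080);
}
```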

@@ -1,13 +0,0 @@
% Loops
### Prefer `for` to `while`. [FIXME: needs RFC]
A `for` loop is preferable to a `while` loop, unless the loop counts in a
non-uniform way (making it difficult to express using `for`).
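For example:

```rust
fn main() {
    let v = [1, 2, 3];
    let mut sum = 0;
    // for expresses uniform iteration directly; no manual index bookkeeping.
    for x in &v {
        sum += x;
    }
    assert_eq!(sum, 6);
}
```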
### Guidelines for `loop`. [FIXME]
> **[FIXME]** When is `loop` recommended? Some possibilities:
> * For optimistic retry algorithms
> * For servers
> * To avoid mutating local variables sometimes needed to fit `while`

@@ -1,26 +0,0 @@
% Pattern matching
### Dereference `match` targets when possible. [FIXME: needs RFC]
Prefer
```rust,ignore
match *foo {
    X(...) => ...
    Y(...) => ...
}
```
over
```rust,ignore
match foo {
    box X(...) => ...
    box Y(...) => ...
}
```
<!-- ### Clearly indicate important scopes. **[FIXME: needs RFC]** -->
<!-- If it is important that the destructor for a value be executed at a specific -->
<!-- time, clearly bind that value using a standalone `let` -->

@@ -1,133 +0,0 @@
% Modules
> **[FIXME]** What general guidelines should we provide for module design?
> We should discuss visibility, nesting, `mod.rs`, and any interesting patterns
> around modules.
### Headers [FIXME: needs RFC]
Organize module headers as follows:
1. [Imports](../style/imports.md).
1. `mod` declarations.
1. `pub mod` declarations.
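A sketch of this ordering (module names are illustrative, so the fragment is not compilable as-is):

```rust,ignore
use std::collections::HashMap; // 1. imports

mod parser;                    // 2. private mod declarations

pub mod ast;                   // 3. public mod declarations
```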
### Avoid `path` directives. [FIXME: needs RFC]
Avoid using `#[path="..."]` directives; make the file system and
module hierarchy match, instead.
### Use the module hierarchy to organize APIs into coherent sections. [FIXME]
> **[FIXME]** Flesh this out with examples; explain what a "coherent
> section" is with examples.
>
> The module hierarchy defines both the public and internal API of your module.
> Breaking related functionality into submodules makes it understandable to both
> users and contributors to the module.
### Place modules in their own file. [FIXME: needs RFC]
> **[FIXME]**
> - "<100 lines" is arbitrary, but it's a clearer recommendation
> than "~1 page" or similar suggestions that vary by screen size, etc.
For all except very short modules (<100 lines) and [tests](../testing/README.md),
place the module `foo` in a separate file, as in:
```rust,ignore
pub mod foo;
// in foo.rs or foo/mod.rs
pub fn bar() { println!("..."); }
/* ... */
```
rather than declaring it inline:
```rust,ignore
pub mod foo {
pub fn bar() { println!("..."); }
/* ... */
}
```
#### Use subdirectories for modules with children. [FIXME: needs RFC]
For modules that themselves have submodules, place the module in a separate
directory (e.g., `bar/mod.rs` for a module `bar`) rather than the same directory.
Note the structure of
[`std::io`](https://doc.rust-lang.org/std/io/). Many of the submodules lack
children, like
[`io::fs`](https://doc.rust-lang.org/std/io/fs/)
and
[`io::stdio`](https://doc.rust-lang.org/std/io/stdio/).
On the other hand,
[`io::net`](https://doc.rust-lang.org/std/io/net/)
contains submodules, so it lives in a separate directory:
```text
io/mod.rs
io/extensions.rs
io/fs.rs
io/net/mod.rs
io/net/addrinfo.rs
io/net/ip.rs
io/net/tcp.rs
io/net/udp.rs
io/net/unix.rs
io/pipe.rs
...
```
While it is possible to define all of `io` within a single directory,
mirroring the module hierarchy in the directory structure makes
submodules of `io::net` easier to find.
### Consider top-level definitions or reexports. [FIXME: needs RFC]
For modules with submodules,
define or [reexport](https://doc.rust-lang.org/std/io/#reexports) commonly used
definitions at the top level:
* Functionality relevant to the module itself or to many of its
children should be defined in `mod.rs`.
* Functionality specific to a submodule should live in that
submodule. Reexport at the top level for the most important or
common definitions.
For example,
[`IoError`](https://doc.rust-lang.org/std/io/struct.IoError.html)
is defined in `io/mod.rs`, since it pertains to the entirety of `io`,
while
[`TcpStream`](https://doc.rust-lang.org/std/io/net/tcp/struct.TcpStream.html)
is defined in `io/net/tcp.rs` and reexported in the `io` module.
### Use internal module hierarchies for organization. [FIXME: needs RFC]
> **[FIXME]**
> - Referencing internal modules from the standard library is subject to
> becoming outdated.
Internal module hierarchies (i.e., private submodules) may be used to
hide implementation details that are not part of the module's API.
For example, in [`std::io`](https://doc.rust-lang.org/std/io/), `mod mem`
provides implementations for
[`BufReader`](https://doc.rust-lang.org/std/io/struct.BufReader.html)
and
[`BufWriter`](https://doc.rust-lang.org/std/io/struct.BufWriter.html),
but these are re-exported in `io/mod.rs` at the top level of the module:
```rust,ignore
// libstd/io/mod.rs
pub use self::mem::{MemReader, BufReader, MemWriter, BufWriter};
/* ... */
mod mem;
```
This hides the detail that there even exists a `mod mem` in `io`, and
helps keep code organized while offering freedom to change the
implementation.
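A minimal runnable sketch of this pattern, using a hypothetical `buffer` module (illustrative names, not the actual `std::io` source):

```rust
// Hypothetical module: the private submodule `imp` holds the
// implementation, and `pub use` re-exports the public type at the root.
mod buffer {
    pub use self::imp::BufWriter;

    mod imp {
        pub struct BufWriter {
            pub capacity: usize,
        }

        impl BufWriter {
            pub fn new() -> BufWriter {
                BufWriter { capacity: 64 }
            }
        }
    }
}

fn main() {
    // Clients name `buffer::BufWriter` directly; `buffer::imp` is private
    // and invisible, so the internal split can change without breaking them.
    let w = buffer::BufWriter::new();
    assert_eq!(w.capacity, 64);
}
```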
@ -1,22 +0,0 @@
% Traits
Traits are probably Rust's most complex feature, supporting a wide range of use
cases and design tradeoffs. Patterns of trait usage are still emerging.
### Know whether a trait will be used as an object. [FIXME: needs RFC]
Trait objects have some [significant limitations](objects.md): methods
invoked through a trait object cannot use generics, and cannot use
`Self` except in receiver position.
When designing a trait, decide early on whether the trait will be used
as an [object](objects.md) or as a [bound on generics](generics.md);
the tradeoffs are discussed in each of the linked sections.
If a trait is meant to be used as an object, its methods should take
and return trait objects rather than use generics.
### Default methods [FIXME]
> **[FIXME]** Guidelines for default methods.
@ -1,71 +0,0 @@
% Common traits
### Eagerly implement common traits. [FIXME: needs RFC]
Rust's trait system does not allow _orphans_: roughly, every `impl` must live
either in the crate that defines the trait or the implementing
type. Consequently, crates that define new types should eagerly implement all
applicable, common traits.
To see why, consider the following situation:
* Crate `std` defines trait `Debug`.
* Crate `url` defines type `Url`, without implementing `Debug`.
* Crate `webapp` imports from both `std` and `url`.
There is no way for `webapp` to add `Debug` to `url`, since it defines neither.
(Note: the newtype pattern can provide an efficient, but inconvenient
workaround; see [newtype for views](../types/newtype.md))
The most important common traits to implement from `std` are:
```text
Clone, Debug, Hash, Eq
```
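To make the payoff concrete, here is a small runnable sketch (the `Url` type is hypothetical): deriving the common traits up front lets downstream code compare, print, clone, and hash the type immediately.

```rust
use std::collections::HashSet;

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
struct Url {
    host: String,
    port: u16,
}

fn main() {
    let a = Url { host: "example.com".to_string(), port: 80 };
    let b = a.clone();                 // Clone
    assert_eq!(a, b);                  // Eq
    let mut seen = HashSet::new();     // Hash + Eq enable set/map keys
    seen.insert(a.clone());
    assert!(seen.contains(&b));
    assert!(format!("{:?}", a).contains("example.com")); // Debug
}
```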
#### When safe, derive or otherwise implement `Send` and `Share`. [FIXME]
> **[FIXME]**. This guideline is in flux while the "opt-in" nature of
> built-in traits is being decided. See https://github.com/rust-lang/rfcs/pull/127
### Prefer to derive, rather than implement. [FIXME: needs RFC]
Deriving saves implementation effort, makes correctness trivial, and
automatically adapts to upstream changes.
### Do not overload operators in surprising ways. [FIXME: needs RFC]
Operators with built in syntax (`*`, `|`, and so on) can be provided for a type
by implementing the traits in `core::ops`. These operators come with strong
expectations: implement `Mul` only for an operation that bears some resemblance
to multiplication (and shares the expected properties, e.g. associativity), and
so on for the other traits.
### The `Drop` trait
The `Drop` trait is treated specially by the compiler as a way of
associating destructors with types. See
[the section on destructors](../../ownership/destructors.md) for
guidance.
### The `Deref`/`DerefMut` traits
#### Use `Deref`/`DerefMut` only for smart pointers. [FIXME: needs RFC]
The `Deref` traits are used implicitly by the compiler in many circumstances,
and interact with method resolution. The relevant rules are designed
specifically to accommodate smart pointers, and so the traits should be used
only for that purpose.
#### Do not fail within a `Deref`/`DerefMut` implementation. [FIXME: needs RFC]
Because the `Deref` traits are invoked implicitly by the compiler in sometimes
subtle ways, failure during dereferencing can be extremely confusing. If a
dereference might not succeed, make the `Deref` target a `Result` or
`Option` type instead.
#### Avoid inherent methods when implementing `Deref`/`DerefMut` [FIXME: needs RFC]
The rules around method resolution and `Deref` are in flux, but inherent methods
on a type implementing `Deref` are likely to shadow any methods of the referent
with the same name.
@ -1,7 +0,0 @@
% Using traits to add extension methods
> **[FIXME]** Elaborate.
### Consider using default methods rather than extension traits **[FIXME]**
> **[FIXME]** Elaborate.
@ -1,67 +0,0 @@
% Using traits for bounds on generics
The most widespread use of traits is for writing generic functions or types. For
example, the following signature describes a function for consuming any iterator
yielding items of type `A` to produce a collection of `A`:
```rust,ignore
fn from_iter<T: Iterator<A>>(iterator: T) -> SomeCollection<A>
```
Here, the `Iterator` trait specifies an interface that a type `T` must
explicitly implement to be used by this generic function.
**Pros**:
* _Reusability_. Generic functions can be applied to an open-ended collection of
types, while giving a clear contract for the functionality those types must
provide.
* _Static dispatch and optimization_. Each use of a generic function is
specialized ("monomorphized") to the particular types implementing the trait
bounds, which means that (1) invocations of trait methods are static, direct
calls to the implementation and (2) the compiler can inline and otherwise
optimize these calls.
* _Inline layout_. If a `struct` or `enum` type is generic over some type
parameter `T`, values of type `T` will be laid out _inline_ in the
`struct`/`enum`, without any indirection.
* _Inference_. Since the type parameters to generic functions can usually be
inferred, generic functions can help cut down on verbosity in code where
explicit conversions or other method calls would usually be necessary. See the
overloading/implicits use case below.
* _Precise types_. Because generics give a _name_ to the specific type
implementing a trait, it is possible to be precise about places where that
exact type is required or produced. For example, a function
```rust,ignore
fn binary<T: Trait>(x: T, y: T) -> T
```
is guaranteed to consume and produce elements of exactly the same type `T`; it
cannot be invoked with parameters of different types that both implement
`Trait`.
**Cons**:
* _Code size_. Specializing generic functions means that the function body is
duplicated. The increase in code size must be weighed against the performance
benefits of static dispatch.
* _Homogeneous types_. This is the other side of the "precise types" coin: if
`T` is a type parameter, it stands for a _single_ actual type. So for example
a `Vec<T>` contains elements of a single concrete type (and, indeed, the
vector representation is specialized to lay these out in line). Sometimes
heterogeneous collections are useful; see
trait objects below.
* _Signature verbosity_. Heavy use of generics can bloat function signatures.
**[Ed. note]** This problem may be mitigated by some language improvements; stay tuned.
### Favor widespread traits. **[FIXME: needs RFC]**
Generic types are a form of abstraction, which entails a mental indirection: if
a function takes an argument of type `T` bounded by `Trait`, clients must first
think about the concrete types that implement `Trait` to understand how and when
the function is callable.
To keep the cost of abstraction low, favor widely-known traits. Whenever
possible, implement and use traits provided as part of the standard library. Do
not introduce new traits for generics lightly; wait until there is a wide range
of types that can implement the trait.
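The "precise types" point above can be seen in a runnable sketch (the `Debug` bound here is just for illustration): both arguments of `binary` must be exactly the same concrete type `T`, and each instantiation is monomorphized separately.

```rust
use std::fmt::Debug;

// Monomorphized per concrete `T`; both parameters share one type.
fn binary<T: Debug + PartialEq>(x: T, y: T) -> bool {
    x == y
}

fn main() {
    assert!(binary(1, 1));        // T = i32
    assert!(!binary("a", "b"));   // T = &str
    // binary(1, "a");            // error: both arguments must be one `T`
}
```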
@ -1,49 +0,0 @@
% Using trait objects
> **[FIXME]** What are uses of trait objects other than heterogeneous collections?
Trait objects are useful primarily when _heterogeneous_ collections of objects
need to be treated uniformly; it is the closest that Rust comes to
object-oriented programming.
```rust,ignore
struct Frame { ... }
struct Button { ... }
struct Label { ... }
trait Widget { ... }
impl Widget for Frame { ... }
impl Widget for Button { ... }
impl Widget for Label { ... }
impl Frame {
fn new(contents: &[Box<Widget>]) -> Frame {
...
}
}
fn make_gui() -> Box<Widget> {
let b: Box<Widget> = box Button::new(...);
let l: Box<Widget> = box Label::new(...);
box Frame::new([b, l]) as Box<Widget>
}
```
By using trait objects, we can set up a GUI framework with a `Frame` widget that
contains a heterogeneous collection of children widgets.
**Pros**:
* _Heterogeneity_. When you need it, you really need it.
* _Code size_. Unlike generics, trait objects do not generate specialized
(monomorphized) versions of code, which can greatly reduce code size.
**Cons**:
* _No generic methods_. Trait objects cannot currently provide generic methods.
* _Dynamic dispatch and fat pointers_. Trait objects inherently involve
indirection and vtable dispatch, which can carry a performance penalty.
* _No Self_. Except for the method receiver argument, methods on trait objects
cannot use the `Self` type.
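A self-contained variant of the widget example above, small enough to run (it uses the current `Box<dyn Trait>` spelling of trait-object types and a hypothetical `name` method):

```rust
trait Widget {
    fn name(&self) -> &'static str;
}

struct Button;
struct Label;

impl Widget for Button {
    fn name(&self) -> &'static str { "button" }
}
impl Widget for Label {
    fn name(&self) -> &'static str { "label" }
}

fn main() {
    // One vector, two concrete types, dispatched through the vtable.
    let widgets: Vec<Box<dyn Widget>> = vec![Box::new(Button), Box::new(Label)];
    let names: Vec<&str> = widgets.iter().map(|w| w.name()).collect();
    assert_eq!(names, ["button", "label"]);
}
```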
@ -1,7 +0,0 @@
% Using traits for overloading
> **[FIXME]** Elaborate.
> **[FIXME]** We need to decide on guidelines for this use case. There are a few
> patterns emerging in current Rust code, but it's not clear how widespread they
> should be.
@ -1,30 +0,0 @@
% Using traits to share implementations
> **[FIXME]** Elaborate.
> **[FIXME]** We probably want to discourage this, at least when used in a way
> that is publicly exposed.
Traits that provide default method implementations can provide code reuse
across types. For example, a `print` method can be defined across multiple
types as follows:
```rust
use std::fmt::Debug;

trait Printable: Debug {
    // Default method implementation; the `Debug` supertrait is what
    // allows the default body to format `*self` with `{:?}`.
    fn print(&self) { println!("{:?}", *self) }
}
impl Printable for i32 {}
impl Printable for String {
fn print(&self) { println!("{}", *self) }
}
impl Printable for bool {}
impl Printable for f32 {}
```
This allows the implementation of `print` to be shared across types, yet
overridden where needed, as seen in the `impl` for `String`.
@ -1,68 +0,0 @@
% Data types
### Use custom types to imbue meaning; do not abuse `bool`, `Option` or other core types. **[FIXME: needs RFC]**
Prefer
```rust,ignore
let w = Widget::new(Small, Round)
```
over
```rust,ignore
let w = Widget::new(true, false)
```
Core types like `bool`, `u8` and `Option` have many possible interpretations.
Use custom types (whether `enum`s, `struct`s, or tuples) to convey
interpretation and invariants. In the above example,
it is not immediately clear what `true` and `false` are conveying without
looking up the argument names, but `Small` and `Round` are more suggestive.
Using custom types makes it easier to expand the
options later on, for example by adding an `ExtraLarge` variant.
See [the newtype pattern](newtype.md) for a no-cost way to wrap
existing types with a distinguished name.
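A runnable sketch of the `Widget` example, with hypothetical `Size` and `Roundness` enums standing in for the two `bool` parameters:

```rust
#[derive(Debug, PartialEq)]
enum Size { Small, Large }

#[derive(Debug, PartialEq)]
enum Roundness { Round, Square }

struct Widget { size: Size, roundness: Roundness }

impl Widget {
    fn new(size: Size, roundness: Roundness) -> Widget {
        Widget { size: size, roundness: roundness }
    }
}

fn main() {
    // The call site documents itself, unlike `Widget::new(true, false)`.
    let w = Widget::new(Size::Small, Roundness::Round);
    assert_eq!(w.size, Size::Small);
    assert_eq!(w.roundness, Roundness::Round);
}
```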
### Prefer private fields, except for passive data. **[FIXME: needs RFC]**
Making a field public is a strong commitment: it pins down a representation
choice, _and_ prevents the type from providing any validation or maintaining any
invariants on the contents of the field, since clients can mutate it arbitrarily.
Public fields are most appropriate for `struct` types in the C spirit: compound,
passive data structures. Otherwise, consider providing getter/setter methods
and hiding fields instead.
> **[FIXME]** Cross-reference validation for function arguments.
### Use custom `enum`s for alternatives, `bitflags` for C-style flags. **[FIXME: needs RFC]**
Rust supports `enum` types with "custom discriminants":
```rust
enum Color {
    Red = 0xff0000,
    Green = 0x00ff00,
    Blue = 0x0000ff
}
```
Custom discriminants are useful when an `enum` type needs to be serialized to an
integer value compatibly with some other system/language. They support
"typesafe" APIs: by taking a `Color`, rather than an integer, a function is
guaranteed to get well-formed inputs, even if it later views those inputs as
integers.
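For example, the discriminant can be recovered with a plain numeric cast when the value crosses a serialization boundary (a sketch, reusing the `Color` enum above):

```rust
#[derive(Clone, Copy)]
enum Color {
    Red = 0xff0000,
    Green = 0x00ff00,
    Blue = 0x0000ff,
}

fn main() {
    // The custom discriminant is recovered with a cast at the boundary:
    assert_eq!(Color::Green as u32, 0x00ff00);
    assert_eq!(Color::Red as u32 | Color::Blue as u32, 0xff00ff);
}
```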
An `enum` allows an API to request exactly one choice from among many. Sometimes
an API's input is instead the presence or absence of a set of flags. In C code,
this is often done by having each flag correspond to a particular bit, allowing
a single integer to represent, say, 32 or 64 flags. Rust's `std::bitflags`
module provides a typesafe way for doing so.
### Phantom types. [FIXME]
> **[FIXME]** Add some material on phantom types (https://blog.mozilla.org/research/2014/06/23/static-checking-of-units-in-servo/)
@ -1,22 +0,0 @@
% Conversions between types
### Associate conversions with the most specific type involved. **[FIXME: needs RFC]**
When in doubt, prefer `to_`/`as_`/`into_` to `from_`, because they are
more ergonomic to use (and can be chained with other methods).
For many conversions between two types, one of the types is clearly more
"specific": it provides some additional invariant or interpretation that is not
present in the other type. For example, `str` is more specific than `&[u8]`,
since it is a utf-8 encoded sequence of bytes.
Conversions should live with the more specific of the involved types. Thus,
`str` provides both the `as_bytes` method and the `from_utf8` constructor for
converting to and from `&[u8]` values. Besides being intuitive, this convention
avoids polluting concrete types like `&[u8]` with endless conversion methods.
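The `str`/`&[u8]` case can be exercised directly (a small sketch using today's `std::str::from_utf8`, which returns a `Result`):

```rust
use std::str;

fn main() {
    // `str` is the more specific type, so it owns both directions:
    let bytes = "hi".as_bytes();                  // &str -> &[u8], trivial
    assert_eq!(str::from_utf8(bytes), Ok("hi"));  // &[u8] -> &str, checked
    assert!(str::from_utf8(&[0xff]).is_err());    // invalid utf-8 rejected
}
```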
### Explicitly mark lossy conversions, or do not label them as conversions. **[FIXME: needs RFC]**
If a function's name implies that it is a conversion (prefix `from_`, `as_`,
`to_` or `into_`), but the function loses information, add a suffix `_lossy` or
otherwise indicate the lossiness; alternatively, avoid a conversion-style name altogether.
@ -1,69 +0,0 @@
% The newtype pattern
A "newtype" is a tuple or `struct` with a single field. The terminology is borrowed from Haskell.
Newtypes are a zero-cost abstraction: they introduce a new, distinct name for an
existing type, with no runtime overhead when converting between the two types.
### Use newtypes to provide static distinctions. [FIXME: needs RFC]
Newtypes can statically distinguish between different interpretations of an
underlying type.
For example, a `f64` value might be used to represent a quantity in miles or in
kilometers. Using newtypes, we can keep track of the intended interpretation:
```rust,ignore
struct Miles(pub f64);
struct Kilometers(pub f64);
impl Miles {
fn as_kilometers(&self) -> Kilometers { ... }
}
impl Kilometers {
fn as_miles(&self) -> Miles { ... }
}
```
Once we have separated these two types, we can statically ensure that we do not
confuse them. For example, the function
```rust,ignore
fn are_we_there_yet(distance_travelled: Miles) -> bool { ... }
```
cannot accidentally be called with a `Kilometers` value. The compiler will
remind us to perform the conversion, thus averting certain
[catastrophic bugs](http://en.wikipedia.org/wiki/Mars_Climate_Orbiter).
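A runnable version of the sketch above (the conversion factor is real; the 42.195 km threshold is an arbitrary illustration):

```rust
struct Miles(f64);
struct Kilometers(f64);

impl Miles {
    fn as_kilometers(&self) -> Kilometers {
        Kilometers(self.0 * 1.609_344)
    }
}

// Accepts only `Miles`; a `Kilometers` argument is a compile error.
fn are_we_there_yet(distance_travelled: Miles) -> bool {
    distance_travelled.as_kilometers().0 >= 42.195
}

fn main() {
    assert!(are_we_there_yet(Miles(27.0)));
    assert!(!are_we_there_yet(Miles(1.0)));
    // are_we_there_yet(Kilometers(27.0)); // error: expected `Miles`
}
```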
### Use newtypes with private fields for hiding. [FIXME: needs RFC]
A newtype can be used to hide representation details while making precise
promises to the client.
For example, consider a function `my_transform` that returns a compound iterator
type `Enumerate<Skip<vec::MoveItems<T>>>`. We wish to hide this type from the
client, so that the client's view of the return type is roughly `Iterator<(usize,
T)>`. We can do so using the newtype pattern:
```rust,ignore
struct MyTransformResult<T>(Enumerate<Skip<vec::MoveItems<T>>>);
impl<T> Iterator<(usize, T)> for MyTransformResult<T> { ... }
fn my_transform<T, Iter: Iterator<T>>(iter: Iter) -> MyTransformResult<T> {
...
}
```
Aside from simplifying the signature, this use of newtypes allows us to
expose and promise less to the client. The client does not know _how_ the result
iterator is constructed or represented, which means the representation can
change in the future without breaking client code.
> **[FIXME]** Interaction with auto-deref.
### Use newtypes to provide cost-free _views_ of another type. **[FIXME]**
> **[FIXME]** Describe the pattern of using newtypes to provide a new set of
> inherent or trait methods, providing a different perspective on the underlying
> type.
@ -1,3 +0,0 @@
% Ownership and resource management
> **[FIXME]** Add general remarks about ownership/resources here.
@ -1,176 +0,0 @@
% The builder pattern
Some data structures are complicated to construct because their construction requires:
* a large number of inputs
* compound data (e.g. slices)
* optional configuration data
* choice between several flavors
which can easily lead to a large number of distinct constructors with
many arguments each.
If `T` is such a data structure, consider introducing a `T` _builder_:
1. Introduce a separate data type `TBuilder` for incrementally configuring a `T`
value. When possible, choose a better name: e.g. `Command` is the builder for
`Process`.
2. The builder constructor should take as parameters only the data _required_ to
make a `T`.
3. The builder should offer a suite of convenient methods for configuration,
including setting up compound inputs (like slices) incrementally.
These methods should return `self` to allow chaining.
4. The builder should provide one or more "_terminal_" methods for actually building a `T`.
The builder pattern is especially appropriate when building a `T` involves side
effects, such as spawning a thread or launching a process.
In Rust, there are two variants of the builder pattern, differing in the
treatment of ownership, as described below.
### Non-consuming builders (preferred):
In some cases, constructing the final `T` does not require the builder itself to
be consumed. The following variant of
[`std::process::Command`](https://doc.rust-lang.org/stable/std/process/struct.Command.html)
is one example:
```rust,ignore
// NOTE: the actual Command API does not use owned Strings;
// this is a simplified version.
pub struct Command {
program: String,
args: Vec<String>,
cwd: Option<String>,
// etc
}
impl Command {
pub fn new(program: String) -> Command {
Command {
program: program,
args: Vec::new(),
cwd: None,
}
}
/// Add an argument to pass to the program.
pub fn arg<'a>(&'a mut self, arg: String) -> &'a mut Command {
self.args.push(arg);
self
}
/// Add multiple arguments to pass to the program.
pub fn args<'a>(&'a mut self, args: &[String])
-> &'a mut Command {
self.args.push_all(args);
self
}
/// Set the working directory for the child process.
pub fn cwd<'a>(&'a mut self, dir: String) -> &'a mut Command {
self.cwd = Some(dir);
self
}
/// Executes the command as a child process, which is returned.
pub fn spawn(&self) -> std::io::Result<Process> {
...
}
}
```
Note that the `spawn` method, which actually uses the builder configuration to
spawn a process, takes the builder by immutable reference. This is possible
because spawning the process does not require ownership of the configuration
data.
Because the terminal `spawn` method only needs a reference, the configuration
methods take and return a mutable borrow of `self`.
#### The benefit
By using borrows throughout, `Command` can be used conveniently for both
one-liner and more complex constructions:
```rust,ignore
// One-liners
Command::new("/bin/cat").arg("file.txt").spawn();
// Complex configuration
let mut cmd = Command::new("/bin/ls");
cmd.arg(".");
if size_sorted {
cmd.arg("-S");
}
cmd.spawn();
```
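A compilable miniature of the non-consuming variant, with a hypothetical `CommandBuilder` (the real `std::process::Command` differs in detail):

```rust
struct Command {
    program: String,
    args: Vec<String>,
}

struct CommandBuilder {
    program: String,
    args: Vec<String>,
}

impl CommandBuilder {
    fn new(program: &str) -> CommandBuilder {
        CommandBuilder { program: program.to_string(), args: Vec::new() }
    }

    // Configuration takes and returns `&mut self`, so calls chain freely.
    fn arg(&mut self, arg: &str) -> &mut CommandBuilder {
        self.args.push(arg.to_string());
        self
    }

    // The terminal method needs only a shared borrow.
    fn build(&self) -> Command {
        Command { program: self.program.clone(), args: self.args.clone() }
    }
}

fn main() {
    // One-liner:
    let cat = CommandBuilder::new("/bin/cat").arg("file.txt").build();
    assert_eq!(cat.program, "/bin/cat");
    assert_eq!(cat.args, ["file.txt"]);

    // Complex configuration:
    let mut b = CommandBuilder::new("/bin/ls");
    b.arg(".");
    if true {
        b.arg("-S");
    }
    assert_eq!(b.build().args, [".", "-S"]);
}
```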
### Consuming builders:
Sometimes builders must transfer ownership when constructing the final type
`T`, meaning that the terminal methods must take `self` rather than `&self`:
```rust,ignore
// A simplified excerpt from std::thread::Builder
impl ThreadBuilder {
/// Name the thread-to-be. Currently the name is used for identification
/// only in failure messages.
pub fn named(mut self, name: String) -> ThreadBuilder {
self.name = Some(name);
self
}
/// Redirect thread-local stdout.
pub fn stdout(mut self, stdout: Box<Writer + Send>) -> ThreadBuilder {
self.stdout = Some(stdout);
// ^~~~~~ this is owned and cannot be cloned/re-used
self
}
/// Creates and executes a new child thread.
pub fn spawn(self, f: proc():Send) {
// consume self
...
}
}
```
Here, the `stdout` configuration involves passing ownership of a `Writer`,
which must be transferred to the thread upon construction (in `spawn`).
When the terminal methods of the builder require ownership, there is a basic tradeoff:
* If the other builder methods take/return a mutable borrow, the complex
configuration case will work well, but one-liner configuration becomes
_impossible_.
* If the other builder methods take/return an owned `self`, one-liners
continue to work well but complex configuration is less convenient.
Under the rubric of making easy things easy and hard things possible, _all_
builder methods for a consuming builder should take and return an owned
`self`. Then client code works as follows:
```rust,ignore
// One-liners
ThreadBuilder::new().named("my_thread").spawn(proc() { ... });
// Complex configuration
let mut thread = ThreadBuilder::new();
thread = thread.named("my_thread_2"); // must re-assign to retain ownership
if reroute {
thread = thread.stdout(mywriter);
}
thread.spawn(proc() { ... });
```
One-liners work as before, because ownership is threaded through each of the
builder methods until being consumed by `spawn`. Complex configuration,
however, is more verbose: it requires re-assigning the builder at each step.
@ -1,4 +0,0 @@
% Cells and smart pointers
> **[FIXME]** Add guidelines about when to use Cell, RefCell, Rc and
> Arc (and how to use them together).
@ -1,62 +0,0 @@
% Constructors
### Define constructors as static, inherent methods. [FIXME: needs RFC]
In Rust, "constructors" are just a convention:
```rust,ignore
impl<T> Vec<T> {
pub fn new() -> Vec<T> { ... }
}
```
Constructors are static (no `self`) inherent methods for the type that they
construct. Combined with the practice of
[fully importing type names](../style/imports.md), this convention leads to
informative but concise construction:
```rust,ignore
use vec::Vec;
// construct a new vector
let mut v = Vec::new();
```
This convention also applies to conversion constructors (prefix `from` rather
than `new`).
### Provide constructors for passive `struct`s with defaults. [FIXME: needs RFC]
Given the `struct`
```rust,ignore
pub struct Config {
pub color: Color,
pub size: Size,
pub shape: Shape,
}
```
provide a constructor if there are sensible defaults:
```rust,ignore
impl Config {
pub fn new() -> Config {
Config {
color: Brown,
size: Medium,
shape: Square,
}
}
}
```
which then allows clients to concisely override using `struct` update syntax:
```rust,ignore
Config { color: Red, .. Config::new() };
```
See the [guideline for field privacy](../features/types/README.md) for
discussion on when to create such "passive" `struct`s with public
fields.
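Putting the pieces above together into a runnable sketch (single-variant `Size` and `Shape` enums are used here just to keep it self-contained):

```rust
#[derive(Debug, PartialEq)]
enum Color { Brown, Red }

#[derive(Debug, PartialEq)]
enum Size { Medium }

#[derive(Debug, PartialEq)]
enum Shape { Square }

pub struct Config {
    pub color: Color,
    pub size: Size,
    pub shape: Shape,
}

impl Config {
    pub fn new() -> Config {
        Config { color: Color::Brown, size: Size::Medium, shape: Shape::Square }
    }
}

fn main() {
    // Override one field; take the rest from the defaults.
    let c = Config { color: Color::Red, ..Config::new() };
    assert_eq!(c.color, Color::Red);
    assert_eq!(c.size, Size::Medium);
    assert_eq!(c.shape, Shape::Square);
}
```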
@ -1,22 +0,0 @@
% Destructors
Unlike constructors, destructors in Rust have a special status: they are added
by implementing `Drop` for a type, and they are automatically invoked as values
go out of scope.
> **[FIXME]** This section needs to be expanded.
### Destructors should not fail. [FIXME: needs RFC]
Destructors are executed on thread failure, and in that context a failing
destructor causes the program to abort.
Instead of failing in a destructor, provide a separate method for checking for
clean teardown, e.g. a `close` method, that returns a `Result` to signal
problems.
### Destructors should not block. [FIXME: needs RFC]
Similarly, destructors should not invoke blocking operations, which can make
debugging much more difficult. Again, consider providing a separate method for
preparing for an infallible, nonblocking teardown.
@ -1,12 +0,0 @@
% RAII
RAII stands for _Resource Acquisition Is Initialization_.
> **[FIXME]** Explain the RAII pattern and give best practices.
### Whenever possible, tie resource access to guard scopes [FIXME]
> **[FIXME]** Example: Mutex guards guarantee that access to the
> protected resource only happens when the guard is in scope.
> **[FIXME]** Cover the `#[must_use]` attribute.
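A minimal guard sketch (hypothetical `OpenGuard` type): the `Drop` impl runs exactly when the guard leaves scope, so access to the resource is tied to the guard's lifetime.

```rust
use std::cell::Cell;

struct OpenGuard<'a> {
    open: &'a Cell<bool>,
}

impl<'a> OpenGuard<'a> {
    fn new(flag: &'a Cell<bool>) -> OpenGuard<'a> {
        flag.set(true); // acquire on construction
        OpenGuard { open: flag }
    }
}

impl<'a> Drop for OpenGuard<'a> {
    fn drop(&mut self) {
        self.open.set(false); // release on scope exit
    }
}

fn main() {
    let flag = Cell::new(false);
    {
        let _g = OpenGuard::new(&flag);
        assert!(flag.get()); // "resource" is open only while the guard lives
    }
    assert!(!flag.get());    // scope exit released it automatically
}
```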
@ -1,7 +0,0 @@
% FFI and platform-specific code **[FIXME]**
> **[FIXME]** Not sure where this should live.
When writing cross-platform code, group platform-specific code into a
module called `platform`. Avoid `#[cfg]` directives outside this
`platform` module.
@ -1,19 +0,0 @@
% Safety and guarantees
> **[FIXME]** Is there a better phrase than "strong guarantees" that encompasses
> both e.g. memory safety and e.g. data structure invariants?
A _guarantee_ is a property that holds no matter what client code does, unless
the client explicitly opts out:
* Rust guarantees memory safety and data-race freedom, with `unsafe`
blocks as an opt-out mechanism.
* APIs in Rust often provide their own guarantees. For example, `std::str`
guarantees that its underlying buffer is valid utf-8. The `std::path::Path` type
guarantees no interior nulls. Both strings and paths provide `unsafe` mechanisms
for opting out of these guarantees (and thereby avoiding runtime checks).
Thinking about guarantees is an essential part of writing good Rust code. The
rest of this subsection outlines some cross-cutting principles around
guarantees.
@ -1,81 +0,0 @@
% Library-level guarantees
Most libraries rely on internal invariants, e.g. about their data, resource
ownership, or protocol states. In Rust, broken invariants cannot produce
segfaults, but they can still lead to wrong answers.
### Provide library-level guarantees whenever practical. **[FIXME: needs RFC]**
Library-level invariants should be turned into guarantees whenever
practical. They should hold no matter what the client does, modulo
explicit opt-outs. Depending on the kind of invariant, this can be
achieved through a combination of static and dynamic enforcement, as
described below.
#### Static enforcement:
Guaranteeing invariants almost always requires _hiding_,
i.e. preventing the client from directly accessing or modifying
internal data.
For example, the representation of the `str` type is hidden,
which means that any value of type `str` must have been produced
through an API under the control of the `str` module, and these
APIs in turn ensure valid utf-8 encoding.
Rust's type system makes it possible to provide guarantees even while
revealing more of the representation than usual. For example, the
`as_bytes()` method on `&str` gives a _read-only_ view into the
underlying buffer, which cannot be used to violate the utf-8 property.
#### Dynamic enforcement:
Malformed inputs from the client are hazards to library-level
guarantees, so library APIs should validate their input.
For example, `std::str::from_utf8_owned` attempts to convert a `u8`
slice into an owned string, but dynamically checks that the slice is
valid utf-8 and returns `Err` if not.
See
[the discussion on input validation](../features/functions-and-methods/input.md)
for more detail.
### Prefer static enforcement of guarantees. **[FIXME: needs RFC]**
Static enforcement provides two strong benefits over dynamic enforcement:
* Bugs are caught at compile time.
* There is no runtime cost.
Sometimes purely static enforcement is impossible or impractical. In these
cases, a library should check as much as possible statically, but defer to
dynamic checks where needed.
For example, the `std::string` module exports a `String` type with the guarantee
that all instances are valid utf-8:
* Any _consumer_ of a `String` is statically guaranteed utf-8 contents. For example,
the `append` method can push a `&str` onto the end of a `String` without
checking anything dynamically, since the existing `String` and `&str` are
statically guaranteed to be in utf-8.
* Some _producers_ of a `String` must perform dynamic checks. For example, the
`from_utf8` function attempts to convert a `Vec<u8>` into a `String`, but
dynamically checks that the contents are utf-8.
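Both halves of the `String` guarantee can be demonstrated directly with today's `String::from_utf8` API:

```rust
fn main() {
    // Producer: one dynamic check at the boundary.
    let hello = String::from_utf8(vec![104, 105]).unwrap();
    assert_eq!(hello, "hi");
    assert!(String::from_utf8(vec![0xff]).is_err()); // rejected, not UB

    // Consumer: `push_str` needs no re-validation, since both sides are
    // statically known to be valid utf-8 already.
    let mut s = hello;
    s.push_str(" there");
    assert_eq!(s, "hi there");
}
```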
### Provide opt-outs with caution; make them explicit. **[FIXME: needs RFC]**
Providing library-level guarantees sometimes entails inconvenience (for static
checks) or overhead (for dynamic checks). So it is sometimes desirable to allow
clients to sidestep this checking, while promising to use the API in a way that
still provides the guarantee. Such escape hatches should only be introduced when
there is a demonstrated need for them.
It should be trivial for clients to audit their use of the library for
escape hatches.
See
[the discussion on input validation](../features/functions-and-methods/input.md)
for conventions on marking opt-out functions.
@ -1,22 +0,0 @@
% Using `unsafe`
### Unconditionally guarantee safety, or mark API as `unsafe`. **[FIXME: needs RFC]**
Memory safety, type safety, and data race freedom are basic assumptions for all
Rust code.
APIs that use `unsafe` blocks internally thus have two choices:
* They can guarantee safety _unconditionally_ (i.e., regardless of client
behavior or inputs) and be exported as safe code. Any safety violation is then
the library's fault, not the client's fault.
* They can export potentially unsafe functions with the `unsafe` qualifier. In
this case, the documentation should make very clear the conditions under which
safety is guaranteed.
The result is that a client program can never violate safety merely by having a
bug; it must have explicitly opted out by using an `unsafe` block.
Of the two options for using `unsafe`, creating such safe abstractions (the
first option above) is strongly preferred.
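A tiny sketch of such a safe abstraction (the function is hypothetical): the bounds check makes the `unsafe` call unconditionally sound, so the function itself is exported as safe.

```rust
// Safe wrapper over an `unsafe` block: no input can make it misbehave.
fn first_byte(s: &str) -> Option<u8> {
    let bytes = s.as_bytes();
    if bytes.is_empty() {
        None
    } else {
        // In bounds: emptiness was checked just above.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte("abc"), Some(b'a'));
    assert_eq!(first_byte(""), None);
}
```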
@ -1,5 +0,0 @@
% Style
This section gives a set of strict rules for styling Rust code.
> **[FIXME]** General remarks about the style guidelines
@ -1,77 +0,0 @@
% Braces, semicolons, and commas [FIXME: needs RFC]
### Opening braces always go on the same line.
```rust,ignore
fn foo() {
...
}
fn frobnicate(a: Bar, b: Bar,
c: Bar, d: Bar)
-> Bar {
...
}
trait Bar {
fn baz(&self);
}
impl Bar for Baz {
fn baz(&self) {
...
}
}
frob(|x| {
x.transpose()
})
```
### `match` arms get braces, except for single-line expressions.
```rust,ignore
match foo {
bar => baz,
quux => {
do_something();
do_something_else()
}
}
```
### `return` statements get semicolons.
```rust,ignore
fn foo() {
do_something();
if condition() {
return;
}
do_something_else();
}
```
### Trailing commas
> **[FIXME]** We should have a guideline for when to include trailing
> commas in `struct`s, `match`es, function calls, etc.
>
> One possible rule: a trailing comma should be included whenever the
> closing delimiter appears on a separate line:
```rust,ignore
Foo { bar: 0, baz: 1 }
Foo {
bar: 0,
baz: 1,
}
match a_thing {
None => 0,
Some(x) => 1,
}
```
@ -1,122 +0,0 @@
% Comments [RFC #505]
### Avoid block comments.
Use line comments:
```rust
// Wait for the main thread to return, and set the process error code
// appropriately.
```
Instead of:
```rust
/*
* Wait for the main thread to return, and set the process error code
* appropriately.
*/
```
## Doc comments
Doc comments are prefixed by three slashes (`///`) and indicate
documentation that you would like to be included in Rustdoc's output.
They support
[Markdown syntax](https://en.wikipedia.org/wiki/Markdown)
and are the main way of documenting your public APIs.
The supported markdown syntax includes all of the extensions listed in the
[GitHub Flavored Markdown](https://help.github.com/articles/github-flavored-markdown)
documentation, plus superscripts.
### Summary line
The first line in any doc comment should be a single-line short sentence
providing a summary of the code. This line is used as a short summary
description throughout Rustdoc's output, so it's a good idea to keep it
short.
### Sentence structure
All doc comments, including the summary line, should begin with a
capital letter and end with a period, question mark, or exclamation
point. Prefer full sentences to fragments.
The summary line should be written in
[third person singular present indicative form](http://en.wikipedia.org/wiki/English_verbs#Third_person_singular_present).
Basically, this means write "Returns" instead of "Return".
For example:
```rust,ignore
/// Sets up a default runtime configuration, given compiler-supplied arguments.
///
/// This function will block until the entire pool of M:N schedulers has
/// exited. This function also requires a local thread to be available.
///
/// # Arguments
///
/// * `argc` & `argv` - The argument vector. On Unix this information is used
///   by `os::args`.
/// * `main` - The initial procedure to run inside of the M:N scheduling pool.
///   Once this procedure exits, the scheduling pool will begin to shut
///   down. The entire pool (and this function) will only return once
///   all child threads have finished executing.
///
/// # Return value
///
/// The return value is used as the process return code. 0 on success, 101 on
/// error.
```
### Avoid inner doc comments.

Use inner doc comments (`//!`) _only_ to document crates and file-level modules:

```rust,ignore
//! The core library.
//!
//! The core library is a something something...
```

When using `mod` blocks, prefer `///` outside of the block:

```rust
/// This module contains tests
mod test {
    // ...
}
```

over

```rust
mod test {
    //! This module contains tests

    // ...
}
```
### Explain context.
Rust doesn't have special constructors, only functions that return new
instances. These aren't visible in the automatically generated documentation
for a type, so you should specifically link to them:
```rust,ignore
/// An iterator that yields `None` forever after the underlying iterator
/// yields `None` once.
///
/// These can be created through
/// [`iter.fuse()`](trait.Iterator.html#method.fuse).
pub struct Fuse<I> {
    // ...
}
```


@ -1,13 +0,0 @@
## `return` [RFC #968]
Terminate `return` statements with semicolons:
```rust,ignore
fn foo(bar: i32) -> Option<i32> {
    if some_condition() {
        return None;
    }

    ...
}
```


@ -1,50 +0,0 @@
% Imports [FIXME: needs RFC]
The imports of a crate/module should consist of the following
sections, in order, with a blank space between each:
* `extern crate` directives
* external `use` imports
* local `use` imports
* `pub use` imports
For example:
```rust,ignore
// Crates.
extern crate getopts;
extern crate mylib;

// Standard library imports.
use getopts::{optopt, getopts};
use std::os;

// Import from a library that we wrote.
use mylib::webserver;

// Will be reexported when we import this module.
pub use self::types::Webdata;
```
### Avoid `use *`, except in tests.
Glob imports have several downsides:
* They make it harder to tell where names are bound.
* They are forwards-incompatible, since new upstream exports can clash
with existing names.
When writing a [`test` submodule](../testing/README.md), importing `super::*` is appropriate
as a convenience.
### Prefer fully importing types/traits while module-qualifying functions.
For example:
```rust,ignore
use option::Option;
use mem;
let i: isize = mem::transmute(Some(0));
```
> **[FIXME]** Add rationale.


@ -1,115 +0,0 @@
% Naming conventions
### General conventions [RFC #430]
> The guidelines below were approved by [RFC #430](https://github.com/rust-lang/rfcs/pull/430).
In general, Rust tends to use `CamelCase` for "type-level" constructs
(types and traits) and `snake_case` for "value-level" constructs. More
precisely:
| Item | Convention |
| ---- | ---------- |
| Crates | `snake_case` (but prefer single word) |
| Modules | `snake_case` |
| Types | `CamelCase` |
| Traits | `CamelCase` |
| Enum variants | `CamelCase` |
| Functions | `snake_case` |
| Methods | `snake_case` |
| General constructors | `new` or `with_more_details` |
| Conversion constructors | `from_some_other_type` |
| Local variables | `snake_case` |
| Static variables | `SCREAMING_SNAKE_CASE` |
| Constant variables | `SCREAMING_SNAKE_CASE` |
| Type parameters | concise `CamelCase`, usually single uppercase letter: `T` |
| Lifetimes | short, lowercase: `'a` |
<p>
In `CamelCase`, acronyms count as one word: use `Uuid` rather than
`UUID`. In `snake_case`, acronyms are lower-cased: `is_xid_start`.
In `snake_case` or `SCREAMING_SNAKE_CASE`, a "word" should never
consist of a single letter unless it is the last "word". So, we have
`btree_map` rather than `b_tree_map`, but `PI_2` rather than `PI2`.
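A minimal sketch illustrating these conventions in code (all names here are hypothetical, chosen only for illustration):

```rust
// Constants: SCREAMING_SNAKE_CASE.
const MAX_RETRIES: usize = 3;

// Types: CamelCase; the acronym counts as one word (`Http`, not `HTTP`).
struct HttpRequest;

// Functions: snake_case; acronyms lower-cased (as in `is_xid_start`).
fn is_xid_start(c: char) -> bool {
    c.is_alphabetic() || c == '_'
}

// Functions: snake_case.
fn retry_count() -> usize {
    MAX_RETRIES
}
```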
### Referring to types in function/method names [RFC 344]
> The guidelines below were approved by [RFC #344](https://github.com/rust-lang/rfcs/pull/344).
Function names often involve type names, the most common example being conversions
like `as_slice`. If the type has a purely textual name (ignoring parameters), it
is straightforward to convert between type conventions and function conventions:
Type name | Text in methods
--------- | ---------------
`String` | `string`
`Vec<T>` | `vec`
`YourType`| `your_type`
Types that involve notation follow the convention below. There is some
overlap on these rules; apply the most specific applicable rule:
Type name | Text in methods
--------- | ---------------
`&str` | `str`
`&[T]` | `slice`
`&mut [T]`| `mut_slice`
`&[u8]` | `bytes`
`&T` | `ref`
`&mut T` | `mut`
`*const T`| `ptr`
`*mut T` | `mut_ptr`
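A short sketch of how these notation types surface in method names, on a hypothetical wrapper type:

```rust
// Hypothetical wrapper around a String, illustrating the table above.
struct Line { text: String }

impl Line {
    // `&str` appears as `str` in the method name.
    fn as_str(&self) -> &str { &self.text }
    // `&[u8]` appears as `bytes`.
    fn as_bytes(&self) -> &[u8] { self.text.as_bytes() }
    // A purely textual type name (`String`) converts directly.
    fn into_string(self) -> String { self.text }
}
```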
### Avoid redundant prefixes [RFC 356]
> The guidelines below were approved by [RFC #356](https://github.com/rust-lang/rfcs/pull/356).
Names of items within a module should not be prefixed with that module's name:
Prefer
```rust,ignore
mod foo {
    pub struct Error { ... }
}
```

over

```rust,ignore
mod foo {
    pub struct FooError { ... }
}
```
This convention avoids stuttering (like `io::IoError`). Library clients can
rename on import to avoid clashes.
### Getter/setter methods [RFC 344]
> The guidelines below were approved by [RFC #344](https://github.com/rust-lang/rfcs/pull/344).
Some data structures do not wish to provide direct access to their fields, but
instead offer "getter" and "setter" methods for manipulating the field state
(often providing checking or other functionality).
The convention for a field `foo: T` is:
* A method `foo(&self) -> &T` for getting the current value of the field.
* A method `set_foo(&self, val: T)` for setting the field. (The `val` argument
here may take `&T` or some other type, depending on the context.)
Note that this convention is about getters/setters on ordinary data types, *not*
on [builder objects](../../ownership/builders.html).
### Escape hatches [FIXME]
> **[FIXME]** Should we standardize a convention for functions that may break API
> guarantees? e.g. `ToCStr::to_c_str_unchecked`
### Predicates
* Simple boolean predicates should be prefixed with `is_` or another
short question word, e.g., `is_empty`.
* Common exceptions: `lt`, `gt`, and other established predicate names.
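For instance, a hypothetical container type would expose its emptiness check as an `is_`-prefixed predicate:

```rust
struct Queue { items: Vec<i32> }

impl Queue {
    // Boolean predicate: `is_` prefix, delegating to Vec's own `is_empty`.
    fn is_empty(&self) -> bool { self.items.is_empty() }
}
```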


@ -1,69 +0,0 @@
% Common container/wrapper methods [FIXME: needs RFC]
Containers, wrappers, and cells all provide ways to access the data
they enclose. Accessor methods often have variants to access the data
by value, by reference, and by mutable reference.
In general, the `get` family of methods is used to access contained
data without any risk of thread failure; they return `Option` as
appropriate. This name is chosen rather than names like `find` or
`lookup` because it is appropriate for a wider range of container types.
#### Containers
For a container with keys/indexes of type `K` and elements of type `V`:
```rust,ignore
// Look up element without failing
fn get(&self, key: K) -> Option<&V>
fn get_mut(&mut self, key: K) -> Option<&mut V>

// Convenience for .get(key).map(|elt| elt.clone())
fn get_clone(&self, key: K) -> Option<V>

// Lookup element, failing if it is not found:
impl Index<K, V> for Container { ... }
impl IndexMut<K, V> for Container { ... }
```
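The standard library's maps follow exactly this split: `get` returns an `Option` and never panics, while the `Index` impl panics on a missing key.

```rust
use std::collections::HashMap;

fn lookup_demo() -> (Option<i32>, i32) {
    let mut m = HashMap::new();
    m.insert("a", 1);
    let fallible = m.get("a").copied(); // Option<i32>, never panics
    let direct = m["a"];                // Index impl, panics if "a" were absent
    (fallible, direct)
}
```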
#### Wrappers/Cells
Prefer specific conversion functions like `as_bytes` or `into_vec` whenever
possible. Otherwise, use:
```rust,ignore
// Extract contents without failing
fn get(&self) -> &V
fn get_mut(&mut self) -> &mut V
fn unwrap(self) -> V
```
#### Wrappers/Cells around `Copy` data
```rust,ignore
// Extract contents without failing
fn get(&self) -> V
```
#### `Option`-like types
Finally, we have the cases of types like `Option` and `Result`, which
play a special role for failure.
For `Option<V>`:
```rust,ignore
// Extract contents or fail if not available
fn assert(self) -> V
fn expect(self, &str) -> V
```
For `Result<V, E>`:
```rust,ignore
// Extract the contents of Ok variant; fail if Err
fn assert(self) -> V
// Extract the contents of Err variant; fail if Ok
fn assert_err(self) -> E
```
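In today's standard library these methods landed as `unwrap`/`expect` (plus `unwrap_err` for the `Err` variant):

```rust
fn demo() -> (i32, i32, &'static str) {
    let some: Option<i32> = Some(7);
    let ok: Result<i32, &str> = Ok(7);
    let err: Result<i32, &str> = Err("boom");
    // Each of these panics if the value is the "wrong" variant.
    (some.unwrap(), ok.unwrap(), err.unwrap_err())
}
```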


@ -1,32 +0,0 @@
% Conversions [Rust issue #7087]
> The guidelines below were approved by [rust issue #7087](https://github.com/rust-lang/rust/issues/7087).
> **[FIXME]** Should we provide standard traits for conversions? Doing
> so nicely will require
> [trait reform](https://github.com/rust-lang/rfcs/pull/48) to land.
Conversions should be provided as methods, with names prefixed as follows:
| Prefix | Cost | Consumes convertee |
| ------ | ---- | ------------------ |
| `as_` | Free | No |
| `to_` | Expensive | No |
| `into_` | Variable | Yes |
<p>
For example:
* `as_bytes()` gives a `&[u8]` view into a `&str`, which is a no-op.
* `to_owned()` copies a `&str` to a new `String`.
* `into_bytes()` consumes a `String` and yields the underlying
`Vec<u8>`, which is a no-op.
Conversions prefixed `as_` and `into_` typically _decrease abstraction_, either
exposing a view into the underlying representation (`as`) or deconstructing data
into its underlying representation (`into`). Conversions prefixed `to_`, on the
other hand, typically stay at the same level of abstraction but do some work to
change one representation into another.
> **[FIXME]** The distinctions between conversion methods do not work
> so well for `from_` conversion constructors. Is that a problem?
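The three bullet examples above, executed on a concrete value:

```rust
fn conversions() -> (usize, String, Vec<u8>) {
    let s: &str = "abc";
    let bytes_len = s.as_bytes().len();   // as_: free view, borrows
    let owned = s.to_owned();             // to_: expensive copy, borrows
    let raw = owned.clone().into_bytes(); // into_: consumes the String
    (bytes_len, owned, raw)
}
```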


@ -1,32 +0,0 @@
% Iterators
#### Method names [RFC #199]
> The guidelines below were approved by [RFC #199](https://github.com/rust-lang/rfcs/pull/199).
For a container with elements of type `U`, iterator methods should be named:
```rust,ignore
fn iter(&self) -> T // where T implements Iterator<&U>
fn iter_mut(&mut self) -> T // where T implements Iterator<&mut U>
fn into_iter(self) -> T // where T implements Iterator<U>
```
The default iterator variant yields shared references `&U`.
#### Type names [RFC #344]
> The guidelines below were approved by [RFC #344](https://github.com/rust-lang/rfcs/pull/344).
The name of an iterator type should be the same as the method that
produces the iterator.
For example:
* `iter` should yield an `Iter`
* `iter_mut` should yield an `IterMut`
* `into_iter` should yield an `IntoIter`
* `keys` should yield `Keys`
These type names make the most sense when prefixed with their owning module,
e.g. `vec::IntoIter`.
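A hypothetical container can satisfy both conventions by delegating to the slice and vec iterators, which already carry the `Iter`/`IterMut`/`IntoIter` names:

```rust
struct Bag { items: Vec<u32> }

impl Bag {
    // `iter` yields an `Iter` (here, std::slice::Iter).
    fn iter(&self) -> std::slice::Iter<u32> { self.items.iter() }
    // `iter_mut` yields an `IterMut`.
    fn iter_mut(&mut self) -> std::slice::IterMut<u32> { self.items.iter_mut() }
    // `into_iter` yields an `IntoIter`.
    fn into_iter(self) -> std::vec::IntoIter<u32> { self.items.into_iter() }
}
```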


@ -1,34 +0,0 @@
% Ownership variants [RFC #199]
> The guidelines below were approved by [RFC #199](https://github.com/rust-lang/rfcs/pull/199).
Functions often come in multiple variants: immutably borrowed, mutably
borrowed, and owned.
The right default depends on the function in question. Variants should
be marked through suffixes.
#### Immutably borrowed by default
If `foo` uses/produces an immutable borrow by default, use:
* The `_mut` suffix (e.g. `foo_mut`) for the mutably borrowed variant.
* The `_move` suffix (e.g. `foo_move`) for the owned variant.
#### Owned by default
If `foo` uses/produces owned data by default, use:
* The `_ref` suffix (e.g. `foo_ref`) for the immutably borrowed variant.
* The `_mut` suffix (e.g. `foo_mut`) for the mutably borrowed variant.
#### Exceptions
In the case of iterators, the moving variant can also be understood as
an `into` conversion, `into_iter`, and `for x in v.into_iter()` reads
arguably better than `for x in v.iter_move()`, so the convention is
`into_iter`.
For mutably borrowed variants, if the `mut` qualifier is part of a
type name (e.g. `as_mut_slice`), it should appear as it would appear
in the type.
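A sketch of that last rule on a hypothetical buffer type: the `mut` sits where it would sit in the type `&mut [u8]`, giving `as_mut_slice` rather than `as_slice_mut`:

```rust
struct Buffer { data: Vec<u8> }

impl Buffer {
    fn as_slice(&self) -> &[u8] { &self.data }
    fn as_mut_slice(&mut self) -> &mut [u8] { &mut self.data }
}
```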


@ -1,3 +0,0 @@
*
*


@ -1,14 +0,0 @@
% Organization [FIXME: needs RFC]
> **[FIXME]** What else?
### Reexport the most important types at the crate level.
Crates `pub use` the most common types for convenience, so that clients do not
have to remember or write the crate's module hierarchy to use these types.
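A minimal sketch of a crate-level reexport (the module names are hypothetical):

```rust
mod protocol {
    pub mod frame {
        pub struct Frame(pub u32);
    }
}

// At the crate root, so clients write `mycrate::Frame` instead of
// `mycrate::protocol::frame::Frame`.
pub use protocol::frame::Frame;
```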
### Define types and operations together.
Type definitions and the functions/methods that operate on them should be
defined together in a single module, with the type appearing above the
functions/methods.


@ -1,133 +0,0 @@
% Whitespace [FIXME: needs RFC]
* Lines must not exceed 99 characters.
* Use 4 spaces for indentation, _not_ tabs.
* No trailing whitespace at the end of lines or files.
### Spaces
* Use spaces around binary operators, including the equals sign in attributes:
```rust,ignore
#[deprecated = "Use `bar` instead."]
fn foo(a: usize, b: usize) -> usize {
    a + b
}
```
* Use a space after colons and commas:
```rust,ignore
fn foo(a: Bar);
MyStruct { foo: 3, bar: 4 }
foo(bar, baz);
```
* Use a space after the opening and before the closing brace for
single line blocks or `struct` expressions:
```rust,ignore
spawn(proc() { do_something(); })
Point { x: 0.1, y: 0.3 }
```
### Line wrapping
* For multiline function signatures, each new line should align with the
first parameter. Multiple parameters per line are permitted:
```rust,ignore
fn frobnicate(a: Bar, b: Bar,
              c: Bar, d: Bar)
              -> Bar {
    ...
}

fn foo<T: This,
       U: That>(
       a: Bar,
       b: Bar)
       -> Baz {
    ...
}
```
* Multiline function invocations generally follow the same rule as for
signatures. However, if the final argument begins a new block, the
contents of the block may begin on a new line, indented one level:
```rust,ignore
fn foo_bar(a: Bar, b: Bar,
           c: |Bar|) -> Bar {
    ...
}

// Same line is fine:
foo_bar(x, y, |z| { z.transpose(y) });

// Indented body on new line is also fine:
foo_bar(x, y, |z| {
    z.quux();
    z.rotate(x)
})
```
> **[FIXME]** Do we also want to allow the following?
>
> ```rust,ignore
> frobnicate(
>     arg1,
>     arg2,
>     arg3)
> ```
>
> This style could ease the conflict between line length and functions
> with many parameters (or long method chains).
### Matches
> * **[Deprecated]** If you have multiple patterns in a single `match`
> arm, write each pattern on a separate line:
>
> ```rust,ignore
> match foo {
>     bar(_)
>     | baz => quux,
>     x
>     | y
>     | z => {
>         quuux
>     }
> }
> ```
### Alignment
Idiomatic code should not use extra whitespace in the middle of a line
to provide alignment.
```rust,ignore
// Good
struct Foo {
    short: f64,
    really_long: f64,
}

// Bad
struct Bar {
    short:       f64,
    really_long: f64,
}

// Good
let a = 0;
let radius = 7;

// Bad
let b        = 0;
let diameter = 7;
```


@ -1,5 +0,0 @@
% Testing
> **[FIXME]** Add some general remarks about when and how to unit
> test, versus other kinds of testing. What are our expectations for
> Rust's core libraries?


@ -1,30 +0,0 @@
% Unit testing
Unit tests should live in a `tests` submodule at the bottom of the module they
test. Mark the `tests` submodule with `#[cfg(test)]` so it is only compiled when
testing.
The `tests` module should contain:
* Imports needed only for testing.
* Functions marked with `#[test]` striving for full coverage of the parent module's
definitions.
* Auxiliary functions needed for writing the tests.
For example:
```rust,ignore
// Excerpt from std::str

#[cfg(test)]
mod tests {
    #[test]
    fn test_eq() {
        assert!((eq(&"".to_owned(), &"".to_owned())));
        assert!((eq(&"foo".to_owned(), &"foo".to_owned())));
        assert!((!eq(&"foo".to_owned(), &"bar".to_owned())));
    }
}
```
> **[FIXME]** add details about useful macros for testing, e.g. `assert!`


@ -1,5 +0,0 @@
* [Containers and iteration]()
* [The visitor pattern]()
* [Concurrency]()
* [Documentation]()
* [Macros]()


@ -67,7 +67,6 @@ use core::mem;
use core::ops::{CoerceUnsized, Deref, DerefMut};
use core::ops::{BoxPlace, Boxed, InPlace, Place, Placer};
use core::ptr::{self, Unique};
use core::raw::TraitObject;
use core::convert::From;
/// A value that represents the heap. This is the default place that the `box`
@ -428,12 +427,8 @@ impl Box<Any> {
pub fn downcast<T: Any>(self) -> Result<Box<T>, Box<Any>> {
if self.is::<T>() {
unsafe {
// Get the raw representation of the trait object
let raw = Box::into_raw(self);
let to: TraitObject = mem::transmute::<*mut Any, TraitObject>(raw);
// Extract the data pointer
Ok(Box::from_raw(to.data as *mut T))
let raw: *mut Any = Box::into_raw(self);
Ok(Box::from_raw(raw as *mut T))
}
} else {
Err(self)


@ -91,7 +91,7 @@
#![cfg_attr(stage0, feature(unsafe_no_drop_flag))]
#![feature(unsize)]
#![cfg_attr(not(test), feature(fused, raw, fn_traits, placement_new_protocol))]
#![cfg_attr(not(test), feature(fused, fn_traits, placement_new_protocol))]
#![cfg_attr(test, feature(test, box_heap))]
// Allow testing this library


@ -72,8 +72,6 @@
#![stable(feature = "rust1", since = "1.0.0")]
use fmt;
use mem::transmute;
use raw::TraitObject;
use intrinsics;
use marker::Reflect;
@ -199,11 +197,7 @@ impl Any {
pub fn downcast_ref<T: Any>(&self) -> Option<&T> {
if self.is::<T>() {
unsafe {
// Get the raw representation of the trait object
let to: TraitObject = transmute(self);
// Extract the data pointer
Some(&*(to.data as *const T))
Some(&*(self as *const Any as *const T))
}
} else {
None
@ -240,11 +234,7 @@ impl Any {
pub fn downcast_mut<T: Any>(&mut self) -> Option<&mut T> {
if self.is::<T>() {
unsafe {
// Get the raw representation of the trait object
let to: TraitObject = transmute(self);
// Extract the data pointer
Some(&mut *(to.data as *const T as *mut T))
Some(&mut *(self as *mut Any as *mut T))
}
} else {
None


@ -106,7 +106,8 @@ impl<'combine, 'infcx, 'gcx, 'tcx> TypeRelation<'infcx, 'gcx, 'tcx>
}
}
fn regions(&mut self, a: ty::Region, _: ty::Region) -> RelateResult<'tcx, ty::Region> {
fn regions(&mut self, a: &'tcx ty::Region, _: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region> {
Ok(a)
}


@ -329,8 +329,8 @@ impl<'cx, 'gcx, 'tcx> ty::fold::TypeFolder<'gcx, 'tcx> for Generalizer<'cx, 'gcx
}
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
match r {
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
match *r {
// Never make variables for regions bound within the type itself,
// nor for erased regions.
ty::ReLateBound(..) |


@ -79,7 +79,8 @@ impl<'combine, 'infcx, 'gcx, 'tcx> TypeRelation<'infcx, 'gcx, 'tcx>
}
}
fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
fn regions(&mut self, a: &'tcx ty::Region, b: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region> {
debug!("{}.regions({:?}, {:?})",
self.tag(),
a,


@ -99,7 +99,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
pub fn note_and_explain_region(self,
err: &mut DiagnosticBuilder,
prefix: &str,
region: ty::Region,
region: &'tcx ty::Region,
suffix: &str) {
fn item_scope_tag(item: &hir::Item) -> &'static str {
match item.node {
@ -120,7 +120,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
Some(span))
}
let (description, span) = match region {
let (description, span) = match *region {
ty::ReScope(scope) => {
let new_string;
let unknown_scope = || {
@ -405,12 +405,12 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
}
fn free_regions_from_same_fn<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
sub: Region,
sup: Region)
sub: &'tcx Region,
sup: &'tcx Region)
-> Option<FreeRegionsFromSameFn> {
debug!("free_regions_from_same_fn(sub={:?}, sup={:?})", sub, sup);
let (scope_id, fr1, fr2) = match (sub, sup) {
(ReFree(fr1), ReFree(fr2)) => {
(&ReFree(fr1), &ReFree(fr2)) => {
if fr1.scope != fr2.scope {
return None
}
@ -602,7 +602,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
fn report_generic_bound_failure(&self,
origin: SubregionOrigin<'tcx>,
bound_kind: GenericKind<'tcx>,
sub: Region)
sub: &'tcx Region)
{
// FIXME: it would be better to report the first error message
// with the span of the parameter itself, rather than the span
@ -616,7 +616,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
format!("the associated type `{}`", p),
};
let mut err = match sub {
let mut err = match *sub {
ty::ReFree(ty::FreeRegion {bound_region: ty::BrNamed(..), ..}) => {
// Does the required lifetime have a nice name we can print?
let mut err = struct_span_err!(self.tcx.sess,
@ -667,8 +667,8 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
fn report_concrete_failure(&self,
origin: SubregionOrigin<'tcx>,
sub: Region,
sup: Region)
sub: &'tcx Region,
sup: &'tcx Region)
-> DiagnosticBuilder<'tcx> {
match origin {
infer::Subtype(trace) => {
@ -939,9 +939,9 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
fn report_sub_sup_conflict(&self,
var_origin: RegionVariableOrigin,
sub_origin: SubregionOrigin<'tcx>,
sub_region: Region,
sub_region: &'tcx Region,
sup_origin: SubregionOrigin<'tcx>,
sup_region: Region) {
sup_region: &'tcx Region) {
let mut err = self.report_inference_failure(var_origin);
self.tcx.note_and_explain_region(&mut err,


@ -83,8 +83,8 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for TypeFreshener<'a, 'gcx, 'tcx> {
self.infcx.tcx
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
match r {
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
match *r {
ty::ReEarlyBound(..) |
ty::ReLateBound(..) => {
// leave bound regions alone
@ -99,7 +99,7 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for TypeFreshener<'a, 'gcx, 'tcx> {
ty::ReEmpty |
ty::ReErased => {
// replace all free regions with 'erased
ty::ReErased
self.tcx().mk_region(ty::ReErased)
}
}
}


@ -57,7 +57,8 @@ impl<'combine, 'infcx, 'gcx, 'tcx> TypeRelation<'infcx, 'gcx, 'tcx>
lattice::super_lattice_tys(self, a, b)
}
fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
fn regions(&mut self, a: &'tcx ty::Region, b: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region> {
debug!("{}.regions({:?}, {:?})",
self.tag(),
a,


@ -164,7 +164,7 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
.map(|(&skol, &(br, ref regions))| {
let representative =
regions.iter()
.filter(|r| !skol_resolution_map.contains_key(r))
.filter(|&&r| !skol_resolution_map.contains_key(r))
.cloned()
.next()
.unwrap_or_else(|| { // [1]
@ -268,9 +268,9 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
snapshot: &CombinedSnapshot,
debruijn: ty::DebruijnIndex,
new_vars: &[ty::RegionVid],
a_map: &FnvHashMap<ty::BoundRegion, ty::Region>,
r0: ty::Region)
-> ty::Region {
a_map: &FnvHashMap<ty::BoundRegion, &'tcx ty::Region>,
r0: &'tcx ty::Region)
-> &'tcx ty::Region {
// Regions that pre-dated the LUB computation stay as they are.
if !is_var_in_set(new_vars, r0) {
assert!(!r0.is_bound());
@ -301,7 +301,7 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
debug!("generalize_region(r0={:?}): \
replacing with {:?}, tainted={:?}",
r0, *a_br, tainted);
return ty::ReLateBound(debruijn, *a_br);
return infcx.tcx.mk_region(ty::ReLateBound(debruijn, *a_br));
}
}
@ -364,10 +364,12 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
snapshot: &CombinedSnapshot,
debruijn: ty::DebruijnIndex,
new_vars: &[ty::RegionVid],
a_map: &FnvHashMap<ty::BoundRegion, ty::Region>,
a_map: &FnvHashMap<ty::BoundRegion,
&'tcx ty::Region>,
a_vars: &[ty::RegionVid],
b_vars: &[ty::RegionVid],
r0: ty::Region) -> ty::Region {
r0: &'tcx ty::Region)
-> &'tcx ty::Region {
if !is_var_in_set(new_vars, r0) {
assert!(!r0.is_bound());
return r0;
@ -419,7 +421,7 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
if a_r.is_some() && b_r.is_some() && only_new_vars {
// Related to exactly one bound variable from each fn:
return rev_lookup(span, a_map, a_r.unwrap());
return rev_lookup(infcx, span, a_map, a_r.unwrap());
} else if a_r.is_none() && b_r.is_none() {
// Not related to bound variables from either fn:
assert!(!r0.is_bound());
@ -430,13 +432,14 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
}
}
fn rev_lookup(span: Span,
a_map: &FnvHashMap<ty::BoundRegion, ty::Region>,
r: ty::Region) -> ty::Region
fn rev_lookup<'a, 'gcx, 'tcx>(infcx: &InferCtxt<'a, 'gcx, 'tcx>,
span: Span,
a_map: &FnvHashMap<ty::BoundRegion, &'tcx ty::Region>,
r: &'tcx ty::Region) -> &'tcx ty::Region
{
for (a_br, a_r) in a_map {
if *a_r == r {
return ty::ReLateBound(ty::DebruijnIndex::new(1), *a_br);
return infcx.tcx.mk_region(ty::ReLateBound(ty::DebruijnIndex::new(1), *a_br));
}
}
span_bug!(
@ -445,19 +448,21 @@ impl<'a, 'gcx, 'tcx> CombineFields<'a, 'gcx, 'tcx> {
r);
}
fn fresh_bound_variable(infcx: &InferCtxt, debruijn: ty::DebruijnIndex) -> ty::Region {
fn fresh_bound_variable<'a, 'gcx, 'tcx>(infcx: &InferCtxt<'a, 'gcx, 'tcx>,
debruijn: ty::DebruijnIndex)
-> &'tcx ty::Region {
infcx.region_vars.new_bound(debruijn)
}
}
}
fn var_ids<'a, 'gcx, 'tcx>(fields: &CombineFields<'a, 'gcx, 'tcx>,
map: &FnvHashMap<ty::BoundRegion, ty::Region>)
map: &FnvHashMap<ty::BoundRegion, &'tcx ty::Region>)
-> Vec<ty::RegionVid> {
map.iter()
.map(|(_, r)| match *r {
.map(|(_, &r)| match *r {
ty::ReVar(r) => { r }
r => {
_ => {
span_bug!(
fields.trace.origin.span(),
"found non-region-vid: {:?}",
@ -467,8 +472,8 @@ fn var_ids<'a, 'gcx, 'tcx>(fields: &CombineFields<'a, 'gcx, 'tcx>,
.collect()
}
fn is_var_in_set(new_vars: &[ty::RegionVid], r: ty::Region) -> bool {
match r {
fn is_var_in_set(new_vars: &[ty::RegionVid], r: &ty::Region) -> bool {
match *r {
ty::ReVar(ref v) => new_vars.iter().any(|x| x == v),
_ => false
}
@ -479,13 +484,13 @@ fn fold_regions_in<'a, 'gcx, 'tcx, T, F>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
mut fldr: F)
-> T
where T: TypeFoldable<'tcx>,
F: FnMut(ty::Region, ty::DebruijnIndex) -> ty::Region,
F: FnMut(&'tcx ty::Region, ty::DebruijnIndex) -> &'tcx ty::Region,
{
tcx.fold_regions(unbound_value, &mut false, |region, current_depth| {
// we should only be encountering "escaping" late-bound regions here,
// because the ones at the current level should have been replaced
// with fresh variables
assert!(match region {
assert!(match *region {
ty::ReLateBound(..) => false,
_ => true
});
@ -497,9 +502,9 @@ fn fold_regions_in<'a, 'gcx, 'tcx, T, F>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
fn tainted_regions(&self,
snapshot: &CombinedSnapshot,
r: ty::Region,
r: &'tcx ty::Region,
directions: TaintDirections)
-> FnvHashSet<ty::Region> {
-> FnvHashSet<&'tcx ty::Region> {
self.region_vars.tainted(&snapshot.region_vars_snapshot, r, directions)
}
@ -596,7 +601,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
pub fn skolemize_late_bound_regions<T>(&self,
binder: &ty::Binder<T>,
snapshot: &CombinedSnapshot)
-> (T, SkolemizationMap)
-> (T, SkolemizationMap<'tcx>)
where T : TypeFoldable<'tcx>
{
let (result, map) = self.tcx.replace_late_bound_regions(binder, |br| {
@ -619,7 +624,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
pub fn leak_check(&self,
overly_polymorphic: bool,
span: Span,
skol_map: &SkolemizationMap,
skol_map: &SkolemizationMap<'tcx>,
snapshot: &CombinedSnapshot)
-> RelateResult<'tcx, ()>
{
@ -673,7 +678,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
for &tainted_region in &incoming_taints {
// Each skolemized should only be relatable to itself
// or new variables:
match tainted_region {
match *tainted_region {
ty::ReVar(vid) => {
if new_vars.contains(&vid) {
warnings.extend(
@ -742,7 +747,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
/// to the depth of the predicate, in this case 1, so that the final
/// predicate is `for<'a> &'a int : Clone`.
pub fn plug_leaks<T>(&self,
skol_map: SkolemizationMap,
skol_map: SkolemizationMap<'tcx>,
snapshot: &CombinedSnapshot,
value: &T) -> T
where T : TypeFoldable<'tcx>
@ -755,7 +760,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
// region back to the `ty::BoundRegion` that it originally
// represented. Because `leak_check` passed, we know that
// these taint sets are mutually disjoint.
let inv_skol_map: FnvHashMap<ty::Region, ty::BoundRegion> =
let inv_skol_map: FnvHashMap<&'tcx ty::Region, ty::BoundRegion> =
skol_map
.iter()
.flat_map(|(&skol_br, &skol)| {
@ -794,7 +799,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
// (which ought not to escape the snapshot, but we
// don't check that) or itself
assert!(
match r {
match *r {
ty::ReVar(_) => true,
ty::ReSkolemized(_, ref br1) => br == br1,
_ => false,
@ -802,7 +807,8 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
"leak-check would have us replace {:?} with {:?}",
r, br);
ty::ReLateBound(ty::DebruijnIndex::new(current_depth - 1), br.clone())
self.tcx.mk_region(ty::ReLateBound(
ty::DebruijnIndex::new(current_depth - 1), br.clone()))
}
}
});
@ -826,7 +832,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
///
/// Note: popping also occurs implicitly as part of `leak_check`.
pub fn pop_skolemized(&self,
skol_map: SkolemizationMap,
skol_map: SkolemizationMap<'tcx>,
snapshot: &CombinedSnapshot)
{
debug!("pop_skolemized({:?})", skol_map);


@ -57,7 +57,8 @@ impl<'combine, 'infcx, 'gcx, 'tcx> TypeRelation<'infcx, 'gcx, 'tcx>
lattice::super_lattice_tys(self, a, b)
}
fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
fn regions(&mut self, a: &'tcx ty::Region, b: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region> {
debug!("{}.regions({:?}, {:?})",
self.tag(),
a,


@ -177,7 +177,7 @@ pub struct InferCtxt<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
/// A map returned by `skolemize_late_bound_regions()` indicating the skolemized
/// region that each late-bound region was replaced with.
pub type SkolemizationMap = FnvHashMap<ty::BoundRegion, ty::Region>;
pub type SkolemizationMap<'tcx> = FnvHashMap<ty::BoundRegion, &'tcx ty::Region>;
/// Why did we require that the two types be related?
///
@ -1123,8 +1123,8 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
pub fn sub_regions(&self,
origin: SubregionOrigin<'tcx>,
a: ty::Region,
b: ty::Region) {
a: &'tcx ty::Region,
b: &'tcx ty::Region) {
debug!("sub_regions({:?} <: {:?})", a, b);
self.region_vars.make_subregion(origin, a, b);
}
@ -1147,7 +1147,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
pub fn region_outlives_predicate(&self,
span: Span,
predicate: &ty::PolyRegionOutlivesPredicate)
predicate: &ty::PolyRegionOutlivesPredicate<'tcx>)
-> UnitResult<'tcx>
{
self.commit_if_ok(|snapshot| {
@ -1190,8 +1190,9 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
.new_key(None)
}
pub fn next_region_var(&self, origin: RegionVariableOrigin) -> ty::Region {
ty::ReVar(self.region_vars.new_region_var(origin))
pub fn next_region_var(&self, origin: RegionVariableOrigin)
-> &'tcx ty::Region {
self.tcx.mk_region(ty::ReVar(self.region_vars.new_region_var(origin)))
}
/// Create a region inference variable for the given
@ -1199,7 +1200,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
pub fn region_var_for_def(&self,
span: Span,
def: &ty::RegionParameterDef)
-> ty::Region {
-> &'tcx ty::Region {
self.next_region_var(EarlyBoundRegion(span, def.name))
}
@ -1245,7 +1246,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
})
}
pub fn fresh_bound_region(&self, debruijn: ty::DebruijnIndex) -> ty::Region {
pub fn fresh_bound_region(&self, debruijn: ty::DebruijnIndex) -> &'tcx ty::Region {
self.region_vars.new_bound(debruijn)
}
@ -1530,7 +1531,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
span: Span,
lbrct: LateBoundRegionConversionTime,
value: &ty::Binder<T>)
-> (T, FnvHashMap<ty::BoundRegion,ty::Region>)
-> (T, FnvHashMap<ty::BoundRegion, &'tcx ty::Region>)
where T : TypeFoldable<'tcx>
{
self.tcx.replace_late_bound_regions(
@ -1576,8 +1577,8 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
pub fn verify_generic_bound(&self,
origin: SubregionOrigin<'tcx>,
kind: GenericKind<'tcx>,
a: ty::Region,
bound: VerifyBound) {
a: &'tcx ty::Region,
bound: VerifyBound<'tcx>) {
debug!("verify_generic_bound({:?}, {:?} <: {:?})",
kind,
a,
@ -1666,7 +1667,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
self.tcx.region_maps.temporary_scope(rvalue_id)
}
pub fn upvar_capture(&self, upvar_id: ty::UpvarId) -> Option<ty::UpvarCapture> {
pub fn upvar_capture(&self, upvar_id: ty::UpvarId) -> Option<ty::UpvarCapture<'tcx>> {
self.tables.borrow().upvar_capture_map.get(&upvar_id).cloned()
}


@ -123,7 +123,7 @@ pub fn maybe_print_constraints_for<'a, 'gcx, 'tcx>(
struct ConstraintGraph<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
graph_name: String,
map: &'a FnvHashMap<Constraint, SubregionOrigin<'tcx>>,
map: &'a FnvHashMap<Constraint<'tcx>, SubregionOrigin<'tcx>>,
node_ids: FnvHashMap<Node, usize>,
}
@ -135,8 +135,8 @@ enum Node {
// type Edge = Constraint;
#[derive(Clone, PartialEq, Eq, Debug, Copy)]
enum Edge {
Constraint(Constraint),
enum Edge<'tcx> {
Constraint(Constraint<'tcx>),
EnclScope(CodeExtent, CodeExtent),
}
@ -177,7 +177,7 @@ impl<'a, 'gcx, 'tcx> ConstraintGraph<'a, 'gcx, 'tcx> {
impl<'a, 'gcx, 'tcx> dot::Labeller<'a> for ConstraintGraph<'a, 'gcx, 'tcx> {
type Node = Node;
type Edge = Edge;
type Edge = Edge<'tcx>;
fn graph_id(&self) -> dot::Id {
dot::Id::new(&*self.graph_name).unwrap()
}
@ -214,11 +214,11 @@ fn constraint_to_nodes(c: &Constraint) -> (Node, Node) {
Constraint::ConstrainVarSubVar(rv_1, rv_2) =>
(Node::RegionVid(rv_1), Node::RegionVid(rv_2)),
Constraint::ConstrainRegSubVar(r_1, rv_2) =>
(Node::Region(r_1), Node::RegionVid(rv_2)),
(Node::Region(*r_1), Node::RegionVid(rv_2)),
Constraint::ConstrainVarSubReg(rv_1, r_2) =>
(Node::RegionVid(rv_1), Node::Region(r_2)),
(Node::RegionVid(rv_1), Node::Region(*r_2)),
Constraint::ConstrainRegSubReg(r_1, r_2) =>
(Node::Region(r_1), Node::Region(r_2)),
(Node::Region(*r_1), Node::Region(*r_2)),
}
}
@ -234,7 +234,7 @@ fn edge_to_nodes(e: &Edge) -> (Node, Node) {
impl<'a, 'gcx, 'tcx> dot::GraphWalk<'a> for ConstraintGraph<'a, 'gcx, 'tcx> {
type Node = Node;
type Edge = Edge;
type Edge = Edge<'tcx>;
fn nodes(&self) -> dot::Nodes<Node> {
let mut set = FnvHashSet();
for node in self.node_ids.keys() {
@ -243,26 +243,26 @@ impl<'a, 'gcx, 'tcx> dot::GraphWalk<'a> for ConstraintGraph<'a, 'gcx, 'tcx> {
debug!("constraint graph has {} nodes", set.len());
set.into_iter().collect()
}
fn edges(&self) -> dot::Edges<Edge> {
fn edges(&self) -> dot::Edges<Edge<'tcx>> {
debug!("constraint graph has {} edges", self.map.len());
let mut v: Vec<_> = self.map.keys().map(|e| Edge::Constraint(*e)).collect();
self.tcx.region_maps.each_encl_scope(|sub, sup| v.push(Edge::EnclScope(*sub, *sup)));
debug!("region graph has {} edges", v.len());
Cow::Owned(v)
}
fn source(&self, edge: &Edge) -> Node {
fn source(&self, edge: &Edge<'tcx>) -> Node {
let (n1, _) = edge_to_nodes(edge);
debug!("edge {:?} has source {:?}", edge, n1);
n1
}
fn target(&self, edge: &Edge) -> Node {
fn target(&self, edge: &Edge<'tcx>) -> Node {
let (_, n2) = edge_to_nodes(edge);
debug!("edge {:?} has target {:?}", edge, n2);
n2
}
}
pub type ConstraintMap<'tcx> = FnvHashMap<Constraint, SubregionOrigin<'tcx>>;
pub type ConstraintMap<'tcx> = FnvHashMap<Constraint<'tcx>, SubregionOrigin<'tcx>>;
fn dump_region_constraints_to<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
map: &ConstraintMap<'tcx>,


@ -39,22 +39,22 @@ mod graphviz;
// A constraint that influences the inference process.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub enum Constraint {
pub enum Constraint<'tcx> {
// One region variable is subregion of another
ConstrainVarSubVar(RegionVid, RegionVid),
// Concrete region is subregion of region variable
ConstrainRegSubVar(Region, RegionVid),
ConstrainRegSubVar(&'tcx Region, RegionVid),
// Region variable is subregion of concrete region. This does not
// directly affect inference, but instead is checked after
// inference is complete.
ConstrainVarSubReg(RegionVid, Region),
ConstrainVarSubReg(RegionVid, &'tcx Region),
// A constraint where neither side is a variable. This does not
// directly affect inference, but instead is checked after
// inference is complete.
ConstrainRegSubReg(Region, Region),
ConstrainRegSubReg(&'tcx Region, &'tcx Region),
}
// VerifyGenericBound(T, _, R, RS): The parameter type `T` (or
@ -66,8 +66,8 @@ pub enum Constraint {
pub struct Verify<'tcx> {
kind: GenericKind<'tcx>,
origin: SubregionOrigin<'tcx>,
region: Region,
bound: VerifyBound,
region: &'tcx Region,
bound: VerifyBound<'tcx>,
}
#[derive(Copy, Clone, PartialEq, Eq)]
@ -80,36 +80,36 @@ pub enum GenericKind<'tcx> {
// particular region (let's call it `'min`) meets some bound.
// The bound is described by the following grammar:
#[derive(Debug)]
pub enum VerifyBound {
pub enum VerifyBound<'tcx> {
// B = exists {R} --> some 'r in {R} must outlive 'min
//
// Put another way, the subject value is known to outlive all
// regions in {R}, so if any of those outlives 'min, then the
// bound is met.
AnyRegion(Vec<Region>),
AnyRegion(Vec<&'tcx Region>),
// B = forall {R} --> all 'r in {R} must outlive 'min
//
// Put another way, the subject value is known to outlive some
// region in {R}, so if all of those outlives 'min, then the bound
// is met.
AllRegions(Vec<Region>),
AllRegions(Vec<&'tcx Region>),
// B = exists {B} --> 'min must meet some bound b in {B}
AnyBound(Vec<VerifyBound>),
AnyBound(Vec<VerifyBound<'tcx>>),
// B = forall {B} --> 'min must meet all bounds b in {B}
AllBounds(Vec<VerifyBound>),
AllBounds(Vec<VerifyBound<'tcx>>),
}
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub struct TwoRegions {
a: Region,
b: Region,
pub struct TwoRegions<'tcx> {
a: &'tcx Region,
b: &'tcx Region,
}
#[derive(Copy, Clone, PartialEq)]
pub enum UndoLogEntry {
pub enum UndoLogEntry<'tcx> {
/// Pushed when we start a snapshot.
OpenSnapshot,
@ -122,7 +122,7 @@ pub enum UndoLogEntry {
AddVar(RegionVid),
/// We added the given `constraint`
AddConstraint(Constraint),
AddConstraint(Constraint<'tcx>),
/// We added the given `verify`
AddVerify(usize),
@ -131,7 +131,7 @@ pub enum UndoLogEntry {
AddGiven(ty::FreeRegion, ty::RegionVid),
/// We added a GLB/LUB "combination variable"
AddCombination(CombineMapType, TwoRegions),
AddCombination(CombineMapType, TwoRegions<'tcx>),
/// During skolemization, we sometimes purge entries from the undo
/// log in a kind of minisnapshot (unlike other snapshots, this
@ -153,13 +153,13 @@ pub enum RegionResolutionError<'tcx> {
/// `ConcreteFailure(o, a, b)`:
///
/// `o` requires that `a <= b`, but this does not hold
ConcreteFailure(SubregionOrigin<'tcx>, Region, Region),
ConcreteFailure(SubregionOrigin<'tcx>, &'tcx Region, &'tcx Region),
/// `GenericBoundFailure(p, s, a)
///
/// The parameter/associated-type `p` must be known to outlive the lifetime
/// `a` (but none of the known bounds are sufficient).
GenericBoundFailure(SubregionOrigin<'tcx>, GenericKind<'tcx>, Region),
GenericBoundFailure(SubregionOrigin<'tcx>, GenericKind<'tcx>, &'tcx Region),
/// `SubSupConflict(v, sub_origin, sub_r, sup_origin, sup_r)`:
///
@ -168,9 +168,9 @@ pub enum RegionResolutionError<'tcx> {
/// `sub_r <= sup_r` does not hold.
SubSupConflict(RegionVariableOrigin,
SubregionOrigin<'tcx>,
Region,
&'tcx Region,
SubregionOrigin<'tcx>,
Region),
&'tcx Region),
/// For subsets of `ConcreteFailure` and `SubSupConflict`, we can derive
/// more specific errors message by suggesting to the user where they
@ -182,7 +182,7 @@ pub enum RegionResolutionError<'tcx> {
#[derive(Clone, Debug)]
pub enum ProcessedErrorOrigin<'tcx> {
ConcreteFailure(SubregionOrigin<'tcx>, Region, Region),
ConcreteFailure(SubregionOrigin<'tcx>, &'tcx Region, &'tcx Region),
VariableFailure(RegionVariableOrigin),
}
@ -213,7 +213,7 @@ impl SameRegions {
}
}
pub type CombineMap = FnvHashMap<TwoRegions, RegionVid>;
pub type CombineMap<'tcx> = FnvHashMap<TwoRegions<'tcx>, RegionVid>;
pub struct RegionVarBindings<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
@ -222,7 +222,7 @@ pub struct RegionVarBindings<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
// Constraints of the form `A <= B` introduced by the region
// checker. Here at least one of `A` and `B` must be a region
// variable.
constraints: RefCell<FnvHashMap<Constraint, SubregionOrigin<'tcx>>>,
constraints: RefCell<FnvHashMap<Constraint<'tcx>, SubregionOrigin<'tcx>>>,
// A "verify" is something that we need to verify after inference is
// done, but which does not directly affect inference in any way.
@ -250,8 +250,8 @@ pub struct RegionVarBindings<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
// a bit of a hack but seems to work.
givens: RefCell<FnvHashSet<(ty::FreeRegion, ty::RegionVid)>>,
lubs: RefCell<CombineMap>,
glbs: RefCell<CombineMap>,
lubs: RefCell<CombineMap<'tcx>>,
glbs: RefCell<CombineMap<'tcx>>,
skolemization_count: Cell<u32>,
bound_count: Cell<u32>,
@ -264,12 +264,12 @@ pub struct RegionVarBindings<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
// otherwise we end up adding entries for things like the lower
// bound on a variable and so forth, which can never be rolled
// back.
undo_log: RefCell<Vec<UndoLogEntry>>,
undo_log: RefCell<Vec<UndoLogEntry<'tcx>>>,
unification_table: RefCell<UnificationTable<ty::RegionVid>>,
// This contains the results of inference. It begins as an empty
// option and only acquires a value after inference is complete.
values: RefCell<Option<Vec<VarValue>>>,
values: RefCell<Option<Vec<VarValue<'tcx>>>>,
}
pub struct RegionSnapshot {
@ -303,14 +303,14 @@ impl TaintDirections {
}
}
struct TaintSet {
struct TaintSet<'tcx> {
directions: TaintDirections,
regions: FnvHashSet<ty::Region>
regions: FnvHashSet<&'tcx ty::Region>
}
impl TaintSet {
impl<'a, 'gcx, 'tcx> TaintSet<'tcx> {
fn new(directions: TaintDirections,
initial_region: ty::Region)
initial_region: &'tcx ty::Region)
-> Self {
let mut regions = FnvHashSet();
regions.insert(initial_region);
@ -318,8 +318,9 @@ impl TaintSet {
}
fn fixed_point(&mut self,
undo_log: &[UndoLogEntry],
verifys: &[Verify]) {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
undo_log: &[UndoLogEntry<'tcx>],
verifys: &[Verify<'tcx>]) {
let mut prev_len = 0;
while prev_len < self.len() {
debug!("tainted: prev_len = {:?} new_len = {:?}",
@ -330,19 +331,21 @@ impl TaintSet {
for undo_entry in undo_log {
match undo_entry {
&AddConstraint(ConstrainVarSubVar(a, b)) => {
self.add_edge(ReVar(a), ReVar(b));
self.add_edge(tcx.mk_region(ReVar(a)),
tcx.mk_region(ReVar(b)));
}
&AddConstraint(ConstrainRegSubVar(a, b)) => {
self.add_edge(a, ReVar(b));
self.add_edge(a, tcx.mk_region(ReVar(b)));
}
&AddConstraint(ConstrainVarSubReg(a, b)) => {
self.add_edge(ReVar(a), b);
self.add_edge(tcx.mk_region(ReVar(a)), b);
}
&AddConstraint(ConstrainRegSubReg(a, b)) => {
self.add_edge(a, b);
}
&AddGiven(a, b) => {
self.add_edge(ReFree(a), ReVar(b));
self.add_edge(tcx.mk_region(ReFree(a)),
tcx.mk_region(ReVar(b)));
}
&AddVerify(i) => {
verifys[i].bound.for_each_region(&mut |b| {
@ -359,7 +362,7 @@ impl TaintSet {
}
}
fn into_set(self) -> FnvHashSet<ty::Region> {
fn into_set(self) -> FnvHashSet<&'tcx ty::Region> {
self.regions
}
@ -368,8 +371,8 @@ impl TaintSet {
}
fn add_edge(&mut self,
source: ty::Region,
target: ty::Region) {
source: &'tcx ty::Region,
target: &'tcx ty::Region) {
if self.directions.incoming {
if self.regions.contains(&target) {
self.regions.insert(source);
@ -450,7 +453,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
.rollback_to(snapshot.region_snapshot);
}
pub fn rollback_undo_entry(&self, undo_entry: UndoLogEntry) {
pub fn rollback_undo_entry(&self, undo_entry: UndoLogEntry<'tcx>) {
match undo_entry {
OpenSnapshot => {
panic!("Failure to observe stack discipline");
@ -529,13 +532,14 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
/// The `snapshot` argument to this function is not really used;
/// it's just there to make it explicit which snapshot bounds the
/// skolemized region that results. It should always be the top-most snapshot.
pub fn push_skolemized(&self, br: ty::BoundRegion, snapshot: &RegionSnapshot) -> Region {
pub fn push_skolemized(&self, br: ty::BoundRegion, snapshot: &RegionSnapshot)
-> &'tcx Region {
assert!(self.in_snapshot());
assert!(self.undo_log.borrow()[snapshot.length] == OpenSnapshot);
let sc = self.skolemization_count.get();
self.skolemization_count.set(sc + 1);
ReSkolemized(ty::SkolemizedRegionVid { index: sc }, br)
self.tcx.mk_region(ReSkolemized(ty::SkolemizedRegionVid { index: sc }, br))
}
/// Removes all the edges to/from the skolemized regions that are
@ -543,7 +547,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
/// completes to remove all trace of the skolemized regions
/// created in that time.
pub fn pop_skolemized(&self,
skols: &FnvHashSet<ty::Region>,
skols: &FnvHashSet<&'tcx ty::Region>,
snapshot: &RegionSnapshot) {
debug!("pop_skolemized_regions(skols={:?})", skols);
@ -566,7 +570,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
skols.len());
debug_assert! {
skols.iter()
.all(|k| match *k {
.all(|&k| match *k {
ty::ReSkolemized(index, _) =>
index.index >= first_to_pop &&
index.index < last_to_pop,
@ -597,9 +601,9 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
self.skolemization_count.set(snapshot.skolemization_count);
return;
fn kill_constraint(skols: &FnvHashSet<ty::Region>,
undo_entry: &UndoLogEntry)
-> bool {
fn kill_constraint<'tcx>(skols: &FnvHashSet<&'tcx ty::Region>,
undo_entry: &UndoLogEntry<'tcx>)
-> bool {
match undo_entry {
&AddConstraint(ConstrainVarSubVar(_, _)) =>
false,
@ -626,7 +630,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
pub fn new_bound(&self, debruijn: ty::DebruijnIndex) -> Region {
pub fn new_bound(&self, debruijn: ty::DebruijnIndex) -> &'tcx Region {
// Creates a fresh bound variable for use in GLB computations.
// See discussion of GLB computation in the large comment at
// the top of this file for more details.
@ -652,14 +656,14 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
bug!("rollover in RegionInference new_bound()");
}
ReLateBound(debruijn, BrFresh(sc))
self.tcx.mk_region(ReLateBound(debruijn, BrFresh(sc)))
}
fn values_are_none(&self) -> bool {
self.values.borrow().is_none()
}
fn add_constraint(&self, constraint: Constraint, origin: SubregionOrigin<'tcx>) {
fn add_constraint(&self, constraint: Constraint<'tcx>, origin: SubregionOrigin<'tcx>) {
// cannot add constraints once regions are resolved
assert!(self.values_are_none());
@ -704,20 +708,26 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
}
pub fn make_eqregion(&self, origin: SubregionOrigin<'tcx>, sub: Region, sup: Region) {
pub fn make_eqregion(&self,
origin: SubregionOrigin<'tcx>,
sub: &'tcx Region,
sup: &'tcx Region) {
if sub != sup {
// Eventually, it would be nice to add direct support for
// equating regions.
self.make_subregion(origin.clone(), sub, sup);
self.make_subregion(origin, sup, sub);
if let (ty::ReVar(sub), ty::ReVar(sup)) = (sub, sup) {
if let (ty::ReVar(sub), ty::ReVar(sup)) = (*sub, *sup) {
self.unification_table.borrow_mut().union(sub, sup);
}
}
}
pub fn make_subregion(&self, origin: SubregionOrigin<'tcx>, sub: Region, sup: Region) {
pub fn make_subregion(&self,
origin: SubregionOrigin<'tcx>,
sub: &'tcx Region,
sup: &'tcx Region) {
// cannot add constraints once regions are resolved
assert!(self.values_are_none());
@ -727,26 +737,26 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
origin);
match (sub, sup) {
(ReEarlyBound(..), _) |
(ReLateBound(..), _) |
(_, ReEarlyBound(..)) |
(_, ReLateBound(..)) => {
(&ReEarlyBound(..), _) |
(&ReLateBound(..), _) |
(_, &ReEarlyBound(..)) |
(_, &ReLateBound(..)) => {
span_bug!(origin.span(),
"cannot relate bound region: {:?} <= {:?}",
sub,
sup);
}
(_, ReStatic) => {
(_, &ReStatic) => {
// all regions are subregions of static, so we can ignore this
}
(ReVar(sub_id), ReVar(sup_id)) => {
(&ReVar(sub_id), &ReVar(sup_id)) => {
self.add_constraint(ConstrainVarSubVar(sub_id, sup_id), origin);
}
(r, ReVar(sup_id)) => {
self.add_constraint(ConstrainRegSubVar(r, sup_id), origin);
(_, &ReVar(sup_id)) => {
self.add_constraint(ConstrainRegSubVar(sub, sup_id), origin);
}
(ReVar(sub_id), r) => {
self.add_constraint(ConstrainVarSubReg(sub_id, r), origin);
(&ReVar(sub_id), _) => {
self.add_constraint(ConstrainVarSubReg(sub_id, sup), origin);
}
_ => {
self.add_constraint(ConstrainRegSubReg(sub, sup), origin);
@ -758,8 +768,8 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
pub fn verify_generic_bound(&self,
origin: SubregionOrigin<'tcx>,
kind: GenericKind<'tcx>,
sub: Region,
bound: VerifyBound) {
sub: &'tcx Region,
bound: VerifyBound<'tcx>) {
self.add_verify(Verify {
kind: kind,
origin: origin,
@ -768,29 +778,43 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
});
}
pub fn lub_regions(&self, origin: SubregionOrigin<'tcx>, a: Region, b: Region) -> Region {
pub fn lub_regions(&self,
origin: SubregionOrigin<'tcx>,
a: &'tcx Region,
b: &'tcx Region)
-> &'tcx Region {
// cannot add constraints once regions are resolved
assert!(self.values_are_none());
debug!("RegionVarBindings: lub_regions({:?}, {:?})", a, b);
if a == ty::ReStatic || b == ty::ReStatic {
ReStatic // nothing lives longer than static
} else if a == b {
a // LUB(a,a) = a
} else {
self.combine_vars(Lub, a, b, origin.clone(), |this, old_r, new_r| {
this.make_subregion(origin.clone(), old_r, new_r)
})
match (a, b) {
(r @ &ReStatic, _) | (_, r @ &ReStatic) => {
r // nothing lives longer than static
}
_ if a == b => {
a // LUB(a,a) = a
}
_ => {
self.combine_vars(Lub, a, b, origin.clone(), |this, old_r, new_r| {
this.make_subregion(origin.clone(), old_r, new_r)
})
}
}
}
pub fn glb_regions(&self, origin: SubregionOrigin<'tcx>, a: Region, b: Region) -> Region {
pub fn glb_regions(&self,
origin: SubregionOrigin<'tcx>,
a: &'tcx Region,
b: &'tcx Region)
-> &'tcx Region {
// cannot add constraints once regions are resolved
assert!(self.values_are_none());
debug!("RegionVarBindings: glb_regions({:?}, {:?})", a, b);
match (a, b) {
(ReStatic, r) | (r, ReStatic) => {
(&ReStatic, r) | (r, &ReStatic) => {
r // static lives longer than everything else
}
@ -806,7 +830,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
}
pub fn resolve_var(&self, rid: RegionVid) -> ty::Region {
pub fn resolve_var(&self, rid: RegionVid) -> &'tcx ty::Region {
match *self.values.borrow() {
None => {
span_bug!((*self.var_origins.borrow())[rid.index as usize].span(),
@ -814,18 +838,19 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
been computed!")
}
Some(ref values) => {
let r = lookup(values, rid);
let r = lookup(self.tcx, values, rid);
debug!("resolve_var({:?}) = {:?}", rid, r);
r
}
}
}
pub fn opportunistic_resolve_var(&self, rid: RegionVid) -> ty::Region {
ty::ReVar(self.unification_table.borrow_mut().find_value(rid).min_vid)
pub fn opportunistic_resolve_var(&self, rid: RegionVid) -> &'tcx ty::Region {
let vid = self.unification_table.borrow_mut().find_value(rid).min_vid;
self.tcx.mk_region(ty::ReVar(vid))
}
fn combine_map(&self, t: CombineMapType) -> &RefCell<CombineMap> {
fn combine_map(&self, t: CombineMapType) -> &RefCell<CombineMap<'tcx>> {
match t {
Glb => &self.glbs,
Lub => &self.lubs,
@ -834,26 +859,26 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
pub fn combine_vars<F>(&self,
t: CombineMapType,
a: Region,
b: Region,
a: &'tcx Region,
b: &'tcx Region,
origin: SubregionOrigin<'tcx>,
mut relate: F)
-> Region
where F: FnMut(&RegionVarBindings<'a, 'gcx, 'tcx>, Region, Region)
-> &'tcx Region
where F: FnMut(&RegionVarBindings<'a, 'gcx, 'tcx>, &'tcx Region, &'tcx Region)
{
let vars = TwoRegions { a: a, b: b };
if let Some(&c) = self.combine_map(t).borrow().get(&vars) {
return ReVar(c);
return self.tcx.mk_region(ReVar(c));
}
let c = self.new_region_var(MiscVariable(origin.span()));
self.combine_map(t).borrow_mut().insert(vars, c);
if self.in_snapshot() {
self.undo_log.borrow_mut().push(AddCombination(t, vars));
}
relate(self, a, ReVar(c));
relate(self, b, ReVar(c));
relate(self, a, self.tcx.mk_region(ReVar(c)));
relate(self, b, self.tcx.mk_region(ReVar(c)));
debug!("combine_vars() c={:?}", c);
ReVar(c)
self.tcx.mk_region(ReVar(c))
}
pub fn vars_created_since_snapshot(&self, mark: &RegionSnapshot) -> Vec<RegionVid> {
@ -878,9 +903,9 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
/// related to other regions.
pub fn tainted(&self,
mark: &RegionSnapshot,
r0: Region,
r0: &'tcx Region,
directions: TaintDirections)
-> FnvHashSet<ty::Region> {
-> FnvHashSet<&'tcx ty::Region> {
debug!("tainted(mark={:?}, r0={:?}, directions={:?})",
mark, r0, directions);
@ -888,7 +913,8 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
// edges and add any new regions we find to result_set. This
// is not a terribly efficient implementation.
let mut taint_set = TaintSet::new(directions, r0);
taint_set.fixed_point(&self.undo_log.borrow()[mark.length..],
taint_set.fixed_point(self.tcx,
&self.undo_log.borrow()[mark.length..],
&self.verifys.borrow());
debug!("tainted: result={:?}", taint_set.regions);
return taint_set.into_set();
@ -910,26 +936,30 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
errors
}
fn lub_concrete_regions(&self, free_regions: &FreeRegionMap, a: Region, b: Region) -> Region {
fn lub_concrete_regions(&self,
free_regions: &FreeRegionMap,
a: &'tcx Region,
b: &'tcx Region)
-> &'tcx Region {
match (a, b) {
(ReLateBound(..), _) |
(_, ReLateBound(..)) |
(ReEarlyBound(..), _) |
(_, ReEarlyBound(..)) |
(ReErased, _) |
(_, ReErased) => {
(&ReLateBound(..), _) |
(_, &ReLateBound(..)) |
(&ReEarlyBound(..), _) |
(_, &ReEarlyBound(..)) |
(&ReErased, _) |
(_, &ReErased) => {
bug!("cannot relate region: LUB({:?}, {:?})", a, b);
}
(ReStatic, _) | (_, ReStatic) => {
ReStatic // nothing lives longer than static
(r @ &ReStatic, _) | (_, r @ &ReStatic) => {
r // nothing lives longer than static
}
(ReEmpty, r) | (r, ReEmpty) => {
(&ReEmpty, r) | (r, &ReEmpty) => {
r // everything lives longer than empty
}
(ReVar(v_id), _) | (_, ReVar(v_id)) => {
(&ReVar(v_id), _) | (_, &ReVar(v_id)) => {
span_bug!((*self.var_origins.borrow())[v_id.index as usize].span(),
"lub_concrete_regions invoked with non-concrete \
regions: {:?}, {:?}",
@ -937,9 +967,8 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
b);
}
(ReFree(ref fr), ReScope(s_id)) |
(ReScope(s_id), ReFree(ref fr)) => {
let f = ReFree(*fr);
(&ReFree(fr), &ReScope(s_id)) |
(&ReScope(s_id), &ReFree(fr)) => {
// A "free" region can be interpreted as "some region
// at least as big as the block fr.scope_id". So, we can
// reasonably compare free regions and scopes:
@ -949,33 +978,34 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
// if the free region's scope `fr.scope_id` is bigger than
// the scope region `s_id`, then the LUB is the free
// region itself:
f
self.tcx.mk_region(ReFree(fr))
} else {
// otherwise, we don't know what the free region is,
// so we must conservatively say the LUB is static:
ReStatic
self.tcx.mk_region(ReStatic)
}
}
(ReScope(a_id), ReScope(b_id)) => {
(&ReScope(a_id), &ReScope(b_id)) => {
// The region corresponding to an outer block is a
// subtype of the region corresponding to an inner
// block.
ReScope(self.tcx.region_maps.nearest_common_ancestor(a_id, b_id))
self.tcx.mk_region(ReScope(
self.tcx.region_maps.nearest_common_ancestor(a_id, b_id)))
}
(ReFree(a_fr), ReFree(b_fr)) => {
free_regions.lub_free_regions(a_fr, b_fr)
(&ReFree(a_fr), &ReFree(b_fr)) => {
self.tcx.mk_region(free_regions.lub_free_regions(a_fr, b_fr))
}
// For these types, we cannot define any additional
// relationship:
(ReSkolemized(..), _) |
(_, ReSkolemized(..)) => {
(&ReSkolemized(..), _) |
(_, &ReSkolemized(..)) => {
if a == b {
a
} else {
ReStatic
self.tcx.mk_region(ReStatic)
}
}
}
@ -985,24 +1015,24 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
// ______________________________________________________________________
#[derive(Copy, Clone, Debug)]
pub enum VarValue {
Value(Region),
pub enum VarValue<'tcx> {
Value(&'tcx Region),
ErrorValue,
}
struct RegionAndOrigin<'tcx> {
region: Region,
region: &'tcx Region,
origin: SubregionOrigin<'tcx>,
}
type RegionGraph = graph::Graph<(), Constraint>;
type RegionGraph<'tcx> = graph::Graph<(), Constraint<'tcx>>;
impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
fn infer_variable_values(&self,
free_regions: &FreeRegionMap,
errors: &mut Vec<RegionResolutionError<'tcx>>,
subject: ast::NodeId)
-> Vec<VarValue> {
-> Vec<VarValue<'tcx>> {
let mut var_data = self.construct_var_data();
// Dorky hack to cause `dump_constraints` to only get called
@ -1020,9 +1050,9 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
var_data
}
fn construct_var_data(&self) -> Vec<VarValue> {
fn construct_var_data(&self) -> Vec<VarValue<'tcx>> {
(0..self.num_vars() as usize)
.map(|_| Value(ty::ReEmpty))
.map(|_| Value(self.tcx.mk_region(ty::ReEmpty)))
.collect()
}
@ -1059,7 +1089,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
}
fn expansion(&self, free_regions: &FreeRegionMap, var_values: &mut [VarValue]) {
fn expansion(&self, free_regions: &FreeRegionMap, var_values: &mut [VarValue<'tcx>]) {
self.iterate_until_fixed_point("Expansion", |constraint, origin| {
debug!("expansion: constraint={:?} origin={:?}",
constraint, origin);
@ -1089,9 +1119,9 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
fn expand_node(&self,
free_regions: &FreeRegionMap,
a_region: Region,
a_region: &'tcx Region,
b_vid: RegionVid,
b_data: &mut VarValue)
b_data: &mut VarValue<'tcx>)
-> bool {
debug!("expand_node({:?}, {:?} == {:?})",
a_region,
@ -1099,7 +1129,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
b_data);
// Check if this relationship is implied by a given.
match a_region {
match *a_region {
ty::ReFree(fr) => {
if self.givens.borrow().contains(&(fr, b_vid)) {
debug!("given");
@ -1136,7 +1166,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
/// and check that they are satisfied.
fn collect_errors(&self,
free_regions: &FreeRegionMap,
var_data: &mut Vec<VarValue>,
var_data: &mut Vec<VarValue<'tcx>>,
errors: &mut Vec<RegionResolutionError<'tcx>>) {
let constraints = self.constraints.borrow();
for (constraint, origin) in constraints.iter() {
@ -1192,7 +1222,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
for verify in self.verifys.borrow().iter() {
debug!("collect_errors: verify={:?}", verify);
let sub = normalize(var_data, verify.region);
let sub = normalize(self.tcx, var_data, verify.region);
if verify.bound.is_met(self.tcx, free_regions, var_data, sub) {
continue;
}
@ -1213,8 +1243,8 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
/// and create a `RegionResolutionError` for each of them.
fn collect_var_errors(&self,
free_regions: &FreeRegionMap,
var_data: &[VarValue],
graph: &RegionGraph,
var_data: &[VarValue<'tcx>],
graph: &RegionGraph<'tcx>,
errors: &mut Vec<RegionResolutionError<'tcx>>) {
debug!("collect_var_errors");
@ -1271,7 +1301,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
}
fn construct_graph(&self) -> RegionGraph {
fn construct_graph(&self) -> RegionGraph<'tcx> {
let num_vars = self.num_vars();
let constraints = self.constraints.borrow();
@ -1315,7 +1345,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
fn collect_error_for_expanding_node(&self,
free_regions: &FreeRegionMap,
graph: &RegionGraph,
graph: &RegionGraph<'tcx>,
dup_vec: &mut [u32],
node_idx: RegionVid,
errors: &mut Vec<RegionResolutionError<'tcx>>) {
@ -1339,9 +1369,9 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
// the user will more likely get a specific suggestion.
fn free_regions_first(a: &RegionAndOrigin, b: &RegionAndOrigin) -> Ordering {
match (a.region, b.region) {
(ReFree(..), ReFree(..)) => Equal,
(ReFree(..), _) => Less,
(_, ReFree(..)) => Greater,
(&ReFree(..), &ReFree(..)) => Equal,
(&ReFree(..), _) => Less,
(_, &ReFree(..)) => Greater,
(_, _) => Equal,
}
}
@ -1378,7 +1408,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
fn collect_concrete_regions(&self,
graph: &RegionGraph,
graph: &RegionGraph<'tcx>,
orig_node_idx: RegionVid,
dir: Direction,
dup_vec: &mut [u32])
@ -1423,7 +1453,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
fn process_edges<'a, 'gcx, 'tcx>(this: &RegionVarBindings<'a, 'gcx, 'tcx>,
state: &mut WalkState<'tcx>,
graph: &RegionGraph,
graph: &RegionGraph<'tcx>,
source_vid: RegionVid,
dir: Direction) {
debug!("process_edges(source_vid={:?}, dir={:?})", source_vid, dir);
@ -1460,7 +1490,7 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
fn iterate_until_fixed_point<F>(&self, tag: &str, mut body: F)
where F: FnMut(&Constraint, &SubregionOrigin<'tcx>) -> bool
where F: FnMut(&Constraint<'tcx>, &SubregionOrigin<'tcx>) -> bool
{
let mut iteration = 0;
let mut changed = true;
@ -1481,17 +1511,23 @@ impl<'a, 'gcx, 'tcx> RegionVarBindings<'a, 'gcx, 'tcx> {
}
fn normalize(values: &Vec<VarValue>, r: ty::Region) -> ty::Region {
match r {
ty::ReVar(rid) => lookup(values, rid),
fn normalize<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
values: &Vec<VarValue<'tcx>>,
r: &'tcx ty::Region)
-> &'tcx ty::Region {
match *r {
ty::ReVar(rid) => lookup(tcx, values, rid),
_ => r,
}
}
fn lookup(values: &Vec<VarValue>, rid: ty::RegionVid) -> ty::Region {
fn lookup<'a, 'gcx, 'tcx>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
values: &Vec<VarValue<'tcx>>,
rid: ty::RegionVid)
-> &'tcx ty::Region {
match values[rid.index as usize] {
Value(r) => r,
ErrorValue => ReStatic, // Previously reported error.
ErrorValue => tcx.mk_region(ReStatic), // Previously reported error.
}
}
@ -1535,8 +1571,8 @@ impl<'a, 'gcx, 'tcx> GenericKind<'tcx> {
}
}
impl<'a, 'gcx, 'tcx> VerifyBound {
fn for_each_region(&self, f: &mut FnMut(ty::Region)) {
impl<'a, 'gcx, 'tcx> VerifyBound<'tcx> {
fn for_each_region(&self, f: &mut FnMut(&'tcx ty::Region)) {
match self {
&VerifyBound::AnyRegion(ref rs) |
&VerifyBound::AllRegions(ref rs) => for &r in rs {
@ -1552,7 +1588,7 @@ impl<'a, 'gcx, 'tcx> VerifyBound {
pub fn must_hold(&self) -> bool {
match self {
&VerifyBound::AnyRegion(ref bs) => bs.contains(&ty::ReStatic),
&VerifyBound::AnyRegion(ref bs) => bs.contains(&&ty::ReStatic),
&VerifyBound::AllRegions(ref bs) => bs.is_empty(),
&VerifyBound::AnyBound(ref bs) => bs.iter().any(|b| b.must_hold()),
&VerifyBound::AllBounds(ref bs) => bs.iter().all(|b| b.must_hold()),
@ -1562,13 +1598,13 @@ impl<'a, 'gcx, 'tcx> VerifyBound {
pub fn cannot_hold(&self) -> bool {
match self {
&VerifyBound::AnyRegion(ref bs) => bs.is_empty(),
&VerifyBound::AllRegions(ref bs) => bs.contains(&ty::ReEmpty),
&VerifyBound::AllRegions(ref bs) => bs.contains(&&ty::ReEmpty),
&VerifyBound::AnyBound(ref bs) => bs.iter().all(|b| b.cannot_hold()),
&VerifyBound::AllBounds(ref bs) => bs.iter().any(|b| b.cannot_hold()),
}
}
pub fn or(self, vb: VerifyBound) -> VerifyBound {
pub fn or(self, vb: VerifyBound<'tcx>) -> VerifyBound<'tcx> {
if self.must_hold() || vb.cannot_hold() {
self
} else if self.cannot_hold() || vb.must_hold() {
@ -1578,7 +1614,7 @@ impl<'a, 'gcx, 'tcx> VerifyBound {
}
}
pub fn and(self, vb: VerifyBound) -> VerifyBound {
pub fn and(self, vb: VerifyBound<'tcx>) -> VerifyBound<'tcx> {
if self.must_hold() && vb.must_hold() {
self
} else if self.cannot_hold() && vb.cannot_hold() {
@ -1590,18 +1626,18 @@ impl<'a, 'gcx, 'tcx> VerifyBound {
fn is_met(&self, tcx: TyCtxt<'a, 'gcx, 'tcx>,
free_regions: &FreeRegionMap,
var_values: &Vec<VarValue>,
min: ty::Region)
var_values: &Vec<VarValue<'tcx>>,
min: &'tcx ty::Region)
-> bool {
match self {
&VerifyBound::AnyRegion(ref rs) =>
rs.iter()
.map(|&r| normalize(var_values, r))
.map(|&r| normalize(tcx, var_values, r))
.any(|r| free_regions.is_subregion_of(tcx, min, r)),
&VerifyBound::AllRegions(ref rs) =>
rs.iter()
.map(|&r| normalize(var_values, r))
.map(|&r| normalize(tcx, var_values, r))
.all(|r| free_regions.is_subregion_of(tcx, min, r)),
&VerifyBound::AnyBound(ref bs) =>


@ -72,10 +72,10 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for OpportunisticTypeAndRegionResolv
}
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
match r {
ty::ReVar(rid) => self.infcx.region_vars.opportunistic_resolve_var(rid),
_ => r,
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
match *r {
ty::ReVar(rid) => self.infcx.region_vars.opportunistic_resolve_var(rid),
_ => r,
}
}
}
@ -138,10 +138,10 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for FullTypeResolver<'a, 'gcx, 'tcx>
}
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
match r {
ty::ReVar(rid) => self.infcx.region_vars.resolve_var(rid),
_ => r,
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
match *r {
ty::ReVar(rid) => self.infcx.region_vars.resolve_var(rid),
_ => r,
}
}
}

View File

@ -107,7 +107,8 @@ impl<'combine, 'infcx, 'gcx, 'tcx> TypeRelation<'infcx, 'gcx, 'tcx>
}
}
fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
fn regions(&mut self, a: &'tcx ty::Region, b: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region> {
debug!("{}.regions({:?}, {:?}) self.cause={:?}",
self.tag(), a, b, self.fields.cause);
// FIXME -- we have more fine-grained information available

View File

@ -27,6 +27,7 @@
#![feature(box_patterns)]
#![feature(box_syntax)]
#![feature(collections)]
#![feature(conservative_impl_trait)]
#![feature(const_fn)]
#![feature(core_intrinsics)]
#![feature(enumset)]

View File

@ -149,7 +149,7 @@ pub trait CrateStore<'tcx> {
fn closure_kind(&self, def_id: DefId) -> ty::ClosureKind;
fn closure_ty<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId)
-> ty::ClosureTy<'tcx>;
fn item_variances(&self, def: DefId) -> ty::ItemVariances;
fn item_variances(&self, def: DefId) -> Vec<ty::Variance>;
fn repr_attrs(&self, def: DefId) -> Vec<attr::ReprAttr>;
fn item_type<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId)
-> Ty<'tcx>;
@ -198,7 +198,6 @@ pub trait CrateStore<'tcx> {
fn is_default_impl(&self, impl_did: DefId) -> bool;
fn is_extern_item<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, did: DefId) -> bool;
fn is_foreign_item(&self, did: DefId) -> bool;
fn is_static_method(&self, did: DefId) -> bool;
fn is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool;
fn is_typedef(&self, did: DefId) -> bool;
@ -329,7 +328,7 @@ impl<'tcx> CrateStore<'tcx> for DummyCrateStore {
fn closure_kind(&self, def_id: DefId) -> ty::ClosureKind { bug!("closure_kind") }
fn closure_ty<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId)
-> ty::ClosureTy<'tcx> { bug!("closure_ty") }
fn item_variances(&self, def: DefId) -> ty::ItemVariances { bug!("item_variances") }
fn item_variances(&self, def: DefId) -> Vec<ty::Variance> { bug!("item_variances") }
fn repr_attrs(&self, def: DefId) -> Vec<attr::ReprAttr> { bug!("repr_attrs") }
fn item_type<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, def: DefId)
-> Ty<'tcx> { bug!("item_type") }
@ -391,7 +390,6 @@ impl<'tcx> CrateStore<'tcx> for DummyCrateStore {
fn is_extern_item<'a>(&self, tcx: TyCtxt<'a, 'tcx, 'tcx>, did: DefId) -> bool
{ bug!("is_extern_item") }
fn is_foreign_item(&self, did: DefId) -> bool { bug!("is_foreign_item") }
fn is_static_method(&self, did: DefId) -> bool { bug!("is_static_method") }
fn is_statically_included_foreign_item(&self, id: ast::NodeId) -> bool { false }
fn is_typedef(&self, did: DefId) -> bool { bug!("is_typedef") }

View File

@ -95,7 +95,7 @@ impl<'a, 'tcx> MarkSymbolVisitor<'a, 'tcx> {
Def::AssociatedTy(..) | Def::Method(_) | Def::AssociatedConst(_)
if self.tcx.trait_of_item(def.def_id()).is_some() => {
if let Some(substs) = self.tcx.tables.borrow().item_substs.get(&id) {
match substs.substs.types[0].sty {
match substs.substs.type_at(0).sty {
TyEnum(tyid, _) | TyStruct(tyid, _) => {
self.check_def_id(tyid.did)
}

View File

@ -76,7 +76,7 @@ pub trait Delegate<'tcx> {
borrow_id: ast::NodeId,
borrow_span: Span,
cmt: mc::cmt<'tcx>,
loan_region: ty::Region,
loan_region: &'tcx ty::Region,
bk: ty::BorrowKind,
loan_cause: LoanCause);
@ -301,11 +301,11 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
for arg in &decl.inputs {
let arg_ty = return_if_err!(self.mc.infcx.node_ty(arg.pat.id));
let fn_body_scope = self.tcx().region_maps.node_extent(body.id);
let fn_body_scope_r = self.tcx().node_scope_region(body.id);
let arg_cmt = self.mc.cat_rvalue(
arg.id,
arg.pat.span,
ty::ReScope(fn_body_scope), // Args live only as long as the fn body.
fn_body_scope_r, // Args live only as long as the fn body.
arg_ty);
self.walk_irrefutable_pat(arg_cmt, &arg.pat);
@ -352,7 +352,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
fn borrow_expr(&mut self,
expr: &hir::Expr,
r: ty::Region,
r: &'tcx ty::Region,
bk: ty::BorrowKind,
cause: LoanCause) {
debug!("borrow_expr(expr={:?}, r={:?}, bk={:?})",
@ -431,7 +431,8 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
hir::ExprMatch(ref discr, ref arms, _) => {
let discr_cmt = return_if_err!(self.mc.cat_expr(&discr));
self.borrow_expr(&discr, ty::ReEmpty, ty::ImmBorrow, MatchDiscriminant);
let r = self.tcx().mk_region(ty::ReEmpty);
self.borrow_expr(&discr, r, ty::ImmBorrow, MatchDiscriminant);
// treatment of the discriminant is handled while walking the arms.
for arm in arms {
@ -449,7 +450,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
// make sure that the thing we are pointing out stays valid
// for the lifetime `scope_r` of the resulting ptr:
let expr_ty = return_if_err!(self.mc.infcx.node_ty(expr.id));
if let ty::TyRef(&r, _) = expr_ty.sty {
if let ty::TyRef(r, _) = expr_ty.sty {
let bk = ty::BorrowKind::from_mutbl(m);
self.borrow_expr(&base, r, bk, AddrOf);
}
@ -557,7 +558,6 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
let callee_ty = return_if_err!(self.mc.infcx.expr_ty_adjusted(callee));
debug!("walk_callee: callee={:?} callee_ty={:?}",
callee, callee_ty);
let call_scope = self.tcx().region_maps.node_extent(call.id);
match callee_ty.sty {
ty::TyFnDef(..) | ty::TyFnPtr(_) => {
self.consume_expr(callee);
@ -578,14 +578,16 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
};
match overloaded_call_type {
FnMutOverloadedCall => {
let call_scope_r = self.tcx().node_scope_region(call.id);
self.borrow_expr(callee,
ty::ReScope(call_scope),
call_scope_r,
ty::MutBorrow,
ClosureInvocation);
}
FnOverloadedCall => {
let call_scope_r = self.tcx().node_scope_region(call.id);
self.borrow_expr(callee,
ty::ReScope(call_scope),
call_scope_r,
ty::ImmBorrow,
ClosureInvocation);
}
@ -761,7 +763,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
};
let bk = ty::BorrowKind::from_mutbl(m);
self.delegate.borrow(expr.id, expr.span, cmt,
*r, bk, AutoRef);
r, bk, AutoRef);
}
}
}
@ -822,7 +824,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
self.delegate.borrow(expr.id,
expr.span,
cmt_base,
*r,
r,
ty::BorrowKind::from_mutbl(m),
AutoRef);
}
@ -835,7 +837,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
// Converting from a &T to *T (or &mut T to *mut T) is
// treated as borrowing it for the enclosing temporary
// scope.
let r = ty::ReScope(self.tcx().region_maps.node_extent(expr.id));
let r = self.tcx().node_scope_region(expr.id);
self.delegate.borrow(expr.id,
expr.span,
@ -890,7 +892,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
// methods are implicitly autoref'd which sadly does not use
// adjustments, so we must hardcode the borrow here.
let r = ty::ReScope(self.tcx().region_maps.node_extent(expr.id));
let r = self.tcx().node_scope_region(expr.id);
let bk = ty::ImmBorrow;
for &arg in &rhs {
@ -979,7 +981,7 @@ impl<'a, 'gcx, 'tcx> ExprUseVisitor<'a, 'gcx, 'tcx> {
// It is also a borrow or copy/move of the value being matched.
match bmode {
hir::BindByRef(m) => {
if let ty::TyRef(&r, _) = pat_ty.sty {
if let ty::TyRef(r, _) = pat_ty.sty {
let bk = ty::BorrowKind::from_mutbl(m);
delegate.borrow(pat.id, pat.span, cmt_pat, r, bk, RefBinding);
}

View File

@ -37,7 +37,7 @@ impl FreeRegionMap {
for implied_bound in implied_bounds {
debug!("implied bound: {:?}", implied_bound);
match *implied_bound {
ImpliedBound::RegionSubRegion(ty::ReFree(free_a), ty::ReFree(free_b)) => {
ImpliedBound::RegionSubRegion(&ty::ReFree(free_a), &ty::ReFree(free_b)) => {
self.relate_free_regions(free_a, free_b);
}
ImpliedBound::RegionSubRegion(..) |
@ -65,9 +65,9 @@ impl FreeRegionMap {
}
ty::Predicate::RegionOutlives(ty::Binder(ty::OutlivesPredicate(r_a, r_b))) => {
match (r_a, r_b) {
(ty::ReStatic, ty::ReFree(_)) => {},
(ty::ReFree(fr_a), ty::ReStatic) => self.relate_to_static(fr_a),
(ty::ReFree(fr_a), ty::ReFree(fr_b)) => {
(&ty::ReStatic, &ty::ReFree(_)) => {},
(&ty::ReFree(fr_a), &ty::ReStatic) => self.relate_to_static(fr_a),
(&ty::ReFree(fr_a), &ty::ReFree(fr_b)) => {
// Record that `'a:'b`. Or, put another way, `'b <= 'a`.
self.relate_free_regions(fr_b, fr_a);
}
@ -122,26 +122,26 @@ impl FreeRegionMap {
/// inference* and sadly the logic is somewhat duplicated with the code in infer.rs.
pub fn is_subregion_of(&self,
tcx: TyCtxt,
sub_region: ty::Region,
super_region: ty::Region)
sub_region: &ty::Region,
super_region: &ty::Region)
-> bool {
let result = sub_region == super_region || {
match (sub_region, super_region) {
(ty::ReEmpty, _) |
(_, ty::ReStatic) =>
(&ty::ReEmpty, _) |
(_, &ty::ReStatic) =>
true,
(ty::ReScope(sub_scope), ty::ReScope(super_scope)) =>
(&ty::ReScope(sub_scope), &ty::ReScope(super_scope)) =>
tcx.region_maps.is_subscope_of(sub_scope, super_scope),
(ty::ReScope(sub_scope), ty::ReFree(fr)) =>
(&ty::ReScope(sub_scope), &ty::ReFree(fr)) =>
tcx.region_maps.is_subscope_of(sub_scope, fr.scope) ||
self.is_static(fr),
(ty::ReFree(sub_fr), ty::ReFree(super_fr)) =>
(&ty::ReFree(sub_fr), &ty::ReFree(super_fr)) =>
self.sub_free_region(sub_fr, super_fr),
(ty::ReStatic, ty::ReFree(sup_fr)) =>
(&ty::ReStatic, &ty::ReFree(sup_fr)) =>
self.is_static(sup_fr),
_ =>

View File

@ -90,11 +90,11 @@ use std::rc::Rc;
#[derive(Clone, PartialEq)]
pub enum Categorization<'tcx> {
Rvalue(ty::Region), // temporary val, argument is its scope
Rvalue(&'tcx ty::Region), // temporary val, argument is its scope
StaticItem,
Upvar(Upvar), // upvar referenced by closure env
Local(ast::NodeId), // local variable
Deref(cmt<'tcx>, usize, PointerKind), // deref of a ptr
Deref(cmt<'tcx>, usize, PointerKind<'tcx>), // deref of a ptr
Interior(cmt<'tcx>, InteriorKind), // something interior: field, tuple, etc
Downcast(cmt<'tcx>, DefId), // selects a particular enum variant (*1)
@ -110,18 +110,18 @@ pub struct Upvar {
// different kinds of pointers:
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub enum PointerKind {
pub enum PointerKind<'tcx> {
/// `Box<T>`
Unique,
/// `&T`
BorrowedPtr(ty::BorrowKind, ty::Region),
BorrowedPtr(ty::BorrowKind, &'tcx ty::Region),
/// `*T`
UnsafePtr(hir::Mutability),
/// Implicit deref of the `&T` that results from an overloaded index `[]`.
Implicit(ty::BorrowKind, ty::Region),
Implicit(ty::BorrowKind, &'tcx ty::Region),
}
// We use the term "interior" to mean "something reachable from the
@ -198,8 +198,8 @@ pub type cmt<'tcx> = Rc<cmt_<'tcx>>;
// We pun on *T to mean both actual deref of a ptr as well
// as accessing of components:
#[derive(Copy, Clone)]
pub enum deref_kind {
deref_ptr(PointerKind),
pub enum deref_kind<'tcx> {
deref_ptr(PointerKind<'tcx>),
deref_interior(InteriorKind),
}
@ -216,7 +216,7 @@ fn deref_kind(t: Ty, context: DerefKindContext) -> McResult<deref_kind> {
ty::TyRef(r, mt) => {
let kind = ty::BorrowKind::from_mutbl(mt.mutbl);
Ok(deref_ptr(BorrowedPtr(kind, *r)))
Ok(deref_ptr(BorrowedPtr(kind, r)))
}
ty::TyRawPtr(ref mt) => {
@ -767,13 +767,13 @@ impl<'a, 'gcx, 'tcx> MemCategorizationContext<'a, 'gcx, 'tcx> {
};
// Region of environment pointer
let env_region = ty::ReFree(ty::FreeRegion {
let env_region = self.tcx().mk_region(ty::ReFree(ty::FreeRegion {
// The environment of a closure is guaranteed to
// outlive any bindings introduced in the body of the
// closure itself.
scope: self.tcx().region_maps.item_extent(fn_body_id),
bound_region: ty::BrEnv
});
}));
let env_ptr = BorrowedPtr(env_borrow_kind, env_region);
@ -817,11 +817,11 @@ impl<'a, 'gcx, 'tcx> MemCategorizationContext<'a, 'gcx, 'tcx> {
/// Returns the lifetime of a temporary created by expr with id `id`.
/// This could be `'static` if `id` is part of a constant expression.
pub fn temporary_scope(&self, id: ast::NodeId) -> ty::Region {
match self.infcx.temporary_scope(id) {
pub fn temporary_scope(&self, id: ast::NodeId) -> &'tcx ty::Region {
self.tcx().mk_region(match self.infcx.temporary_scope(id) {
Some(scope) => ty::ReScope(scope),
None => ty::ReStatic
}
})
}
pub fn cat_rvalue_node(&self,
@ -845,7 +845,7 @@ impl<'a, 'gcx, 'tcx> MemCategorizationContext<'a, 'gcx, 'tcx> {
let re = if qualif.intersects(ConstQualif::NON_STATIC_BORROWS) {
self.temporary_scope(id)
} else {
ty::ReStatic
self.tcx().mk_region(ty::ReStatic)
};
let ret = self.cat_rvalue(id, span, re, expr_ty);
debug!("cat_rvalue_node ret {:?}", ret);
@ -855,7 +855,7 @@ impl<'a, 'gcx, 'tcx> MemCategorizationContext<'a, 'gcx, 'tcx> {
pub fn cat_rvalue(&self,
cmt_id: ast::NodeId,
span: Span,
temp_scope: ty::Region,
temp_scope: &'tcx ty::Region,
expr_ty: Ty<'tcx>) -> cmt<'tcx> {
let ret = Rc::new(cmt_ {
id:cmt_id,
@ -1480,7 +1480,7 @@ pub fn ptr_sigil(ptr: PointerKind) -> &'static str {
}
}
impl fmt::Debug for PointerKind {
impl<'tcx> fmt::Debug for PointerKind<'tcx> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match *self {
Unique => write!(f, "Box"),

View File

@ -89,9 +89,12 @@ struct LifetimeContext<'a, 'tcx: 'a> {
#[derive(PartialEq, Debug)]
enum ScopeChain<'a> {
/// EarlyScope(['a, 'b, ...], s) extends s with early-bound
/// lifetimes.
EarlyScope(&'a [hir::LifetimeDef], Scope<'a>),
/// EarlyScope(['a, 'b, ...], start, s) extends s with early-bound
/// lifetimes, with consecutive parameter indices from `start`.
/// That is, 'a has index `start`, 'b has index `start + 1`, etc.
/// Indices before `start` correspond to other generic parameters
/// of a parent item (trait/impl of a method), or `Self` in traits.
EarlyScope(&'a [hir::LifetimeDef], u32, Scope<'a>),
/// LateScope(['a, 'b, ...], s) extends s with late-bound
/// lifetimes introduced by the declaration binder_id.
LateScope(&'a [hir::LifetimeDef], Scope<'a>),
@ -157,7 +160,12 @@ impl<'a, 'tcx, 'v> Visitor<'v> for LifetimeContext<'a, 'tcx> {
hir::ItemImpl(_, _, ref generics, _, _, _) => {
// These kinds of items have only early bound lifetime parameters.
let lifetimes = &generics.lifetimes;
this.with(EarlyScope(lifetimes, &ROOT_SCOPE), |old_scope, this| {
let start = if let hir::ItemTrait(..) = item.node {
1 // Self comes before lifetimes
} else {
0
};
this.with(EarlyScope(lifetimes, start, &ROOT_SCOPE), |old_scope, this| {
this.check_lifetime_defs(old_scope, lifetimes);
intravisit::walk_item(this, item);
});
@ -461,7 +469,7 @@ fn extract_labels(ctxt: &mut LifetimeContext, b: &hir::Block) {
FnScope { s, .. } => { scope = s; }
RootScope => { return; }
EarlyScope(lifetimes, s) |
EarlyScope(lifetimes, _, s) |
LateScope(lifetimes, s) => {
for lifetime_def in lifetimes {
// FIXME (#24278): non-hygienic comparison
@ -566,8 +574,24 @@ impl<'a, 'tcx> LifetimeContext<'a, 'tcx> {
.cloned()
.partition(|l| self.map.late_bound.contains_key(&l.lifetime.id));
// Find the start of nested early scopes, e.g. in methods.
let mut start = 0;
if let EarlyScope(..) = *self.scope {
let parent = self.hir_map.expect_item(self.hir_map.get_parent(fn_id));
if let hir::ItemTrait(..) = parent.node {
start += 1; // Self comes first.
}
match parent.node {
hir::ItemTrait(_, ref generics, _, _) |
hir::ItemImpl(_, _, ref generics, _, _, _) => {
start += generics.lifetimes.len() + generics.ty_params.len();
}
_ => {}
}
}
let this = self;
this.with(EarlyScope(&early, this.scope), move |old_scope, this| {
this.with(EarlyScope(&early, start as u32, this.scope), move |old_scope, this| {
this.with(LateScope(&late, this.scope), move |_, this| {
this.check_lifetime_defs(old_scope, &generics.lifetimes);
walk(this);
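The `start` arithmetic introduced above can be isolated: in a trait, index 0 belongs to `Self`, then come the parent item's lifetimes and type parameters, and only after them do a method's own early-bound lifetimes begin. A hypothetical helper mirroring that count (`early_scope_start` is a made-up name for illustration):

```rust
// Hypothetical helper: where a method's own early-bound lifetimes start.
// In a trait, index 0 is `Self`; then come the parent item's lifetimes
// and type parameters, in that order, as in the hunk above.
fn early_scope_start(parent_is_trait: bool,
                     parent_lifetimes: usize,
                     parent_ty_params: usize)
                     -> u32 {
    let self_param = if parent_is_trait { 1 } else { 0 };
    (self_param + parent_lifetimes + parent_ty_params) as u32
}

fn main() {
    // For a trait with lifetimes 'a, 'b and one type parameter T:
    // `Self`, 'a, 'b, T occupy indices 0..4, so a method's first
    // early-bound lifetime resolves to index start + 0 = 4.
    assert_eq!(early_scope_start(true, 2, 1), 4);
    // A free item nests under no parent generics.
    assert_eq!(early_scope_start(false, 0, 0), 0);
}
```

This is what lets the diff replace the old walk up nested `EarlyScope`s with a single precomputed offset: resolution becomes `DefEarlyBoundRegion(start + index, ..)`.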
@ -597,19 +621,11 @@ impl<'a, 'tcx> LifetimeContext<'a, 'tcx> {
break;
}
EarlyScope(lifetimes, s) => {
EarlyScope(lifetimes, start, s) => {
match search_lifetimes(lifetimes, lifetime_ref) {
Some((mut index, lifetime_def)) => {
// Adjust for nested early scopes, e.g. in methods.
let mut parent = s;
while let EarlyScope(lifetimes, s) = *parent {
index += lifetimes.len() as u32;
parent = s;
}
assert_eq!(*parent, RootScope);
Some((index, lifetime_def)) => {
let decl_id = lifetime_def.id;
let def = DefEarlyBoundRegion(index, decl_id);
let def = DefEarlyBoundRegion(start + index, decl_id);
self.insert_lifetime(lifetime_ref, def);
return;
}
@ -671,7 +687,7 @@ impl<'a, 'tcx> LifetimeContext<'a, 'tcx> {
break;
}
EarlyScope(lifetimes, s) |
EarlyScope(lifetimes, _, s) |
LateScope(lifetimes, s) => {
search_result = search_lifetimes(lifetimes, lifetime_ref);
if search_result.is_some() {
@ -767,7 +783,7 @@ impl<'a, 'tcx> LifetimeContext<'a, 'tcx> {
return;
}
EarlyScope(lifetimes, s) |
EarlyScope(lifetimes, _, s) |
LateScope(lifetimes, s) => {
if let Some((_, lifetime_def)) = search_lifetimes(lifetimes, lifetime) {
signal_shadowing_problem(

View File

@ -911,7 +911,7 @@ pub enum Rvalue<'tcx> {
Repeat(Operand<'tcx>, TypedConstVal<'tcx>),
/// &x or &mut x
Ref(Region, BorrowKind, Lvalue<'tcx>),
Ref(&'tcx Region, BorrowKind, Lvalue<'tcx>),
/// length of a [X] or [X;n] value
Len(Lvalue<'tcx>),

View File

@ -145,8 +145,7 @@ impl<'tcx> Rvalue<'tcx> {
}
&Rvalue::Ref(reg, bk, ref lv) => {
let lv_ty = lv.ty(mir, tcx).to_ty(tcx);
Some(tcx.mk_ref(
tcx.mk_region(reg),
Some(tcx.mk_ref(reg,
ty::TypeAndMut {
ty: lv_ty,
mutbl: bk.to_mutbl_lossy()

View File

@ -757,7 +757,7 @@ make_mir_visitor!(Visitor,);
make_mir_visitor!(MutVisitor,mut);
#[derive(Copy, Clone, Debug)]
pub enum LvalueContext {
pub enum LvalueContext<'tcx> {
// Appears as LHS of an assignment
Store,
@ -771,7 +771,7 @@ pub enum LvalueContext {
Inspect,
// Being borrowed
Borrow { region: Region, kind: BorrowKind },
Borrow { region: &'tcx Region, kind: BorrowKind },
// Being sliced -- this should be same as being borrowed, probably
Slice { from_start: usize, from_end: usize },

View File

@ -232,8 +232,8 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
if let Ok(..) = self.can_equate(&trait_self_ty, &impl_self_ty) {
self_match_impls.push(def_id);
if trait_ref.substs.types[1..].iter()
.zip(&impl_trait_ref.substs.types[1..])
if trait_ref.substs.types().skip(1)
.zip(impl_trait_ref.substs.types().skip(1))
.all(|(u,v)| self.fuzzy_match_tys(u, v))
{
fuzzy_match_impls.push(def_id);
@ -738,8 +738,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
ty::Predicate::Trait(ref data) => {
let trait_ref = data.to_poly_trait_ref();
let self_ty = trait_ref.self_ty();
let all_types = &trait_ref.substs().types;
if all_types.references_error() {
if predicate.references_error() {
} else {
// Typically, this ambiguity should only happen if
// there are unresolved type inference variables

View File

@ -93,7 +93,7 @@ pub struct FulfillmentContext<'tcx> {
#[derive(Clone)]
pub struct RegionObligation<'tcx> {
pub sub_region: ty::Region,
pub sub_region: &'tcx ty::Region,
pub sup_type: Ty<'tcx>,
pub cause: ObligationCause<'tcx>,
}
@ -142,7 +142,7 @@ impl<'a, 'gcx, 'tcx> DeferredObligation<'tcx> {
// Auto trait obligations on `impl Trait`.
if tcx.trait_has_default_impl(predicate.def_id()) {
let substs = predicate.skip_binder().trait_ref.substs;
if substs.types.len() == 1 && substs.regions.is_empty() {
if substs.types().count() == 1 && substs.regions().next().is_none() {
if let ty::TyAnon(..) = predicate.skip_binder().self_ty().sty {
return true;
}
@ -162,7 +162,7 @@ impl<'a, 'gcx, 'tcx> DeferredObligation<'tcx> {
let concrete_ty = ty_scheme.ty.subst(tcx, substs);
let predicate = ty::TraitRef {
def_id: self.predicate.def_id(),
substs: Substs::new_trait(tcx, vec![], vec![], concrete_ty)
substs: Substs::new_trait(tcx, concrete_ty, &[])
}.to_predicate();
let original_obligation = Obligation::new(self.cause.clone(),
@ -246,7 +246,7 @@ impl<'a, 'gcx, 'tcx> FulfillmentContext<'tcx> {
pub fn register_region_obligation(&mut self,
t_a: Ty<'tcx>,
r_b: ty::Region,
r_b: &'tcx ty::Region,
cause: ObligationCause<'tcx>)
{
register_region_obligation(t_a, r_b, cause, &mut self.region_obligations);
@ -440,8 +440,7 @@ fn trait_ref_type_vars<'a, 'gcx, 'tcx>(selcx: &mut SelectionContext<'a, 'gcx, 't
{
t.skip_binder() // ok b/c this check doesn't care about regions
.input_types()
.iter()
.map(|t| selcx.infcx().resolve_type_vars_if_possible(t))
.map(|t| selcx.infcx().resolve_type_vars_if_possible(&t))
.filter(|t| t.has_infer_types())
.flat_map(|t| t.walk())
.filter(|t| match t.sty { ty::TyInfer(_) => true, _ => false })
@ -581,7 +580,8 @@ fn process_predicate<'a, 'gcx, 'tcx>(
// Otherwise, we have something of the form
// `for<'a> T: 'a where 'a not in T`, which we can treat as `T: 'static`.
Some(t_a) => {
register_region_obligation(t_a, ty::ReStatic,
let r_static = selcx.tcx().mk_region(ty::ReStatic);
register_region_obligation(t_a, r_static,
obligation.cause.clone(),
region_obligations);
Ok(Some(vec![]))
@ -691,7 +691,7 @@ fn coinductive_obligation<'a,'gcx,'tcx>(selcx: &SelectionContext<'a,'gcx,'tcx>,
}
fn register_region_obligation<'tcx>(t_a: Ty<'tcx>,
r_b: ty::Region,
r_b: &'tcx ty::Region,
cause: ObligationCause<'tcx>,
region_obligations: &mut NodeMap<Vec<RegionObligation<'tcx>>>)
{

View File

@ -145,7 +145,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
match predicate {
ty::Predicate::Trait(ref data) => {
// In the case of a trait predicate, we can skip the "self" type.
data.0.trait_ref.input_types()[1..].iter().any(|t| t.has_self_ty())
data.skip_binder().input_types().skip(1).any(|t| t.has_self_ty())
}
ty::Predicate::Projection(..) |
ty::Predicate::WellFormed(..) |

View File

@ -36,7 +36,7 @@ use super::util;
use hir::def_id::DefId;
use infer;
use infer::{InferCtxt, InferOk, TypeFreshener, TypeOrigin};
use ty::subst::{Subst, Substs};
use ty::subst::{Kind, Subst, Substs};
use ty::{self, ToPredicate, ToPolyTraitRef, Ty, TyCtxt, TypeFoldable};
use traits;
use ty::fast_reject;
@ -644,8 +644,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
// This suffices to allow chains like `FnMut` implemented in
// terms of `Fn` etc, but we could probably make this more
// precise still.
let input_types = stack.fresh_trait_ref.0.input_types();
let unbound_input_types = input_types.iter().any(|ty| ty.is_fresh());
let unbound_input_types = stack.fresh_trait_ref.input_types().any(|ty| ty.is_fresh());
if unbound_input_types && self.intercrate {
debug!("evaluate_stack({:?}) --> unbound argument, intercrate --> ambiguous",
stack.fresh_trait_ref);
@ -1064,9 +1063,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
match *candidate {
Ok(Some(_)) | Err(_) => true,
Ok(None) => {
cache_fresh_trait_pred.0.trait_ref.substs.types.has_infer_types()
}
Ok(None) => cache_fresh_trait_pred.has_infer_types()
}
}
@ -1250,7 +1247,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
obligation: &TraitObligation<'tcx>,
trait_bound: ty::PolyTraitRef<'tcx>,
skol_trait_ref: ty::TraitRef<'tcx>,
skol_map: &infer::SkolemizationMap,
skol_map: &infer::SkolemizationMap<'tcx>,
snapshot: &infer::CombinedSnapshot)
-> bool
{
@ -1603,7 +1600,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
return;
}
};
let target = obligation.predicate.skip_binder().input_types()[1];
let target = obligation.predicate.skip_binder().trait_ref.substs.type_at(1);
debug!("assemble_candidates_for_unsizing(source={:?}, target={:?})",
source, target);
@ -1936,7 +1933,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
// for `PhantomData<T>`, we pass `T`
ty::TyStruct(def, substs) if def.is_phantom_data() => {
substs.types.to_vec()
substs.types().collect()
}
ty::TyStruct(def, substs) | ty::TyEnum(def, substs) => {
@ -1985,7 +1982,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
trait_def_id,
recursion_depth,
normalized_ty,
vec![]);
&[]);
obligations.push(skol_obligation);
this.infcx().plug_leaks(skol_map, snapshot, &obligations)
})
@ -2180,12 +2177,11 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
match self_ty.sty {
ty::TyTrait(ref data) => {
// OK to skip the binder, it is reintroduced below
let input_types = data.principal.skip_binder().input_types();
let input_types = data.principal.input_types();
let assoc_types = data.projection_bounds.iter()
.map(|pb| pb.skip_binder().ty);
let all_types: Vec<_> = input_types.iter().cloned()
.chain(assoc_types)
.collect();
let all_types: Vec<_> = input_types.chain(assoc_types)
.collect();
// reintroduce the two binding levels we skipped, then flatten into one
let all_types = ty::Binder(ty::Binder(all_types));
@ -2267,7 +2263,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
mut substs: Normalized<'tcx, &'tcx Substs<'tcx>>,
cause: ObligationCause<'tcx>,
recursion_depth: usize,
skol_map: infer::SkolemizationMap,
skol_map: infer::SkolemizationMap<'tcx>,
snapshot: &infer::CombinedSnapshot)
-> VtableImplData<'tcx, PredicateObligation<'tcx>>
{
@ -2476,7 +2472,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
// regions here. See the comment there for more details.
let source = self.infcx.shallow_resolve(
tcx.no_late_bound_regions(&obligation.self_ty()).unwrap());
let target = obligation.predicate.skip_binder().input_types()[1];
let target = obligation.predicate.skip_binder().trait_ref.substs.type_at(1);
let target = self.infcx.shallow_resolve(target);
debug!("confirm_builtin_unsize_candidate(source={:?}, target={:?})",
@ -2585,7 +2581,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
} else {
return Err(Unimplemented);
};
let mut ty_params = BitVector::new(substs_a.types.len());
let mut ty_params = BitVector::new(substs_a.types().count());
let mut found = false;
for ty in field.walk() {
if let ty::TyParam(p) = ty.sty {
@ -2601,14 +2597,14 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
// TyError and ensure they do not affect any other fields.
// This could be checked after type collection for any struct
// with a potentially unsized trailing field.
let types = substs_a.types.iter().enumerate().map(|(i, ty)| {
let params = substs_a.params().iter().enumerate().map(|(i, &k)| {
if ty_params.contains(i) {
tcx.types.err
Kind::from(tcx.types.err)
} else {
ty
k
}
}).collect();
let substs = Substs::new(tcx, types, substs_a.regions.clone());
});
let substs = Substs::new(tcx, params);
for &ty in fields.split_last().unwrap().1 {
if ty.subst(tcx, substs).references_error() {
return Err(Unimplemented);
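The substitution step above rebuilds the parameter list, replacing every type parameter that occurs in the unsized tail field with `TyError` so the remaining fields can be checked in isolation. A generic sketch of that masking (`mask_changed` is a hypothetical name; rustc works over `Kind` values and a `BitVector` rather than a slice of indices):

```rust
// Generic sketch: rebuild a parameter list, substituting a sentinel for
// every index flagged as "appears in the unsized tail field".
fn mask_changed<T: Clone>(params: &[T], changed: &[usize], sentinel: T) -> Vec<T> {
    params
        .iter()
        .enumerate()
        .map(|(i, p)| {
            if changed.contains(&i) {
                sentinel.clone()
            } else {
                p.clone()
            }
        })
        .collect()
}

fn main() {
    // Parameters 10, 20, 30 with index 1 involved in the coercion:
    // only that slot is replaced by the sentinel value 0.
    assert_eq!(mask_changed(&[10, 20, 30], &[1], 0), vec![10, 0, 30]);
}
```

The second rebuild in the following hunk uses the same shape, but substitutes the target's parameter (`substs_b.type_at(i)`) instead of a sentinel.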
@ -2621,15 +2617,14 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
// Check that the source structure with the target's
// type parameters is a subtype of the target.
let types = substs_a.types.iter().enumerate().map(|(i, ty)| {
let params = substs_a.params().iter().enumerate().map(|(i, &k)| {
if ty_params.contains(i) {
substs_b.types[i]
Kind::from(substs_b.type_at(i))
} else {
ty
k
}
}).collect();
let substs = Substs::new(tcx, types, substs_a.regions.clone());
let new_struct = tcx.mk_struct(def, substs);
});
let new_struct = tcx.mk_struct(def, Substs::new(tcx, params));
let origin = TypeOrigin::Misc(obligation.cause.span);
let InferOk { obligations, .. } =
self.infcx.sub_types(false, origin, new_struct, target)
@ -2642,7 +2637,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
obligation.predicate.def_id(),
obligation.recursion_depth + 1,
inner_source,
vec![inner_target]));
&[inner_target]));
}
_ => bug!()
@ -2665,7 +2660,8 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
impl_def_id: DefId,
obligation: &TraitObligation<'tcx>,
snapshot: &infer::CombinedSnapshot)
-> (Normalized<'tcx, &'tcx Substs<'tcx>>, infer::SkolemizationMap)
-> (Normalized<'tcx, &'tcx Substs<'tcx>>,
infer::SkolemizationMap<'tcx>)
{
match self.match_impl(impl_def_id, obligation, snapshot) {
Ok((substs, skol_map)) => (substs, skol_map),
@ -2682,7 +2678,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
obligation: &TraitObligation<'tcx>,
snapshot: &infer::CombinedSnapshot)
-> Result<(Normalized<'tcx, &'tcx Substs<'tcx>>,
infer::SkolemizationMap), ()>
infer::SkolemizationMap<'tcx>), ()>
{
let impl_trait_ref = self.tcx().impl_trait_ref(impl_def_id).unwrap();
@ -2753,9 +2749,9 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
// substitution if we find that any of the input types, when
// simplified, do not match.
obligation.predicate.0.input_types().iter()
obligation.predicate.skip_binder().input_types()
.zip(impl_trait_ref.input_types())
.any(|(&obligation_ty, &impl_ty)| {
.any(|(obligation_ty, impl_ty)| {
let simplified_obligation_ty =
fast_reject::simplify_type(self.tcx(), obligation_ty, true);
let simplified_impl_ty =
@ -2875,7 +2871,7 @@ impl<'cx, 'gcx, 'tcx> SelectionContext<'cx, 'gcx, 'tcx> {
recursion_depth: usize,
def_id: DefId, // of impl or trait
substs: &Substs<'tcx>, // for impl or trait
skol_map: infer::SkolemizationMap,
skol_map: infer::SkolemizationMap<'tcx>,
snapshot: &infer::CombinedSnapshot)
-> Vec<PredicateObligation<'tcx>>
{

View File

@ -386,7 +386,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
Ok(def_id) => {
Ok(ty::TraitRef {
def_id: def_id,
substs: Substs::new_trait(self, vec![], vec![], param_ty)
substs: Substs::new_trait(self, param_ty, &[])
})
}
Err(e) => {
@ -401,12 +401,12 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
trait_def_id: DefId,
recursion_depth: usize,
param_ty: Ty<'tcx>,
ty_params: Vec<Ty<'tcx>>)
ty_params: &[Ty<'tcx>])
-> PredicateObligation<'tcx>
{
let trait_ref = ty::TraitRef {
def_id: trait_def_id,
substs: Substs::new_trait(self, ty_params, vec![], param_ty)
substs: Substs::new_trait(self, param_ty, ty_params)
};
predicate_for_trait_ref(cause, trait_ref, recursion_depth)
}
@ -496,7 +496,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
};
let trait_ref = ty::TraitRef {
def_id: fn_trait_def_id,
substs: Substs::new_trait(self, vec![arguments_tuple], vec![], self_ty),
substs: Substs::new_trait(self, self_ty, &[arguments_tuple]),
};
ty::Binder((trait_ref, sig.0.output))
}

View File

@ -52,7 +52,8 @@ impl<'a, 'gcx, 'tcx> TypeRelation<'a, 'gcx, 'tcx> for Match<'a, 'gcx, 'tcx> {
self.relate(a, b)
}
fn regions(&mut self, a: ty::Region, b: ty::Region) -> RelateResult<'tcx, ty::Region> {
fn regions(&mut self, a: &'tcx ty::Region, b: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region> {
debug!("{}.regions({:?}, {:?})",
self.tag(),
a,

View File

@ -213,7 +213,7 @@ pub struct Tables<'tcx> {
pub method_map: ty::MethodMap<'tcx>,
/// Borrows
pub upvar_capture_map: ty::UpvarCaptureMap,
pub upvar_capture_map: ty::UpvarCaptureMap<'tcx>,
/// Records the type of each closure. The def ID is the ID of the
/// expression defining the closure.
@ -1152,12 +1152,17 @@ fn keep_local<'tcx, T: ty::TypeFoldable<'tcx>>(x: &T) -> bool {
impl_interners!('tcx,
type_list: mk_type_list(Vec<Ty<'tcx>>, keep_local) -> [Ty<'tcx>],
substs: mk_substs(Substs<'tcx>, |substs: &Substs| {
keep_local(&substs.types) || keep_local(&substs.regions)
substs.params().iter().any(keep_local)
}) -> Substs<'tcx>,
bare_fn: mk_bare_fn(BareFnTy<'tcx>, |fty: &BareFnTy| {
keep_local(&fty.sig)
}) -> BareFnTy<'tcx>,
region: mk_region(Region, keep_local) -> Region
region: mk_region(Region, |r| {
match r {
&ty::ReVar(_) | &ty::ReSkolemized(..) => true,
_ => false
}
}) -> Region
);
impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
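The `mk_region` interner above is what makes the pervasive `ty::Region` → `&'tcx ty::Region` change in this rollup cheap: equal regions are allocated once and handed out as shared references. A simplified interner using `Rc` in place of an arena (illustrative only; the real `TyCtxt` interner is arena-backed and decides placement with the `keep_local` predicate shown in the diff):

```rust
use std::collections::HashMap;
use std::rc::Rc;

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
enum Region {
    Static,
    Var(u32),
}

// Simplified interner: equal regions map to one shared allocation, so
// callers can hold cheap handles and compare them by pointer identity.
struct Interner {
    map: HashMap<Region, Rc<Region>>,
}

impl Interner {
    fn new() -> Self {
        Interner { map: HashMap::new() }
    }
    fn mk_region(&mut self, r: Region) -> Rc<Region> {
        self.map
            .entry(r.clone())
            .or_insert_with(|| Rc::new(r))
            .clone()
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.mk_region(Region::Var(0));
    let b = interner.mk_region(Region::Var(0));
    let c = interner.mk_region(Region::Static);
    assert!(Rc::ptr_eq(&a, &b)); // deduplicated: same allocation
    assert!(!Rc::ptr_eq(&a, &c));
}
```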

View File

@ -41,11 +41,11 @@ pub enum TypeError<'tcx> {
FixedArraySize(ExpectedFound<usize>),
TyParamSize(ExpectedFound<usize>),
ArgCount,
RegionsDoesNotOutlive(Region, Region),
RegionsNotSame(Region, Region),
RegionsNoOverlap(Region, Region),
RegionsInsufficientlyPolymorphic(BoundRegion, Region),
RegionsOverlyPolymorphic(BoundRegion, Region),
RegionsDoesNotOutlive(&'tcx Region, &'tcx Region),
RegionsNotSame(&'tcx Region, &'tcx Region),
RegionsNoOverlap(&'tcx Region, &'tcx Region),
RegionsInsufficientlyPolymorphic(BoundRegion, &'tcx Region),
RegionsOverlyPolymorphic(BoundRegion, &'tcx Region),
Sorts(ExpectedFound<Ty<'tcx>>),
IntegerAsChar,
IntMismatch(ExpectedFound<ty::IntVarValue>),
@ -296,7 +296,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
self.note_and_explain_region(db, "concrete lifetime that was found is ",
conc_region, "");
}
RegionsOverlyPolymorphic(_, ty::ReVar(_)) => {
RegionsOverlyPolymorphic(_, &ty::ReVar(_)) => {
// don't bother to print out the message below for
// inference variables, it's not very illuminating.
}

View File

@ -137,7 +137,7 @@ impl FlagComputation {
}
&ty::TyRef(r, ref m) => {
self.add_region(*r);
self.add_region(r);
self.add_ty(m.ty);
}
@ -176,8 +176,8 @@ impl FlagComputation {
self.add_bound_computation(&computation);
}
fn add_region(&mut self, r: ty::Region) {
match r {
fn add_region(&mut self, r: &ty::Region) {
match *r {
ty::ReVar(..) => {
self.add_flags(TypeFlags::HAS_RE_INFER);
self.add_flags(TypeFlags::KEEP_IN_LOCAL_TCX);
@ -208,8 +208,11 @@ impl FlagComputation {
}
fn add_substs(&mut self, substs: &Substs) {
self.add_tys(&substs.types);
for &r in &substs.regions {
for ty in substs.types() {
self.add_ty(ty);
}
for r in substs.regions() {
self.add_region(r);
}
}


@ -169,7 +169,7 @@ pub trait TypeFolder<'gcx: 'tcx, 'tcx> : Sized {
fty.super_fold_with(self)
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
r.super_fold_with(self)
}
@ -188,7 +188,7 @@ pub trait TypeVisitor<'tcx> : Sized {
t.super_visit_with(self)
}
fn visit_region(&mut self, r: ty::Region) -> bool {
fn visit_region(&mut self, r: &'tcx ty::Region) -> bool {
r.super_visit_with(self)
}
}
@ -222,13 +222,15 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
/// whether any late-bound regions were skipped
pub fn collect_regions<T>(self,
value: &T,
region_set: &mut FnvHashSet<ty::Region>)
region_set: &mut FnvHashSet<&'tcx ty::Region>)
-> bool
where T : TypeFoldable<'tcx>
{
let mut have_bound_regions = false;
self.fold_regions(value, &mut have_bound_regions,
|r, d| { region_set.insert(r.from_depth(d)); r });
self.fold_regions(value, &mut have_bound_regions, |r, d| {
region_set.insert(self.mk_region(r.from_depth(d)));
r
});
have_bound_regions
}
@ -240,7 +242,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
skipped_regions: &mut bool,
mut f: F)
-> T
where F : FnMut(ty::Region, u32) -> ty::Region,
where F : FnMut(&'tcx ty::Region, u32) -> &'tcx ty::Region,
T : TypeFoldable<'tcx>,
{
value.fold_with(&mut RegionFolder::new(self, skipped_regions, &mut f))
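The `fold_regions`/`RegionFolder` machinery above threads a binder depth through the fold so the callback can tell variables bound inside the folded value from free ones. A minimal standalone sketch of that depth-tracking idea, using a hypothetical de Bruijn mini-AST rather than rustc's real `Ty`/`Region` types:

```rust
// Sketch of depth-tracked folding, as in `RegionFolder`: entering a binder
// bumps the depth, so the callback can distinguish bound from free variables.
// The `Ty` enum here is a hypothetical stand-in, not rustc's.
#[derive(Debug, PartialEq)]
enum Ty {
    Var(u32),        // de Bruijn index: 0 refers to the innermost binder
    Binder(Box<Ty>), // introduces one binder level
    Ref(Box<Ty>),    // passes through without changing depth
}

// Apply `f` to every variable, telling it how many binders enclose it.
fn fold_vars<F: FnMut(u32, u32) -> Ty>(ty: &Ty, depth: u32, f: &mut F) -> Ty {
    match ty {
        Ty::Var(i) => f(*i, depth),
        Ty::Binder(inner) => Ty::Binder(Box::new(fold_vars(inner, depth + 1, f))),
        Ty::Ref(inner) => Ty::Ref(Box::new(fold_vars(inner, depth, f))),
    }
}

fn main() {
    // Binder(Ref(Var(0))): the single variable is bound by the binder.
    let t = Ty::Binder(Box::new(Ty::Ref(Box::new(Ty::Var(0)))));
    let mut skipped = 0;
    let folded = fold_vars(&t, 0, &mut |i, depth| {
        if i < depth {
            skipped += 1; // bound inside the value: leave untouched
            Ty::Var(i)
        } else {
            Ty::Var(99)   // free variable: replace
        }
    });
    assert_eq!(folded, t); // the only variable was bound, so nothing changed
    assert_eq!(skipped, 1);
    println!("ok");
}
```

This mirrors how `RegionFolder::fold_region` skips `ReLateBound` regions whose debruijn depth is below `current_depth`.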
@ -260,14 +262,14 @@ pub struct RegionFolder<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
skipped_regions: &'a mut bool,
current_depth: u32,
fld_r: &'a mut (FnMut(ty::Region, u32) -> ty::Region + 'a),
fld_r: &'a mut (FnMut(&'tcx ty::Region, u32) -> &'tcx ty::Region + 'a),
}
impl<'a, 'gcx, 'tcx> RegionFolder<'a, 'gcx, 'tcx> {
pub fn new<F>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
skipped_regions: &'a mut bool,
fld_r: &'a mut F) -> RegionFolder<'a, 'gcx, 'tcx>
where F : FnMut(ty::Region, u32) -> ty::Region
where F : FnMut(&'tcx ty::Region, u32) -> &'tcx ty::Region
{
RegionFolder {
tcx: tcx,
@ -288,8 +290,8 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for RegionFolder<'a, 'gcx, 'tcx> {
t
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
match r {
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
match *r {
ty::ReLateBound(debruijn, _) if debruijn.depth < self.current_depth => {
debug!("RegionFolder.fold_region({:?}) skipped bound region (current depth={})",
r, self.current_depth);
@ -313,16 +315,16 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for RegionFolder<'a, 'gcx, 'tcx> {
struct RegionReplacer<'a, 'gcx: 'a+'tcx, 'tcx: 'a> {
tcx: TyCtxt<'a, 'gcx, 'tcx>,
current_depth: u32,
fld_r: &'a mut (FnMut(ty::BoundRegion) -> ty::Region + 'a),
map: FnvHashMap<ty::BoundRegion, ty::Region>
fld_r: &'a mut (FnMut(ty::BoundRegion) -> &'tcx ty::Region + 'a),
map: FnvHashMap<ty::BoundRegion, &'tcx ty::Region>
}
impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
pub fn replace_late_bound_regions<T,F>(self,
value: &Binder<T>,
mut f: F)
-> (T, FnvHashMap<ty::BoundRegion, ty::Region>)
where F : FnMut(ty::BoundRegion) -> ty::Region,
-> (T, FnvHashMap<ty::BoundRegion, &'tcx ty::Region>)
where F : FnMut(ty::BoundRegion) -> &'tcx ty::Region,
T : TypeFoldable<'tcx>,
{
let mut replacer = RegionReplacer::new(self, &mut f);
@ -340,7 +342,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
where T : TypeFoldable<'tcx>
{
self.replace_late_bound_regions(value, |br| {
ty::ReFree(ty::FreeRegion{scope: all_outlive_scope, bound_region: br})
self.mk_region(ty::ReFree(ty::FreeRegion {
scope: all_outlive_scope,
bound_region: br
}))
}).0
}
@ -353,11 +358,11 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
let bound0_value = bound2_value.skip_binder().skip_binder();
let value = self.fold_regions(bound0_value, &mut false,
|region, current_depth| {
match region {
match *region {
ty::ReLateBound(debruijn, br) if debruijn.depth >= current_depth => {
// should be true if no escaping regions from bound2_value
assert!(debruijn.depth - current_depth <= 1);
ty::ReLateBound(ty::DebruijnIndex::new(current_depth), br)
self.mk_region(ty::ReLateBound(ty::DebruijnIndex::new(current_depth), br))
}
_ => {
region
@ -411,7 +416,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
pub fn erase_late_bound_regions<T>(self, value: &Binder<T>) -> T
where T : TypeFoldable<'tcx>
{
self.replace_late_bound_regions(value, |_| ty::ReErased).0
self.replace_late_bound_regions(value, |_| self.mk_region(ty::ReErased)).0
}
/// Rewrite any late-bound regions so that they are anonymous. Region numbers are
@ -428,7 +433,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
let mut counter = 0;
Binder(self.replace_late_bound_regions(sig, |_| {
counter += 1;
ty::ReLateBound(ty::DebruijnIndex::new(1), ty::BrAnon(counter))
self.mk_region(ty::ReLateBound(ty::DebruijnIndex::new(1), ty::BrAnon(counter)))
}).0)
}
}
@ -436,7 +441,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
impl<'a, 'gcx, 'tcx> RegionReplacer<'a, 'gcx, 'tcx> {
fn new<F>(tcx: TyCtxt<'a, 'gcx, 'tcx>, fld_r: &'a mut F)
-> RegionReplacer<'a, 'gcx, 'tcx>
where F : FnMut(ty::BoundRegion) -> ty::Region
where F : FnMut(ty::BoundRegion) -> &'tcx ty::Region
{
RegionReplacer {
tcx: tcx,
@ -465,22 +470,22 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for RegionReplacer<'a, 'gcx, 'tcx> {
t.super_fold_with(self)
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
match r {
    fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
match *r {
ty::ReLateBound(debruijn, br) if debruijn.depth == self.current_depth => {
let fld_r = &mut self.fld_r;
let region = *self.map.entry(br).or_insert_with(|| fld_r(br));
if let ty::ReLateBound(debruijn1, br) = region {
if let ty::ReLateBound(debruijn1, br) = *region {
// If the callback returns a late-bound region,
// that region should always use depth 1. Then we
// adjust it to the correct depth.
assert_eq!(debruijn1.depth, 1);
ty::ReLateBound(debruijn, br)
self.tcx.mk_region(ty::ReLateBound(debruijn, br))
} else {
region
}
}
r => r
_ => r
}
}
}
@ -528,7 +533,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
u.super_fold_with(self)
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
// because late-bound regions affect subtyping, we can't
// erase the bound/free distinction, but we can replace
// all free regions with 'erased.
@ -537,9 +542,9 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
// type system never "sees" those, they get substituted
// away. In trans, they will always be erased to 'erased
// whenever a substitution occurs.
match r {
match *r {
ty::ReLateBound(..) => r,
_ => ty::ReErased
_ => self.tcx().mk_region(ty::ReErased)
}
}
}
@ -574,7 +579,7 @@ pub fn shift_regions<'a, 'gcx, 'tcx, T>(tcx: TyCtxt<'a, 'gcx, 'tcx>,
value, amount);
value.fold_with(&mut RegionFolder::new(tcx, &mut false, &mut |region, _current_depth| {
shift_region(region, amount)
tcx.mk_region(shift_region(*region, amount))
}))
}
@ -616,7 +621,7 @@ impl<'tcx> TypeVisitor<'tcx> for HasEscapingRegionsVisitor {
t.region_depth > self.depth
}
fn visit_region(&mut self, r: ty::Region) -> bool {
fn visit_region(&mut self, r: &'tcx ty::Region) -> bool {
r.escapes_depth(self.depth)
}
}
@ -630,17 +635,18 @@ impl<'tcx> TypeVisitor<'tcx> for HasTypeFlagsVisitor {
t.flags.get().intersects(self.flags)
}
fn visit_region(&mut self, r: ty::Region) -> bool {
fn visit_region(&mut self, r: &'tcx ty::Region) -> bool {
if self.flags.intersects(ty::TypeFlags::HAS_LOCAL_NAMES) {
// does this represent a region that cannot be named
// in a global way? used in fulfillment caching.
match r {
match *r {
ty::ReStatic | ty::ReEmpty | ty::ReErased => {}
_ => return true,
}
}
if self.flags.intersects(ty::TypeFlags::HAS_RE_INFER) {
match r {
if self.flags.intersects(ty::TypeFlags::HAS_RE_INFER |
ty::TypeFlags::KEEP_IN_LOCAL_TCX) {
match *r {
ty::ReVar(_) | ty::ReSkolemized(..) => { return true }
_ => {}
}
@ -688,8 +694,8 @@ impl<'tcx> TypeVisitor<'tcx> for LateBoundRegionsCollector {
t.super_visit_with(self)
}
fn visit_region(&mut self, r: ty::Region) -> bool {
match r {
fn visit_region(&mut self, r: &'tcx ty::Region) -> bool {
match *r {
ty::ReLateBound(debruijn, br) if debruijn.depth == self.current_depth => {
self.regions.insert(br);
}

View File

@ -264,7 +264,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
match self_ty.sty {
ty::TyStruct(adt_def, substs) |
ty::TyEnum(adt_def, substs) => {
if substs.types.is_empty() { // ignore regions
if substs.types().next().is_none() { // ignore regions
self.push_item_path(buffer, adt_def.did);
} else {
buffer.push(&format!("<{}>", self_ty));

View File

@ -38,7 +38,7 @@ dep_map_ty! { TraitItemDefIds: TraitItemDefIds(DefId) -> Rc<Vec<ty::ImplOrTraitI
dep_map_ty! { ImplTraitRefs: ItemSignature(DefId) -> Option<ty::TraitRef<'tcx>> }
dep_map_ty! { TraitDefs: ItemSignature(DefId) -> &'tcx ty::TraitDef<'tcx> }
dep_map_ty! { AdtDefs: ItemSignature(DefId) -> ty::AdtDefMaster<'tcx> }
dep_map_ty! { ItemVariances: ItemSignature(DefId) -> Rc<ty::ItemVariances> }
dep_map_ty! { ItemVariances: ItemSignature(DefId) -> Rc<Vec<ty::Variance>> }
dep_map_ty! { InherentImpls: InherentImpls(DefId) -> Rc<Vec<DefId>> }
dep_map_ty! { ImplItems: ImplItems(DefId) -> Vec<ty::ImplOrTraitItemId> }
dep_map_ty! { TraitItems: TraitItems(DefId) -> Rc<Vec<ty::ImplOrTraitItem<'tcx>>> }

View File

@ -343,7 +343,7 @@ pub struct Method<'tcx> {
pub generics: &'tcx Generics<'tcx>,
pub predicates: GenericPredicates<'tcx>,
pub fty: &'tcx BareFnTy<'tcx>,
pub explicit_self: ExplicitSelfCategory,
pub explicit_self: ExplicitSelfCategory<'tcx>,
pub vis: Visibility,
pub defaultness: hir::Defaultness,
pub def_id: DefId,
@ -355,7 +355,7 @@ impl<'tcx> Method<'tcx> {
generics: &'tcx ty::Generics<'tcx>,
predicates: GenericPredicates<'tcx>,
fty: &'tcx BareFnTy<'tcx>,
explicit_self: ExplicitSelfCategory,
explicit_self: ExplicitSelfCategory<'tcx>,
vis: Visibility,
defaultness: hir::Defaultness,
def_id: DefId,
@ -417,21 +417,6 @@ pub struct AssociatedType<'tcx> {
pub container: ImplOrTraitItemContainer,
}
#[derive(Clone, PartialEq, RustcDecodable, RustcEncodable)]
pub struct ItemVariances {
pub types: Vec<Variance>,
pub regions: Vec<Variance>,
}
impl ItemVariances {
pub fn empty() -> ItemVariances {
ItemVariances {
types: vec![],
regions: vec![],
}
}
}
#[derive(Clone, PartialEq, RustcDecodable, RustcEncodable, Copy)]
pub enum Variance {
Covariant, // T<A> <: T<B> iff A <: B -- e.g., function return type
@ -658,28 +643,28 @@ pub enum BorrowKind {
/// Information describing the capture of an upvar. This is computed
/// during `typeck`, specifically by `regionck`.
#[derive(PartialEq, Clone, Debug, Copy)]
pub enum UpvarCapture {
pub enum UpvarCapture<'tcx> {
/// Upvar is captured by value. This is always true when the
/// closure is labeled `move`, but can also be true in other cases
/// depending on inference.
ByValue,
/// Upvar is captured by reference.
ByRef(UpvarBorrow),
ByRef(UpvarBorrow<'tcx>),
}
#[derive(PartialEq, Clone, Copy)]
pub struct UpvarBorrow {
pub struct UpvarBorrow<'tcx> {
/// The kind of borrow: by-ref upvars have access to shared
/// immutable borrows, which are not part of the normal language
/// syntax.
pub kind: BorrowKind,
/// Region of the resulting reference.
pub region: ty::Region,
pub region: &'tcx ty::Region,
}
pub type UpvarCaptureMap = FnvHashMap<UpvarId, UpvarCapture>;
pub type UpvarCaptureMap<'tcx> = FnvHashMap<UpvarId, UpvarCapture<'tcx>>;
#[derive(Copy, Clone)]
pub struct ClosureUpvar<'tcx> {
@ -700,7 +685,7 @@ pub enum IntVarValue {
/// this is `None`, then the default is inherited from the
/// surrounding context. See RFC #599 for details.
#[derive(Copy, Clone)]
pub enum ObjectLifetimeDefault {
pub enum ObjectLifetimeDefault<'tcx> {
/// Require an explicit annotation. Occurs when multiple
/// `T:'a` constraints are found.
Ambiguous,
@ -709,7 +694,7 @@ pub enum ObjectLifetimeDefault {
BaseDefault,
/// Use the given region as the default.
Specific(Region),
Specific(&'tcx Region),
}
#[derive(Clone)]
@ -719,18 +704,18 @@ pub struct TypeParameterDef<'tcx> {
pub index: u32,
    pub default_def_id: DefId, // for use in error reporting about defaults
pub default: Option<Ty<'tcx>>,
pub object_lifetime_default: ObjectLifetimeDefault,
pub object_lifetime_default: ObjectLifetimeDefault<'tcx>,
}
#[derive(Clone)]
pub struct RegionParameterDef {
pub struct RegionParameterDef<'tcx> {
pub name: Name,
pub def_id: DefId,
pub index: u32,
pub bounds: Vec<ty::Region>,
pub bounds: Vec<&'tcx ty::Region>,
}
impl RegionParameterDef {
impl<'tcx> RegionParameterDef<'tcx> {
pub fn to_early_bound_region(&self) -> ty::Region {
ty::ReEarlyBound(ty::EarlyBoundRegion {
index: self.index,
@ -750,11 +735,25 @@ pub struct Generics<'tcx> {
pub parent: Option<DefId>,
pub parent_regions: u32,
pub parent_types: u32,
pub regions: Vec<RegionParameterDef>,
pub regions: Vec<RegionParameterDef<'tcx>>,
pub types: Vec<TypeParameterDef<'tcx>>,
pub has_self: bool,
}
impl<'tcx> Generics<'tcx> {
pub fn parent_count(&self) -> usize {
self.parent_regions as usize + self.parent_types as usize
}
pub fn own_count(&self) -> usize {
self.regions.len() + self.types.len()
}
pub fn count(&self) -> usize {
self.parent_count() + self.own_count()
}
}
/// Bounds on generics.
#[derive(Clone)]
pub struct GenericPredicates<'tcx> {
@ -812,7 +811,7 @@ pub enum Predicate<'tcx> {
Equate(PolyEquatePredicate<'tcx>),
/// where 'a : 'b
RegionOutlives(PolyRegionOutlivesPredicate),
RegionOutlives(PolyRegionOutlivesPredicate<'tcx>),
/// where T : 'a
TypeOutlives(PolyTypeOutlivesPredicate<'tcx>),
@ -951,7 +950,6 @@ impl<'tcx> TraitPredicate<'tcx> {
// leads to more recompilation.
let def_ids: Vec<_> =
self.input_types()
.iter()
.flat_map(|t| t.walk())
.filter_map(|t| match t.sty {
ty::TyStruct(adt_def, _) |
@ -964,8 +962,8 @@ impl<'tcx> TraitPredicate<'tcx> {
DepNode::TraitSelect(self.def_id(), def_ids)
}
pub fn input_types(&self) -> &[Ty<'tcx>] {
&self.trait_ref.substs.types
pub fn input_types<'a>(&'a self) -> impl DoubleEndedIterator<Item=Ty<'tcx>> + 'a {
self.trait_ref.input_types()
}
pub fn self_ty(&self) -> Ty<'tcx> {
@ -992,8 +990,9 @@ pub type PolyEquatePredicate<'tcx> = ty::Binder<EquatePredicate<'tcx>>;
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
pub struct OutlivesPredicate<A,B>(pub A, pub B); // `A : B`
pub type PolyOutlivesPredicate<A,B> = ty::Binder<OutlivesPredicate<A,B>>;
pub type PolyRegionOutlivesPredicate = PolyOutlivesPredicate<ty::Region, ty::Region>;
pub type PolyTypeOutlivesPredicate<'tcx> = PolyOutlivesPredicate<Ty<'tcx>, ty::Region>;
pub type PolyRegionOutlivesPredicate<'tcx> = PolyOutlivesPredicate<&'tcx ty::Region,
&'tcx ty::Region>;
pub type PolyTypeOutlivesPredicate<'tcx> = PolyOutlivesPredicate<Ty<'tcx>, &'tcx ty::Region>;
/// This kind of predicate has no *direct* correspondent in the
/// syntax, but it roughly corresponds to the syntactic forms:
@ -1082,7 +1081,7 @@ impl<'tcx> ToPredicate<'tcx> for PolyEquatePredicate<'tcx> {
}
}
impl<'tcx> ToPredicate<'tcx> for PolyRegionOutlivesPredicate {
impl<'tcx> ToPredicate<'tcx> for PolyRegionOutlivesPredicate<'tcx> {
fn to_predicate(&self) -> Predicate<'tcx> {
Predicate::RegionOutlives(self.clone())
}
@ -1107,7 +1106,7 @@ impl<'tcx> Predicate<'tcx> {
pub fn walk_tys(&self) -> IntoIter<Ty<'tcx>> {
let vec: Vec<_> = match *self {
ty::Predicate::Trait(ref data) => {
data.0.trait_ref.input_types().to_vec()
data.skip_binder().input_types().collect()
}
ty::Predicate::Rfc1592(ref data) => {
return data.walk_tys()
@ -1123,10 +1122,7 @@ impl<'tcx> Predicate<'tcx> {
}
ty::Predicate::Projection(ref data) => {
let trait_inputs = data.0.projection_ty.trait_ref.input_types();
trait_inputs.iter()
.cloned()
.chain(Some(data.0.ty))
.collect()
trait_inputs.chain(Some(data.0.ty)).collect()
}
ty::Predicate::WellFormed(data) => {
vec![data]
@ -1206,15 +1202,15 @@ impl<'tcx> TraitRef<'tcx> {
}
pub fn self_ty(&self) -> Ty<'tcx> {
self.substs.types[0]
self.substs.type_at(0)
}
pub fn input_types(&self) -> &[Ty<'tcx>] {
pub fn input_types<'a>(&'a self) -> impl DoubleEndedIterator<Item=Ty<'tcx>> + 'a {
// Select only the "input types" from a trait-reference. For
// now this is all the types that appear in the
// trait-reference, but it should eventually exclude
// associated types.
&self.substs.types
self.substs.types()
}
}
@ -1239,7 +1235,7 @@ pub struct ParameterEnvironment<'tcx> {
/// indicates it must outlive at least the function body (the user
/// may specify stronger requirements). This field indicates the
/// region of the callee.
pub implicit_region_bound: ty::Region,
pub implicit_region_bound: &'tcx ty::Region,
/// Obligations that the caller must satisfy. This is basically
/// the set of bounds on the in-scope type parameters, translated
@ -1866,7 +1862,7 @@ impl<'a, 'tcx> AdtDefData<'tcx, 'tcx> {
};
let sized_predicate = Binder(TraitRef {
def_id: sized_trait,
substs: Substs::new_trait(tcx, vec![], vec![], ty)
substs: Substs::new_trait(tcx, ty, &[])
}).to_predicate();
let predicates = tcx.lookup_predicates(self.did).predicates;
if predicates.into_iter().any(|p| p == sized_predicate) {
@ -2593,7 +2589,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
|| self.lookup_repr_hints(did).contains(&attr::ReprSimd)
}
pub fn item_variances(self, item_id: DefId) -> Rc<ItemVariances> {
pub fn item_variances(self, item_id: DefId) -> Rc<Vec<ty::Variance>> {
lookup_locally_or_in_crate_store(
"item_variance_map", item_id, &self.item_variance_map,
|| Rc::new(self.sess.cstore.item_variances(item_id)))
@ -2827,7 +2823,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
ty::ParameterEnvironment {
free_substs: Substs::empty(self),
caller_bounds: Vec::new(),
implicit_region_bound: ty::ReEmpty,
implicit_region_bound: self.mk_region(ty::ReEmpty),
free_id_outlive: free_id_outlive
}
}
@ -2843,8 +2839,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
let substs = Substs::for_item(self.global_tcx(), def_id, |def, _| {
// map bound 'a => free 'a
ReFree(FreeRegion { scope: free_id_outlive,
bound_region: def.to_bound_region() })
self.global_tcx().mk_region(ReFree(FreeRegion {
scope: free_id_outlive,
bound_region: def.to_bound_region()
}))
}, |def, _| {
// map T => T
self.global_tcx().mk_param_from_def(def)
@ -2894,7 +2892,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
let unnormalized_env = ty::ParameterEnvironment {
free_substs: free_substs,
implicit_region_bound: ty::ReScope(free_id_outlive),
implicit_region_bound: tcx.mk_region(ty::ReScope(free_id_outlive)),
caller_bounds: predicates,
free_id_outlive: free_id_outlive,
};
@ -2903,6 +2901,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
traits::normalize_param_env_or_error(tcx, unnormalized_env, cause)
}
pub fn node_scope_region(self, id: NodeId) -> &'tcx Region {
self.mk_region(ty::ReScope(self.region_maps.node_extent(id)))
}
pub fn is_method_call(self, expr_id: NodeId) -> bool {
self.tables.borrow().method_map.contains_key(&MethodCall::expr(expr_id))
}
@ -2912,7 +2914,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
autoderefs))
}
pub fn upvar_capture(self, upvar_id: ty::UpvarId) -> Option<ty::UpvarCapture> {
pub fn upvar_capture(self, upvar_id: ty::UpvarId) -> Option<ty::UpvarCapture<'tcx>> {
Some(self.tables.borrow().upvar_capture_map.get(&upvar_id).unwrap().clone())
}
@ -2938,10 +2940,10 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
/// The category of explicit self.
#[derive(Clone, Copy, Eq, PartialEq, Debug)]
pub enum ExplicitSelfCategory {
pub enum ExplicitSelfCategory<'tcx> {
Static,
ByValue,
ByReference(Region, hir::Mutability),
ByReference(&'tcx Region, hir::Mutability),
ByBox,
}


@ -17,7 +17,7 @@ use ty::{self, Ty, TypeFoldable};
#[derive(Debug)]
pub enum Component<'tcx> {
Region(ty::Region),
Region(&'tcx ty::Region),
Param(ty::ParamTy),
UnresolvedInferenceVariable(ty::InferTy),
@ -210,7 +210,7 @@ impl<'a, 'gcx, 'tcx> InferCtxt<'a, 'gcx, 'tcx> {
}
}
fn push_region_constraints<'tcx>(out: &mut Vec<Component<'tcx>>, regions: Vec<ty::Region>) {
fn push_region_constraints<'tcx>(out: &mut Vec<Component<'tcx>>, regions: Vec<&'tcx ty::Region>) {
for r in regions {
if !r.is_bound() {
out.push(Component::Region(r));


@ -14,7 +14,7 @@
//! type equality, etc.
use hir::def_id::DefId;
use ty::subst::Substs;
use ty::subst::{Kind, Substs};
use ty::{self, Ty, TyCtxt, TypeFoldable};
use ty::error::{ExpectedFound, TypeError};
use std::rc::Rc;
@ -71,8 +71,8 @@ pub trait TypeRelation<'a, 'gcx: 'a+'tcx, 'tcx: 'a> : Sized {
fn tys(&mut self, a: Ty<'tcx>, b: Ty<'tcx>)
-> RelateResult<'tcx, Ty<'tcx>>;
fn regions(&mut self, a: ty::Region, b: ty::Region)
-> RelateResult<'tcx, ty::Region>;
fn regions(&mut self, a: &'tcx ty::Region, b: &'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region>;
fn binders<T>(&mut self, a: &ty::Binder<T>, b: &ty::Binder<T>)
-> RelateResult<'tcx, ty::Binder<T>>
@ -139,7 +139,7 @@ fn relate_item_substs<'a, 'gcx, 'tcx, R>(relation: &mut R,
}
pub fn relate_substs<'a, 'gcx, 'tcx, R>(relation: &mut R,
variances: Option<&ty::ItemVariances>,
variances: Option<&Vec<ty::Variance>>,
a_subst: &'tcx Substs<'tcx>,
b_subst: &'tcx Substs<'tcx>)
-> RelateResult<'tcx, &'tcx Substs<'tcx>>
@ -147,19 +147,18 @@ pub fn relate_substs<'a, 'gcx, 'tcx, R>(relation: &mut R,
{
let tcx = relation.tcx();
let types = a_subst.types.iter().enumerate().map(|(i, a_ty)| {
let b_ty = &b_subst.types[i];
let variance = variances.map_or(ty::Invariant, |v| v.types[i]);
relation.relate_with_variance(variance, a_ty, b_ty)
}).collect::<Result<_, _>>()?;
let params = a_subst.params().iter().zip(b_subst.params()).enumerate().map(|(i, (a, b))| {
let variance = variances.map_or(ty::Invariant, |v| v[i]);
if let (Some(a_ty), Some(b_ty)) = (a.as_type(), b.as_type()) {
Ok(Kind::from(relation.relate_with_variance(variance, &a_ty, &b_ty)?))
} else if let (Some(a_r), Some(b_r)) = (a.as_region(), b.as_region()) {
Ok(Kind::from(relation.relate_with_variance(variance, &a_r, &b_r)?))
} else {
bug!()
}
});
let regions = a_subst.regions.iter().enumerate().map(|(i, a_r)| {
let b_r = &b_subst.regions[i];
let variance = variances.map_or(ty::Invariant, |v| v.regions[i]);
relation.relate_with_variance(variance, a_r, b_r)
}).collect::<Result<_, _>>()?;
Ok(Substs::new(tcx, types, regions))
Substs::maybe_new(tcx, params)
}
impl<'tcx> Relate<'tcx> for &'tcx ty::BareFnTy<'tcx> {
@ -473,9 +472,9 @@ pub fn super_relate_tys<'a, 'gcx, 'tcx, R>(relation: &mut R,
(&ty::TyRef(a_r, ref a_mt), &ty::TyRef(b_r, ref b_mt)) =>
{
let r = relation.relate_with_variance(ty::Contravariant, a_r, b_r)?;
let r = relation.relate_with_variance(ty::Contravariant, &a_r, &b_r)?;
let mt = relation.relate(a_mt, b_mt)?;
Ok(tcx.mk_ref(tcx.mk_region(r), mt))
Ok(tcx.mk_ref(r, mt))
}
(&ty::TyArray(a_t, sz_a), &ty::TyArray(b_t, sz_b)) =>
@ -571,11 +570,11 @@ impl<'tcx> Relate<'tcx> for &'tcx Substs<'tcx> {
}
}
impl<'tcx> Relate<'tcx> for ty::Region {
impl<'tcx> Relate<'tcx> for &'tcx ty::Region {
fn relate<'a, 'gcx, R>(relation: &mut R,
a: &ty::Region,
b: &ty::Region)
-> RelateResult<'tcx, ty::Region>
a: &&'tcx ty::Region,
b: &&'tcx ty::Region)
-> RelateResult<'tcx, &'tcx ty::Region>
where R: TypeRelation<'a, 'gcx, 'tcx>, 'gcx: 'a+'tcx, 'tcx: 'a
{
relation.regions(*a, *b)


@ -9,7 +9,6 @@
// except according to those terms.
use infer::type_variable;
use ty::subst::Substs;
use ty::{self, Lift, Ty, TyCtxt};
use ty::fold::{TypeFoldable, TypeFolder, TypeVisitor};
@ -73,13 +72,6 @@ impl<'tcx, T: Lift<'tcx>> Lift<'tcx> for Vec<T> {
}
}
impl<'tcx> Lift<'tcx> for ty::Region {
type Lifted = Self;
fn lift_to_tcx(&self, _: TyCtxt) -> Option<ty::Region> {
Some(*self)
}
}
impl<'a, 'tcx> Lift<'tcx> for ty::TraitRef<'a> {
type Lifted = ty::TraitRef<'tcx>;
fn lift_to_tcx<'b, 'gcx>(&self, tcx: TyCtxt<'b, 'gcx, 'tcx>) -> Option<Self::Lifted> {
@ -316,13 +308,21 @@ impl<'a, 'tcx> Lift<'tcx> for ty::error::TypeError<'a> {
FixedArraySize(x) => FixedArraySize(x),
TyParamSize(x) => TyParamSize(x),
ArgCount => ArgCount,
RegionsDoesNotOutlive(a, b) => RegionsDoesNotOutlive(a, b),
RegionsNotSame(a, b) => RegionsNotSame(a, b),
RegionsNoOverlap(a, b) => RegionsNoOverlap(a, b),
RegionsInsufficientlyPolymorphic(a, b) => {
RegionsInsufficientlyPolymorphic(a, b)
RegionsDoesNotOutlive(a, b) => {
return tcx.lift(&(a, b)).map(|(a, b)| RegionsDoesNotOutlive(a, b))
}
RegionsNotSame(a, b) => {
return tcx.lift(&(a, b)).map(|(a, b)| RegionsNotSame(a, b))
}
RegionsNoOverlap(a, b) => {
return tcx.lift(&(a, b)).map(|(a, b)| RegionsNoOverlap(a, b))
}
RegionsInsufficientlyPolymorphic(a, b) => {
return tcx.lift(&b).map(|b| RegionsInsufficientlyPolymorphic(a, b))
}
RegionsOverlyPolymorphic(a, b) => {
return tcx.lift(&b).map(|b| RegionsOverlyPolymorphic(a, b))
}
RegionsOverlyPolymorphic(a, b) => RegionsOverlyPolymorphic(a, b),
IntegerAsChar => IntegerAsChar,
IntMismatch(x) => IntMismatch(x),
FloatMismatch(x) => FloatMismatch(x),
@ -655,7 +655,7 @@ impl<'tcx> TypeFoldable<'tcx> for ty::ImplHeader<'tcx> {
}
}
impl<'tcx> TypeFoldable<'tcx> for ty::Region {
impl<'tcx> TypeFoldable<'tcx> for &'tcx ty::Region {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, _folder: &mut F) -> Self {
*self
}
@ -673,41 +673,6 @@ impl<'tcx> TypeFoldable<'tcx> for ty::Region {
}
}
impl<'tcx> TypeFoldable<'tcx> for &'tcx ty::Region {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, _folder: &mut F) -> Self {
*self
}
fn fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
let region = folder.fold_region(**self);
folder.tcx().mk_region(region)
}
fn super_visit_with<V: TypeVisitor<'tcx>>(&self, _visitor: &mut V) -> bool {
false
}
fn visit_with<V: TypeVisitor<'tcx>>(&self, visitor: &mut V) -> bool {
visitor.visit_region(**self)
}
}
impl<'tcx> TypeFoldable<'tcx> for &'tcx Substs<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
let types = self.types.fold_with(folder);
let regions = self.regions.fold_with(folder);
Substs::new(folder.tcx(), types, regions)
}
fn fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
folder.fold_substs(self)
}
fn super_visit_with<V: TypeVisitor<'tcx>>(&self, visitor: &mut V) -> bool {
self.types.visit_with(visitor) || self.regions.visit_with(visitor)
}
}
impl<'tcx> TypeFoldable<'tcx> for ty::ClosureSubsts<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
ty::ClosureSubsts {
@ -783,7 +748,7 @@ impl<'tcx> TypeFoldable<'tcx> for ty::TypeParameterDef<'tcx> {
}
}
impl<'tcx> TypeFoldable<'tcx> for ty::ObjectLifetimeDefault {
impl<'tcx> TypeFoldable<'tcx> for ty::ObjectLifetimeDefault<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
match *self {
ty::ObjectLifetimeDefault::Ambiguous =>
@ -805,7 +770,7 @@ impl<'tcx> TypeFoldable<'tcx> for ty::ObjectLifetimeDefault {
}
}
impl<'tcx> TypeFoldable<'tcx> for ty::RegionParameterDef {
impl<'tcx> TypeFoldable<'tcx> for ty::RegionParameterDef<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
ty::RegionParameterDef {
name: self.name,


@ -19,8 +19,8 @@ use util::common::ErrorReported;
use collections::enum_set::{self, EnumSet, CLike};
use std::fmt;
use std::ops;
use std::mem;
use std::ops;
use syntax::abi;
use syntax::ast::{self, Name};
use syntax::parse::token::keywords;
@ -293,7 +293,7 @@ impl<'tcx> Decodable for ClosureSubsts<'tcx> {
#[derive(Clone, PartialEq, Eq, Hash)]
pub struct TraitObject<'tcx> {
pub principal: PolyExistentialTraitRef<'tcx>,
pub region_bound: ty::Region,
pub region_bound: &'tcx ty::Region,
pub builtin_bounds: BuiltinBounds,
pub projection_bounds: Vec<PolyExistentialProjection<'tcx>>,
}
@ -335,7 +335,7 @@ impl<'tcx> PolyTraitRef<'tcx> {
self.0.substs
}
pub fn input_types(&self) -> &[Ty<'tcx>] {
pub fn input_types<'a>(&'a self) -> impl DoubleEndedIterator<Item=Ty<'tcx>> + 'a {
// FIXME(#20664) every use of this fn is probably a bug, it should yield Binder<>
self.0.input_types()
}
@ -360,12 +360,12 @@ pub struct ExistentialTraitRef<'tcx> {
}
impl<'tcx> ExistentialTraitRef<'tcx> {
pub fn input_types(&self) -> &[Ty<'tcx>] {
pub fn input_types<'a>(&'a self) -> impl DoubleEndedIterator<Item=Ty<'tcx>> + 'a {
// Select only the "input types" from a trait-reference. For
// now this is all the types that appear in the
// trait-reference, but it should eventually exclude
// associated types.
&self.substs.types
self.substs.types()
}
}
@ -376,7 +376,7 @@ impl<'tcx> PolyExistentialTraitRef<'tcx> {
self.0.def_id
}
pub fn input_types(&self) -> &[Ty<'tcx>] {
pub fn input_types<'a>(&'a self) -> impl DoubleEndedIterator<Item=Ty<'tcx>> + 'a {
// FIXME(#20664) every use of this fn is probably a bug, it should yield Binder<>
self.0.input_types()
}
@ -675,6 +675,15 @@ pub enum Region {
ReErased,
}
impl<'tcx> Decodable for &'tcx Region {
fn decode<D: Decoder>(d: &mut D) -> Result<&'tcx Region, D::Error> {
let r = Decodable::decode(d)?;
cstore::tls::with_decoding_context(d, |dcx, _| {
Ok(dcx.tcx().mk_region(r))
})
}
}
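The `Decodable` impl above decodes an owned `Region` and immediately interns it through the decoding context's `tcx`, handing back a shared `&'tcx` reference. A minimal sketch of that intern-while-decoding pattern — the `Region` enum and `Interner` here are hypothetical simplifications (rustc uses arenas and `cstore::tls`, not `Box::leak`):

```rust
use std::collections::HashSet;
use std::ptr;

// Interning: equal values map to one shared reference, so later equality
// checks can be cheap pointer comparisons. `Box::leak` stands in for an
// arena allocation in this sketch.
#[derive(PartialEq, Eq, Hash, Debug)]
enum Region {
    ReStatic,
    ReEmpty,
}

struct Interner {
    set: HashSet<&'static Region>,
}

impl Interner {
    fn new() -> Interner {
        Interner { set: HashSet::new() }
    }
    fn intern(&mut self, r: Region) -> &'static Region {
        if let Some(&existing) = self.set.get(&r) {
            return existing; // already interned: reuse the shared reference
        }
        let leaked: &'static Region = Box::leak(Box::new(r));
        self.set.insert(leaked);
        leaked
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern(Region::ReStatic);
    let b = interner.intern(Region::ReStatic);
    let c = interner.intern(Region::ReEmpty);
    assert!(ptr::eq(a, b));  // same value interns to the same pointer
    assert!(!ptr::eq(a, c));
    println!("ok");
}
```

Interning is what makes the `&'tcx Region` representation used throughout this commit cheap to copy and compare.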
#[derive(Copy, Clone, PartialEq, Eq, Hash, RustcEncodable, RustcDecodable, Debug)]
pub struct EarlyBoundRegion {
pub index: u32,
@ -1206,26 +1215,26 @@ impl<'a, 'gcx, 'tcx> TyS<'tcx> {
/// Returns the regions directly referenced from this type (but
/// not types reachable from this type via `walk_tys`). This
/// ignores late-bound regions binders.
pub fn regions(&self) -> Vec<ty::Region> {
pub fn regions(&self) -> Vec<&'tcx ty::Region> {
match self.sty {
TyRef(region, _) => {
vec![*region]
vec![region]
}
TyTrait(ref obj) => {
let mut v = vec![obj.region_bound];
v.extend_from_slice(&obj.principal.skip_binder().substs.regions);
v.extend(obj.principal.skip_binder().substs.regions());
v
}
TyEnum(_, substs) |
TyStruct(_, substs) |
TyAnon(_, substs) => {
substs.regions.to_vec()
substs.regions().collect()
}
TyClosure(_, ref substs) => {
substs.func_substs.regions.to_vec()
substs.func_substs.regions().collect()
}
TyProjection(ref data) => {
data.trait_ref.substs.regions.to_vec()
data.trait_ref.substs.regions().collect()
}
TyFnDef(..) |
TyFnPtr(_) |

View File

@ -13,41 +13,156 @@
use middle::cstore;
use hir::def_id::DefId;
use ty::{self, Ty, TyCtxt};
use ty::fold::{TypeFoldable, TypeFolder};
use ty::fold::{TypeFoldable, TypeFolder, TypeVisitor};
use serialize::{Encodable, Encoder, Decodable, Decoder};
use syntax_pos::{Span, DUMMY_SP};
///////////////////////////////////////////////////////////////////////////
use core::nonzero::NonZero;
use std::fmt;
use std::iter;
use std::marker::PhantomData;
use std::mem;
/// An entity in the Rust typesystem, which can be one of
/// several kinds (only types and lifetimes for now).
/// To reduce memory usage, a `Kind` is a interned pointer,
/// with the lowest 2 bits being reserved for a tag to
/// indicate the type (`Ty` or `Region`) it points to.
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub struct Kind<'tcx> {
ptr: NonZero<usize>,
marker: PhantomData<(Ty<'tcx>, &'tcx ty::Region)>
}
const TAG_MASK: usize = 0b11;
const TYPE_TAG: usize = 0b00;
const REGION_TAG: usize = 0b01;
impl<'tcx> From<Ty<'tcx>> for Kind<'tcx> {
fn from(ty: Ty<'tcx>) -> Kind<'tcx> {
// Ensure we can use the tag bits.
assert_eq!(mem::align_of_val(ty) & TAG_MASK, 0);
let ptr = ty as *const _ as usize;
Kind {
ptr: unsafe {
NonZero::new(ptr | TYPE_TAG)
},
marker: PhantomData
}
}
}
impl<'tcx> From<&'tcx ty::Region> for Kind<'tcx> {
fn from(r: &'tcx ty::Region) -> Kind<'tcx> {
// Ensure we can use the tag bits.
assert_eq!(mem::align_of_val(r) & TAG_MASK, 0);
let ptr = r as *const _ as usize;
Kind {
ptr: unsafe {
NonZero::new(ptr | REGION_TAG)
},
marker: PhantomData
}
}
}
impl<'tcx> Kind<'tcx> {
#[inline]
unsafe fn downcast<T>(self, tag: usize) -> Option<&'tcx T> {
let ptr = *self.ptr;
if ptr & TAG_MASK == tag {
Some(&*((ptr & !TAG_MASK) as *const _))
} else {
None
}
}
#[inline]
pub fn as_type(self) -> Option<Ty<'tcx>> {
unsafe {
self.downcast(TYPE_TAG)
}
}
#[inline]
pub fn as_region(self) -> Option<&'tcx ty::Region> {
unsafe {
self.downcast(REGION_TAG)
}
}
}
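The tagged-pointer scheme above can be sketched standalone (hypothetical `Packed` type and tags, not rustc's actual `Kind`): because the pointees are aligned to at least 4 bytes, the two low bits of a pointer are always zero and can carry a discriminant instead.

```rust
// Hypothetical sketch of low-bit pointer tagging, as used by `Kind`.
const TAG_MASK: usize = 0b11;
const A_TAG: usize = 0b00;
const B_TAG: usize = 0b01;

struct Packed(usize);

impl Packed {
    fn from_a(a: &u32) -> Packed {
        let p = a as *const u32 as usize;
        assert_eq!(p & TAG_MASK, 0); // alignment guarantees free low bits
        Packed(p | A_TAG)
    }
    fn from_b(b: &f64) -> Packed {
        let p = b as *const f64 as usize;
        assert_eq!(p & TAG_MASK, 0);
        Packed(p | B_TAG)
    }
    fn as_a(&self) -> Option<&u32> {
        if self.0 & TAG_MASK == A_TAG {
            Some(unsafe { &*((self.0 & !TAG_MASK) as *const u32) })
        } else {
            None
        }
    }
    fn as_b(&self) -> Option<&f64> {
        if self.0 & TAG_MASK == B_TAG {
            Some(unsafe { &*((self.0 & !TAG_MASK) as *const f64) })
        } else {
            None
        }
    }
}

fn main() {
    let x = 7u32;
    let y = 2.5f64;
    let pa = Packed::from_a(&x);
    let pb = Packed::from_b(&y);
    assert_eq!(pa.as_a(), Some(&7u32)); // tag matches: downcast succeeds
    assert!(pa.as_b().is_none());       // tag mismatch: downcast fails
    assert_eq!(pb.as_b(), Some(&2.5f64));
    assert!(pb.as_a().is_none());
    println!("ok");
}
```

Note the same mask-and-compare shape as `downcast` above: check the tag bits, then strip them before dereferencing.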
impl<'tcx> fmt::Debug for Kind<'tcx> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
if let Some(ty) = self.as_type() {
write!(f, "{:?}", ty)
} else if let Some(r) = self.as_region() {
write!(f, "{:?}", r)
} else {
write!(f, "<unknown @ {:p}>", *self.ptr as *const ())
}
}
}
impl<'tcx> TypeFoldable<'tcx> for Kind<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
if let Some(ty) = self.as_type() {
Kind::from(ty.fold_with(folder))
} else if let Some(r) = self.as_region() {
Kind::from(r.fold_with(folder))
} else {
bug!()
}
}
fn super_visit_with<V: TypeVisitor<'tcx>>(&self, visitor: &mut V) -> bool {
if let Some(ty) = self.as_type() {
ty.visit_with(visitor)
} else if let Some(r) = self.as_region() {
r.visit_with(visitor)
} else {
bug!()
}
}
}
/// A substitution mapping type/region parameters to new values.
#[derive(Clone, PartialEq, Eq, Hash)]
#[derive(Clone, PartialEq, Eq, Debug, Hash)]
pub struct Substs<'tcx> {
pub types: Vec<Ty<'tcx>>,
pub regions: Vec<ty::Region>,
params: Vec<Kind<'tcx>>
}
impl<'a, 'gcx, 'tcx> Substs<'tcx> {
pub fn new(tcx: TyCtxt<'a, 'gcx, 'tcx>,
t: Vec<Ty<'tcx>>,
r: Vec<ty::Region>)
-> &'tcx Substs<'tcx>
{
tcx.mk_substs(Substs { types: t, regions: r })
pub fn new<I>(tcx: TyCtxt<'a, 'gcx, 'tcx>, params: I)
-> &'tcx Substs<'tcx>
where I: IntoIterator<Item=Kind<'tcx>> {
tcx.mk_substs(Substs {
params: params.into_iter().collect()
})
}
pub fn maybe_new<I, E>(tcx: TyCtxt<'a, 'gcx, 'tcx>, params: I)
-> Result<&'tcx Substs<'tcx>, E>
where I: IntoIterator<Item=Result<Kind<'tcx>, E>> {
Ok(tcx.mk_substs(Substs {
params: params.into_iter().collect::<Result<_, _>>()?
}))
}
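The `maybe_new` constructor relies on a general Rust idiom worth noting: `collect::<Result<_, _>>()?` builds a collection from fallible items and short-circuits on the first error. A minimal standalone sketch (hypothetical `maybe_collect`, plain `u32`/`String` in place of `Kind` and the caller's error type):

```rust
// Hypothetical sketch of the `collect::<Result<_, _>>()?` pattern
// used by `Substs::maybe_new`.
fn maybe_collect<I>(items: I) -> Result<Vec<u32>, String>
where
    I: IntoIterator<Item = Result<u32, String>>,
{
    // Collecting into Result<Vec<_>, _> stops at the first Err.
    Ok(items.into_iter().collect::<Result<_, _>>()?)
}

fn main() {
    assert_eq!(maybe_collect(vec![Ok(1), Ok(2)]), Ok(vec![1, 2]));
    assert_eq!(
        maybe_collect(vec![Ok(1), Err("bad".to_string()), Ok(3)]),
        Err("bad".to_string())
    );
    println!("ok");
}
```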
pub fn new_trait(tcx: TyCtxt<'a, 'gcx, 'tcx>,
mut t: Vec<Ty<'tcx>>,
r: Vec<ty::Region>,
s: Ty<'tcx>)
s: Ty<'tcx>,
t: &[Ty<'tcx>])
-> &'tcx Substs<'tcx>
{
t.insert(0, s);
Substs::new(tcx, t, r)
let t = iter::once(s).chain(t.iter().cloned());
Substs::new(tcx, t.map(Kind::from))
}
pub fn empty(tcx: TyCtxt<'a, 'gcx, 'tcx>) -> &'tcx Substs<'tcx> {
Substs::new(tcx, vec![], vec![])
Substs::new(tcx, vec![])
}
/// Creates a Substs for generic parameter definitions,
@@ -60,19 +175,16 @@ impl<'a, 'gcx, 'tcx> Substs<'tcx> {
mut mk_region: FR,
mut mk_type: FT)
-> &'tcx Substs<'tcx>
where FR: FnMut(&ty::RegionParameterDef, &Substs<'tcx>) -> ty::Region,
where FR: FnMut(&ty::RegionParameterDef, &Substs<'tcx>) -> &'tcx ty::Region,
FT: FnMut(&ty::TypeParameterDef<'tcx>, &Substs<'tcx>) -> Ty<'tcx> {
let defs = tcx.lookup_generics(def_id);
let num_regions = defs.parent_regions as usize + defs.regions.len();
let num_types = defs.parent_types as usize + defs.types.len();
let mut substs = Substs {
regions: Vec::with_capacity(num_regions),
types: Vec::with_capacity(num_types)
params: Vec::with_capacity(defs.count())
};
substs.fill_item(tcx, defs, &mut mk_region, &mut mk_type);
Substs::new(tcx, substs.types, substs.regions)
tcx.mk_substs(substs)
}
fn fill_item<FR, FT>(&mut self,
@@ -80,36 +192,76 @@ impl<'a, 'gcx, 'tcx> Substs<'tcx> {
defs: &ty::Generics<'tcx>,
mk_region: &mut FR,
mk_type: &mut FT)
where FR: FnMut(&ty::RegionParameterDef, &Substs<'tcx>) -> ty::Region,
where FR: FnMut(&ty::RegionParameterDef, &Substs<'tcx>) -> &'tcx ty::Region,
FT: FnMut(&ty::TypeParameterDef<'tcx>, &Substs<'tcx>) -> Ty<'tcx> {
if let Some(def_id) = defs.parent {
let parent_defs = tcx.lookup_generics(def_id);
self.fill_item(tcx, parent_defs, mk_region, mk_type);
}
for def in &defs.regions {
let region = mk_region(def, self);
assert_eq!(def.index as usize, self.regions.len());
self.regions.push(region);
// Handle Self first, before all regions.
let mut types = defs.types.iter();
if defs.parent.is_none() && defs.has_self {
let def = types.next().unwrap();
let ty = mk_type(def, self);
assert_eq!(def.index as usize, self.params.len());
self.params.push(Kind::from(ty));
}
for def in &defs.types {
for def in &defs.regions {
let region = mk_region(def, self);
assert_eq!(def.index as usize, self.params.len());
self.params.push(Kind::from(region));
}
for def in types {
let ty = mk_type(def, self);
assert_eq!(def.index as usize, self.types.len());
self.types.push(ty);
assert_eq!(def.index as usize, self.params.len());
self.params.push(Kind::from(ty));
}
}
pub fn is_noop(&self) -> bool {
self.regions.is_empty() && self.types.is_empty()
self.params.is_empty()
}
#[inline]
pub fn params(&self) -> &[Kind<'tcx>] {
&self.params
}
#[inline]
pub fn types(&'a self) -> impl DoubleEndedIterator<Item=Ty<'tcx>> + 'a {
self.params.iter().filter_map(|k| k.as_type())
}
#[inline]
pub fn regions(&'a self) -> impl DoubleEndedIterator<Item=&'tcx ty::Region> + 'a {
self.params.iter().filter_map(|k| k.as_region())
}
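Since types and regions now share one interleaved `params` list, the typed views are recovered with `filter_map`, as the accessors above show. A standalone sketch of the pattern (hypothetical `Param` enum with `u32` payloads, not rustc's actual `Kind`):

```rust
// Hypothetical sketch of the filter_map pattern behind the new
// `types()` / `regions()` accessors: one interleaved list, two
// typed views over it.
#[derive(Copy, Clone, Debug, PartialEq)]
enum Param {
    Type(u32),
    Region(u32),
}

fn types(params: &[Param]) -> impl DoubleEndedIterator<Item = u32> + '_ {
    params.iter().filter_map(|p| match *p {
        Param::Type(t) => Some(t),
        _ => None,
    })
}

fn regions(params: &[Param]) -> impl DoubleEndedIterator<Item = u32> + '_ {
    params.iter().filter_map(|p| match *p {
        Param::Region(r) => Some(r),
        _ => None,
    })
}

fn main() {
    let params = [Param::Type(0), Param::Region(1), Param::Type(2)];
    assert_eq!(types(&params).collect::<Vec<_>>(), vec![0, 2]);
    assert_eq!(regions(&params).collect::<Vec<_>>(), vec![1]);
    println!("ok");
}
```

`filter_map` over a slice iterator stays double-ended, which is why the accessors can promise `DoubleEndedIterator`.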
#[inline]
pub fn type_at(&self, i: usize) -> Ty<'tcx> {
self.params[i].as_type().unwrap_or_else(|| {
bug!("expected type for param #{} in {:?}", i, self.params);
})
}
#[inline]
pub fn region_at(&self, i: usize) -> &'tcx ty::Region {
self.params[i].as_region().unwrap_or_else(|| {
bug!("expected region for param #{} in {:?}", i, self.params);
})
}
#[inline]
pub fn type_for_def(&self, ty_param_def: &ty::TypeParameterDef) -> Ty<'tcx> {
self.types[ty_param_def.index as usize]
self.type_at(ty_param_def.index as usize)
}
pub fn region_for_def(&self, def: &ty::RegionParameterDef) -> ty::Region {
self.regions[def.index as usize]
#[inline]
pub fn region_for_def(&self, def: &ty::RegionParameterDef) -> &'tcx ty::Region {
self.region_at(def.index as usize)
}
/// Transform from substitutions for a child of `source_ancestor`
@@ -122,11 +274,27 @@ impl<'a, 'gcx, 'tcx> Substs<'tcx> {
target_substs: &Substs<'tcx>)
-> &'tcx Substs<'tcx> {
let defs = tcx.lookup_generics(source_ancestor);
let regions = target_substs.regions.iter()
.chain(&self.regions[defs.regions.len()..]).cloned().collect();
let types = target_substs.types.iter()
.chain(&self.types[defs.types.len()..]).cloned().collect();
Substs::new(tcx, types, regions)
tcx.mk_substs(Substs {
params: target_substs.params.iter()
.chain(&self.params[defs.own_count()..]).cloned().collect()
})
}
}
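The rebasing logic reduces to a prefix swap over the unified parameter list: take the target's parameters, then keep the child's own suffix past the ancestor's count. A standalone sketch with plain integers standing in for `Kind` parameters (hypothetical `rebase` helper):

```rust
// Hypothetical sketch of what `rebase_onto` does: swap the
// ancestor's prefix of parameters for the target's, keeping the
// child's own suffix.
fn rebase(child: &[u32], ancestor_count: usize, target: &[u32]) -> Vec<u32> {
    target.iter()
        .chain(&child[ancestor_count..])
        .cloned()
        .collect()
}

fn main() {
    // child = ancestor params [1, 2] followed by its own params [3, 4]
    let rebased = rebase(&[1, 2, 3, 4], 2, &[9, 8]);
    assert_eq!(rebased, vec![9, 8, 3, 4]);
    println!("ok");
}
```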
impl<'tcx> TypeFoldable<'tcx> for &'tcx Substs<'tcx> {
fn super_fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
let params = self.params.iter().map(|k| k.fold_with(folder)).collect();
folder.tcx().mk_substs(Substs {
params: params
})
}
fn fold_with<'gcx: 'tcx, F: TypeFolder<'gcx, 'tcx>>(&self, folder: &mut F) -> Self {
folder.fold_substs(self)
}
fn super_visit_with<V: TypeVisitor<'tcx>>(&self, visitor: &mut V) -> bool {
self.params.visit_with(visitor)
}
}
@@ -215,16 +383,18 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for SubstFolder<'a, 'gcx, 'tcx> {
t
}
fn fold_region(&mut self, r: ty::Region) -> ty::Region {
fn fold_region(&mut self, r: &'tcx ty::Region) -> &'tcx ty::Region {
// Note: This routine only handles regions that are bound on
// type declarations and other outer declarations, not those
// bound in *fn types*. Region substitution of the bound
// regions that appear in a function signature is done using
// the specialized routine `ty::replace_late_regions()`.
match r {
match *r {
ty::ReEarlyBound(data) => {
match self.substs.regions.get(data.index as usize) {
Some(&r) => {
let r = self.substs.params.get(data.index as usize)
.and_then(|k| k.as_region());
match r {
Some(r) => {
self.shift_region_through_binders(r)
}
None => {
@@ -278,9 +448,10 @@ impl<'a, 'gcx, 'tcx> TypeFolder<'gcx, 'tcx> for SubstFolder<'a, 'gcx, 'tcx> {
impl<'a, 'gcx, 'tcx> SubstFolder<'a, 'gcx, 'tcx> {
fn ty_for_param(&self, p: ty::ParamTy, source_ty: Ty<'tcx>) -> Ty<'tcx> {
// Look up the type in the substitutions. It really should be in there.
let opt_ty = self.substs.types.get(p.idx as usize);
let opt_ty = self.substs.params.get(p.idx as usize)
.and_then(|k| k.as_type());
let ty = match opt_ty {
Some(t) => *t,
Some(t) => t,
None => {
let span = self.span.unwrap_or(DUMMY_SP);
span_bug!(
@@ -291,7 +462,7 @@ impl<'a, 'gcx, 'tcx> SubstFolder<'a, 'gcx, 'tcx> {
source_ty,
p.idx,
self.root_ty,
self.substs);
self.substs.params);
}
};
@@ -354,8 +525,8 @@ impl<'a, 'gcx, 'tcx> SubstFolder<'a, 'gcx, 'tcx> {
result
}
fn shift_region_through_binders(&self, region: ty::Region) -> ty::Region {
ty::fold::shift_region(region, self.region_binders_passed)
fn shift_region_through_binders(&self, region: &'tcx ty::Region) -> &'tcx ty::Region {
self.tcx().mk_region(ty::fold::shift_region(*region, self.region_binders_passed))
}
}
@@ -367,12 +538,11 @@ impl<'a, 'gcx, 'tcx> ty::TraitRef<'tcx> {
substs: &Substs<'tcx>)
-> ty::TraitRef<'tcx> {
let defs = tcx.lookup_generics(trait_id);
let regions = substs.regions[..defs.regions.len()].to_vec();
let types = substs.types[..defs.types.len()].to_vec();
let params = substs.params[..defs.own_count()].iter().cloned();
ty::TraitRef {
def_id: trait_id,
substs: Substs::new(tcx, types, regions)
substs: Substs::new(tcx, params)
}
}
}
@@ -381,13 +551,13 @@ impl<'a, 'gcx, 'tcx> ty::ExistentialTraitRef<'tcx> {
pub fn erase_self_ty(tcx: TyCtxt<'a, 'gcx, 'tcx>,
trait_ref: ty::TraitRef<'tcx>)
-> ty::ExistentialTraitRef<'tcx> {
let Substs { mut types, regions } = trait_ref.substs.clone();
types.remove(0);
// Assert there is a Self.
trait_ref.substs.type_at(0);
let params = trait_ref.substs.params[1..].iter().cloned();
ty::ExistentialTraitRef {
def_id: trait_ref.def_id,
substs: Substs::new(tcx, types, regions)
substs: Substs::new(tcx, params)
}
}
}
@@ -404,13 +574,11 @@ impl<'a, 'gcx, 'tcx> ty::PolyExistentialTraitRef<'tcx> {
assert!(!self_ty.has_escaping_regions());
self.map_bound(|trait_ref| {
let Substs { mut types, regions } = trait_ref.substs.clone();
types.insert(0, self_ty);
let params = trait_ref.substs.params.iter().cloned();
let params = iter::once(Kind::from(self_ty)).chain(params);
ty::TraitRef {
def_id: trait_ref.def_id,
substs: Substs::new(tcx, types, regions)
substs: Substs::new(tcx, params)
}
})
}

Some files were not shown because too many files have changed in this diff.