danger of inference variables floating around without their inference
context.
The main insight here is that, when we are translating substitutions
between two impls, *we already know that the more specific impl holds*,
so we do not need to add its obligations to the parameter
environment. Instead, we can just thread through the inference context
we used to select the more specific impl in the first place.
Make associated type projection sensitive to "mode" (most importantly, trans vs middle).
This commit introduces several pieces of iteration infrastructure in the
specialization graph data structure, as well as various helpers for
finding the definition of a given item, given its kind and name.
In addition, associated type projection is now *mode-sensitive*, with
three possible modes:
- **Topmost**. This means that projection is only possible if there is a
non-`default` definition of the associated type directly on the
selected impl. This mode is a bit of a hack: it's used during early
coherence checking before we have built the specialization
graph (and therefore before we can walk up the specialization
parents to find other definitions). Eventually, this should be
replaced with a less "staged" construction of the specialization
graph.
- **AnyFinal**. Projection succeeds for any non-`default` associated
type definition, even if it is defined by a parent impl. Used
throughout typechecking.
- **Any**. Projection always succeeds. Used by trans.
The lasting distinction here is between `AnyFinal` and `Any` -- we wish
to treat `default` associated types opaquely for typechecking purposes.
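A rough sketch of how these modes might be expressed as an enum (the name `ProjectionMode` and the exact shape here are illustrative assumptions, not the compiler's actual definition):
```rust
// Illustrative sketch only; the real definition may differ.
enum ProjectionMode {
    /// Project only if the selected impl itself provides a non-`default`
    /// definition (early coherence checking, before the specialization
    /// graph has been built).
    Topmost,
    /// Project through any non-`default` definition, even one supplied
    /// by a parent impl (typechecking).
    AnyFinal,
    /// Always project (trans).
    Any,
}
```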
In addition to the above, the commit includes a few other minor review fixes.
This commit leverages the specialization graph infrastructure to allow
specializing trait implementations to leave off associated types for
which their parents have provided defaults.
It also modifies the type projection code to avoid projecting associated
types unless either (1) all input types are fully known or (2) the
available associated type is "final", i.e. not marked `default`.
This restriction is required for soundness, due to examples like:
```rust
trait Foo {
    type Assoc;
}

impl<T> Foo for T {
    default type Assoc = ();
}

impl Foo for u8 {
    type Assoc = String;
}

fn generic<T>() -> <T as Foo>::Assoc {
    () //~ ERROR
}

fn main() {
    let s: String = generic::<u8>();
    println!("{}", s); // bad news
}
```
This commit leverages the specialization graph infrastructure to allow
specializing trait implementations to leave off methods for which their
parents have provided defaults.
It does not yet check that the `default` keyword is appropriately used
in such cases.
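As a minimal sketch of the behavior (trait and method names are made up; this assumes the nightly `specialization` feature):
```rust
#![feature(specialization)]

trait Example {
    fn a(&self) -> &'static str;
    fn b(&self) -> &'static str;
}

impl<T> Example for T {
    default fn a(&self) -> &'static str { "generic a" }
    default fn b(&self) -> &'static str { "generic b" }
}

// The specializing impl overrides `a` but can now leave off `b`,
// inheriting the parent's `default` definition.
impl Example for u8 {
    fn a(&self) -> &'static str { "u8 a" }
}

fn main() {
    assert_eq!(1u8.a(), "u8 a");
    assert_eq!(1u8.b(), "generic b"); // provided by the parent impl
}
```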
- Rewrites the overlap checker to instead build up a specialization
graph, checking for overlap errors in the process.
- Uses the specialization order during impl selection.
This commit does not yet handle associated types correctly, and assumes
that all items are `default` and are overridden.
The module contains a few important components:
- The `specialize` function, which determines whether one impl is a
specialization of another.
- The `SpecializationGraph`, a per-trait graph recording the
specialization tree. The main purpose of the graph is to allow
traversals upwards (to less specialized impls) for discovering
un-overridden defaults, and for ensuring that overridden items are
allowed to be overridden.
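A rough sketch of the graph's shape and the upward traversal it supports (the type and field names here are illustrative assumptions, not the compiler's actual definitions):
```rust
use std::collections::HashMap;

type ImplId = u32; // stand-in for the compiler's impl identifiers

struct SpecializationGraph {
    /// Maps each impl to its immediate parent, i.e. the next less
    /// specialized impl, if any.
    parent: HashMap<ImplId, ImplId>,
}

impl SpecializationGraph {
    /// Walk upwards from `start` through successively less specialized
    /// impls; the first ancestor defining an item supplies its default.
    fn ancestors(&self, start: ImplId) -> Vec<ImplId> {
        let mut chain = vec![start];
        let mut cur = start;
        while let Some(&p) = self.parent.get(&cur) {
            chain.push(p);
            cur = p;
        }
        chain
    }
}
```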
typestrong const integers
~~It would be great if someone could run crater on this PR, as this has a high danger of breaking valid code~~ Crater ran. Good to go.
----
So this PR does a few things:
1. ~~const eval array values when const evaluating an array expression~~
2. ~~const eval repeat value when const evaluating a repeat expression~~
3. ~~const eval all struct and tuple fields when evaluating a struct/tuple expression~~
4. remove the `ConstVal::Int` and `ConstVal::Uint` variants and replace them with a single enum (`ConstInt`) which has variants for all integral types
* `usize`/`isize` are also enums with 32-bit and 64-bit variants. At creation and at various usage steps, assertions check that the target bitwidth matches the chosen enum variant (a sketch of the resulting shape follows this list)
5. enum discriminants (`ty::Disr`) are now `ConstInt`
6. trans has its own `Disr` type now (newtype around `u64`)
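A sketch of the shape described in item 4 (variant names are assumptions based on the description above; the real enums differ in detail):
```rust
// Illustrative sketch of `ConstInt` with target-width-tagged
// `usize`/`isize` variants.
enum ConstIsize { Is32(i32), Is64(i64) }
enum ConstUsize { Us32(u32), Us64(u64) }

enum ConstInt {
    I8(i8), I16(i16), I32(i32), I64(i64), Isize(ConstIsize),
    U8(u8), U16(u16), U32(u32), U64(u64), Usize(ConstUsize),
}
```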
This obviously can't be done without breaking changes (ones that are noticeable in stable code).
We could probably write lints that find those situations and error on them for a cycle or two. But then again, those situations are rare and are really bugs imo anyway:
```rust
let v10 = 10 as i8;
let v4 = 4 as isize;
assert_eq!(v10 << v4 as usize, 160 as i8);
```
stops compiling because `160` is not a valid `i8`
```rust
struct S<T, S> {
    a: T,
    b: u8,
    c: S,
}
let s = S { a: 0xff_ff_ff_ffu32, b: 1, c: 0xaa_aa_aa_aa as i32 };
```
stops compiling because `0xaa_aa_aa_aa` is not a valid `i32`
----
cc @eddyb @pnkfelix
related: https://github.com/rust-lang/rfcs/issues/1071
Define AVX compare and blend intrinsics
This defines the following intrinsics:
* `_mm256_blendv_pd`
* `_mm256_blendv_ps`
* `_mm256_cmp_pd`
* `_mm256_cmp_ps`
I verified these locally.
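For reference, a usage sketch of `_mm256_blendv_ps` (written against the `std::arch` paths available in current Rust, which may differ from the crate layout this PR targeted):
```rust
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx")]
unsafe fn blendv_demo() -> [f32; 8] {
    use std::arch::x86_64::*;
    let a = _mm256_set1_ps(1.0);
    let b = _mm256_set1_ps(2.0);
    // blendv selects per lane based on the sign bit of the mask:
    // sign bit set takes the lane from `b`, otherwise from `a`.
    let mask = _mm256_set_ps(-0.0, 0.0, -0.0, 0.0, -0.0, 0.0, -0.0, 0.0);
    let r = _mm256_blendv_ps(a, b, mask);
    let mut out = [0.0f32; 8];
    _mm256_storeu_ps(out.as_mut_ptr(), r);
    out // alternating 1.0 / 2.0 lanes
}
```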
Fixup stdout/stderr on Windows
WriteConsoleW can fail if called with a large buffer, so we need to slice
any stdout/stderr output.
However, the current slicing has a few problems:
1. It slices by byte but still expects valid UTF-8.
2. The slicing happens even when not outputting to a console.
3. panic! output is not sliced.
This fixes these issues by moving the slicing to just before the
WriteConsoleW call and slicing on a char boundary.
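A minimal sketch of the char-boundary slicing described here (`MAX_BUF` and the helper are illustrative, not the actual std source):
```rust
// Assumed cap, kept below the buffer size at which WriteConsoleW
// starts failing.
const MAX_BUF: usize = 8192;

/// Return the largest prefix of `s` that fits in `MAX_BUF` bytes and
/// ends on a UTF-8 char boundary, so WriteConsoleW never sees a torn
/// code point.
fn console_chunk(s: &str) -> &str {
    if s.len() <= MAX_BUF {
        return s;
    }
    let mut end = MAX_BUF;
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}
```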
This defines the `_mm256_blendv_pd` and `_mm256_blendv_ps` intrinsics.
The `_mm256_blend_pd` and `_mm256_blend_ps` intrinsics are not available
as LLVM intrinsics. In Clang they are implemented using the
shufflevector builtin.
Intel reference: https://software.intel.com/en-us/node/524070.