Semaphores are not currently designed to handle an initial count of 0
correctly, leading to very strange behavior. Semaphores as written are
intended to count *resources*, and it's not possible to have a negative
number of resources.
This alters the behavior and documentation to note that the task will be failed
if the initial count is 0.
Closes #15758
At the moment, writing generic functions for integer types that involve shifting is rather verbose. For example, a function that shifts an integer left by 1 currently requires:
````rust
use std::num::One;

fn f<T: Int>(x: T) -> T {
    x << One::one()
}
````
If the shift amount is not 1, it's even worse:
````rust
use std::num::FromPrimitive;

fn f<T: Int + FromPrimitive>(x: T) -> T {
    x << FromPrimitive::from_int(2).unwrap()
}
````
This patch allows the much simpler implementation:
````rust
fn f<T: Int>(x: T) -> T {
    x << 2
}
````
It accomplishes this by changing the built-in integer types (and the `Int` trait) to implement `Shl<uint, T>` instead of `Shl<T, T>` as it currently is defined. Note that the internal implementations of `shl` already cast the right-hand side to `uint`. `BigInt` also implements `Shl<uint, BigInt>`, so this increases consistency.
All of the above applies similarly to right shifts, i.e., `Shr<uint, T>`.
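As a rough illustration (the function below is hypothetical, not part of this patch), any `uint` value, literal or variable, now works directly as a shift amount under the new bounds:

````rust
// Hypothetical example: with `Int` bounded by `Shl<uint, T>` and `Shr<uint, T>`,
// a plain `uint` shift amount needs no conversion traits in either direction.
fn scale<T: Int>(x: T, by: uint) -> (T, T) {
    (x << by, x >> by)
}
````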
This fixes naming conventions for `push`/`pop` from either end of a structure by partially implementing @erickt's suggestion from https://github.com/rust-lang/rust/issues/10852#issuecomment-30823343, namely:
* push/pop from the 'back' are called `push` and `pop`.
* push/pop from the 'front' are called `push_front` and `pop_front`.
* `push`/`pop` are declared on the `MutableSeq` trait.
* Implement `MutableSeq` for `Vec`, `DList`, and `RingBuf`.
* Add `MutableSeq` to the prelude.
I did not make any further refactorings because the collections traits need more extensive thought. This is an easy first step that should close https://github.com/rust-lang/rust/issues/10852.
I left the `push_back` and `pop_back` methods on `DList` and `RingBuf` deprecated. Because `MutableSeq` is in the prelude, this shouldn't break much code, but it is still a breaking change.
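For illustration, here is a rough sketch of how the renamed methods read on `RingBuf` after this change (assuming the post-change API described above, with `MutableSeq` in scope via the prelude):

````rust
use std::collections::RingBuf;

fn main() {
    let mut buf: RingBuf<int> = RingBuf::new();
    buf.push(2);        // back; provided by `MutableSeq` (formerly `push_back`)
    buf.push_front(1);  // the front-end methods keep their explicit names
    assert_eq!(buf.pop(), Some(2));        // pops from the back
    assert_eq!(buf.pop_front(), Some(1));  // pops from the front
}
````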
Implement MutableSeq for Vec, DList, and RingBuf. Add MutableSeq to the prelude.
Since the collections traits are in the prelude, most consumers of
these methods will continue to work without change.
[breaking-change]
The first commit reverts a similar fix that only solves the `make install` case. This adds the `--enable-dist-host-only` flag to configure to preserve the old behavior, which the nightly bots rely on. The bots will need to be updated soon after this lands (or they will ~double in size).
Closes https://github.com/rust-lang/rust/issues/15711
The allocas used in match expressions currently don't get good lifetime
markers; in fact, they only get lifetime start markers, because their
lifetimes don't map onto cleanup scopes.
The bindings themselves are bog standard and just need a matching pair
of start and end markers, but they might need that pair twice: once for
the guard clause and once for the match body.
The `__llmatch` alloca, on the other hand, needs a single lifetime start
marker, but when there's a guard clause it needs two end markers, because
its lifetime ends either when the guard doesn't match or after the match
body.
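As an illustration (not code from this patch), consider an arm whose binding is read both by the guard and by the body: the binding's slot is live in both regions, while the `__llmatch` slot's lifetime ends either on the failed-guard path or after the body.

````rust
// Illustrative only: `m` needs a start/end marker pair around the guard check
// and another around the arm body; the failed-guard path ends one lifetime early.
fn classify(x: Option<int>) -> int {
    match x {
        Some(m) if m > 0 => m * 2,  // guard reads `m`, then the body reads it again
        Some(m) => -m,
        None => 0,
    }
}
````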
With these intrinsics in place, LLVM can now, for example, optimize
code like this:
````rust
enum E {
    A1(int),
    A2(int),
    A3(int),
    A4(int),
}

pub fn variants(x: E) {
    match x {
        A1(m) => bar(&m),
        A2(m) => bar(&m),
        A3(m) => bar(&m),
        A4(m) => bar(&m),
    }
}
````
To a single call to `bar`, using only a single stack slot. It still fails
to eliminate some of the checks:
````gas
.Ltmp5:
.cfi_def_cfa_offset 16
movb (%rdi), %al
testb %al, %al
je .LBB3_5
movzbl %al, %eax
cmpl $1, %eax
je .LBB3_5
cmpl $2, %eax
.LBB3_5:
movq 8(%rdi), %rax
movq %rax, (%rsp)
leaq (%rsp), %rdi
callq _ZN3bar20hcb7a0d8be8e17e37daaE@PLT
popq %rax
retq
````
1. Removed an obsolete comment regarding the recursive/iterative implementations of `tree_find_with`/`tree_find_mut_with`
2. Replaced the easily breakable `find_with` example with a simpler one (which only removes a redundant allocation during the search)
Lifetime intrinsics help to reduce stack usage, because LLVM can apply
stack coloring to reuse the stack slots of dead allocas for new ones.
For example, these two functions now use the same amount of stack,
whereas previously `bar()` used five times as much stack as `foo()`:
````rust
fn foo() {
    println!("{}", 5);
}

fn bar() {
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
    println!("{}", 5);
}
````
On top of that, LLVM can also optimize out certain operations when it
knows that memory is dead after a certain point. For example, it can
sometimes remove the zeroing used to cancel the drop glue. This is
possible when the drop glue call itself was already removed because the
zeroing dominated it. For example, in:
````rust
pub fn bar(x: (Box<int>, int)) -> (Box<int>, int) {
    x
}
````
With optimizations, this currently results in:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.memset.p0i8.i64(i8* %2, i8 0, i64 16, i32 8, i1 false)
ret void
}
````
But with lifetime intrinsics we get:
````llvm
define void @_ZN3bar20h330fa42547df8179niaE({ i64*, i64 }* noalias nocapture nonnull sret, { i64*, i64 }* noalias nocapture nonnull) unnamed_addr #0 {
"_ZN29_$LP$Box$LT$int$GT$$C$int$RP$39glue_drop.$x22glue_drop$x22$LP$1347$RP$17h88cf42702e5a322aE.exit":
%2 = bitcast { i64*, i64 }* %1 to i8*
%3 = bitcast { i64*, i64 }* %0 to i8*
tail call void @llvm.memcpy.p0i8.p0i8.i64(i8* %3, i8* %2, i64 16, i32 8, i1 false)
tail call void @llvm.lifetime.end(i64 16, i8* %2)
ret void
}
````
Fixes #15665
Adds a new runtime unwinding function that encapsulates the printing of the words "explicit failure" when `fail!()` is called without arguments.
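A minimal sketch of the idea (the name and signature below are hypothetical, not the actual runtime API): the constant message lives in a single helper, so each no-argument `fail!()` call site only has to pass its location rather than building the message string pair itself.

````rust
// Hypothetical sketch, not the real libstd entry point: the "explicit failure"
// text is owned by one helper instead of being materialized at every call site.
fn unwind_explicit_failure(file: &str, line: uint) -> ! {
    fail!("explicit failure at {}:{}", file, line)
}
````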
The before/after optimized assembly:
```
leaq "str\"str\"(1412)"(%rip), %rax
movq %rax, 24(%rsp)
movq $16, 32(%rsp)
leaq "str\"str\"(1413)"(%rip), %rax
movq %rax, 8(%rsp)
movq $19, 16(%rsp)
leaq 24(%rsp), %rdi
leaq 8(%rsp), %rsi
movl $11, %edx
callq _ZN6unwind12begin_unwind21h15836560661922107792E
```
```
leaq "str\"str\"(1369)"(%rip), %rax
movq %rax, 8(%rsp)
movq $19, 16(%rsp)
leaq 8(%rsp), %rdi
movl $11, %esi
callq _ZN6unwind31begin_unwind_no_time_to_explain20hd1c720cdde6a116480dE@PLT
```
Before/after filesizes:
```
rwxrwxr-x 1 brian brian 21479503 Jul 20 22:09 stage2-old/lib/librustc-4e7c5e5c.so
rwxrwxr-x 1 brian brian 21475415 Jul 20 22:30 x86_64-unknown-linux-gnu/stage2/lib/librustc-4e7c5e5c.so
```
This is the lowest-hanging fruit in the fail-bloat wars. Further fixes are going to require harder tradeoffs.
r? @pcwalton