This commit is an implementation of [RFC 1193][rfc], which adds the ability for
the compiler to cap the lint level for the entire compilation session. This flag
ensures that no lint is raised above the capped level, and Cargo will primarily
use it by passing `--cap-lints allow` to all upstream dependencies.

[rfc]: https://github.com/rust-lang/rfcs/pull/1193

Closes #27259
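As an illustration (not part of the commit), compiling the following with `rustc --cap-lints warn` downgrades the denied lint to a warning, and `--cap-lints allow` silences it entirely:

```rust
// Normally a hard error because of the `deny` attribute; under
// `--cap-lints warn` it is reported as a warning, and under
// `--cap-lints allow` it is suppressed altogether.
#![deny(unused_variables)]

fn main() {
    let unused = 42;
}
```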
`EmitterWriter::print_maybe_styled` was basically always used with `format!`, so wrapping the pattern in a macro makes some code cleaner. It should also remove some unnecessary allocations: most `print_maybe_styled` invocations previously allocated a `String`, whereas the new macro uses `write_fmt` to write the formatted string directly to the terminal.
This probably could have been part of #26838, but it’s too late now. It’s also rebased on #26838’s branch because otherwise pretty much all of the changes in this PR would conflict with the other PR’s changes.
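The gist of the allocation win, as a hypothetical sketch (the function names are illustrative, not the macro's actual expansion):

```rust
use std::io::{self, Write};

// Before: format into an intermediate String (a heap allocation),
// then write the bytes out.
fn emit_alloc(out: &mut dyn Write, line: u32) -> io::Result<()> {
    let s = format!("warning on line {}", line);
    out.write_all(s.as_bytes())
}

// After: hand the format arguments straight to the writer; `write_fmt`
// streams the formatted pieces without building an intermediate String.
fn emit_direct(out: &mut dyn Write, line: u32) -> io::Result<()> {
    out.write_fmt(format_args!("warning on line {}", line))
}
```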
The following APIs were all marked with a `#[stable]` tag (a small usage sketch follows the list):
* process::Child::id
* error::Error::is
* error::Error::downcast
* error::Error::downcast_ref
* error::Error::downcast_mut
* io::Error::get_ref
* io::Error::get_mut
* io::Error::into_inner
* hash::Hash::hash_slice
* hash::Hasher::write_{i,u}{8,16,32,64,size}
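For illustration (not part of the commit), a modern-Rust sketch exercising two of the newly stable `Error` methods:

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "my error")
    }
}

impl Error for MyError {}

fn main() {
    let err: Box<dyn Error> = Box::new(MyError);
    // `is` and `downcast_ref` are among the newly stabilized methods.
    assert!(err.is::<MyError>());
    assert!(err.downcast_ref::<MyError>().is_some());
}
```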
These methods are all covered by [RFC 1158] and are currently all available on
stable Rust via the [`net2` crate][net2] on crates.io. This commit does not
touch the timeout-related functions, as they're still waiting on `Duration`
(which is unstable anyway), so punting on them in favor of the `net2` crate
wouldn't buy much.
[RFC 1158]: https://github.com/rust-lang/rfcs/pull/1158
[net2]: http://crates.io/crates/net2
Specifically, this commit deprecates the following (a rough sketch of the `net2` replacements follows the list):
* TcpStream::set_nodelay
* TcpStream::set_keepalive
* UdpSocket::set_broadcast
* UdpSocket::set_multicast_loop
* UdpSocket::join_multicast
* UdpSocket::set_multicast_time_to_live
* UdpSocket::set_time_to_live
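On stable Rust, the replacements live in `net2`'s extension traits, roughly like this (the trait name and signature are my assumption about the `net2` 0.2 API, not taken from this commit):

```rust
use net2::TcpStreamExt; // assumed extension trait providing set_nodelay
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    let stream = TcpStream::connect("127.0.0.1:8080")?;
    // Replaces the deprecated inherent `TcpStream::set_nodelay`:
    stream.set_nodelay(true)?;
    Ok(())
}
```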
This is trickier than it sounds (because the DST code was written
assuming that one could divide the sized and unsized portions of a
type strictly into a sized prefix and an unsized suffix, when in
reality it is more like a sized prefix and a sized suffix that
surround the single unsized field).
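Concretely (an illustrative struct, not from the commit): the single unsized field is declared last, and per the parenthetical above, the hidden drop-flag the compiler appends acts as a sized suffix after it:

```rust
// `header` is the sized prefix and `payload` the single unsized field;
// the compiler-appended drop-flag formed a sized suffix after it, so
// the layout was: sized prefix, unsized field, sized suffix.
struct Packet {
    header: u32,
    payload: [u8],
}
```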
I chose a pretty hackish approach here because drop-flags are
scheduled to go away anyway, so it's not really worth the effort to
build an infrastructure as general as the previous paragraph
indicates.
Also, I have written up notes on other fixes that need to be made to
really finish fixing #27023; namely, more work needs to be done to
account for alignment when computing the size of a value.
Add dropflag hints (stack-local booleans) for unfragmented paths in trans. Part of #5016.
I hope the commit series has been broken up to be readable; most of the commits in the series should build (though I did not always check this).
----
Oh, I think this technically qualifies as a:
[breaking-change]
because it removes drop-filling in some cases where one could previously observe it. That should only affect `unsafe` code; no safe code should be able to inspect whether the drop-fill was present or not. For an example of code that needed to change to account for this, see commit a81c24ae0216ab47df59acd724f8a33124fb6d97 (a unit test of the `move_val_init` intrinsic). I have not encountered actual code that needed to be updated to account for the change, since this should only skip the drop-filling on *moved* values, not on dropped ones, and so even types that use `unsafe_no_drop_flag` should be unchanged by this particular PR. (Their time will come later.)
Updated all call sites that used the other constructors to uniformly
call `Lvalue::new_with_hint`, passing along the appropriate kind
of hint for each context.
Placated tidy in a few other places in datum.rs.
Added code to maintain these hints at runtime, and to conditionalize
drop-filling and calls to destructors.
In this early stage, we are using hints, so we are always free to
leave out a flag for a path -- then we just pass `None` as the
dropflag hint in the corresponding schedule cleanup call. But, once a
path has a hint, we must at least maintain it: i.e. if the hint
exists, we must ensure it is never set to "moved" if the data in
question might actually have been initialized. It remains sound to
conservatively set the hint to "initialized" as long as the true
drop-flag embedded in the value itself is up-to-date.
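As a rough model of that rule (purely illustrative; the real hint is a stack slot manipulated in trans, not a Rust value):

```rust
enum Hint {
    Initialized, // conservative: defer to the drop-flag in the value
    Moved,       // definitely moved out: the destructor can be skipped
}

// `embedded_flag_is_live` stands in for inspecting the true drop-flag
// embedded in the value; `run_dtor` stands in for the destructor call.
fn maybe_drop(hint: Option<Hint>, embedded_flag_is_live: bool, run_dtor: impl FnOnce()) {
    match hint {
        // A `Moved` hint lets us skip the check and the call entirely.
        Some(Hint::Moved) => {}
        // No hint, or a conservative `Initialized` hint: fall back to
        // the embedded drop-flag, exactly as before this change.
        Some(Hint::Initialized) | None => {
            if embedded_flag_is_live {
                run_dtor();
            }
        }
    }
}
```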
----
Here are some high-level details I want to point out:
* We maintain the hint in Lvalue::post_store, marking the lvalue as
moved. (But also continue drop-filling if necessary.)
* We update the hint on ExprAssign.
* We pass along the hint in once closures that capture-by-move.
* You only call `drop_ty` for state that does not have an associated hint.
If you have a hint, you must call `drop_ty_core` instead.
(Originally I passed the hint into `drop_ty` as well, to make the
connection to a hint more apparent, but the vast majority of
current calls to `drop_ty` are in contexts where no hint is
available, so it just seemed like noise in the resulting diff.)
Instrumented call sites that construct Lvalues to ease tracking down
cases where we might need to change whether or not they carry a hint.
Note that this commit does not do anything to actually *construct*
the `lldropflag_hints` map, nor does it change anything about codegen
itself. Those parts are in follow-on commits.
(already thumbs-upped pre-rebase by nikomatsakis)
The refactoring here is trivial because `trans::datum::Lvalue`
currently carries no payload. However, future commits will start
adding a payload to `Lvalue`, and thus will force us either
1. to thread the payload through the `_match` code (a long term goal), or
2. to ensure the payload has some reasonable default value.
I had to modify some tests: since `wtf8buf_show` and `wtf8_show` were doing the exact same thing, I repurposed `wtf8_show` as `wtf8buf_show_str`, which ensures that `Wtf8Buf` `Debug`-formats the same way as `str`.
`write_str_escaped` might also be worth sharing with other `fmt` implementations, but I just left it within `Wtf8::fmt` for review.
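For reference, a hypothetical shape for such a helper (written with today's `char::escape_debug` for brevity; the PR's actual implementation may differ):

```rust
use std::fmt::{self, Write};

// Escape each char the way `str`'s `Debug` impl does, minus the
// surrounding quotes, writing straight into the formatter.
fn write_str_escaped(f: &mut fmt::Formatter, s: &str) -> fmt::Result {
    for c in s.chars().flat_map(char::escape_debug) {
        f.write_char(c)?;
    }
    Ok(())
}
```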
Closure variables represent the closure environment, not the closure
function, so the identifier used to ensure that the debuginfo is unique
for each kind of closure needs to be based on the closure upvars and not
the function signature.
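For instance (an illustrative pair, not from the patch), these two closures have identical signatures but different upvars, so their debuginfo must not collide:

```rust
fn main() {
    let x = 1u8;
    let y = 2u64;
    // Both closures are `Fn() -> u64`, yet their environments differ:
    // one captures a u8, the other a u64. Keying the debuginfo on the
    // function signature alone would wrongly unify them.
    let f = move || x as u64;
    let g = move || y;
    assert_eq!(f() + g(), 3);
}
```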
The first paragraph of the docs for the `Cursor` struct contains a Markdown
link. In listings, that link doesn't get rendered. (Rustdoc seems to split off
the first paragraph and only after that convert Markdown to HTML.)
Improve siphash performance for longer data
Use `ptr::copy_nonoverlapping` (aka memcpy) to load a `u64` from the
byte stream. This is correct for any alignment, and the compiler will
use the appropriate instruction to load the data.
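The load in question, roughly (close in spirit to the commit, though naming may differ):

```rust
use std::ptr;

/// Load a `u64` from `buf[start..start + 8]` in little-endian order.
/// This is valid for any alignment: `copy_nonoverlapping` is a memcpy,
/// which the compiler lowers to an appropriate (unaligned) load.
unsafe fn load_u64_le(buf: &[u8], start: usize) -> u64 {
    debug_assert!(start + 8 <= buf.len());
    let mut data: u64 = 0;
    ptr::copy_nonoverlapping(
        buf.as_ptr().add(start),
        &mut data as *mut u64 as *mut u8,
        8,
    );
    u64::from_le(data)
}
```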
Also contains small tweaks that should benefit hashing of short data
too, both from the commit that removes a variable and from the
autovectorization of the hash-state initialization (in `SipHash::reset`).
Benchmarks show that hashing longer data benefits from the improved word loading.
Before (using benchmarks from the first commit in the PR; the before numbers are a bit noisy):
```
test hash::sip::bench_bytes_4 ... bench: 41 ns/iter (+/- 0) = 97 MB/s
test hash::sip::bench_bytes_7 ... bench: 49 ns/iter (+/- 2) = 142 MB/s
test hash::sip::bench_bytes_8 ... bench: 42 ns/iter (+/- 4) = 190 MB/s
test hash::sip::bench_bytes_a_16 ... bench: 57 ns/iter (+/- 14) = 280 MB/s
test hash::sip::bench_bytes_b_32 ... bench: 85 ns/iter (+/- 74) = 376 MB/s
test hash::sip::bench_bytes_c_128 ... bench: 278 ns/iter (+/- 33) = 460 MB/s
test hash::sip::bench_long_str ... bench: 825 ns/iter (+/- 103)
test hash::sip::bench_str_of_8_bytes ... bench: 151 ns/iter (+/- 66)
test hash::sip::bench_str_over_8_bytes ... bench: 59 ns/iter (+/- 3)
test hash::sip::bench_str_under_8_bytes ... bench: 47 ns/iter (+/- 56)
test hash::sip::bench_u32 ... bench: 39 ns/iter (+/- 93) = 205 MB/s
test hash::sip::bench_u32_keyed ... bench: 40 ns/iter (+/- 88) = 200 MB/s
test hash::sip::bench_u64 ... bench: 54 ns/iter (+/- 96) = 148 MB/s
```
After:
```
test hash::sip::bench_bytes_4 ... bench: 41 ns/iter (+/- 3) = 97 MB/s
test hash::sip::bench_bytes_7 ... bench: 48 ns/iter (+/- 0) = 145 MB/s
test hash::sip::bench_bytes_8 ... bench: 35 ns/iter (+/- 1) = 228 MB/s
test hash::sip::bench_bytes_a_16 ... bench: 45 ns/iter (+/- 1) = 355 MB/s
test hash::sip::bench_bytes_b_32 ... bench: 60 ns/iter (+/- 0) = 533 MB/s
test hash::sip::bench_bytes_c_128 ... bench: 161 ns/iter (+/- 5) = 795 MB/s
test hash::sip::bench_long_str ... bench: 514 ns/iter (+/- 5)
test hash::sip::bench_str_of_8_bytes ... bench: 44 ns/iter (+/- 0)
test hash::sip::bench_str_over_8_bytes ... bench: 51 ns/iter (+/- 0)
test hash::sip::bench_str_under_8_bytes ... bench: 52 ns/iter (+/- 6)
test hash::sip::bench_u32 ... bench: 40 ns/iter (+/- 2) = 200 MB/s
test hash::sip::bench_u32_keyed ... bench: 39 ns/iter (+/- 1) = 205 MB/s
test hash::sip::bench_u64 ... bench: 36 ns/iter (+/- 1) = 222 MB/s
```