C structs predominantly do not use camel case identifiers, and we now have a
clear indicator for what counts as a C struct, so excuse all of them from this
stylistic lint.
code bloat.
This didn't make a difference in any compile times that I saw, but it
fits what we're doing with `transmute` and seems prudent.
r? @alexcrichton
Allows use cases like this one:
``` rust
use std::io::Command;

fn main() {
    let mut cmd = Command::new("ls");
    cmd.arg("-l");
    for &dir in ["a", "b", "c"].iter() {
        println!("{}", cmd.clone().arg(dir));
    }
}
```
Output:
```
ls '-l' 'a'
ls '-l' 'b'
ls '-l' 'c'
```
Without the `clone()`, you'll end up with:
```
ls '-l' 'a'
ls '-l' 'a' 'b'
ls '-l' 'a' 'b' 'c'
```
cc #15294
Similar to the stability attributes, a type annotated with `#[must_use =
"informative snippet"]` will print the normal warning message along with
"informative snippet". This allows the type author to provide some
guidance about why the type should be used.
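For illustration, a minimal sketch with a made-up type and message:

``` rust
// Hypothetical type; the string in the attribute is surfaced by the
// `unused_must_use` lint when a value of this type is ignored.
#[must_use = "this value does nothing until it is consumed"]
struct Pending;

fn start() -> Pending {
    Pending
}

fn main() {
    // Ignoring the return value triggers the lint, and the warning
    // includes the message from the attribute.
    start();
}
```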
---
It can be a little unintuitive that something like `v.iter().map(|x|
println!("{}", x));` does nothing: the majority of the iterator adaptors
are lazy and do not execute anything until something calls `next`, e.g.
a `for` loop, `collect`, `fold`, etc.
The majority of such errors come from someone writing something like the
above, i.e. just calling an iterator adaptor and doing nothing with it (which
is certainly useless), so we can co-opt the `must_use` lint, using the message
functionality to hint at the reason.
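A minimal sketch of both the mistake and the fix (the vector here is made up):

``` rust
fn main() {
    let v = vec![1i32, 2, 3];

    // Does nothing: `map` is lazy and the adaptor is never consumed,
    // so with this change the `must_use` lint fires and its message
    // explains why.
    v.iter().map(|x| println!("{}", x));

    // Actually consuming the iterator runs the closure once per element.
    for x in v.iter() {
        println!("{}", x);
    }
}
```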
Fixes #14666.
This adds detection of the relevant LD_LIBRARY_PATH-like environment variable
and appropriately sets it when testing whether binaries can run or not.
Additionally, the installation prints a recommended value if one is necessary.
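The variable differs by platform; a minimal sketch (not the installer's actual
code) of picking the right name:

``` rust
// Hypothetical helper: choose the dynamic-library search-path variable
// for the current platform.
fn dylib_path_var() -> &'static str {
    if cfg!(windows) {
        "PATH"
    } else if cfg!(target_os = "macos") {
        "DYLD_LIBRARY_PATH"
    } else {
        "LD_LIBRARY_PATH"
    }
}

fn main() {
    println!("prepend the library directory to {}", dylib_path_var());
}
```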
Closes #15545
Add libunicode; move unicode functions from core
- created new crate, libunicode, below libstd
- split `Char` trait into `Char` (libcore) and `UnicodeChar` (libunicode)
  - Unicode-aware functions now live in libunicode
    - `is_alphabetic`, `is_XID_start`, `is_XID_continue`, `is_lowercase`,
      `is_uppercase`, `is_whitespace`, `is_alphanumeric`, `is_control`,
      `is_digit`, `to_uppercase`, `to_lowercase`
  - added `width` method in `UnicodeChar` trait
    - determines printed width of character in columns, or None if it is a
      non-NULL control character
    - takes a boolean argument indicating whether the present context is CJK
      or not (characters with 'A'mbiguous widths are double-wide in CJK
      contexts, single-wide otherwise)
- split `StrSlice` into `StrSlice` (libcore) and `UnicodeStrSlice` (libunicode)
  - functionality formerly in `StrSlice` that relied upon Unicode functionality
    from `Char` is now in `UnicodeStrSlice`
    - `words`, `is_whitespace`, `is_alphanumeric`, `trim`, `trim_left`,
      `trim_right`
  - also moved `Words` type alias into libunicode because the `words` method
    is in `UnicodeStrSlice`
- unified Unicode tables from libcollections, libcore, and libregex into
  libunicode
- updated `unicode.py` in `src/etc` to generate aforementioned tables
- generated new tables based on latest Unicode data
- added `UnicodeChar` and `UnicodeStrSlice` traits to prelude
- libunicode is now the collection point for the `std::char` module, combining
  the libunicode functionality with the `Char` functionality from libcore
  - thus, moved doc comment for `char` from `core::char` to `unicode::char`
- libcollections remains the collection point for `std::str`
The Unicode-aware functions that previously lived in the `Char` and `StrSlice` traits are no longer available to programs that only use libcore. To regain use of these methods, include the libunicode crate and `use` the `UnicodeChar` and/or `UnicodeStrSlice` traits:
``` rust
extern crate unicode;
use unicode::UnicodeChar;
use unicode::UnicodeStrSlice;
use unicode::Words; // if you want to use the words() method
```
NOTE: this does *not* impact programs that use libstd, since UnicodeChar and UnicodeStrSlice have been added to the prelude.
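A minimal, hedged sketch of opting back in (only methods from the list above
are used):

``` rust
extern crate unicode;

use unicode::UnicodeChar;

fn main() {
    // With `UnicodeChar` in scope, the Unicode-aware methods are available again.
    assert!('a'.is_alphabetic());
    assert!(' '.is_whitespace());
}
```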
closes #15224
[breaking-change]
- it allows lookups using any str-equivalent object, finally making TreeMaps usable (for example, it is much easier to work with JSON when the lookup keys can be static strs)
- it actually provides a pretty flexible solution which could be extended to other equivalent types (although it might not be that performant)
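For context, the same idea as it exists in today's standard library, where the
`Borrow` trait plays this role for `BTreeMap` (an illustration of the concept,
not the API added here):

``` rust
use std::collections::BTreeMap;

fn main() {
    // The map owns String keys...
    let mut map: BTreeMap<String, i32> = BTreeMap::new();
    map.insert("answer".to_string(), 42);

    // ...but can be queried with a plain &str, so no owned String
    // has to be allocated just to do a lookup.
    assert_eq!(map.get("answer"), Some(&42));
}
```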
- unicode tests live in coretest crate
- libcollections str tests need UnicodeChar trait.
- libregex perlw tests were checking a char in the Alphabetic category,
\x2161. Confirmed perl 5.18 considers this a \w character. Changed to
\x2961, which is not \w as the test expects.
Removing recursion from the TreeMap implementation, because we don't have TCO. No need to add `O(log n)` extra stack frames to search in a tree.
I find it curious that `find_mut` and `find` basically duplicated the same logic, but in different ways (iterative vs recursive), possibly to maneuver around mutability rules, but that's a more fundamental issue to deal with elsewhere.
Thanks to acrichto for the magic trick to appease borrowck (another issue to deal with elsewhere).
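A minimal sketch (not the actual TreeMap code) of the iterative shape the
search takes, so no stack frame is pushed per tree level:

``` rust
use std::cmp::Ordering;

// Hypothetical node type standing in for TreeMap's internal nodes.
struct Node<K, V> {
    key: K,
    value: V,
    left: Option<Box<Node<K, V>>>,
    right: Option<Box<Node<K, V>>>,
}

// Walk down the tree with a loop instead of recursing.
fn find<'a, K: Ord, V>(mut node: &'a Option<Box<Node<K, V>>>, key: &K) -> Option<&'a V> {
    while let Some(ref n) = *node {
        match key.cmp(&n.key) {
            Ordering::Less => node = &n.left,
            Ordering::Greater => node = &n.right,
            Ordering::Equal => return Some(&n.value),
        }
    }
    None
}

fn main() {
    let tree = Some(Box::new(Node {
        key: 2, value: "two",
        left: Some(Box::new(Node { key: 1, value: "one", left: None, right: None })),
        right: None,
    }));
    assert_eq!(find(&tree, &1), Some(&"one"));
    assert_eq!(find(&tree, &3), None);
}
```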
- Treat WOFF as binary files so that git does not perform newline normalization.
- Replace corrupt Heuristica files with Source Serif Pro; italics are [almost in production](https://github.com/adobe/source-serif-pro/issues/2), so I left Heuristica Italic, which makes a good pair with SSP. Overall, I think Source Serif Pro is a better fit for rustdoc (cc @TheHydroImpulse). This ought to fix #15527.
- Store Source Code Pro locally in order to make offline docs freestanding. Fixes #14778.
Preview: http://adrientetar.legtux.org/cached/rust-docs/core.html
r? @alexcrichton
Now, the lexer will categorize every byte in its input according to the
grammar. The parser skips over these extra tokens while parsing, which keeps
them out of the input to syntax extensions.
This removes a bunch of token types. Tokens now store the original, unaltered
numeric literal (which is still checked for correctness); it is parsed into an
actual number later, as needed, when creating the AST.
This can change how syntax extensions work, but otherwise there are no visible
changes.
[breaking-change]
This shuffles things around a bit so that LIT_CHAR and co. store an Ident
holding the original, unaltered literal from the source. These are unescaped
and post-processed when creating the AST.
This changes how syntax extensions can work, slightly, but otherwise there are
no visible changes. To get a useful value out of one of these tokens, call
`parse::{char_lit, byte_lit, bin_lit, str_lit}`.
[breaking-change]
Rather than just dumping the id in the interner, which is useless, actually
print the interned string. Adjust the lexer logging to use Show instead of
Poly.