Ignore all newline characters in Base64 decoder to make it compatible with other Base64 decoders.
Most Base64 decoder implementations ignore all newline characters in the input string. Here are some examples:
Python:
```python
>>> "\nA\nQ\n=\n=\n".decode("base64")
'\x01'
```
Ruby:
```ruby
irb(main):001:0> "\nA\nQ\n=\n=\n".unpack("m")
=> ["\x01"]
```
Erlang:
```erlang
1> base64:decode("\nA\nQ\n=\n=\n").
<<1>>
```
Moreover, some Base64 encoders append a newline character to the end of the output string by default:
Python:
```python
>>> "\x01".encode("base64")
'AQ==\n'
```
Ruby:
```ruby
irb(main):001:0> ["\x01"].pack("m")
=> "AQ==\n"
```
So I think it's fairly important for the Rust Base64 decoder to accept Base64 input even when it contains newline characters, including around the padding.
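For illustration, here is a minimal, self-contained sketch of a lenient decoder that skips newlines before decoding, matching the behaviour of the decoders above; it is not the actual libstd implementation, and the name `decode_lenient` is made up for this example.
```rust
// Illustrative only: a tiny Base64 decoder that ignores CR/LF anywhere in the
// input, including around the padding, like the decoders shown above.
fn decode_lenient(input: &str) -> Option<Vec<u8>> {
    const TABLE: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut buf = 0u32; // bit accumulator (only the low 12 bits ever matter)
    let mut bits = 0u32;
    let mut out = Vec::new();
    for c in input.chars() {
        match c {
            '\n' | '\r' => continue, // the point of the change: skip newlines
            '=' => continue,         // padding carries no data
            _ => {
                let v = TABLE.iter().position(|&t| t as char == c)? as u32;
                buf = (buf << 6) | v;
                bits += 6;
                if bits >= 8 {
                    bits -= 8;
                    out.push((buf >> bits) as u8);
                }
            }
        }
    }
    Some(out)
}

fn main() {
    // The same input as the Python/Ruby/Erlang examples above.
    assert_eq!(decode_lenient("\nA\nQ\n=\n=\n"), Some(vec![1]));
}
```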
There was an old and barely used implementation of `pow` which expected
both parameters to be uint and required more traits to be implemented.
Since a new implementation of `pow` has landed, I'm proposing to remove
this old impl in favor of the new one.
The benchmark shows that the new implementation is faster than the one being removed:
```
test num::bench::bench_pow_function           ... bench:  9429 ns/iter (+/- 2055)
test num::bench::bench_pow_with_uint_function ... bench: 28476 ns/iter (+/- 2202)
```
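For context, the usual technique behind a fast integer `pow` is exponentiation by squaring, which needs only O(log exp) multiplications; this is a sketch of that technique, not the exact code of the new implementation.
```rust
// Exponentiation by squaring: halve the exponent each step, squaring the base
// and multiplying into the accumulator whenever the current bit is set.
fn pow(mut base: u64, mut exp: u32) -> u64 {
    let mut acc = 1u64;
    while exp > 0 {
        if exp & 1 == 1 {
            acc *= base;
        }
        exp >>= 1;
        if exp > 0 {
            base *= base;
        }
    }
    acc
}

fn main() {
    assert_eq!(pow(3, 5), 243);
    assert_eq!(pow(2, 10), 1024);
}
```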
I've updated the JSON usage example to match the latest json.rs code, and I've deleted the old branch.
Compared with my last request, I removed example3 because it doesn't compile. I don't understand why and I don't have time to investigate right now.
This means that compilation continues for longer, so we can see more
errors per compile. This is mildly more user-friendly because it saves
users from having to run rustc n times to see n macro errors: just run
it once to see all of them.
If the library is in the working directory, its path won't have a `/`,
which will cause dlopen to search /usr/lib etc. It turns out that `Path`
auto-normalizes during joins, so `Path::new(".").join(path)` is actually
a no-op.
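The intent is sketched below with today's `std::path` (not the old `Path` type this commit patched) and a made-up library name: make sure the argument handed to dlopen contains a directory separator, so a library sitting in the working directory isn't looked up in /usr/lib and friends.
```rust
use std::path::{Path, PathBuf};

// Illustrative sketch of the intent, not the actual patch.
fn dlopen_arg(lib: &Path) -> PathBuf {
    if lib.components().count() > 1 {
        // Already has a directory part (relative or absolute): use as-is.
        lib.to_path_buf()
    } else {
        // Bare file name: force a "/" into it so dlopen treats it as a path
        // instead of searching the system library directories.
        Path::new("./").join(lib)
    }
}

fn main() {
    assert_eq!(dlopen_arg(Path::new("libfoo.so")), PathBuf::from("./libfoo.so"));
    assert_eq!(dlopen_arg(Path::new("/usr/lib/libfoo.so")),
               PathBuf::from("/usr/lib/libfoo.so"));
}
```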
The `malloc` family of functions may return a null pointer for a
zero-size allocation, which should not be interpreted as an
out-of-memory error.
If the implementation does not return a null pointer, then handling
this will result in memory savings for zero-size types.
This also switches some code to `malloc_raw` in order to maintain a
centralized point for handling out-of-memory in `rt::global_heap`.
Closes #11634
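A minimal sketch of that rule, assuming the external `libc` crate and a hypothetical `malloc_checked` wrapper: only a null return for a non-zero request is treated as out-of-memory.
```rust
// Illustrative only; assumes the `libc` crate as a dependency.
unsafe fn malloc_checked(size: libc::size_t) -> *mut u8 {
    let p = libc::malloc(size) as *mut u8;
    if p.is_null() && size != 0 {
        // A non-zero request that comes back null is a real allocation
        // failure; abort through one central place.
        std::process::abort();
    }
    // May be null when size == 0: callers allocating zero-size types should
    // treat that as a valid (empty) allocation rather than an error.
    p
}

fn main() {
    unsafe {
        let p = malloc_checked(16);
        assert!(!p.is_null());
        libc::free(p as *mut libc::c_void);

        // A zero-size request must not abort, whatever pointer comes back.
        let _zero = malloc_checked(0);
    }
}
```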
NodeIds are sequential integers starting at zero, so we can achieve some
memory savings by just storing the items contiguously in a vector.
The occupancy for typical crates seems to be 75-80%, so we're already
more efficient than a HashMap (maximum occupancy 75%), not even counting
the extra book-keeping that HashMap does.
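As a rough illustration of the data structure (not the actual rustc code), a dense NodeId-indexed map can be a vector of `Option`s, trading a few `None` holes for much less book-keeping than a hash map:
```rust
// Illustrative sketch: items live at the index equal to their NodeId.
struct NodeMap<T> {
    items: Vec<Option<T>>,
}

impl<T> NodeMap<T> {
    fn new() -> NodeMap<T> {
        NodeMap { items: Vec::new() }
    }
    fn insert(&mut self, id: usize, value: T) {
        if id >= self.items.len() {
            self.items.resize_with(id + 1, || None);
        }
        self.items[id] = Some(value);
    }
    fn get(&self, id: usize) -> Option<&T> {
        self.items.get(id).and_then(|slot| slot.as_ref())
    }
}

fn main() {
    let mut map = NodeMap::new();
    map.insert(0, "crate root");
    map.insert(3, "some item");
    assert_eq!(map.get(3), Some(&"some item"));
    assert!(map.get(1).is_none()); // a hole: an id with no stored item
}
```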
This is my first patch so feedback appreciated!
Bug when initialising `bitv::Bitv::new(int, bool)` with `bool=true`: it created a `Bitv` with underlying representation `!0u` rather than the actual desired bit layout (e.g. `11111111` instead of `00001111`). This works OK on its own because a size attribute is included which keeps access within legal bounds. However, when using `BitvSet::from_bitv(Bitv)`, we then find that `bitvset.contains(i)` can return true when `i` is not in fact in the set.
```rust
let bs = BitvSet::from_bitv(Bitv::new(100, true));
assert!(!bs.contains(&127)); // fails
```
The fix is to create the correct representation by treating the various cases separately and using a bit shift, `(1 << nbits) - 1`, to generate the correct number of `1`s where necessary.
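A sketch of that masking logic in isolation (illustrative only; the name `low_ones` and the 64-bit word size are assumptions for this example):
```rust
// Fill a word with `nbits` ones without setting bits beyond the requested
// length. The `nbits >= word size` case needs separate treatment because the
// shift would otherwise overflow.
fn low_ones(nbits: u32) -> u64 {
    const WORD_BITS: u32 = 64;
    if nbits == 0 {
        0
    } else if nbits >= WORD_BITS {
        !0u64                 // all ones: only correct for a full word
    } else {
        (1u64 << nbits) - 1   // e.g. nbits = 4 gives 0b1111, not 0b11111111
    }
}

fn main() {
    assert_eq!(low_ones(4), 0b1111);
    assert_eq!(low_ones(64), !0u64);
}
```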
I found the boxes diagram in the tutorial misleading about how the enum worked.
The current diagram makes it seem that there is a separate Cons struct, when there is only one kind of struct for the List type, and Nil is drawn almost as if it doesn't consume memory.
I'm aware of the optimization for this enum that takes advantage of the fact that the pointer cannot be null, but this is an implementation detail and not the only one that applies here. I can add a note below the diagram mentioning this if you like.
This commit re-works the monitor() function and how it both receives
and transmits errors. There are a few cases in which the compiler can abort:
1. A normal compiler error. In this case, the compiler raises a FatalError as
the failure value of the task. If this happens, then the monitor task does
nothing. It ignores all stderr output of the child task and it also
suppresses the failure message of the main task itself. This means that on a
normal compiler error just the error message itself is printed.
2. A normal internal compiler error. These are invoked from sess.span_bug() and
friends. In these cases, they follow the same path (raising a FatalError),
but they will also print an ICE message which has a URL to go report a bug.
3. An actual compiler bug. This happens whenever anything calls fail!() instead
of going through the session itself. In this case, we print out stuff about
RUST_LOG=2 and we by default capture all stderr and print via warn!() so it's
only printed out with the RUST_LOG var set.
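A rough sketch of that control flow, written against today's std with `catch_unwind` standing in for observing task failure; `FatalError` here is just an assumed marker type, and none of this is the actual rustc monitor() code.
```rust
use std::panic;
use std::process;

struct FatalError; // assumed marker raised by normal compiler errors

fn monitor<F>(run_compiler: F)
where
    F: FnOnce() + panic::UnwindSafe,
{
    match panic::catch_unwind(run_compiler) {
        // The compiler finished without aborting.
        Ok(()) => {}
        // Cases 1 and 2: the error (and, for span_bug, the ICE message) has
        // already been printed, so the monitor stays quiet and just exits
        // with a failure code.
        Err(payload) if payload.is::<FatalError>() => process::exit(101),
        // Case 3: a stray panic means a genuine compiler bug; point the user
        // at RUST_LOG and exit.
        Err(_) => {
            eprintln!("error: internal compiler error: unexpected panic");
            eprintln!("note: rerun with RUST_LOG=2 to see more details");
            process::exit(101);
        }
    }
}

fn main() {
    // Silence the default panic message, standing in for the monitor
    // capturing the child task's stderr.
    panic::set_hook(Box::new(|_| {}));

    monitor(|| {});                           // a successful run
    monitor(|| panic::panic_any(FatalError)); // a normal compiler error
}
```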
For `use` statements, this means disallowing qualifiers when in functions and
disallowing `priv` outside of functions.
For `extern mod` statements, this means disallowing everything everywhere. It
may have been envisioned for `pub extern mod foo` to be a thing, but it
currently doesn't do anything (resolve doesn't pick it up), so better to err on
the side of forwards-compatibility and forbid it entirely for now.
Closes #9957
As part of #10387, this removes the `Primitive::{bits, bytes, is_signed}` methods and the trait's operator trait constraints, for the reasons outlined below:
- The `Primitive::{bits, bytes}` associated functions were originally added to reflect the existing `BITS` and `BYTES` statics included in the numeric modules. These statics only exist as a workaround for Rust's lack of CTFE, and should be deprecated in the future in favor of the `std::mem::size_of` function (see #11621 and the sketch after this list).
- `Primitive::is_signed` seems to be of little utility and does not seem to be used anywhere in the Rust compiler or libraries. It is also rather ugly to call due to the `Option<Self>` workaround for #8888.
- The operator trait constraints are already covered by the `Num` trait.
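A small sketch of the suggested direction, deriving sizes from `std::mem::size_of` instead of the `BYTES`/`BITS` statics (shown with today's std for illustration):
```rust
use std::mem::size_of;

fn main() {
    // Byte size, formerly the role of the BYTES statics:
    assert_eq!(size_of::<u32>(), 4);
    assert_eq!(size_of::<u64>(), 8);
    // Bit size follows directly:
    assert_eq!(size_of::<u32>() * 8, 32);
}
```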
The patch adds the missing `pow` method to all implementations of the
`Integer` trait. This is a small addition that will most likely be
improved by the work happening in #10387.
Fixes #11499
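For a sense of the call site, here is a usage sketch; it uses today's inherent integer `pow` for illustration rather than the `Integer` trait method this patch adds.
```rust
fn main() {
    // With a pow method available on integer types, callers can write:
    assert_eq!(2u32.pow(10), 1024);
    assert_eq!(7i64.pow(3), 343);
}
```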
This stores the stack of iterators inline (we have a maximum depth with
`uint` keys), and then uses direct pointer offsetting to manipulate it,
in a blazing fast way:
Before:
bench_iter_large ... bench: 43187 ns/iter (+/- 3082)
bench_iter_small ... bench: 618 ns/iter (+/- 288)
After:
bench_iter_large ... bench: 13497 ns/iter (+/- 1575)
bench_iter_small ... bench: 220 ns/iter (+/- 91)
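A safe sketch of the inline-stack idea (without the raw pointer offsetting, and with an assumed depth bound and frame type; this is not the libstd TrieMap iterator):
```rust
// Because uint keys bound the trie depth, the iterator's stack of per-node
// positions can live in a fixed-size inline array instead of a heap Vec.
const MAX_DEPTH: usize = 16; // assumed: 64-bit keys, 4 bits consumed per level

struct InlineStack<T: Copy> {
    frames: [Option<T>; MAX_DEPTH],
    len: usize,
}

impl<T: Copy> InlineStack<T> {
    fn new() -> InlineStack<T> {
        InlineStack { frames: [None; MAX_DEPTH], len: 0 }
    }
    fn push(&mut self, frame: T) {
        // The depth bound keeps this index in range during trie iteration.
        self.frames[self.len] = Some(frame);
        self.len += 1;
    }
    fn pop(&mut self) -> Option<T> {
        if self.len == 0 {
            return None;
        }
        self.len -= 1;
        self.frames[self.len].take()
    }
}

fn main() {
    let mut stack = InlineStack::new();
    stack.push(1u32);
    stack.push(2);
    assert_eq!(stack.pop(), Some(2));
    assert_eq!(stack.pop(), Some(1));
    assert_eq!(stack.pop(), None);
}
```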
Also, removes `.each_{key,value}_reverse` as an offering to
placate the gods of external iterators for my heinous sin of
attempting to add new internal ones (in a previous version of this
PR).
The new macro loading infrastructure needs the ability to force a
procedural-macro crate to be built with the host architecture rather than the
target architecture (because the compiler is just about to dlopen it).