This commit adjusts the missing field diagnostic logic for struct
expressions in typeck to improve the diagnostic when the missing
fields are inaccessible.
Signed-off-by: David Wood <david@davidtw.co>
Add CONST_ITEM_MUTATION lint
Fixes #74053. Fixes #55721.
This PR adds a new lint `CONST_ITEM_MUTATION`.
Given an item `const FOO: SomeType = ..`, this lint fires on:
* Attempting to write directly to a field (`FOO.field = some_val`) or
array entry (`FOO.array_field[0] = val`)
* Taking a mutable reference to the `const` item (`&mut FOO`), including
through an autoderef `FOO.some_mut_self_method()`
The lint message explains that since each use of a constant creates a
new temporary, the original `const` item will not be modified.
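As a minimal, self-contained sketch of the kind of code the lint fires on (the names here are illustrative, not taken from the affected crates):
```rust
// Each use of `FOO` materializes a fresh temporary, so the writes below go to
// a copy and the `const` item itself is never modified.
const FOO: [i32; 1] = [0];

fn main() {
    // Direct write through the const item: only mutates a temporary.
    FOO[0] = 5;
    // Taking a mutable reference also only borrows a temporary.
    let _r: &mut [i32; 1] = &mut FOO;
    // Still prints "[0]".
    println!("{:?}", FOO);
}
```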
Move `rustllvm` into `compiler/rustc_llvm`
The `rustllvm` directory is not self-contained; it contains C++ code that is built by the `rustc_llvm` crate's build script and then linked into that crate.
So it makes sense to make `rustllvm` a part of `rustc_llvm` and move it into its directory.
I replaced `rustllvm` with the more obvious `llvm-wrapper` as the subdirectory name, but something like `llvm-adapter` would work as well; other suggestions are welcome.
To make things more confusing, the Rust side of FFI functions defined in `rustllvm` can be found in `rustc_codegen_llvm` rather than in `rustc_llvm`. Perhaps they need to be moved as well, but this PR doesn't do that.
The presence of multiple LLVM-related directories in `src` (`llvm-project`, `rustllvm`, `librustc_llvm`, `librustc_codegen_llvm` and their predecessors) historically confused me and made me wonder about their purpose.
With this PR we will have LLVM itself (`llvm-project`), a FFI crate (`rustc_llvm`, kind of `llvm-sys`) and a codegen backend crate using LLVM through the FFI crate (`rustc_codegen_llvm`).
Improve unresolved use error message
"use of undeclared type or module `foo`" doesn't mention that it could be a crate.
This error can happen when users forget to add a dependency to `Cargo.toml`, so I think it's important to mention that it could be a missing crate.
I've used a heuristic based on Rust's naming conventions: it complains about an unknown type if the ident starts with an upper-case letter, and about an unknown crate or module otherwise. It seems to work very well. The expanded error help covers both the unknown-type and the missing-crate case.
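A minimal sketch of the heuristic (standalone illustration, not the actual resolver code):
```rust
// Naming-convention heuristic: idents starting with an upper-case letter are
// reported as unknown types, everything else as an unknown crate or module.
fn describe_unresolved(ident: &str) -> &'static str {
    if ident.chars().next().map_or(false, |c| c.is_uppercase()) {
        "use of undeclared type"
    } else {
        "use of undeclared crate or module"
    }
}

fn main() {
    assert_eq!(describe_unresolved("Foo"), "use of undeclared type");
    assert_eq!(describe_unresolved("foo"), "use of undeclared crate or module");
}
```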
Add `-Z combine_cgu` flag
Introduce a compiler option that lets rustc combine all regular CGUs into a single one at the end of compilation.
Part of Issue #64191
VS Code graphviz extensions use d3-graphviz, which supports the `Courier`
fontname but does not support `monospace`. This caused graphs to render
poorly because the text sizes were wrong.
make `ConstEvaluatable` more strict
relevant zulip discussion: https://rust-lang.zulipchat.com/#narrow/stream/146212-t-compiler.2Fconst-eval/topic/.60ConstEvaluatable.60.20generic.20functions/near/204125452
Let's see how much this impacts. Depending on how this goes this should probably be a future compat warning.
Short explanation: we currently forbid anonymous constants which depend on generic types, e.g. `[0; std::mem::size_of::<T>()]` currently errors.
We previously checked this by evaluating the constant and returning an error if that failed. This however allows things like
```rust
const fn foo<T>() -> usize {
    if std::mem::size_of::<*mut T>() < 8 { // size of *mut T does not depend on T
        std::mem::size_of::<T>()
    } else {
        8
    }
}

fn test<T>() {
    let _ = [0; foo::<T>()];
}
```
which is a backwards compatibility hazard. This also has worrying interactions with mir optimizations (https://github.com/rust-lang/rust/pull/74491#issuecomment-661890421) and intrinsics (#74538).
r? `@oli-obk` `@eddyb`
Many developers use a dark theme with editors and IDEs, but this
typically doesn't extend to graphviz output.
When I bring up a MIR graphviz document, the white background is
strikingly bright. This new option changes the colors used for graphviz
output to work better in dark-themed UIs.
Add help note to unconstrained const parameter
Resolves #68366, since it is currently intended behaviour.
If demonstrating that `T -> U` is injective is ever going to be supported, the note should make clear that it is not **yet** supported.
r? @lcnr
remove public visibility previously needed for rustfmt
`submod_path_from_attr` in rustc_expand::module was previously public because it was also consumed by rustfmt. However, we've done a bit of refactoring in rustfmt and no longer need to use this function.
This changes the visibility to the parent mod as was originally going to be done before the rustfmt dependency was realized (c189565edc (diff-cd1b379893bae95f7991d5a3f3c6d337R201))
Do not promote &mut of a non-ZST ever
Since ~pre-1.0~ 1.36, we have accepted code like this:
```rust
static mut TEST: &'static mut [i32] = {
    let x = &mut [1,2,3];
    x
};
```
I tracked it back to https://github.com/rust-lang/rust/pull/21744, but unfortunately could not find any discussion or RFC that would explain why we thought this was a good idea. And it's not, it breaks all sorts of things -- see https://github.com/rust-lang/rust/issues/75556.
To fix https://github.com/rust-lang/rust/issues/75556, we have to stop promoting non-ZST mutable references no matter the context, which is what this PR does. It's a breaking change.
Notice that this still works, since it does not rely on promotion:
```rust
static mut TEST: &'static mut [i32] = &mut [0,1,2];
```
Cc `@rust-lang/wg-const-eval`
Add derive macro for specifying diagnostics using attributes.
Introduces `#[derive(SessionDiagnostic)]`, a derive macro for specifying structs that can be converted to Diagnostics using directions given by attributes on the struct and its fields. Currently, the following attributes have been implemented:
- `#[code = "..."]` -- this sets the Diagnostic's error code, and must be provided on the struct itself (i.e., not on a field). Equivalent to calling `code`.
- `#[message = "..."]` -- this sets the Diagnostic's primary error message.
- `#[label = "..."]` -- this must be applied to fields of type `Span`, and is equivalent to `span_label`
- `#[suggestion(..)]` -- this allows a suggestion message to be supplied. This attribute must be applied to a field of type `Span` or `(Span, Applicability)`, and is equivalent to calling `span_suggestion`. Valid arguments are:
- `message = "..."` -- this sets the suggestion message.
- (Optional) `code = "..."` -- this suggests code for the suggestion. Defaults to empty.
`suggestion` also comes with other variants: `#[suggestion_short(..)]`, `#[suggestion_hidden(..)]` and `#[suggestion_verbose(..)]`, which all take the same keys.
Within the strings passed to each attribute, fields can be referenced without needing to be passed explicitly into the format string -- e.g., `#[error = "{ident} already declared"]` will set the error message to `format!("{} already declared", &self.ident)`. Any fields on the struct can be referenced in this way.
Additionally, for any of these attributes, Option fields can be used to only optionally apply the decoration -- for example:
```rust
#[derive(SessionDiagnostic)]
#[code = "E0123"]
struct SomeKindOfError {
    ...
    #[suggestion(message = "informative error message")]
    opt_sugg: Option<(Span, Applicability)>
    ...
}
```
will not emit a suggestion if `opt_sugg` is `None`.
We plan on iterating on this macro further; this PR is a start.
Closes #61132.
r? `@oli-obk`
Support dataflow problems on arbitrary lattices
This PR implements the last of the proposed extensions I mentioned in the design meeting for the original dataflow refactor. It extends the current dataflow framework to work with arbitrary lattices, not just `BitSet`s. This is a prerequisite for dataflow-enabled MIR const-propagation. Personally, I am skeptical of the usefulness of doing const-propagation pre-monomorphization, since many useful constants only become known after monomorphization (e.g. `size_of::<T>()`) and users have a natural tendency to hand-optimize the rest. It's probably worth experimenting with, however, and others have shown interest. cc `@rust-lang/wg-mir-opt`
The `Idx` associated type is moved from `AnalysisDomain` to `GenKillAnalysis` and replaced with an associated `Domain` type that must implement `JoinSemiLattice`. Like before, each `Analysis` defines the "bottom value" for its domain, but can no longer override the dataflow join operator. Analyses that want to use set intersection must now use the `lattice::Dual` newtype. `GenKillAnalysis` impls have an additional requirement that `Self::Domain: BorrowMut<BitSet<Self::Idx>>`, which effectively means that they must use `BitSet<Self::Idx>` or `lattice::Dual<BitSet<Self::Idx>>` as their domain.
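For illustration, a minimal sketch (not the rustc implementation) of the kind of lattice interface such a framework needs: a `join` that folds another state into `self` and reports whether anything changed, so the fixpoint loop knows when to stop. The `Value` domain below is a hypothetical flat const-propagation lattice:
```rust
// Join-semilattice interface: `join` returns true if `self` changed.
trait JoinSemiLattice {
    fn join(&mut self, other: &Self) -> bool;
}

// Hypothetical domain: "known constant or unknown", per variable.
#[derive(Clone, PartialEq, Debug)]
enum Value {
    Bottom,     // no information yet
    Const(i64), // known constant
    Top,        // conflicting values: unknown
}

impl JoinSemiLattice for Value {
    fn join(&mut self, other: &Self) -> bool {
        let joined = match (&*self, other) {
            (a, b) if a == b => return false,
            (Value::Bottom, x) => x.clone(),
            (x, Value::Bottom) => x.clone(),
            _ => Value::Top,
        };
        let changed = *self != joined;
        *self = joined;
        changed
    }
}

fn main() {
    let mut v = Value::Bottom;
    assert!(v.join(&Value::Const(3)));  // Bottom ⊔ Const(3) = Const(3)
    assert!(!v.join(&Value::Const(3))); // no change
    assert!(v.join(&Value::Const(4)));  // conflicting constants -> Top
    assert_eq!(v, Value::Top);
}
```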
Most of these changes were mechanical. However, because a `Domain` is no longer always a powerset of some index type, we can no longer use an `IndexVec<BasicBlock, GenKillSet<A::Idx>>>` to store cached block transfer functions. Instead, we use a boxed `dyn Fn` trait object. I discuss a few alternatives to the current approach in a commit message.
The majority of new lines of code are to preserve existing Graphviz diagrams for those unlucky enough to have to debug dataflow analyses. I find these diagrams incredibly useful when things are going wrong and considered regressing them unacceptable, especially the pretty-printing of `MovePathIndex`s, which are used in many dataflow analyses. This required a parallel `fmt` trait used only for printing dataflow domains, as well as a refactoring of the `graphviz` module now that we cannot expect the domain to be a `BitSet`. Some features did have to be removed, such as the gen/kill display mode (which I didn't use but existed to mirror the output of the old dataflow framework) and line wrapping. Since I had to rewrite much of it anyway, I took the opportunity to switch to a `Visitor` for printing dataflow state diffs instead of using cursors, which are error prone for code that must be generic over both forward and backward analyses. As a side-effect of this change, we no longer have quadratic behavior when writing graphviz diagrams for backward dataflow analyses.
r? `@pnkfelix`
* Adds missing "tail" spans (spans that continue beyond the end of
overlapping spans)
* Adds a caret to highlight empty spans associated with MIR elements
that have a position, but otherwise would not be visible.
* Adds visual pointing brackets at the beginning and end of each span
Use ops::ControlFlow in rustc_data_structures::graph::iterate
Since I only know about this because you mentioned it,
r? @ecstatic-morse
If we're not supposed to use new `core` things in the compiler for a while then feel free to close, but it felt reasonable to merge the two types since they're the same, and it might be convenient for people to use `?` in their traversal code.
(This doesn't do the type parameter swap; NoraCodes has signed up to do that one.)
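As an illustration of why `ControlFlow` fits traversal code, here is a toy sketch (not the rustc `graph::iterate` module) where the visitor stops the walk early by returning `ControlFlow::Break`:
```rust
use std::ops::ControlFlow;

// Depth-first search over an adjacency list; the callback can abort the
// traversal early and hand a payload back to the caller.
fn dfs<V>(
    adj: &[Vec<usize>],
    start: usize,
    visit: &mut impl FnMut(usize) -> ControlFlow<V>,
) -> ControlFlow<V> {
    let mut seen = vec![false; adj.len()];
    let mut stack = vec![start];
    while let Some(node) = stack.pop() {
        if seen[node] {
            continue;
        }
        seen[node] = true;
        // Propagate an early break to the caller.
        if let ControlFlow::Break(v) = visit(node) {
            return ControlFlow::Break(v);
        }
        stack.extend(adj[node].iter().copied());
    }
    ControlFlow::Continue(())
}

fn main() {
    // Edges: 0 -> 1, 0 -> 2, 1 -> 2.
    let adj = vec![vec![1, 2], vec![2], vec![]];
    // Stop as soon as node 2 is reached and report it.
    let found = dfs(&adj, 0, &mut |n| {
        if n == 2 {
            ControlFlow::Break(n)
        } else {
            ControlFlow::Continue(())
        }
    });
    assert_eq!(found, ControlFlow::Break(2));
}
```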
Implementation of incompatible features error
Proposal of a new error: Incompatible features
This error is emitted when two incompatible features are used together.
For now the only incompatible features are `const_generics` and `min_const_generics`.
Fixes #76280
Allow try blocks as the argument to return expressions
Fixes#76271
I don't think this needs to be edition-aware (phew) since `return try` in 2015 is also the start of an expression, just with a struct literal instead of a block (`return try { x: 4, y: 5 }`).
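For illustration, a minimal sketch of the now-accepted form (assumes a nightly compiler since `try` blocks are unstable; the function name is made up):
```rust
#![feature(try_blocks)]

fn checked_sum(a: &str, b: &str) -> Result<i32, std::num::ParseIntError> {
    // The `try` block is used directly as the argument of `return`.
    return try {
        let x: i32 = a.parse()?;
        let y: i32 = b.parse()?;
        x + y
    };
}

fn main() {
    println!("{:?}", checked_sum("1", "2"));   // Ok(3)
    println!("{:?}", checked_sum("1", "two")); // Err(..)
}
```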
Account for version number in NtIdent hack
Issue #74616 tracks a backwards-compatibility hack for certain macros.
This is implemented by hard-coding the filenames and macro names of
certain code that we want to continue to compile.
However, the initial implementation of the hack was based on the
directory structure when building the crate from its repository (e.g.
`js-sys/src/lib.rs`). When the crate is built as a dependency, the path
will include a version number from the cargo registry checkout (e.g.
`js-sys-0.3.17/src/lib.rs`), which would fail the check.
This commit modifies the backwards-compatibility hack to check that
the desired crate name (`js-sys` or `time-macros-impl`) is a prefix of the
proper part of the path.
See https://github.com/rust-lang/rust/issues/76070#issuecomment-687215646
for more details.
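A minimal sketch of the idea (toy code, not the actual rustc check): instead of comparing the full directory name, check that the crate name is a prefix of the relevant path component, so both `js-sys/src/lib.rs` and `js-sys-0.3.17/src/lib.rs` match:
```rust
// Check whether the directory two levels above `lib.rs` starts with the
// crate name, so a registry checkout with a version suffix still matches.
fn matches_crate(path: &str, crate_name: &str) -> bool {
    path.split('/')
        .rev()
        .nth(2)
        .map_or(false, |dir| dir.starts_with(crate_name))
}

fn main() {
    assert!(matches_crate("js-sys/src/lib.rs", "js-sys"));
    assert!(matches_crate("registry/js-sys-0.3.17/src/lib.rs", "js-sys"));
    assert!(!matches_crate("other-crate/src/lib.rs", "js-sys"));
}
```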
Disable use of `--eh-frame-hdr` on wasm32.
Set wasm32's `TargetOptions::eh_frame_header` to false so that we don't pass `--eh-frame-hdr` to `wasm-ld`, which doesn't support that flag.
r? @alexcrichton
Move jointness censoring to proc_macro
The proc-macro API currently exposes jointness in `Punct` tokens. That is,
`+` in `+one` is **non**-joint.
Our lexer produces jointness info for all tokens, so we need to censor
it *somewhere*.
Previously we did this in the lexer, but it makes more sense to do it
in the proc-macro server.
r? @petrochenkov
inliner: Check for codegen fn attributes compatibility
* Check for target features compatibility
* Check for no_sanitize attribute compatibility
Fixes #76259.
Refactor byteorder to std in rustc_middle
Use std::io::{Read, Write} and {to, from}_{le, be}_bytes methods in
order to remove byteorder from librustc_middle's dependency graph.
do not apply DerefMut on union field
This implements the part of [RFC 2514](https://github.com/rust-lang/rfcs/blob/master/text/2514-union-initialization-and-drop.md) about `DerefMut`. Unlike described in the RFC, we only apply this warning specifically when doing `DerefMut` of a `ManuallyDrop` field; that is really the case we are worried about here.
@matthewjasper suggested I patch `convert_place_derefs_to_mutable` and `convert_place_op_to_mutable` for this, but I could not find anything to do in `convert_place_op_to_mutable` and this is sufficient to make the test pass. However, maybe there are some other cases this misses? I have no familiarity with this code.
This is a breaking change *in theory*, if someone used `ManuallyDrop<T>` in a union field and relied on automatic `DerefMut`. But on stable this means `T: Copy`, so the `ManuallyDrop` is rather pointless.
Cc https://github.com/rust-lang/rust/issues/55149
This commit adjusts MIR shim construction so that substitutions are
applied to function pointer shims during construction, rather than
during codegen (as determined by `substs_for_mir_body`) - as
substitutions will no longer occur during codegen, function pointer
shims can now be polymorphic without incurring double substitutions.
Signed-off-by: David Wood <david@davidtw.co>
Tools, tests, and experimenting with MIR-derived coverage counters
Leverages the new mir_dump output file in HTML+CSS (from #76074) to visualize coverage code regions
and the MIR features that they came from (including overlapping spans).
See example below.
The `run-make-fulldeps/instrument-coverage` test has been refactored to maximize test coverage and reduce code duplication. The new tests support testing with and without `-Clink-dead-code`, so Rust coverage can be tested on MSVC (which, currently, only works with `link-dead-code` _disabled_).
New tests validate coverage region generation and coverage reports with multiple counters per function. Starting with a simple `if-else` branch test, coverage tests for each additional syntax type can be added by simply dropping in a new Rust sample program.
Includes a basic, MIR-block-based implementation of coverage injection,
available via `-Zexperimental-coverage`. This implementation has known
flaws and omissions, but is simple enough to validate the new tools and
tests.
The existing `-Zinstrument-coverage` option currently enables
function-level coverage only, which at least appears to generate
accurate coverage reports at that level.
Experimental coverage is not accurate at this time. When branch coverage
works as intended, the `-Zexperimental-coverage` option should be
removed.
This PR replaces the bulk of PR #75828, with the remaining parts of
that PR distributed among other separate and independent PRs.
This PR depends on those other PRs: #76002, #76003 and #76074
Rust compiler MCP rust-lang/compiler-team#278
Relevant issue: #34701 - Implement support for LLVMs code coverage
instrumentation
![Screen-Recording-2020-08-21-at-2](https://user-images.githubusercontent.com/3827298/90972923-ff417880-e4d1-11ea-92bb-8713c6198f6d.gif)
r? @tmandry
FYI: @wesleywiser
The non-use occurrence of the return place in var debug info does not
currently inhibit the NRVO optimization, but it will fail an assertion
in `visit_place` when the optimization is performed.
Relax assertion check to allow the return place in var debug info.
This case might be impossible to hit in optimization pipelines as of
now, but can be encountered in a customized mir-opt-level=2 pipeline with
copy propagation disabled. For example in:
```rust
pub fn b(s: String) -> String {
    a(s)
}

#[inline]
pub fn a(s: String) -> String {
    let x = s;
    let y = x;
    y
}
```
diagnostics: shorten paths of unique symbols
This is a step towards implementing a fix for #50310, and a continuation of the discussion in [Pre-RFC: Nicer Types In Diagnostics - compiler - Rust Internals](https://internals.rust-lang.org/t/pre-rfc-nicer-types-in-diagnostics/11139). It was impressed upon me in the previous discussion in #21934 that an RFC for this is not needed, and that I should just come up with code.
The recent improvements to `use` suggestions that I've contributed have given rise to this implementation. Contrary to previous suggestions, it's rather simple logic, and I believe it only reduces the amount of cognitive load that a developer would need when reading type errors.
-----
If a symbol name can only be imported from one place, and as long as it was not glob-imported anywhere in the current crate, we can trim its printed path to the last component.
This has wide implications on error messages with types, for example, shortening `std::vec::Vec` to just `Vec`, as long as there is no other `Vec` importable from anywhere.
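A minimal sketch of the trimming rule (toy data structures, not the actual rustc query): a path is shortened to its last component only when that component maps to exactly one importable path:
```rust
use std::collections::HashMap;

// Map each full path to its trimmed form, keeping only the names that are
// unambiguous across all importable paths.
fn trimmed_paths(all_paths: &[&str]) -> HashMap<String, String> {
    let mut by_name: HashMap<String, Vec<&str>> = HashMap::new();
    for &path in all_paths {
        let name = path.rsplit("::").next().unwrap().to_string();
        by_name.entry(name).or_default().push(path);
    }
    by_name
        .into_iter()
        .filter(|(_, paths)| paths.len() == 1)
        .map(|(name, paths)| (paths[0].to_string(), name))
        .collect()
}

fn main() {
    let paths = ["std::vec::Vec", "std::result::Result", "my_crate::Result"];
    let trimmed = trimmed_paths(&paths);
    // `Vec` is unique, so it can be trimmed; `Result` is ambiguous, so it
    // keeps its full path.
    assert_eq!(trimmed.get("std::vec::Vec").map(String::as_str), Some("Vec"));
    assert_eq!(trimmed.get("std::result::Result"), None);
}
```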
When introducing argument temporaries during inlining, emit storage
marker statements just before the assignment and in the beginning of
the return block.
This ensures that such temporaries will not be considered live across
yield points after inlining inside a generator.
Adds a new mir_dump output file in HTML/CSS to visualize code regions
and the MIR features that they came from (including overlapping spans).
See example below:
Includes a basic, MIR-block-based implementation of coverage injection,
available via `-Zexperimental-coverage`. This implementation has known
flaws and omissions, but is simple enough to validate the new tools and
tests.
The existing `-Zinstrument-coverage` option currently enables
function-level coverage only, which at least appears to generate
accurate coverage reports at that level.
Experimental coverage is not accurate at this time. When branch coverage
works as intended, the `-Zexperimental-coverage` option should be
removed.
This PR replaces the bulk of PR #75828, with the remaining parts of
that PR distributed among other separate and independent PRs.
This PR depends on three of those other PRs: #76000, #76002, and
Rust compiler MCP rust-lang/compiler-team#278
Relevant issue: #34701 - Implement support for LLVMs code coverage
instrumentation
![Screen-Recording-2020-08-21-at-2](https://user-images.githubusercontent.com/3827298/90972923-ff417880-e4d1-11ea-92bb-8713c6198f6d.gif)
`run_compiler` is used by clippy and other tools, which should not have
the trimmed paths feature enabled by default, until we see it works well
for them.
It would also be nice to rename `TimePassesCallbacks`, however that's a
change in a submodule.
inliner: Avoid query cycles when optimizing generators
The HIR Id trick is insufficient to prevent query cycles when optimizing
generators, since merely requesting a layout of a generator also
computes its `optimized_mir`.
Make no attempts to inline functions into generators within the same
crate to avoid query cycles.
Fixes #76181.
Avoid rehashing Fingerprint as a map key
This introduces a no-op `Unhasher` for map keys that are already hash-
like, for example `Fingerprint` and its wrapper `DefPathHash`. For these
we can directly produce the `u64` hash for maps. The first use of this
is `def_path_hash_to_def_id: Option<UnhashMap<DefPathHash, DefId>>`.
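A minimal sketch of the idea (not the rustc implementation): a pass-through hasher that just remembers the last `u64` written and returns it from `finish`, skipping any real hashing:
```rust
use std::collections::HashMap;
use std::hash::{BuildHasherDefault, Hasher};

// No-op hasher for keys that are already high-quality hashes.
#[derive(Default)]
struct Unhasher {
    value: u64,
}

impl Hasher for Unhasher {
    fn finish(&self) -> u64 {
        self.value
    }

    fn write(&mut self, _bytes: &[u8]) {
        // Only keys whose `Hash` impl writes a single u64 are supported.
        unimplemented!("Unhasher is only meant for single-u64 keys");
    }

    fn write_u64(&mut self, value: u64) {
        self.value = value;
    }
}

type UnhashMap<K, V> = HashMap<K, V, BuildHasherDefault<Unhasher>>;

fn main() {
    // `u64: Hash` forwards straight to `write_u64`, so no rehashing happens.
    let mut map: UnhashMap<u64, &str> = UnhashMap::default();
    map.insert(0xDEAD_BEEF, "fingerprinted item");
    assert_eq!(map.get(&0xDEAD_BEEF), Some(&"fingerprinted item"));
}
```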
cc #56308
r? @eddyb
Improve recovery on malformed format call
The token following a format expression should be a comma. However, when it is replaced with a similar token (such as a dot), the corresponding error is now emitted, the token is treated as if it were a comma, and parsing continues.
r? @petrochenkov
If a symbol name can only be imported from one place for a type, and
as long as it was not glob-imported anywhere in the current crate, we
can trim its printed path and print only the name.
This has wide implications on error messages with types, for example,
shortening `std::vec::Vec` to just `Vec`, as long as there is no other
`Vec` importable anywhere.
This adds a new '-Z trim-diagnostic-paths=false' option to control this
feature.
On the good path, with no diagnostics printed, we should try to avoid
issuing this query, so we need to prevent the trimmed_def_paths query in
several cases.
This change also relies on a previous commit that differentiates
between `Debug` and `Display` on various rustc types, where the latter
is trimmed and presented to the user and the former is not.
The first use case of this regression detection for trimmed-path
computation is rustc itself, where the paths should be computed only
when errors or warnings are emitted.
Our current user of this method is deeply nested, being a side effect
of `Display` formatting on lots of rustc types. So recording only the
immediate caller in the error message is not enough - we should collect
a backtrace instead.
While formatting for user diagnostics uses `Display` in most cases, a
small number of cases used `Debug` instead. Until now, `Display` and
`Debug` yielded the same output for many types. However, with path
trimming we want to show the user a shorter path, so these cases need
fixing.
compiler: use `OnceCell` from std
Fixes #76192
The only remaining direct use of `lazy_static` crate is in `src/bootstrap` but I am not sure how I can remove that dependency for now.
r? @matklad
lexer: Tiny improvement to shebang detection
The lexer now distinguishes between regular comments and doc comments, so use that.
The change only affects the choice of reported errors.
Give a better error message for duplicate built-in macros
Minor follow-up to https://github.com/rust-lang/rust/pull/75176 giving a better error message for duplicate builtin macros. This would have made it a little easier to debug.
r? @petrochenkov
Add new `-Z dump-mir-spanview` option
Similar to `-Z dump-mir-graphviz`, this adds the option to write
HTML+CSS files that allow users to analyze the spans associated with MIR
elements (by individual statement, just terminator, or overall basic
block).
This PR was split out from PR #76004, and exposes an API for spanview
HTML+CSS files that is also used to analyze code regions chosen for
coverage instrumentation (in a follow-on PR).
Rust compiler MCP rust-lang/compiler-team#278
Relevant issue: #34701 - Implement support for LLVMs code coverage
instrumentation
r? @tmandry
FYI @wesleywiser
Run cfg-stripping on generic parameters before invoking derive macros
Fixes #75930
This changes the tokens seen by a proc-macro. However, using a `#[cfg]` attribute
on a generic parameter is unusual, and combining it with a proc-macro
derive is probably even more unusual. I don't expect this to cause any
breakage.
Eliminate some other bound checks when index comes from an enum
#36962 introduced an assumption for the upper limit of the enum's value. This PR adds an assumption to the lower value as well.
I've modified the original codegen test to show that derived (in that case, adding 1) values also don't generate bounds checks.
However, this test is actually carefully crafted to not hit a bug: if the enum's variants are modified to 1 and 2 instead of 2 and 3, the test fails by adding a bounds check. I suppose this is an LLVM issue, and #75525, while not exactly about this context, should be tracking it.
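For illustration, a minimal sketch of the pattern being discussed (hypothetical names; whether LLVM actually removes the check can depend on the exact variant values, as noted above):
```rust
// A C-like enum whose discriminants are known to lie in a small range.
#[derive(Clone, Copy)]
enum Slot {
    A = 0,
    B = 1,
    C = 2,
}

fn lookup(table: &[u32; 4], slot: Slot) -> u32 {
    // `slot as usize` is known to be in 0..=2, so with assumptions on both
    // the lower and upper bound the bounds check against a 4-element array
    // should be removable.
    table[slot as usize]
}

fn main() {
    let table = [10, 20, 30, 40];
    for slot in [Slot::A, Slot::B, Slot::C] {
        println!("{}", lookup(&table, slot));
    }
}
```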
I'm not at all confident if this patch can be accepted, or even if it _should_ be accepted in this state. But I'm curious about what others think :)
~Improves~ Should improve #13926 but does not close it, because it's not exactly predictable where bounds checks may pop up against the assumptions.
Previously, this would say no such macro existed, but this was
misleading, since the macro _did_ exist; it was just already seen.
- Say where the macro was previously defined
- Add long-form error message
Make to_immediate/from_immediate configurable by backends
`librustc_codegen_ssa` has the concept of an immediate vs. memory type, and `librustc_codegen_llvm` uses this distinction to implement `bool`s being `i8` in memory, and `i1` in immediate contexts. However, some of that implementation leaked into `codegen_ssa` when converting to/from immediate values. So, move those methods into builder traits, so that behavior can be configured by backends.
This is useful if a backend is able to keep bools as bools, or, needs to do more trickery than just bools to bytes.
(Note that there's already a large number of things abstracted with "immediate types" - this is just bringing this particular thing in line to be abstracted as well.)
---
Pinging @eddyb since that's who I was talking about this change with when they suggested I submit a PR.
Update expect-test to 1.0
The only change is that `expect_file` now uses path relative to the
current file (same as `include!`). Before, it used paths relative to
the workspace root, which makes no sense.
Adds two source span utility functions used in source-based coverage
`span.is_empty()` - returns true if `lo()` and `hi()` are equal. This is
not only a convenience, but makes it clear that a `Span` can be empty
(that is, retrieving the source for an empty `Span` will return an empty
string), and codifies the (otherwise undocumented--in the rustc_span
package, at least) fact that `Span` is a half-open interval (where
`hi()` is the open end).
`source_map.lookup_file_span()` - returns an enclosing `Span`
representing the start and end positions of the file enclosing the given
`BytePos`. This gives developers a clear way to quickly determine if any
other `BytePos` or `Span` is also from the same file (for example,
by simply calling `file_span.contains(span)`).
This results in much simpler code and is much more runtime efficient
compared with the obvious alternative: calling `source_map.lookup_line()`
for any two `Span`s' byte positions, handling both arms of the `Result`
(both contain the file), and then comparing files. It is also more
efficient than the non-public method `lookup_source_file_idx()` for each
`BytePos`, because, while comparing the internal source file indexes
would be efficient, looking up the source file index for every `BytePos`
or `Span` to be compared requires a binary search (worst case
performance being O(log n) for every lookup).
`source_map.lookup_file_span()` performs the binary search only once, to
get the `file_span` result that can be used to compare to any number of
other `BytePos` or `Span` values and those comparisons are always O(1).
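A tiny illustration of the half-open-interval convention with plain integers (not the rustc_span types): `lo` is included, `hi` is excluded, and an interval with `lo == hi` is empty:
```rust
// Plain-integer stand-ins for spans, for illustration only.
#[derive(Clone, Copy, Debug)]
struct Interval {
    lo: u32,
    hi: u32, // exclusive: the interval is half-open
}

impl Interval {
    fn is_empty(self) -> bool {
        self.lo == self.hi
    }

    fn contains(self, other: Interval) -> bool {
        self.lo <= other.lo && other.hi <= self.hi
    }
}

fn main() {
    let file_span = Interval { lo: 0, hi: 100 };
    let span = Interval { lo: 10, hi: 20 };
    let empty = Interval { lo: 30, hi: 30 };
    assert!(file_span.contains(span));
    assert!(file_span.contains(empty));
    assert!(empty.is_empty());
}
```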
This PR was split out from PR #75828.
r? @tmandry
FYI: @wesleywiser
Fix `-Z instrument-coverage` on MSVC
Found that `-C link-dead-code` (which was enabled automatically
under `-Z instrument-coverage`) was causing the linking error that
resulted in segmentation faults in coverage instrumented binaries. Link
dead code is now disabled under MSVC, allowing `-Z instrument-coverage`
to be enabled under MSVC for the first time.
More details are included in Issue #76038.
Note this PR makes it possible to support `-Z instrument-coverage` but
does not enable instrument coverage for MSVC in existing tests. It will be
enabled in another PR to follow this one (both PRs coming from original
PR #75828).
r? @tmandry
FYI: @wesleywiser
Found that -C link-dead-code (which was enabled automatically
under -Z instrument-coverage) was causing the linking error that
resulted in segmentation faults in coverage instrumented binaries. Link
dead code is now disabled under MSVC, allowing `-Z instrument-coverage`
to be enabled under MSVC for the first time.
More details are included in Issue #76038.
(This PR was broken out from PR #75828)
cg_llvm: `fewer_names` in `uncached_llvm_type`
This PR changes `uncached_llvm_type` so that a named struct type (with an empty name) is always created when the `fewer_names` option is enabled. By skipping the generation of names, we can improve perf. Giving `LLVMStructCreateNamed` an empty name works because LLVM will perform random renames to avoid collisions. Needs a perf run!
cc @eddyb (whom I've discussed this with)
Restore public visibility on some parsing functions for rustfmt
In #74826 the visibility of several parsing functions was reduced. However, rustfmt is an external consumer of some of these functions as well and needs the visibility to be public, similar to other elements in rustc_parse such as `parse_ident`
db534b3ac2/src/librustc_parse/parser/mod.rs (L433-L436)
This commit changes `uncached_llvm_type` so that a named struct type
(with an empty name) is always created when the `fewer_names` option
is enabled. By skipping the generation of names, we can improve perf.
Giving `LLVMStructCreateNamed` an empty name works because LLVM will
perform random renames to avoid collisions.
Signed-off-by: David Wood <david@davidtw.co>
ty: remove obsolete pretty printer
Fixes#61139.
This PR removes the obsolete printer and replaces all uses of it with `FmtPrinter`. Of the replaced uses, all but one were in `debug!` logging; two cases were notable:
- `MonoItem::to_string` is used in `-Z print-mono-items` and therefore affects the output of all codegen-units tests (which have been updated).
- `DefPathBasedNames` was used in `librustc_codegen_llvm/type_of.rs` with `LLVMStructCreateNamed` and that'll now get different values, but nothing will break as a result of this.
cc @eddyb (whom I've discussed this with)
Add `-Z proc-macro-backtrace` to allow showing proc-macro panics
Fixes #75050
Previously, we would unconditionally suppress the panic hook during
proc-macro execution. This commit adds a new flag
`-Z proc-macro-backtrace`, which allows running the panic hook for
easier debugging.
Fixes #75050
Previously, we would unconditionally suppress the panic hook during
proc-macro execution. This commit adds a new flag
-Z proc-macro-backtrace, which allows running the panic hook for
easier debugging.
The previous implementation used the `Parser::parse_expr` function in order
to extract the format expression. If the first comma following the
format expression was mistakenly replaced with a dot, then the next
format expression was eaten by the function, because it looked like a
syntactically valid expression, which resulted in incorrectly spanned
error messages.
The way the format expression is extracted has changed: we first look at
the first available token in the first argument supplied to the
`format!` macro call. If it is a string literal, then it is promoted to
a format expression immediately; otherwise we fall back to the original
`parse_expr`-related method.
This allows us to ensure that the parser won't consume too many tokens
when a typo is made.
A test has been added to ensure that the issue is properly fixed.
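A minimal sketch of the strategy with a toy token type (not rustc's parser): peek at the first token of the first argument and only fall back to general expression parsing when it is not a string literal:
```rust
#[derive(Debug)]
enum Token {
    StrLit(String),
    Other(String),
}

#[derive(Debug)]
enum FormatArg {
    FormatString(String),
    Expr(Vec<Token>),
}

fn extract_format_expr(tokens: Vec<Token>) -> FormatArg {
    // Fast path: a literal format string is promoted immediately, so a later
    // typo (for example `.` instead of `,`) cannot make us consume too many
    // tokens.
    if let Some(Token::StrLit(s)) = tokens.first() {
        return FormatArg::FormatString(s.clone());
    }
    // Otherwise fall back to treating the tokens as a general expression.
    FormatArg::Expr(tokens)
}

fn main() {
    let args = vec![
        Token::StrLit("hello {}".to_string()),
        Token::Other(".".to_string()),
    ];
    println!("{:?}", extract_format_expr(args));
}
```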
I've tried a few ways of implementing this, but each fell short.
Adding an auxiliary `_Idx` associated type to `Analysis` that defaults
to `!` but is overridden in the blanket impl of `Analysis` for `A:
GenKillAnalysis` to `A::Idx` seems promising, but the trait solver is
unable to prove equivalence between `A::Idx` and `A::_Idx` within the
overridden version of `into_engine`. Without full-featured
specialization, removing `into_engine` or splitting it into a different
trait would have a significant ergonomic penalty.
Alternatively, we could erase the index type and store a
`GenKillSet<u32>` as well as a function pointer for transmuting between
`&mut A::Domain` and `&mut BitSet<u32>` in the hopes that LLVM can
devirtualize a simple function pointer better than the boxed closure.
However, this is brittle, requires `unsafe` code, and doesn't work for
index types that aren't the same size as a `u32` (e.g. `usize`) since
`GenKillSet` stores a `HybridBitSet`, which may be a `Vec<I>`. Perhaps
safe transmute could help here?
A few small cleanups to `BitSet` and friends:
- Overload `clone_from` for `BitSet`.
- Improve `Debug` representation of `HybridBitSet`.
- Make `HybridBitSet::domain_size` public.
- Don't require `T: Idx` at the type level. The `Idx` bound is still on
most `BitSet` methods, but like `HashMap`, it doesn't need to be
satisfied for the type to exist.
This commit removes the obsolete printer and replaces all uses of it
with `FmtPrinter`. Of the replaced uses, all but one use was in `debug!`
logging, two cases were notable:
- `MonoItem::to_string` is used in `-Z print-mono-items` and therefore
affects the output of all codegen-units tests.
- `DefPathBasedNames` was used in `librustc_codegen_llvm/type_of.rs`
with `LLVMStructCreateNamed` and that'll now get different values, but
this should result in no functional change.
Signed-off-by: David Wood <david@davidtw.co>
This commit moves `transparent_newtype_field` and `is_zst` to
`LateContext` where they are used, rather than being on the `VariantDef`
and `TyS` types.
Signed-off-by: David Wood <david@davidtw.co>
StringReader is an internal abstraction which at the moment changes a
lot, so these unit tests cause quite a bit of friction.
Moving them to rustc_lexer and to a more integration-testing style should
make them much less annoying, hopefully without decreasing their
usefulness much.
Note that coloncolon tests are removed (it's unclear what those are
testing).
\r\n tests are removed as well, as we normalize line endings even
before lexing.