Suggest a case-insensitive match name regardless of Levenshtein distance
Fixes #86170
Currently, `find_best_match_for_name` only returns a case-insensitive name match if it also falls within the Levenshtein distance cutoff. It's a bit unfortunate that this hides some suggestions for typos like `Bar` -> `BAR`. That behavior comes from https://github.com/rust-lang/rust/pull/46347#discussion_r153701834, but I think it still makes sense to show a candidate when we find a case-insensitive name match, as it's more likely to be a typo.
Skipped the `candidate != lookup` check because the current code (i.e., `levenshtein_match`) returns the exact same `Symbol` anyway, and it doesn't seem to confuse anything in the UI tests.
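A rough sketch of the intended behavior (illustrative, not the actual rustc code; `levenshtein_pick` stands in for the existing distance-based search):

```rust
fn find_best_match<'a>(candidates: &[&'a str], lookup: &str) -> Option<&'a str> {
    // Existing behavior: pick the closest candidate by Levenshtein
    // distance, subject to a length-based cutoff (elided here).
    let levenshtein_pick: Option<&'a str> = None;

    // New behavior: even when the distance check fails, still suggest
    // an exact case-insensitive match, e.g. `BAR` for `Bar`.
    levenshtein_pick.or_else(|| {
        candidates
            .iter()
            .find(|c| c.eq_ignore_ascii_case(lookup))
            .copied()
    })
}

fn main() {
    assert_eq!(find_best_match(&["BAR", "baz"], "Bar"), Some("BAR"));
}
```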
r? `@estebank`
Add `const_eval_select` intrinsic
Adds an intrinsic that calls a given function when evaluated at compile time, but generates a call to another function when called at runtime.
See https://github.com/rust-lang/const-eval/issues/7 for previous discussion.
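As a sketch of the intended usage (the exact path, feature gates, and safety requirements are as defined by this PR; treat the specifics below as illustrative):

```rust
#![feature(core_intrinsics, const_eval_select)]
use std::intrinsics::const_eval_select;

const fn len_ct(s: &str) -> usize { s.len() }
fn len_rt(s: &str) -> usize { s.len() }

const fn len(s: &str) -> usize {
    // Arguments are passed as a tuple; the two functions must be
    // observably equivalent, which the caller promises via `unsafe`.
    unsafe { const_eval_select((s,), len_ct, len_rt) }
}

fn main() {
    const N: usize = len("abc"); // evaluates `len_ct` at compile time
    assert_eq!(N, len("abc"));   // compiles to a call to `len_rt`
}
```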
r? `@oli-obk`
Actually add the feature to the lints ui test
Add tracking issue to the feature declaration
Rename feature gate to non_exhaustive_omitted_patterns_lint
Add more omitted_patterns lint feature gate
By adding `#![doc(cfg_hide(foobar))]` to the crate attributes, the cfg
`#[cfg(foobar)]` (and _only_ that _exact_ cfg) will no longer be implicitly
treated as a `doc(cfg)` that renders a message in the documentation.
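For illustration, a sketch of the attribute in use (gate names assumed to be `doc_cfg`/`doc_cfg_hide`):

```rust
#![feature(doc_cfg, doc_cfg_hide)]
// Without this, rustdoc would implicitly attach a `doc(cfg(foobar))`
// message to every item gated on `foobar`.
#![doc(cfg_hide(foobar))]

#[cfg(foobar)]
pub fn only_with_foobar() {}
```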
Add expansion to while desugar spans
In the same vein as #88163, this reverts a change in Clippy behavior as a result of #80357 (and reverts some `#[allow]`s): This changes `clippy::blocks_in_if_conditions` to not fire on `while` loops. Though we might actually want Clippy to lint those cases, we should introduce the change purposefully, with tests, and possibly under a different lint name.
The actual change here is to add a desugaring expansion to the spans when lowering a `while` loop.
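In user-level terms, the desugaring whose generated spans are now marked looks roughly like this (an approximation; the real lowering happens on HIR):

```rust
fn countdown(mut n: u32) {
    // `while n > 0 { n -= 1; }` lowers to roughly:
    loop {
        // The spans synthesized for this condition now carry a
        // `while`-desugaring expansion, so Clippy can skip them.
        if n > 0 { n -= 1; } else { break }
    }
}
```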
r? `@Manishearth`
Avoid a couple of Symbol::as_str calls in cg_llvm
This should improve performance a tiny bit. Also remove `Symbol::len` and make `SymbolIndex` private.
Support `#[track_caller]` on closures and generators
## Lang team summary
This PR adds support for placing the `#[track_caller]` attribute on closure and generator expressions. From a user's perspective, the attribute behaves identically to it being placed on the compiler-generated method of the corresponding `impl Fn`/`FnOnce`/`FnMut for ...` (that is, `FnOnce::call_once`, `FnMut::call_mut`, `Fn::call`, or `Generator::resume`).
The attribute is currently "double" feature gated -- both `stmt_expr_attributes` (preexisting) and `closure_track_caller` (newly added) must be enabled in order to place these attributes on closures.
As the `Fn*` traits lack a `#[track_caller]` attribute on their definitions, caller information does not propagate when invoking closures through `dyn Fn*`. Nothing in this PR precludes supporting that; it can be added in the future.
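A minimal sketch of what the feature allows, under both gates (the closure reports its caller's location):

```rust
#![feature(stmt_expr_attributes, closure_track_caller)]
use std::panic::Location;

fn main() {
    // Behaves as if the generated `Fn::call` impl were `#[track_caller]`.
    let whereami = #[track_caller] || Location::caller();
    // Reports this call site, not a location inside the closure body.
    println!("called from {}", whereami());
}
```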
## Implementation details
This is implemented in the same way as for functions - an extra
location argument is appended to the end of the ABI. For closures,
this argument is *not* part of the 'tupled' argument storing the
parameters - the final closure argument for `#[track_caller]` closures
is no longer a tuple.
For direct (monomorphized) calls, the necessary support was already
implemented - we just needed to adjust some assertions around checking
the ABI and argument count to take closures into account.
For calls through a trait object, more work was needed.
When creating a `ReifyShim`, we need to create a shim
for the trait method (e.g. `FnMut::call_mut`) - unlike normal
functions, closures are never invoked directly, and always go through a
trait method.
Additional handling was needed for `InstanceDef::ClosureOnceShim`. In
order to pass location information through a direct (monomorphized) call
to `FnOnce::call_once` on an `FnMut` closure, we need to make
`ClosureOnceShim` aware of `#[track_caller]`. A new field
`track_caller` is added to `ClosureOnceShim` - this is used by
`InstanceDef::requires_caller_location`, allowing codegen to
pass through the extra location argument.
Since `ClosureOnceShim.track_caller` is only used by codegen,
we end up generating two identical MIR shims - one for
`track_caller == true`, and one for `track_caller == false`. However,
these two shims are used by the entire crate (i.e. it's two shims total,
not two shims per unique closure), so this shouldn't be a big deal.
"Fix" an overflow in byte position math
r? `@estebank`
help! I fixed the ICE only to brick the diagnostic.
I mean, it was wrong previously (using an already expanded macro span), but it is really bad now XD
Implement `#[must_not_suspend]`
implements #83310
Some notes on the impl:
1. The code that searches for the attribute on the ADT is basically copied from the `must_use` lint. It's not shared, as the logic did diverge
2. The RFC does specify that the attribute can be placed on fns (and fn-like objects), like `must_use`. I think this is a direct copy from the `must_use` reference definition. This implementation does NOT support that, as I felt that ADTs (+ `impl Trait` + `dyn Trait`) cover the use cases people actually want from the RFC (see the sketch after this list), and adding an impl for the fn call case would be significantly harder. The `must_use` impl can do a single check at fn call stmt time, but `must_not_suspend` would need to answer the question: "for some value X with type T, find any fn call that COULD have produced this value". That would require significant changes to `generator_interior.rs`, and I would need mentorship on that. `@eholk` and I are discussing it.
3. `@estebank` do you know a way I can make the user-provided `reason` note pop out? right now it seems quite hidden
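For context, a minimal sketch of the attribute as implemented here (ADT placement with a user-provided `reason`):

```rust
#![feature(must_not_suspend)]
#![warn(must_not_suspend)] // the lint is allow-by-default

#[must_not_suspend = "holding `Guard` across an await point can deadlock"]
struct Guard;

async fn work() {
    let g = Guard;
    async {}.await; // lint: `Guard` is held across this suspend point
    drop(g);
}
```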
Also, I am not sure if we should run perf on this
r? `@nikomatsakis`
This allows the `format_args!` macro to keep the pre-expansion code out of
the unsafe block without doing gymnastics with nested `match`
expressions. This reduces codegen.
Convert `debug_assert` to `assert` in `CachingSourceMapView`
I suspect that there's a bug somewhere in this code, which is
leading to the `predicates_of` ICE being seen in #89035
Move the Lock into symbol::Interner
This makes it easier to make the symbol interner (near) lock-free for concurrent accesses in the future.
With https://github.com/rust-lang/rust/pull/87867 landed this shouldn't affect performance anymore.
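A simplified sketch of the resulting shape (using `Mutex` as a stand-in for rustc's `Lock`; names and fields illustrative):

```rust
use std::collections::HashMap;
use std::sync::Mutex; // stand-in for rustc's `Lock`

// The lock is now owned by the interner itself, so making the hot
// lookup path (near) lock-free later is an internal change only.
struct Interner(Mutex<InternerInner>);

#[derive(Default)]
struct InternerInner {
    names: HashMap<String, u32>,
    strings: Vec<String>,
}

impl Interner {
    fn intern(&self, s: &str) -> u32 {
        let mut inner = self.0.lock().unwrap();
        if let Some(&sym) = inner.names.get(s) {
            return sym;
        }
        let sym = inner.strings.len() as u32;
        inner.strings.push(s.to_owned());
        inner.names.insert(s.to_owned(), sym);
        sym
    }
}
```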
Allow `panic!("{}", computed_str)` in const fn.
Special-case `panic!("{}", arg)` and translate it to `panic_display(&arg)`. `panic_display` will behave like `panic_any` in const eval and like `panic!(format_args!("{}", arg))` at runtime.
This should bring Rust 2015 and 2021 to feature parity in terms of `const_panic`, and hopefully unblocks the stabilisation of #51999.
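A sketch of the newly-allowed pattern (under the then-unstable `const_panic` gate):

```rust
#![feature(const_panic)]

const fn require(ok: bool, msg: &str) {
    if !ok {
        // Special-cased into `panic_display(&msg)`, which const eval
        // can handle; previously only a plain string literal worked.
        panic!("{}", msg);
    }
}

const _: () = require(true, "must hold");
```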
`@rustbot` modify labels: +T-compiler +T-libs +A-const-eval +A-const-fn
r? `@oli-obk`
Introduce a fast path that avoids the `debug_tuple` abstraction when deriving Debug for unit-like enum variants.
The intent here is to allow LLVM to remove the switch entirely in favor of an
indexed load from a table of constant strings, which is likely what the
programmer would write in C. Unfortunately, LLVM currently doesn't perform this
optimization due to a bug, but there is [a
patch](https://reviews.llvm.org/D109565) that fixes this issue. I've verified
that, with that patch applied on top of this commit, Debug for unit-like enum
variants becomes a load, reducing the O(n) code bloat to O(1).
Note that inlining `DebugTuple::finish()` wasn't enough to allow LLVM to
optimize the code properly; I had to avoid the abstraction entirely. Not using
the abstraction is likely better for compile time anyway.
Part of #88793.
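Roughly, the derive now expands along these lines for unit-like variants (illustrative, not the exact generated code):

```rust
enum Op { Add, Sub }

impl std::fmt::Debug for Op {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // Fast path: write the variant name directly instead of going
        // through `f.debug_tuple(name).finish()`.
        f.write_str(match self {
            Op::Add => "Add",
            Op::Sub => "Sub",
        })
    }
}
```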
r? `@oli-obk`
cleanup(rustc_trait_selection): remove vestigial code from rustc_on_unimplemented
This isn't allowed by the validator, and seems to be unused.
When it was added in ed10a3faae,
it was used on `Sized`, and that usage is gone.
Revert anon union parsing
Revert PRs #84571 and #85515, which implemented anonymous union parsing in a manner that broke the context-sensitivity of the `union` keyword and thus broke stable Rust code.
Fixes #88583.
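As an illustration of the context-sensitivity at stake (not the exact regression from #88583): `union` is only a keyword immediately before an identifier in item position, so stable code may use it as a plain name:

```rust
fn union(a: u32, b: u32) -> u32 { a | b }

fn main() {
    // Must keep parsing as a function call, not as a union item.
    assert_eq!(union(1, 2), 3);
}
```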
Using `symbol::Interner` made it very easy to mix up `UniqueTypeId`
symbols with the global interner. In fact, the `Debug` implementation of
`UniqueTypeId` did exactly this.
Using a separate interner type also avoids prefilling the interner with
unused symbols and allows optimizing the symbol interner for parallel
access without negatively affecting single-threaded module codegen.
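A toy illustration of the type-safety this buys (names illustrative):

```rust
#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct UniqueTypeId(u32); // dedicated id type for debuginfo

#[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
struct Symbol(u32); // globally interned name

fn debuginfo_name(_id: UniqueTypeId) {}

fn main() {
    let tid = UniqueTypeId(42);
    debuginfo_name(tid);
    // debuginfo_name(Symbol(42)); // now a type error, not a silent mix-up
}
```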
Encode spans relative to the enclosing item
The aim of this PR is to avoid recomputing queries when code is moved without modification.
MCP at https://github.com/rust-lang/compiler-team/issues/443
This is achieved by:
1. storing the HIR owner `LocalDefId` information inside the span (see the sketch below);
2. encoding and decoding spans relative to the enclosing item in the incremental on-disk cache;
3. marking a dependency on the `source_span(LocalDefId)` query when we translate a span from the short (`Span`) representation to its explicit (`SpanData`) representation.
Since all client code uses `Span`, step 3 ensures that all manipulations
of span byte positions actually create the dependency edge between
the caller and `source_span(LocalDefId)`, the query that returns the
actual absolute span of the parent item.
As a consequence, any source code motion that changes the absolute byte position of a node will either:
- modify the distance to the parent's beginning, and so change the relative span's hash; or
- dirty `source_span`, and trigger the incremental recomputation of all code that depends on the span's absolute byte position.
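A hypothetical sketch of the relative representation (field names illustrative):

```rust
struct LocalDefId(u32); // the enclosing HIR owner

struct RelativeSpanData {
    parent: Option<LocalDefId>, // `None` until the span is marked
    lo: u32, // offset from the parent item's start, not the file start
    hi: u32,
}
// Recovering absolute positions goes through `source_span(parent)`,
// which records the dependency edge described in step 3 above.
```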
With this scheme, I believe the dependency tracking to be accurate.
For the moment, the spans are marked during lowering.
I'd rather do this during def-collection,
but the AST MutVisitor is not practical enough just yet.
The only difference is that we attach macro-expanded spans
to their expansion point instead of the macro itself.