Don't trim trailing whitespace from doc comments
Don't trim trailing whitespace from doc comments, as multiple trailing spaces indicate a hard line break in Markdown.
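As a quick illustration (hypothetical snippet; the `␣` marks stand in for literal trailing spaces):
```rust
/// These two source lines render as two lines because of the trailing hard break:␣␣
/// second rendered line.
///
/// These two source lines render as a single paragraph,
/// because the first one has no trailing spaces.
fn documented() {}
```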
I'd have liked to add a unit test for `docs_from_attrs`, but couldn't find a reasonable way to get an `&Attrs` object for use in the test.
Fixes #15877.
fix: Make callable fields not complete in the method access no-parens case
Follow-up PR for #15879
Fixes the callable field completion appearing in the method access without parens case.
fix: no code action 'introduce_named_generic' for impl inside types
Fix #15734.
### Changes Made
- Find params in `ancestors` instead of just `parent`
- Added tests (`replace_impl_with_mut` and `replace_impl_inside`)
fix: Correct references from `rust-analyzer.cargo.check` to `rust-analyzer.check`
When reading the manual, I noticed that the documentation referenced configurations that have since been renamed. This PR updates those references to their new names.
While reading through the code base, I stumbled across a piece of code that I found hard to read despite its simple purpose. This is my attempt at making the code easier to understand for future readers.
I won't be offended if this is too minor and not worth your time.
internal: Update world symbols request definition, prefer focus range for macros
Prior to this, the symbol search would always jump to the defining macro call; now it jumps to the name in the macro call input if possible. This is a large improvement for assoc items in an attributed impl or trait.
Complete exported macros in `#[macro_use($0)]`
Closes https://github.com/rust-lang/rust-analyzer/issues/15657.
Originally added a test case for incomplete input:
```rust
#[test]
fn completes_incomplete_syntax() {
    check(
        r#"
//- /dep.rs crate:dep
#[macro_export]
macro_rules! foo {
    () => {};
}
//- /main.rs crate:main deps:dep
#[macro_use($0
extern crate dep;
"#,
        expect![[r#"
            ma foo
        "#]],
    )
}
```
but couldn't make it pass and removed it 😅 Our current recovery logic doesn't work for token trees, and for this code:
```rust
#[macro_use(
extern crate lazy_static;
fn main() {}
```
we ended up with this syntax tree:
```
SOURCE_FILE@0..53
  ATTR@0..52
    POUND@0..1 "#"
    L_BRACK@1..2 "["
    META@2..52
      PATH@2..11
        PATH_SEGMENT@2..11
          NAME_REF@2..11
            IDENT@2..11 "macro_use"
      TOKEN_TREE@11..52
        L_PAREN@11..12 "("
        WHITESPACE@12..13 "\n"
        EXTERN_KW@13..19 "extern"
        WHITESPACE@19..20 " "
        CRATE_KW@20..25 "crate"
        WHITESPACE@25..26 " "
        IDENT@26..37 "lazy_static"
        SEMICOLON@37..38 ";"
        WHITESPACE@38..40 "\n\n"
        FN_KW@40..42 "fn"
        WHITESPACE@42..43 " "
        IDENT@43..47 "main"
        TOKEN_TREE@47..49
          L_PAREN@47..48 "("
          R_PAREN@48..49 ")"
        WHITESPACE@49..50 " "
        TOKEN_TREE@50..52
          L_CURLY@50..51 "{"
          R_CURLY@51..52 "}"
  WHITESPACE@52..53 "\n"
```
Maybe we can try to parse the token tree in `crates/ide-completion/src/context/analysis.rs`, but I'm not sure what the best way forward is.
fix: Correctly set and mark the proc-macro spans
This slows down analysis by 2-3s on self for me, unfortunately (~2.5% slowdown).
Noisy diff due to two simple refactorings in the first 2 commits. Relevant changes are [7d762d1](7d762d18ed) and [1e1113c](1e1113cf5f), which introduce def-site spans and correct marking for proc-macros respectively.
fix: Update metavariable expression implementation
Fixes https://github.com/rust-lang/rust-analyzer/issues/16154
This duplicates the behavior from before and after PR https://github.com/rust-lang/rust/pull/117050, selecting between them based on the toolchain version. There are some 1.76 nightlies that are still broken (any before that PR, basically), but fetching and storing the commit makes little sense to me (as opposed to the toolchain version).
minor: Use reserve when removing markdown from text
After markdown syntax removal the length of the text is roughly the same, so we can reserve memory beforehand.
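A minimal sketch of the idea (illustrative only, not the actual rust-analyzer function):
```rust
// Since stripping Markdown syntax barely shrinks the text, pre-allocating the output
// buffer to the input length avoids repeated reallocations while pushing characters.
fn remove_markdown(markdown: &str) -> String {
    let mut out = String::with_capacity(markdown.len());
    for line in markdown.lines() {
        // The real implementation walks Markdown events; this only illustrates
        // the up-front reservation.
        out.push_str(line.trim_start_matches(|c: char| c == '#' || c == '>'));
        out.push('\n');
    }
    out
}
```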
fix(mbe): desugar doc correctly for mbe
Fixes #16110.
The way Rust desugars doc comments when expanding macros is to render them as raw strings delimited with hashes. rust-analyzer wasn't aware of this, so the desugared doc comments wouldn't match correctly when on the LHS of macro declarations.
This PR fixes this by porting the code used by rustc:
59096cdad0/compiler/rustc_ast/src/tokenstream.rs (L662-L671)
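A simplified sketch of the hash-counting part of that logic (illustrative, not a verbatim copy of either codebase): the doc comment becomes a `#[doc = r"..."]` attribute, and the raw-string delimiter needs strictly more `#`s than any run of `#`s that follows a `"` inside the comment text.
```rust
// Computes the number of `#`s needed for the raw-string delimiter of the desugared
// doc attribute, so that the doc text itself cannot close the raw string early.
fn required_hashes(doc_text: &str) -> usize {
    let mut max = 0;
    let mut current = 0;
    for ch in doc_text.chars() {
        current = match ch {
            '"' => 1,
            '#' if current > 0 => current + 1,
            _ => 0,
        };
        max = max.max(current);
    }
    max
}
```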
internal: Move proc-macro knowledge out of base-db into hir-expand
It does not make much sense to me to have that live in base-db; additionally, it kind of conflicts with moving span things out into a separate crate.
Fix incorrectly replacing references in macro invocation in "Convert to named struct" assist
Fixes #15630.
Complements #13647 (the same assist, which missed this case) and #14920 (the inverse assist).
Let `reuse` look inside git submodules
Changes `collect-license-metadata` and `generate-copyright` so they can now look at the git submodules.
Unfortunately `reuse` chokes on the LLVM submodule - it finds the word "Copyright" or the unicode copyright symbol in all kinds of places, including UTF-8 test cases. The `reuse` tool expressly won't let you ignore folders, so we let it scan everything and then strip out the LLVM sub-folder in post. Instead, we add in a hand-curated list of copyright information gleaned by reading the LLVM codebase carefully, which is stored in `.reuse/dep5` in Debian format where `reuse` can find and use it.
The `.reuse/dep5` file continues to track copyright info for files in the tree that do not have SPDX metadata in them (i.e. all of them).
Use a u64 for the rmeta root position
Waffle noticed this in https://github.com/rust-lang/rust/pull/117301#discussion_r1405410174
We've upgraded the other file offsets to u64, and this one only costs 4 bytes per file. Also, the way the truncation was being done before was extremely easy to miss; I sure missed it! It's not clear to me whether the absence of this change effectively made the other upgrades from u32 to u64 ineffective, but we can have it now.
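A toy illustration of why such a truncation is easy to miss (not rustc's actual encoder code):
```rust
// Writing the root position as u32 compiles without complaint but silently corrupts
// the offset once the metadata grows past 4 GiB; the full u64 costs 4 extra bytes.
fn write_root_position_u32(buf: &mut Vec<u8>, pos: u64) {
    buf.extend_from_slice(&(pos as u32).to_le_bytes()); // `as u32` truncates silently
}

fn write_root_position_u64(buf: &mut Vec<u8>, pos: u64) {
    buf.extend_from_slice(&pos.to_le_bytes()); // 8 bytes, no truncation possible
}
```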
r? `@WaffleLapkin`
fix: Don't emit "missing items" diagnostic for negative impls
Negative impls can't have items, so there is no reason for this diagnostic.
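For illustration (a made-up example; negative impls require the unstable `negative_impls` feature):
```rust
#![feature(negative_impls)]

struct MyType;

// A negative impl cannot contain any items, so a "missing items" diagnostic
// for it would always be noise.
impl !Send for MyType {}
```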
LMK if I should add a test somewhere. Also LMK if that's not how we usually check multiple things in an if in r-a.
fix: Fix view mir, hir and eval function not working when cursor is inside macros
I broke the view ones completely by inverting the macro check by accident a few days ago but we don't talk about that.
Detects redundant imports that can be eliminated.
For #117772: in order to facilitate review and modification, the checking code and the code that removes redundant imports are split into two PRs.
feat: Prioritize import suggestions based on the expected type
Hi, this is a draft PR to solve #15384. `Adt` types work and now I have a few questions :)
1. What other types make sense in this context? Looking at [ModuleDef](05666441ba/crates/hir/src/lib.rs (L275)) I am thinking everything except Modules.
2. Is there an existing way of converting between `ModuleDef` and `hir::Type` in the rust-analyzer code base?
3. Does this approach seem sound to you?
Oops: upon writing this I just realised that the enum test is invalidated, as there are no enum variants and thus no variant is passed as a function argument.
fix: resolve Self type references in delegate method assist
This PR makes the delegate method assist resolve any `Self` type references in the parameters or return type. It also works across macros such as the `uint_impl!` macro used for `saturating_mul` in the issue example.
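A small made-up example of what the assist generates (the types here are hypothetical, not the `uint_impl!` case from the issue):
```rust
struct Inner;

impl Inner {
    // `Self` here means `Inner`.
    fn with_flag(&self, _flag: bool) -> Self {
        Inner
    }
}

struct Wrapper {
    inner: Inner,
}

impl Wrapper {
    // Generated delegate: `Self` from `Inner::with_flag` is resolved to `Inner`,
    // since keeping `Self` would wrongly refer to `Wrapper` in this impl.
    fn with_flag(&self, flag: bool) -> Inner {
        self.inner.with_flag(flag)
    }
}
```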
Closes #14485
fix: Fix item tree lowering pub(self) to pub()
Prior to this, the item tree lowered `pub(self)` visibility to `pub()`.
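For reference, `pub(self)` restricts an item to the current module, i.e. it behaves like leaving the item private (illustrative snippet):
```rust
mod m {
    pub(self) fn helper() {}   // visible only inside `m`, same as a plain private fn
    pub(crate) fn visible() {} // visible throughout the crate
}

fn main() {
    m::visible();
    // m::helper(); // error: `helper` is private, so completion should not offer it
}
```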
Fix #15134 - tested with a unit test and a manual end-to-end test of building rust-analyzer from my branch and opening the reproduction repository.
Before
Private functions have `RawVisibility::Module`, but were missed because `take_types` returned `None` early. After `resolve_visibility` returned `None`, `Visibility::Public` was set instead, and private functions ended up being offered in autocompletion.
Choosing such a function results in an immediate error diagnostic about using a private function.
After
Pattern-match the `None` returned by `take_types` and query module-level visibility from the `original_module`.
REVIEW
Refactored to move `scope_def_applicable` and check function visibility from a module.
Please let me know the best way to add unit tests to nameres, which is where the root cause was.
Fix panic with closure inside array len
I was working on #15947 and found out that we panic on this test:
```rust
fn main() {
    let x = [(); &(&'static: loop { |x| {}; }) as *const _ as usize]
}
```
This PR fixes the panic. Closures in array len are still broken, but closures in const eval are not stable anyway.
feat: Allow navigation targets to be duplicated when the focus range lies in the macro definition site
![Code_KI1EfbAHRZ](https://github.com/rust-lang/rust-analyzer/assets/3757771/2cc82e5c-320f-4de2-9d55-fe975d180f2a)
Basically, if the name of an item originates from the macro definition, we now point to that as well as to the macro call that created it.
Big diff because I also made the `FileId` fields private due to some debugging I had to do (having a searchable constructor makes things easier).
Add support for making lib features internal
We have the notion of an "internal" lang feature: a feature that is never intended to be stabilized and whose use can cause ICEs and other issues without that being considered a bug.
This extends that idea to lib features as well. It is an alternative to https://github.com/rust-lang/rust/pull/115623: instead of using an attribute to declare lib features internal, we simply do this based on the name. Everything ending in `_internals` or `_internal` is considered internal.
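A minimal sketch of the name-based rule (illustrative, not the exact compiler code):
```rust
// A lib feature now counts as internal purely based on its suffix; no attribute needed.
fn is_internal_lib_feature(name: &str) -> bool {
    name.ends_with("_internal") || name.ends_with("_internals")
}
```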
Then we rename `core_intrinsics` to `core_intrinsics_internal`, which fixes https://github.com/rust-lang/rust/issues/115597.
fix: Insert fn call parens only if parens are inserted around the field name
Fixes #16014.
Sorry I missed it in the previous PR. I've added a test as well to prevent regressions.
Please give any suggestions to improve the test if anything comes to mind.
TokenMap -> SpanMap rewrite
Opening early so I can have an overview of the full diff more easily; still very unfinished, with lots of work to be done.
The gist of what this PR does is move away from assigning IDs to tokens in arguments and expansions, and instead give the subtrees the text ranges they are sourced from (made relative to some item for incrementality). This means we now only have a single map per expansion, as opposed to one map for the expansion and one for the arguments.
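A conceptual sketch of the new model (names are illustrative, not the actual types):
```rust
// Every token in a subtree carries the text range it was sourced from, anchored to a
// nearby item so the data stays stable when unrelated parts of the file change.
struct SpanAnchor {
    file_id: u32, // the file the token originates from
    ast_id: u32,  // the anchoring item within that file
}

struct Span {
    anchor: SpanAnchor,
    range: (u32, u32), // text range relative to the anchor item's start
}
```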
A few of the things that are not done yet (in arbitrary order):
- [x] generally clean up the current mess
- [x] proc-macros, which have been completely ignored so far
- [x] syntax fixups, which have been commented out for the time being and need to be rewritten on top of some marker SyntaxContextId
- [x] macro invocation syntax contexts are not properly passed around yet, so $crate hygiene does not work in all cases (but most)
- [x] builtin macros do not set spans properly, $crate basically does not work with them rn (which we use)
~~- [ ] remove all uses of dummy spans (or if that does not work, change the dummy entries for dummy spans so that tests will not silently pass due to having a file id for the dummy file)~~
- [x] de-queryfy `macro_expand`; its sole caller is `parse_macro_expansion`, and both are lru-cached with the same limit, so having it be a query is pointless
- [x] docs and more docs
- [x] fix eager macro spans and other stuff
- [x] simplify include! handling
- [x] Figure out how to undo the sudden `()` expression wrapping in expansions / alternatively prioritize getting invisible delimiters working again
- [x] Simplify InFile stuff and HirFileId extensions
~~- [ ] span crate containing all the file ids, span stuff, ast ids. Then remove the dependency injection generics from tt and mbe~~
Fixes https://github.com/rust-lang/rust-analyzer/issues/10300
Fixes https://github.com/rust-lang/rust-analyzer/issues/15685
Implement completion for the callable fields.
Fixes #14656
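An illustration with a made-up type:
```rust
struct S {
    callback: fn(i32) -> i32,
}

fn use_it(s: S) -> i32 {
    // Completing `s.cal…` should now offer the callable field; invoking it needs
    // parens around the field access, since `s.callback(1)` would be a method call.
    (s.callback)(1)
}
```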
The PR is opened with basic changes. It could be improved by having a new `SymbolKind` for callable fields and implementing a separate render function, similar to `render_method`, for that new `SymbolKind`.
It could also be done without any changes to `SymbolKind`, of course, by calling the new function based on the type of the field.
I prefer the former approach.
Please share any thoughts or changes you think are appropriate for this approach. I could start working on them in this same PR.
chore: remove unused `PhantomData`
This PR removes an unused `PhantomData` in `FileItemTreeId`.
*Note:* I am not sure how this should be implemented, maybe as a type instead of a wrapper struct? I'd be happy to do so if needed 👍
This commit addresses the issue of excessive and unrelated errors
generated by top-level `let` statements. Now, only a single error is
produced, indicating that `let` statements are invalid at the top level.
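For illustration, an input of this shape previously produced a cascade of unrelated errors and now yields a single diagnostic (intentionally invalid code):
```rust
// error: `let` statements are not allowed at the top level
let answer = 42;

fn main() {}
```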
internal: simplify the removal of duplicate workspaces.
### Summary:
Refactoring the duplicate removal process for `workspaces` in `fetch_workspaces`.
### Changes Made:
Replaced `[].iter().enumerate().skip(...).filter_map(...)` with a more concise `[i+1..].positions(...)` provided by `itertools`, which enhances clarity without changing functionality.
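A rough sketch of the `positions`-based approach (illustrative, not the actual `fetch_workspaces` code):
```rust
use itertools::Itertools;

/// Indices of workspaces that duplicate an earlier entry.
fn duplicate_indices<T: PartialEq>(items: &[T]) -> Vec<usize> {
    let mut dups = Vec::new();
    for i in 0..items.len() {
        // `positions` yields offsets into `items[i + 1..]`, so shift them back into
        // indices of the full slice.
        dups.extend(
            items[i + 1..]
                .iter()
                .positions(|it| *it == items[i])
                .map(|pos| pos + i + 1),
        );
    }
    dups.sort_unstable();
    dups.dedup();
    dups
}
```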
### Impact:
This change aims to make the duplicate removal process for `workspaces` clearer. It has been tested on my machine.
Please review and provide feedback. Thanks!
fix: Dedup duplicate crates with differing origins in CrateGraph construction
Partially fixes #15656. Until now, the condition for deduplication in crate graphs was strict equality of two crates. One problem that arises from this is that, under certain conditions, when we see the same crate with different `CrateOrigin`s, the first occurrence would be kept. This however leads to unwanted outcomes, such as making renaming forbidden, as renaming has recently been made available only for local crates. The given example in #15656 can still not be resolved with this PR, as that involves taking inconsistencies between dependencies into consideration. This will be addressed in a future PR.
Make the data reflect a case where dev deps exist.
`base-db::CrateGraph::extend` now adds dev dependencies for a crate when it is upgraded from a `CrateOrigin::Lib` crate to a `CrateOrigin::Local` one.
Partially fixes #15656. When a crate graph is extended, which is the case when new workspaces are added to the project, the rules for deduplication were too strict. One problem that arises from this is that, under certain conditions, when we see the same crate with different `CrateOrigin`s, the first form would be kept. This however leads to unwanted outcomes, such as making renaming forbidden, as renaming has recently been made available only for local crates. The given example in #15656 can still not be resolved with this PR, as that involves taking inconsistencies between dependencies into consideration. This will be addressed in a future PR.
ensure renames happen after edit
This is a bugfix for an issue I found while working on Helix. rust-analyzer currently always sends any filesystem edits (rename/file creation) before any other edits. When renaming a file that is also being edited, that meant the edit would be discarded, resulting in an incomplete/incorrect refactor (or even the creation of a new file in Helix, although that is probably a bug on our side).
Example:
* create a module `mod foo` containing a `pub struct Bar;`
* re-export the struct under a different name in the `foo` module using a *fully qualified path*: `pub use crate::foo::Bar as Bar2`
* rename the `foo` module to `foo2` using rust-analyzer
* observe that the path is not correctly updated (rust-analyzer first sends a rename of `foo.rs` to `foo2.rs` and then edits `foo.rs` afterwards)
This PR fixes that issue by simply executing all rename operations after all edit operations (while still executing file creation operations first). I also added a test case similar to the example above.
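A rough sketch of the resulting ordering (illustrative types, not rust-analyzer's actual LSP conversion code):
```rust
enum Op {
    CreateFile { uri: String },
    TextEdit { uri: String, new_text: String },
    RenameFile { from: String, to: String },
}

// File creations go first, text edits next, renames last, so edits still target
// files under their old names and nothing is applied to a not-yet-existing file.
fn order_workspace_edit(ops: Vec<Op>) -> Vec<Op> {
    let (mut creates, mut edits, mut renames) = (Vec::new(), Vec::new(), Vec::new());
    for op in ops {
        match op {
            Op::CreateFile { .. } => creates.push(op),
            Op::TextEdit { .. } => edits.push(op),
            Op::RenameFile { .. } => renames.push(op),
        }
    }
    creates.into_iter().chain(edits).chain(renames).collect()
}
```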
Relevant excerpt from the LSP standard:
> Since version 3.13.0 a workspace edit can contain resource operations (create, delete or rename files and folders) as well. If resource operations are present clients need to execute the operations in the order in which they are provided. So a workspace edit for example can consist of the following two changes: (1) create file a.txt and (2) a text document edit which insert text into file a.txt. An invalid sequence (e.g. (1) delete file a.txt and (2) insert text into file a.txt) will cause failure of the operation. How the client recovers from the failure is described by the client capability: workspace.workspaceEdit.failureHandling