9703: docs: Fix several typos and grammar mistakes r=matklad a=alexfertel

I took some time to clean up the dev docs a bit since I spent the whole week reading them. I am not a native speaker, so if you find something wrong please tell me and I'll fix it 😁 

Co-authored-by: Alexander Gonzalez <alexfertel97@gmail.com>
bors[bot] 2021-07-26 22:23:35 +00:00 committed by GitHub
commit 9ca81edb7c
4 changed files with 28 additions and 30 deletions


@ -79,7 +79,7 @@ group of files which are assumed to rarely change. It's mostly an optimization
and does not change the fundamental picture.

The `set_crate_graph` method allows us to control how the input files are partitioned
into compilation units -- crates. It also controls (in theory, not implemented
yet) `cfg` flags. `CrateGraph` is a directed acyclic graph of crates. Each crate
has a root `FileId`, a set of active `cfg` flags and a set of dependencies. Each
dependency is a pair of a crate and a name. It is possible to have two crates
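
To make the shape of this data concrete, here is a minimal, hypothetical sketch (not rust-analyzer's actual API or types) of a crate graph whose crates carry a root file, `cfg` flags, and named dependencies:

```rust
// Hypothetical sketch only -- simplified stand-ins for rust-analyzer's real types.
#[derive(Clone, Copy)]
struct FileId(u32);

#[derive(Clone, Copy)]
struct CrateId(u32);

struct CrateData {
    root_file: FileId,            // e.g. the crate's `lib.rs` or `main.rs`
    cfg_flags: Vec<String>,       // active `cfg`s for this crate
    deps: Vec<(String, CrateId)>, // a dependency is a (name, crate) pair
}

#[derive(Default)]
struct CrateGraph {
    crates: Vec<CrateData>,
}

impl CrateGraph {
    fn add_crate(&mut self, root_file: FileId, cfg_flags: Vec<String>) -> CrateId {
        self.crates.push(CrateData { root_file, cfg_flags, deps: Vec::new() });
        CrateId(self.crates.len() as u32 - 1)
    }

    fn add_dep(&mut self, from: CrateId, name: &str, to: CrateId) {
        self.crates[from.0 as usize].deps.push((name.to_string(), to));
    }
}
```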
@ -95,7 +95,6 @@ function, and will be inserted into the crate graph just like dependencies.
Soon we'll talk about how we build an LSP server on top of `Analysis`, but first,
let's deal with that paths issue.

## Source roots (a.k.a. "Filesystems are horrible")

This is a non-essential section, feel free to skip.
@ -104,18 +103,18 @@ The previous section said that the filesystem path is an attribute of a file,
but this is not the whole truth. Making it an absolute `PathBuf` will be bad for
several reasons. First, filesystems are full of (platform-dependent) edge cases:
* It's hard (requires a syscall) to decide if two paths are equivalent.
* Some filesystems are case-insensitive (e.g. macOS by default).
* Paths are not necessarily UTF-8.
* Symlinks can form cycles.

Second, this might hurt the reproducibility and hermeticity of builds. In theory,
moving a project from `/foo/bar/my-project` to `/spam/eggs/my-project` should
not change a bit in the output. However, if the absolute path is a part of the
input, it is at least in theory observable, and *could* affect the output.

Yet another problem is that we really *really* want to avoid doing I/O, but with
Rust the set of "input" files is not necessarily known up-front. In theory, you
can have `#[path="/dev/random"] mod foo;`.

To solve (or explicitly refuse to solve) these problems, rust-analyzer uses the
@ -205,7 +204,7 @@ fact that most of the changes are small, and that analysis results are unlikely
to change significantly between invocations.

To do this we use [salsa]: a framework for incremental on-demand computation.
You can skip the rest of the section if you are familiar with `rustc`'s red-green
algorithm (which is used for incremental compilation).

[salsa]: https://github.com/salsa-rs/salsa
@ -220,12 +219,11 @@ of type `V`. Queries come in two basic varieties:
  like.

* **Functions**: pure functions (no side effects) that transform your inputs
  into other values. The results of queries are memoized to avoid recomputing
  them a lot. When you make changes to the inputs, we'll figure out (fairly
  intelligently) when we can re-use these memoized values and when we have to
  recompute them.
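
To make the two varieties concrete, here is a sketch in the spirit of salsa's "hello world" example; the exact macro and trait surface differs between salsa versions, so treat the names below as approximate rather than as rust-analyzer's actual query definitions:

```rust
use std::sync::Arc;

// Sketch only: precise attribute/trait names depend on the salsa version.
#[salsa::query_group(SourceTextStorage)]
trait SourceText: salsa::Database {
    /// An *input* query: its value is set explicitly from the outside.
    #[salsa::input]
    fn file_text(&self, file_id: u32) -> Arc<String>;

    /// A *function* query: a pure function of other queries, memoized by salsa.
    fn line_count(&self, file_id: u32) -> usize;
}

// The derived query is an ordinary pure function over the database.
fn line_count(db: &dyn SourceText, file_id: u32) -> usize {
    db.file_text(file_id).lines().count()
}
```

When the text of one file changes, only the memoized `line_count` results that actually depended on that file need to be recomputed; the rest are reused.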

For further discussion, it's important to understand one bit of "fairly
intelligently". Suppose we have two functions, `f1` and `f2`, and one input,
`z`. We call `f1(X)` which in turn calls `f2(Y)` which inspects `i(Z)`. `i(Z)`
@ -267,13 +265,13 @@ The bulk of the rust-analyzer is transforming input text into a semantic model o
Rust code: a web of entities like modules, structs, functions and traits.

An important fact to realize is that (unlike most other languages like C# or
Java) there is not a one-to-one mapping between the source code and the semantic model. A
single function definition in the source code might result in several semantic
functions: for example, the same source file might get included as a module in
several crates, or a single crate might be present in the compilation DAG
several times, with different sets of `cfg`s enabled. The IDE-specific task of
mapping source code into a semantic model is inherently imprecise for
this reason and gets handled by the [`source_binder`].

[`source_binder`]: https://github.com/rust-analyzer/rust-analyzer/blob/guide-2019-01/crates/hir/src/source_binder.rs
@ -533,18 +531,18 @@ To conclude the overview of the rust-analyzer, let's trace the request for
We start by [receiving a message] from the language client. We decode the
message as a request for completion and [schedule it on the threadpool]. This is
the place where we [catch] canceled errors if, immediately after completion, the
client sends some modification.

In [the handler], we deserialize LSP requests into rust-analyzer specific data
types (by converting a file url into a numeric `FileId`), [ask analysis for
completion] and serialize results into the LSP.

The [completion implementation] is finally the place where we start doing the actual
work. The first step is to collect the `CompletionContext` -- a struct which
describes the cursor position in terms of Rust syntax and semantics. For
example, `function_syntax: Option<&'a ast::FnDef>` stores a reference to
the enclosing function *syntax*, while `function: Option<hir::Function>` is the
`Def` for this function.
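
Schematically, the context is just a bundle of such facts about the cursor position. The sketch below uses stub stand-ins for the real `ast` and `hir` types, and the field set is illustrative, not rust-analyzer's actual struct:

```rust
// Illustrative stubs; the real types live in rust-analyzer's `ast` and `hir` crates.
mod ast {
    pub struct FnDef;
}
mod hir {
    pub struct Function;
}

/// A simplified sketch of the kind of data `CompletionContext` gathers.
struct CompletionContext<'a> {
    /// Offset of the cursor in the file being completed.
    offset: usize,
    /// The enclosing function *syntax*, if the cursor is inside one.
    function_syntax: Option<&'a ast::FnDef>,
    /// The semantic `Def` for that function, if it resolved.
    function: Option<hir::Function>,
}
```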

To construct the context, we first do an ["IntelliJ Trick"]: we insert a dummy


@ -656,7 +656,7 @@ interface TestInfo {
}
```

## Move Item

**Issue:** https://github.com/rust-analyzer/rust-analyzer/issues/6823


@ -170,7 +170,7 @@ More than one mark per test / code branch doesn't add significantly to understan
Do not use `#[should_panic]` tests.
Instead, explicitly check for `None`, `Err`, etc.

**Rationale:** `#[should_panic]` is a tool for library authors to make sure that the API does not fail silently when misused.
`rust-analyzer` is not a library, so we don't need to test for API misuse, and we have to handle any user input without panics.
Panic messages in the logs from the `#[should_panic]` tests are confusing.
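
For example, with a hypothetical `parse` helper, the preferred style asserts on the returned value instead of relying on a panic:

```rust
fn parse(input: &str) -> Option<u32> {
    input.trim().parse().ok()
}

// GOOD: the expectation is explicit, and an unrelated panic can't make the test pass.
#[test]
fn rejects_garbage() {
    assert!(parse("not a number").is_none());
}

// BAD: *any* panic makes this pass, and the panic message ends up in the logs.
#[test]
#[should_panic]
fn rejects_garbage_via_panic() {
    parse("not a number").unwrap();
}
```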
@ -333,7 +333,7 @@ impl Foo {
}
```

Prefer `Default` even if it has to be implemented manually.

**Rationale:** less typing in the common case, uniformity.
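
A hypothetical illustration: even when the empty value needs non-trivial field values, spell it as `Default` rather than a dedicated constructor:

```rust
struct Config {
    max_line_len: usize,
    keywords: Vec<String>,
}

// GOOD: callers can use `Config::default()` and `..Default::default()` as usual.
impl Default for Config {
    fn default() -> Config {
        Config { max_line_len: 100, keywords: Vec::new() }
    }
}
```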
@ -343,7 +343,7 @@ Use `Vec::new` rather than `vec![]`.
Avoid using "dummy" states to implement a `Default`.
If a type doesn't have a sensible default or empty value, don't hide it.
Let the caller explicitly decide what the right initial state is.
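
A hypothetical sketch of the same point: when there is no meaningful empty value, provide a constructor that takes the initial state instead of inventing a dummy `Default`:

```rust
#[derive(Clone, Copy)]
struct FileId(u32);

struct CursorPosition {
    file: FileId,
    offset: usize,
}

// GOOD: no `Default` -- "no file, offset zero" would not be a real state.
impl CursorPosition {
    fn new(file: FileId, offset: usize) -> CursorPosition {
        CursorPosition { file, offset }
    }
}
```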

## Functions Over Objects
@ -526,7 +526,7 @@ if words.len() != 2 {
}
```

**Rationale:** not allocating is almost always faster.
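
As a sketch of the pattern behind this rationale (names here are illustrative), walk the iterator directly instead of collecting into a `Vec` just to inspect its length:

```rust
// GOOD: no intermediate Vec; the iterator is consumed lazily.
fn parse_pair(text: &str) -> Option<(&str, &str)> {
    let mut words = text.split_whitespace();
    match (words.next(), words.next(), words.next()) {
        (Some(first), Some(second), None) => Some((first, second)),
        _ => None,
    }
}

// BAD: `collect` allocates a Vec only so that we can check `len() != 2`.
fn parse_pair_allocating(text: &str) -> Option<(&str, &str)> {
    let words: Vec<&str> = text.split_whitespace().collect();
    if words.len() != 2 {
        return None;
    }
    Some((words[0], words[1]))
}
```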

## Push Allocations to the Call Site
@ -998,9 +998,9 @@ match output.status.code() {
};
```

**Rationale:** Like blocks, single-use variables are a cognitively cheap abstraction, as they have access to all the context.
Extra variables help during debugging: they make it easy to print/view important intermediate results.
Giving a name to a condition inside an `if` expression often improves clarity and leads to nicely formatted code.
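
A small hypothetical example of naming the condition:

```rust
// GOOD: the name states *why* we branch.
fn wrap_in_parens(text: &str) -> String {
    let needs_parens = text.contains(' ') && !text.starts_with('(');
    if needs_parens {
        format!("({text})")
    } else {
        text.to_string()
    }
}

// BAD: the reader has to reverse-engineer the intent from the expression.
fn wrap_in_parens_inline(text: &str) -> String {
    if text.contains(' ') && !text.starts_with('(') {
        format!("({text})")
    } else {
        text.to_string()
    }
}
```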

## Token names


@ -6,7 +6,7 @@ This guide describes the current state of syntax trees and parsing in rust-analy
## Source Code

The things described are implemented in three places:

* [rowan](https://github.com/rust-analyzer/rowan/tree/v0.9.0) -- a generic library for rowan syntax trees.
* [ra_syntax](https://github.com/rust-analyzer/rust-analyzer/tree/cf5bdf464cad7ceb9a67e07985a3f4d3799ec0b6/crates/ra_syntax) crate inside rust-analyzer which wraps `rowan` into a rust-analyzer-specific API.
@ -15,9 +15,9 @@ The things described are implemented in two places
## Design Goals

* Syntax trees are lossless, or full fidelity. All comments and whitespace get preserved.
* Syntax trees are semantic-less. They describe *strictly* the structure of a sequence of characters; they don't have hygiene, name resolution or type information attached.
* Syntax trees are simple value types. It is possible to create trees for a syntax without any external context.
* Syntax trees have an intuitive traversal API (parent, children, siblings, etc.).
* Parsing is lossless (even if the input is invalid, the tree produced by the parser represents it exactly).
* Parsing is resilient (even if the input is invalid, the parser tries to see as many syntax tree fragments in the input as it can).
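
To illustrate the first goal with a deliberately over-simplified, hypothetical sketch (not rowan's actual representation): because every token, including whitespace and comments, is kept in the tree, concatenating the leaf texts reproduces the input exactly:

```rust
// Hypothetical toy tree, for illustration only.
enum Element {
    Node { kind: &'static str, children: Vec<Element> },
    Token { kind: &'static str, text: String },
}

fn text_of(element: &Element) -> String {
    match element {
        Element::Token { text, .. } => text.clone(),
        Element::Node { children, .. } => children.iter().map(text_of).collect(),
    }
}

fn main() {
    let source = "fn  foo() {} // trailing comment";
    // A real parser builds this; here it is written out by hand.
    let tree = Element::Node {
        kind: "FN",
        children: vec![
            Element::Token { kind: "fn_kw", text: "fn".into() },
            Element::Token { kind: "whitespace", text: "  ".into() },
            Element::Token { kind: "ident", text: "foo".into() },
            Element::Token { kind: "l_paren", text: "(".into() },
            Element::Token { kind: "r_paren", text: ")".into() },
            Element::Token { kind: "whitespace", text: " ".into() },
            Element::Token { kind: "l_curly", text: "{".into() },
            Element::Token { kind: "r_curly", text: "}".into() },
            Element::Token { kind: "whitespace", text: " ".into() },
            Element::Token { kind: "comment", text: "// trailing comment".into() },
        ],
    };
    // Lossless: the tree's text is byte-for-byte the original source.
    assert_eq!(text_of(&tree), source);
}
```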