Auto merge of #13771 - noritada:feature/release-notes-on-github-releases, r=lnicola

Add xtask for publishing release notes in Markdown on GitHub Releases from a changelog in AsciiDoc

This PR provides `xtask publish-release-notes`, which converts a changelog written in AsciiDoc to Markdown and updates the description (release notes) of the corresponding entry on GitHub Releases.
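
To give a feel for the conversion, here is a minimal sketch written in the style of the unit test included in `xtask/src/publish/notes.rs` (shown further down). The test name and the AsciiDoc snippet are made up for illustration, and the assertion assumes the test sits in that file's `tests` module so that `convert_asciidoc_to_markdown` is in scope:

```rust
// Hypothetical extra test case; the input below is illustrative, not a real changelog entry.
#[test]
fn small_inline_conversion() {
    let adoc = "= Changelog #0
:page-layout: post

Hello!

== New Features

* pr:1111[] shortcut kbd:[Ctrl+T]
";
    let expected = "# Changelog #0

Hello!

## New Features

- [`#1111`](https://github.com/rust-analyzer/rust-analyzer/pull/1111) shortcut <kbd>Ctrl</kbd>+<kbd>T</kbd>
";
    let actual = convert_asciidoc_to_markdown(std::io::Cursor::new(adoc)).unwrap();
    assert_eq!(actual, expected);
}
```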

This AsciiDoc parser is not capable of processing every AsciiDoc document, but I have surveyed a set of existing changelog entries and confirmed that the notations they use, listed below, are converted properly. In the future, I would like to improve the parser to accept arbitrary AsciiDoc. Alternatively, the parser could be moved out of the project.

Your feedback would be appreciated!

Closes #13191

### Supported AsciiDoc syntax

Notations that occur across many changelog entries (a few of the custom macro renderings are sketched in the test-style snippet after these lists):
- [x] document header
- [x] section header
- [x] `*`-prefixed basic single-level unordered list item
- [x] list continuation using `+`
- [x] block image macro `image::...[]` with empty alt
- [x] block image macro `image::...[]` with non-empty alt
- [x] block video macro `video::...[]` with `options=loop`
- [x] inline hard line break `+`
- [x] inline custom macro `commit:...[]`
- [x] inline custom macro `release:...[]`
- [x] inline custom macro `pr:...[]`
- [x] inline unconstrained bold text `**...**`
- [x] inline constrained monospace ``` `...`  ```

[thisweek/_posts/2019-07-24-changelog-0.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/tree/src/thisweek/_posts#:~:text=2019%2D07%2D24%2Dchangelog%2D0.adoc)
- [x] paragraphs
- [x] mixture of `*` and `-` for unordered list item prefix
- [x] inline external link `https://...[]`

[thisweek/_posts/2020-01-13-changelog-7.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/tree/src/thisweek/_posts#:~:text=2020%2D01%2D13%2Dchangelog%2D7.adoc)
- [x] list item with multiline principal text with indent
- [x] inline image macro `image:...[]`

[thisweek/_posts/2020-03-02-changelog-14.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/blob/src/thisweek/_posts/2020-03-02-changelog-14.adoc)
- [x] empty lines between list items
- [x] nested unordered list item with `**`
- [x] inline macro `kbd:[...]`

[thisweek/_posts/2020-03-16-changelog-16.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/blob/src/thisweek/_posts/2020-03-16-changelog-16.adoc)
- [x] `[source]`-prefixed listing

[thisweek/_posts/2020-04-06-changelog-19.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/blob/src/thisweek/_posts/2020-04-06-changelog-19.adoc)
- [x] list item with multiline principal text without indent
- [x] `[source,lang]`-prefixed listing
- [x] `.`-prefixed ordered list item
- [x] list item immediately after list continuation paragraph without an empty line in between

[thisweek/_posts/2020-04-20-changelog-21.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/blob/src/thisweek/_posts/2020-04-20-changelog-21.adoc)
- [x] title line for block image

[thisweek/_posts/2020-12-21-changelog-56.adoc](https://github.com/rust-analyzer/rust-analyzer.github.io/blob/src/thisweek/_posts/2020-12-21-changelog-56.adoc)
- [x] block video `video::...[]` with `options="autoplay,loop"`
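
For a few of the custom inline macros listed above, the sketch below shows the Markdown they render to, written in the style of the `test_inline_macro_processing!` cases in `xtask/src/publish/notes.rs`; the PR number, commit hash, and release date are made-up examples:

```rust
// Hypothetical test; values are illustrative.
#[test]
fn custom_macro_rendering() {
    assert_eq!(
        process_inline_macros("pr:1111[]").unwrap(),
        "[`#1111`](https://github.com/rust-analyzer/rust-analyzer/pull/1111)"
    );
    assert_eq!(
        process_inline_macros("commit:0123456789abcdef0123456789abcdef01234567[]").unwrap(),
        "[`0123456`](https://github.com/rust-analyzer/rust-analyzer/commit/0123456789abcdef0123456789abcdef01234567)"
    );
    assert_eq!(
        process_inline_macros("release:2022-01-01[]").unwrap(),
        "[`2022-01-01`](https://github.com/rust-analyzer/rust-analyzer/releases/2022-01-01)"
    );
}
```
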
Merged by bors on 2022-12-21 18:37:10 +00:00 as commit 3c00b19b0a (7 changed files with 932 additions and 3 deletions).


@@ -200,7 +200,7 @@ Look for `fn benchmark_xxx` tests for a quick way to reproduce performance problems

## Release Process

-Release process is handled by `release`, `dist` and `promote` xtasks, `release` being the main one.
+Release process is handled by `release`, `dist`, `publish-release-notes` and `promote` xtasks, `release` being the main one.

`release` assumes that you have checkouts of `rust-analyzer`, `rust-analyzer.github.io`, and `rust-lang/rust` in the same directory:

@@ -231,8 +231,9 @@
  * create a new changelog in `rust-analyzer.github.io`
3. While the release is in progress, fill in the changelog
4. Commit & push the changelog
-5. Tweet
-6. Inside `rust-analyzer`, run `cargo xtask promote` -- this will create a PR to rust-lang/rust updating rust-analyzer's subtree.
+5. Run `cargo xtask publish-release-notes <CHANGELOG>` -- this will convert the changelog entry in AsciiDoc to Markdown and update the body of GitHub Releases entry.
+6. Tweet
+7. Inside `rust-analyzer`, run `cargo xtask promote` -- this will create a PR to rust-lang/rust updating rust-analyzer's subtree.
   Self-approve the PR.

If the GitHub Actions release fails because of a transient problem like a timeout, you can re-run the job from the Actions console.


@@ -34,6 +34,13 @@ xflags::xflags! {
    cmd dist {
        optional --client-patch-version version: String
    }
+   /// Read a changelog AsciiDoc file and update the GitHub Releases entry in Markdown.
+   cmd publish-release-notes {
+       /// Only run conversion and show the result.
+       optional --dry-run
+       /// Target changelog file.
+       required changelog: String
+   }
    cmd metrics {
        optional --dry-run
    }
@@ -59,6 +66,7 @@ pub enum XtaskCmd {
    Release(Release),
    Promote(Promote),
    Dist(Dist),
+   PublishReleaseNotes(PublishReleaseNotes),
    Metrics(Metrics),
    Bb(Bb),
}
@@ -90,6 +98,13 @@ pub struct Dist {
    pub client_patch_version: Option<String>,
}

+#[derive(Debug)]
+pub struct PublishReleaseNotes {
+    pub changelog: String,
+    pub dry_run: bool,
+}
+
#[derive(Debug)]
pub struct Metrics {
    pub dry_run: bool,


@@ -15,6 +15,7 @@ mod flags;
mod install;
mod release;
mod dist;
+mod publish;
mod metrics;

use anyhow::bail;
@@ -36,6 +37,7 @@ fn main() -> anyhow::Result<()> {
        flags::XtaskCmd::Release(cmd) => cmd.run(sh),
        flags::XtaskCmd::Promote(cmd) => cmd.run(sh),
        flags::XtaskCmd::Dist(cmd) => cmd.run(sh),
+       flags::XtaskCmd::PublishReleaseNotes(cmd) => cmd.run(sh),
        flags::XtaskCmd::Metrics(cmd) => cmd.run(sh),
        flags::XtaskCmd::Bb(cmd) => {
            {

xtask/src/publish.rs (new file, 109 lines)

@@ -0,0 +1,109 @@
mod notes;
use crate::flags;
use anyhow::{anyhow, bail, Result};
use std::env;
use xshell::{cmd, Shell};
impl flags::PublishReleaseNotes {
pub(crate) fn run(self, sh: &Shell) -> Result<()> {
let asciidoc = sh.read_file(&self.changelog)?;
let mut markdown = notes::convert_asciidoc_to_markdown(std::io::Cursor::new(&asciidoc))?;
let file_name = check_file_name(self.changelog)?;
let tag_name = &file_name[0..10];
let original_changelog_url = create_original_changelog_url(&file_name);
let additional_paragraph =
format!("\nSee also [original changelog]({original_changelog_url}).");
markdown.push_str(&additional_paragraph);
if self.dry_run {
println!("{markdown}");
} else {
update_release(sh, tag_name, &markdown)?;
}
Ok(())
}
}
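// Checks that the changelog file name starts with a `YYYY-MM-DD` date prefix
// (e.g. `2019-07-24-changelog-0.adoc`) and returns it as a `String`.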
fn check_file_name<P: AsRef<std::path::Path>>(path: P) -> Result<String> {
let file_name = path
.as_ref()
.file_name()
.ok_or_else(|| anyhow!("file name is not specified as `changelog`"))?
.to_string_lossy();
let mut chars = file_name.chars();
if file_name.len() >= 10
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap() == '-'
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap() == '-'
&& chars.next().unwrap().is_ascii_digit()
&& chars.next().unwrap().is_ascii_digit()
{
Ok(file_name.to_string())
} else {
bail!("unexpected file name format; no date information prefixed")
}
}
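// Maps a dated changelog file name to its page on rust-analyzer.github.io,
// e.g. `2019-07-24-changelog-0.adoc` -> `.../thisweek/2019/07/24/changelog-0.html`.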
fn create_original_changelog_url(file_name: &str) -> String {
let year = &file_name[0..4];
let month = &file_name[5..7];
let day = &file_name[8..10];
let mut stem = &file_name[11..];
if let Some(stripped) = stem.strip_suffix(".adoc") {
stem = stripped;
}
format!("https://rust-analyzer.github.io/thisweek/{year}/{month}/{day}/{stem}.html")
}
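// PATCHes the GitHub Releases entry for `tag_name` with the converted notes,
// using `curl` and `jq` and authenticating via the `GITHUB_TOKEN` environment variable.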
fn update_release(sh: &Shell, tag_name: &str, release_notes: &str) -> Result<()> {
let token = match env::var("GITHUB_TOKEN") {
Ok(token) => token,
Err(_) => bail!("Please obtain a personal access token from https://github.com/settings/tokens and set the `GITHUB_TOKEN` environment variable."),
};
let accept = "Accept: application/vnd.github+json";
let authorization = format!("Authorization: Bearer {token}");
let api_version = "X-GitHub-Api-Version: 2022-11-28";
let release_url = "https://api.github.com/repos/rust-lang/rust-analyzer/releases";
let release_json = cmd!(
sh,
"curl -sf -H {accept} -H {authorization} -H {api_version} {release_url}/tags/{tag_name}"
)
.read()?;
let release_id = cmd!(sh, "jq .id").stdin(release_json).read()?;
let mut patch = String::new();
write_json::object(&mut patch)
.string("tag_name", tag_name)
.string("target_commitish", "master")
.string("name", tag_name)
.string("body", release_notes)
.bool("draft", false)
.bool("prerelease", false);
let _ = cmd!(
sh,
"curl -sf -X PATCH -H {accept} -H {authorization} -H {api_version} {release_url}/{release_id} -d {patch}"
)
.read()?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn original_changelog_url_creation() {
let input = "2019-07-24-changelog-0.adoc";
let actual = create_original_changelog_url(input);
let expected = "https://rust-analyzer.github.io/thisweek/2019/07/24/changelog-0.html";
assert_eq!(actual, expected);
}
}

xtask/src/publish/notes.rs (new file, 631 lines)

@@ -0,0 +1,631 @@
use anyhow::{anyhow, bail};
use std::{
borrow::Cow,
io::{BufRead, Lines},
iter::Peekable,
};
const LISTING_DELIMITER: &str = "----";
const IMAGE_BLOCK_PREFIX: &str = "image::";
const VIDEO_BLOCK_PREFIX: &str = "video::";
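// Streaming converter: pulls AsciiDoc lines from `iter` and appends the
// corresponding Markdown to `output`.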
struct Converter<'a, 'b, R: BufRead> {
iter: &'a mut Peekable<Lines<R>>,
output: &'b mut String,
}
impl<'a, 'b, R: BufRead> Converter<'a, 'b, R> {
fn new(iter: &'a mut Peekable<Lines<R>>, output: &'b mut String) -> Self {
Self { iter, output }
}
fn process(&mut self) -> anyhow::Result<()> {
self.process_document_header()?;
self.skip_blank_lines()?;
self.output.push('\n');
loop {
let line = self.iter.peek().unwrap().as_deref().map_err(|e| anyhow!("{e}"))?;
if get_title(line).is_some() {
let line = self.iter.next().unwrap().unwrap();
let (level, title) = get_title(&line).unwrap();
self.write_title(level, title);
} else if get_list_item(line).is_some() {
self.process_list()?;
} else if line.starts_with('[') {
self.process_source_code_block(0)?;
} else if line.starts_with(LISTING_DELIMITER) {
self.process_listing_block(None, 0)?;
} else if line.starts_with('.') {
self.process_block_with_title(0)?;
} else if line.starts_with(IMAGE_BLOCK_PREFIX) {
self.process_image_block(None, 0)?;
} else if line.starts_with(VIDEO_BLOCK_PREFIX) {
self.process_video_block(None, 0)?;
} else {
self.process_paragraph(0, |line| line.is_empty())?;
}
self.skip_blank_lines()?;
if self.iter.peek().is_none() {
break;
}
self.output.push('\n');
}
Ok(())
}
fn process_document_header(&mut self) -> anyhow::Result<()> {
self.process_document_title()?;
while let Some(line) = self.iter.next() {
let line = line?;
if line.is_empty() {
break;
}
if !line.starts_with(':') {
self.write_line(&line, 0)
}
}
Ok(())
}
fn process_document_title(&mut self) -> anyhow::Result<()> {
if let Some(Ok(line)) = self.iter.next() {
if let Some((level, title)) = get_title(&line) {
let title = process_inline_macros(title)?;
if level == 1 {
self.write_title(level, &title);
return Ok(());
}
}
}
bail!("document title not found")
}
fn process_list(&mut self) -> anyhow::Result<()> {
let mut nesting = ListNesting::new();
while let Some(line) = self.iter.peek() {
let line = line.as_deref().map_err(|e| anyhow!("{e}"))?;
if get_list_item(line).is_some() {
let line = self.iter.next().unwrap()?;
let line = process_inline_macros(&line)?;
let (marker, item) = get_list_item(&line).unwrap();
nesting.set_current(marker);
self.write_list_item(item, &nesting);
self.process_paragraph(nesting.indent(), |line| {
line.is_empty() || get_list_item(line).is_some() || line == "+"
})?;
} else if line == "+" {
let _ = self.iter.next().unwrap()?;
let line = self
.iter
.peek()
.ok_or_else(|| anyhow!("list continuation unexpectedly terminated"))?;
let line = line.as_deref().map_err(|e| anyhow!("{e}"))?;
let indent = nesting.indent();
if line.starts_with('[') {
self.write_line("", 0);
self.process_source_code_block(indent)?;
} else if line.starts_with(LISTING_DELIMITER) {
self.write_line("", 0);
self.process_listing_block(None, indent)?;
} else if line.starts_with('.') {
self.write_line("", 0);
self.process_block_with_title(indent)?;
} else if line.starts_with(IMAGE_BLOCK_PREFIX) {
self.write_line("", 0);
self.process_image_block(None, indent)?;
} else if line.starts_with(VIDEO_BLOCK_PREFIX) {
self.write_line("", 0);
self.process_video_block(None, indent)?;
} else {
self.write_line("", 0);
let current = nesting.current().unwrap();
self.process_paragraph(indent, |line| {
line.is_empty()
|| get_list_item(line).filter(|(m, _)| m == current).is_some()
|| line == "+"
})?;
}
} else {
break;
}
self.skip_blank_lines()?;
}
Ok(())
}
fn process_source_code_block(&mut self, level: usize) -> anyhow::Result<()> {
if let Some(Ok(line)) = self.iter.next() {
if let Some(styles) = line.strip_prefix("[source").and_then(|s| s.strip_suffix(']')) {
let mut styles = styles.split(',');
if !styles.next().unwrap().is_empty() {
bail!("not a source code block");
}
let language = styles.next();
return self.process_listing_block(language, level);
}
}
bail!("not a source code block")
}
fn process_listing_block(&mut self, style: Option<&str>, level: usize) -> anyhow::Result<()> {
if let Some(Ok(line)) = self.iter.next() {
if line == LISTING_DELIMITER {
self.write_indent(level);
self.output.push_str("```");
if let Some(style) = style {
self.output.push_str(style);
}
self.output.push('\n');
while let Some(line) = self.iter.next() {
let line = line?;
if line == LISTING_DELIMITER {
self.write_line("```", level);
return Ok(());
} else {
self.write_line(&line, level);
}
}
bail!("listing block is not terminated")
}
}
bail!("not a listing block")
}
fn process_block_with_title(&mut self, level: usize) -> anyhow::Result<()> {
if let Some(Ok(line)) = self.iter.next() {
let title =
line.strip_prefix('.').ok_or_else(|| anyhow!("extraction of the title failed"))?;
let line = self
.iter
.peek()
.ok_or_else(|| anyhow!("target block for the title is not found"))?;
let line = line.as_deref().map_err(|e| anyhow!("{e}"))?;
if line.starts_with(IMAGE_BLOCK_PREFIX) {
return self.process_image_block(Some(title), level);
} else if line.starts_with(VIDEO_BLOCK_PREFIX) {
return self.process_video_block(Some(title), level);
} else {
bail!("title for that block type is not supported");
}
}
bail!("not a title")
}
fn process_image_block(&mut self, caption: Option<&str>, level: usize) -> anyhow::Result<()> {
if let Some(Ok(line)) = self.iter.next() {
if let Some((url, attrs)) = parse_media_block(&line, IMAGE_BLOCK_PREFIX) {
let alt = if let Some(stripped) =
attrs.strip_prefix('"').and_then(|s| s.strip_suffix('"'))
{
stripped
} else {
attrs
};
if let Some(caption) = caption {
self.write_caption_line(caption, level);
}
self.write_indent(level);
self.output.push_str("![");
self.output.push_str(alt);
self.output.push_str("](");
self.output.push_str(url);
self.output.push_str(")\n");
return Ok(());
}
}
bail!("not a image block")
}
fn process_video_block(&mut self, caption: Option<&str>, level: usize) -> anyhow::Result<()> {
if let Some(Ok(line)) = self.iter.next() {
if let Some((url, attrs)) = parse_media_block(&line, VIDEO_BLOCK_PREFIX) {
let html_attrs = match attrs {
"options=loop" => "controls loop",
r#"options="autoplay,loop""# => "autoplay controls loop",
_ => bail!("unsupported video syntax"),
};
if let Some(caption) = caption {
self.write_caption_line(caption, level);
}
self.write_indent(level);
self.output.push_str(r#"<video src=""#);
self.output.push_str(url);
self.output.push_str(r#"" "#);
self.output.push_str(html_attrs);
self.output.push_str(">Your browser does not support the video tag.</video>\n");
return Ok(());
}
}
bail!("not a video block")
}
fn process_paragraph<P>(&mut self, level: usize, predicate: P) -> anyhow::Result<()>
where
P: Fn(&str) -> bool,
{
while let Some(line) = self.iter.peek() {
let line = line.as_deref().map_err(|e| anyhow!("{e}"))?;
if predicate(line) {
break;
}
self.write_indent(level);
let line = self.iter.next().unwrap()?;
let line = line.trim_start();
let line = process_inline_macros(line)?;
if let Some(stripped) = line.strip_suffix('+') {
self.output.push_str(stripped);
self.output.push('\\');
} else {
self.output.push_str(&line);
}
self.output.push('\n');
}
Ok(())
}
fn skip_blank_lines(&mut self) -> anyhow::Result<()> {
while let Some(line) = self.iter.peek() {
if !line.as_deref().unwrap().is_empty() {
break;
}
self.iter.next().unwrap()?;
}
Ok(())
}
fn write_title(&mut self, indent: usize, title: &str) {
for _ in 0..indent {
self.output.push('#');
}
self.output.push(' ');
self.output.push_str(title);
self.output.push('\n');
}
fn write_list_item(&mut self, item: &str, nesting: &ListNesting) {
let (marker, indent) = nesting.marker();
self.write_indent(indent);
self.output.push_str(marker);
self.output.push_str(item);
self.output.push('\n');
}
fn write_caption_line(&mut self, caption: &str, indent: usize) {
self.write_indent(indent);
self.output.push('_');
self.output.push_str(caption);
self.output.push_str("_\\\n");
}
fn write_indent(&mut self, indent: usize) {
for _ in 0..indent {
self.output.push(' ');
}
}
fn write_line(&mut self, line: &str, indent: usize) {
self.write_indent(indent);
self.output.push_str(line);
self.output.push('\n');
}
}
pub(crate) fn convert_asciidoc_to_markdown<R>(input: R) -> anyhow::Result<String>
where
R: BufRead,
{
let mut output = String::new();
let mut iter = input.lines().peekable();
let mut converter = Converter::new(&mut iter, &mut output);
converter.process()?;
Ok(output)
}
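// Returns the heading level (number of `=` markers) and the title text for
// AsciiDoc section-title lines such as `== New Features`.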
fn get_title(line: &str) -> Option<(usize, &str)> {
strip_prefix_symbol(line, '=')
}
fn get_list_item(line: &str) -> Option<(ListMarker, &str)> {
const HYPHEN_MARKER: &str = "- ";
if let Some(text) = line.strip_prefix(HYPHEN_MARKER) {
Some((ListMarker::Hyphen, text))
} else if let Some((count, text)) = strip_prefix_symbol(line, '*') {
Some((ListMarker::Asterisk(count), text))
} else if let Some((count, text)) = strip_prefix_symbol(line, '.') {
Some((ListMarker::Dot(count), text))
} else {
None
}
}
fn strip_prefix_symbol(line: &str, symbol: char) -> Option<(usize, &str)> {
let mut iter = line.chars();
if iter.next()? != symbol {
return None;
}
let mut count = 1;
loop {
match iter.next() {
Some(ch) if ch == symbol => {
count += 1;
}
Some(' ') => {
break;
}
_ => return None,
}
}
Some((count, iter.as_str()))
}
fn parse_media_block<'a>(line: &'a str, prefix: &str) -> Option<(&'a str, &'a str)> {
if let Some(line) = line.strip_prefix(prefix) {
if let Some((url, rest)) = line.split_once('[') {
if let Some(attrs) = rest.strip_suffix(']') {
return Some((url, attrs));
}
}
}
None
}
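// Stack of the currently open list markers; its depth determines how far
// nested Markdown list items get indented.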
#[derive(Debug)]
struct ListNesting(Vec<ListMarker>);
impl ListNesting {
fn new() -> Self {
Self(Vec::<ListMarker>::with_capacity(6))
}
fn current(&mut self) -> Option<&ListMarker> {
self.0.last()
}
fn set_current(&mut self, marker: ListMarker) {
let Self(markers) = self;
if let Some(index) = markers.iter().position(|m| *m == marker) {
markers.truncate(index + 1);
} else {
markers.push(marker);
}
}
fn indent(&self) -> usize {
self.0.iter().map(|m| m.in_markdown().len()).sum()
}
fn marker(&self) -> (&str, usize) {
let Self(markers) = self;
let indent = markers.iter().take(markers.len() - 1).map(|m| m.in_markdown().len()).sum();
let marker = match markers.last() {
None => "",
Some(marker) => marker.in_markdown(),
};
(marker, indent)
}
}
#[derive(Debug, PartialEq, Eq)]
enum ListMarker {
Asterisk(usize),
Hyphen,
Dot(usize),
}
impl ListMarker {
fn in_markdown(&self) -> &str {
match self {
ListMarker::Asterisk(_) => "- ",
ListMarker::Hyphen => "- ",
ListMarker::Dot(_) => "1. ",
}
}
}
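// Rewrites inline macros (`https:`, `image:`, `kbd:`, `pr:`, `commit:`, `release:`)
// into Markdown/HTML, returning the line unchanged if it contains none.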
fn process_inline_macros(line: &str) -> anyhow::Result<Cow<'_, str>> {
let mut chars = line.char_indices();
loop {
let (start, end, a_macro) = match get_next_line_component(&mut chars) {
Component::None => break,
Component::Text => continue,
Component::Macro(s, e, m) => (s, e, m),
};
let mut src = line.chars();
let mut processed = String::new();
for _ in 0..start {
processed.push(src.next().unwrap());
}
processed.push_str(a_macro.process()?.as_str());
for _ in start..end {
let _ = src.next().unwrap();
}
let mut pos = end;
loop {
let (start, end, a_macro) = match get_next_line_component(&mut chars) {
Component::None => break,
Component::Text => continue,
Component::Macro(s, e, m) => (s, e, m),
};
for _ in pos..start {
processed.push(src.next().unwrap());
}
processed.push_str(a_macro.process()?.as_str());
for _ in start..end {
let _ = src.next().unwrap();
}
pos = end;
}
for ch in src {
processed.push(ch);
}
return Ok(Cow::Owned(processed));
}
Ok(Cow::Borrowed(line))
}
fn get_next_line_component(chars: &mut std::str::CharIndices<'_>) -> Component {
let (start, mut macro_name) = match chars.next() {
None => return Component::None,
Some((_, ch)) if ch == ' ' || !ch.is_ascii() => return Component::Text,
Some((pos, ch)) => (pos, String::from(ch)),
};
loop {
match chars.next() {
None => return Component::None,
Some((_, ch)) if ch == ' ' || !ch.is_ascii() => return Component::Text,
Some((_, ':')) => break,
Some((_, ch)) => macro_name.push(ch),
}
}
let mut macro_target = String::new();
loop {
match chars.next() {
None => return Component::None,
Some((_, ' ')) => return Component::Text,
Some((_, '[')) => break,
Some((_, ch)) => macro_target.push(ch),
}
}
let mut attr_value = String::new();
let end = loop {
match chars.next() {
None => return Component::None,
Some((pos, ']')) => break pos + 1,
Some((_, ch)) => attr_value.push(ch),
}
};
Component::Macro(start, end, Macro::new(macro_name, macro_target, attr_value))
}
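// One scanned chunk of a line: end of input, plain text, or an inline macro
// together with its start and end positions within the line.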
enum Component {
None,
Text,
Macro(usize, usize, Macro),
}
struct Macro {
name: String,
target: String,
attrs: String,
}
impl Macro {
fn new(name: String, target: String, attrs: String) -> Self {
Self { name, target, attrs }
}
fn process(&self) -> anyhow::Result<String> {
let name = &self.name;
let text = match name.as_str() {
"https" => {
let url = &self.target;
let anchor_text = &self.attrs;
format!("[{anchor_text}](https:{url})")
}
"image" => {
let url = &self.target;
let alt = &self.attrs;
format!("![{alt}]({url})")
}
"kbd" => {
let keys = self.attrs.split('+').map(|k| Cow::Owned(format!("<kbd>{k}</kbd>")));
keys.collect::<Vec<_>>().join("+")
}
"pr" => {
let pr = &self.target;
let url = format!("https://github.com/rust-analyzer/rust-analyzer/pull/{pr}");
format!("[`#{pr}`]({url})")
}
"commit" => {
let hash = &self.target;
let short = &hash[0..7];
let url = format!("https://github.com/rust-analyzer/rust-analyzer/commit/{hash}");
format!("[`{short}`]({url})")
}
"release" => {
let date = &self.target;
let url = format!("https://github.com/rust-analyzer/rust-analyzer/releases/{date}");
format!("[`{date}`]({url})")
}
_ => bail!("macro not supported: {name}"),
};
Ok(text)
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs::read_to_string;
#[test]
fn test_asciidoc_to_markdown_conversion() {
let input = read_to_string("test_data/input.adoc").unwrap();
let expected = read_to_string("test_data/expected.md").unwrap();
let actual = convert_asciidoc_to_markdown(std::io::Cursor::new(&input)).unwrap();
assert_eq!(actual, expected);
}
macro_rules! test_inline_macro_processing {
($((
$name:ident,
$input:expr,
$expected:expr
),)*) => ($(
#[test]
fn $name() {
let input = $input;
let actual = process_inline_macros(&input).unwrap();
let expected = $expected;
assert_eq!(actual, expected)
}
)*);
}
test_inline_macro_processing! {
(inline_macro_processing_for_empty_line, "", ""),
(inline_macro_processing_for_line_with_no_macro, "foo bar", "foo bar"),
(
inline_macro_processing_for_macro_in_line_start,
"kbd::[Ctrl+T] foo",
"<kbd>Ctrl</kbd>+<kbd>T</kbd> foo"
),
(
inline_macro_processing_for_macro_in_line_end,
"foo kbd::[Ctrl+T]",
"foo <kbd>Ctrl</kbd>+<kbd>T</kbd>"
),
(
inline_macro_processing_for_macro_in_the_middle_of_line,
"foo kbd::[Ctrl+T] foo",
"foo <kbd>Ctrl</kbd>+<kbd>T</kbd> foo"
),
(
inline_macro_processing_for_several_macros,
"foo kbd::[Ctrl+T] foo kbd::[Enter] foo",
"foo <kbd>Ctrl</kbd>+<kbd>T</kbd> foo <kbd>Enter</kbd> foo"
),
(
inline_macro_processing_for_several_macros_without_text_in_between,
"foo kbd::[Ctrl+T]kbd::[Enter] foo",
"foo <kbd>Ctrl</kbd>+<kbd>T</kbd><kbd>Enter</kbd> foo"
),
}
}

xtask/test_data/expected.md (new file, 81 lines)

@@ -0,0 +1,81 @@
# Changelog #256
Hello!
Commit: [`0123456`](https://github.com/rust-analyzer/rust-analyzer/commit/0123456789abcdef0123456789abcdef01234567) \
Release: [`2022-01-01`](https://github.com/rust-analyzer/rust-analyzer/releases/2022-01-01)
## New Features
- **BREAKING** [`#1111`](https://github.com/rust-analyzer/rust-analyzer/pull/1111) shortcut <kbd>ctrl</kbd>+<kbd>r</kbd>
- hyphen-prefixed list item
- nested list item
- `foo` -> `foofoo`
- `bar` -> `barbar`
- listing in the secondary level
1. install
1. add to config
```json
{"foo":"bar"}
```
- list item with continuation
![](https://example.com/animation.gif)
![alt text](https://example.com/animation.gif)
<video src="https://example.com/movie.mp4" controls loop>Your browser does not support the video tag.</video>
<video src="https://example.com/movie.mp4" autoplay controls loop>Your browser does not support the video tag.</video>
_Image_\
![](https://example.com/animation.gif)
_Video_\
<video src="https://example.com/movie.mp4" controls loop>Your browser does not support the video tag.</video>
```bash
rustup update nightly
```
```
This is a plain listing.
```
- single line item followed by empty lines
- multiline list
item followed by empty lines
- multiline list
item with indent
- multiline list
item not followed by empty lines
- multiline list
item followed by different marker
- foo
- bar
- multiline list
item followed by list continuation
paragraph
paragraph
## Another Section
- foo bar baz
- list item with an inline image
![](https://example.com/animation.gif)
The highlight of the month is probably [`#1111`](https://github.com/rust-analyzer/rust-analyzer/pull/1111).
See [online manual](https://example.com/manual) for more information.
```bash
rustup update nightly
```
```
rustup update nightly
```
```
This is a plain listing.
```

xtask/test_data/input.adoc (new file, 90 lines)

@@ -0,0 +1,90 @@
= Changelog #256
:sectanchors:
:page-layout: post
Hello!
Commit: commit:0123456789abcdef0123456789abcdef01234567[] +
Release: release:2022-01-01[]
== New Features
* **BREAKING** pr:1111[] shortcut kbd:[ctrl+r]
- hyphen-prefixed list item
* nested list item
** `foo` -> `foofoo`
** `bar` -> `barbar`
* listing in the secondary level
. install
. add to config
+
[source,json]
----
{"foo":"bar"}
----
* list item with continuation
+
image::https://example.com/animation.gif[]
+
image::https://example.com/animation.gif["alt text"]
+
video::https://example.com/movie.mp4[options=loop]
+
video::https://example.com/movie.mp4[options="autoplay,loop"]
+
.Image
image::https://example.com/animation.gif[]
+
.Video
video::https://example.com/movie.mp4[options=loop]
+
[source,bash]
----
rustup update nightly
----
+
----
This is a plain listing.
----
* single line item followed by empty lines
* multiline list
item followed by empty lines
* multiline list
item with indent
* multiline list
item not followed by empty lines
* multiline list
item followed by different marker
** foo
** bar
* multiline list
item followed by list continuation
+
paragraph
paragraph
== Another Section
* foo bar baz
* list item with an inline image
image:https://example.com/animation.gif[]
The highlight of the month is probably pr:1111[].
See https://example.com/manual[online manual] for more information.
[source,bash]
----
rustup update nightly
----
[source]
----
rustup update nightly
----
----
This is a plain listing.
----