Commit Graph

98 Commits

Author SHA1 Message Date
bors
047f9c4bc4 Auto merge of #94539 - tmiasko:string-attributes, r=nikic
Pass LLVM string attributes as string slices
2022-03-04 10:38:11 +00:00
bors
62ff2bcf94 Auto merge of #94159 - erikdesjardins:align-load, r=nikic
Add !align metadata on loads of &/&mut/Box

Note that this refers to the alignment of what the loaded value points
to, _not_ the alignment of the loaded value itself.

r? `@ghost` (blocked on #94158)
2022-03-04 08:14:31 +00:00
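A minimal illustration of the effect (hypothetical snippet, not taken from the PR): loading a reference out of memory yields a pointer value, and the new metadata records the alignment of that pointer's pointee.

```rust
// Hypothetical example: the first deref loads a `&u32` from memory. The
// loaded value is a pointer to a u32, so the emitted `load` can carry
// `!align !{i64 4}` (alignment of the pointee) alongside the existing
// `!nonnull` metadata; the second deref is then an ordinary u32 load.
pub fn read_through(x: &&u32) -> u32 {
    **x
}
```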
Tomasz Miąsko
926bf1a371 Pass LLVM string attributes as string slices 2022-03-03 00:28:50 +01:00
Guillaume Gomez
628fbdf9b7 Fix unused_doc_comments lint errors 2022-03-02 20:06:35 +01:00
bors
c42d846add Auto merge of #94229 - erikdesjardins:rem2, r=nikic
Remove LLVM attribute removal

This was necessary before, because `declare_raw_fn` would always apply
the default optimization attributes to every declared function.
Then `attributes::from_fn_attrs` would have to remove the default
attributes in the case of, e.g. `#[optimize(speed)]` in a `-Os` build.
(see `src/test/codegen/optimize-attr-1.rs`, line 33, at commit 03a8cc7df1)

However, every relevant callsite of `declare_raw_fn` (i.e. where we
actually generate code for the function, and not e.g. a call to an
intrinsic, where optimization attributes don't [?] matter)
calls `from_fn_attrs`, so we can remove the attribute setting
from `declare_raw_fn`, and rely on `from_fn_attrs` to apply the correct
attributes all at once.

r? `@ghost` (blocked on #94221)
`@rustbot` label S-blocked
2022-03-02 08:48:33 +00:00
bors
4a56cbec59 Auto merge of #94402 - erikdesjardins:revert-coldland, r=nagisa
Revert "Auto merge of #92419 - erikdesjardins:coldland, r=nagisa"

Should fix (untested) #94390

Reopens #46515, #87055

r? `@ehuss`
2022-03-01 08:57:46 +00:00
Erik Desjardins
69ae4233cf Add !align metadata on loads of &/&mut/Box
Note that this refers to the alignment of what the loaded value points
to, _not_ the alignment of the loaded value itself.
2022-02-28 20:04:36 -05:00
Erik Desjardins
dce14cfacc Remove LLVM attribute removal
This was necessary before, because `declare_raw_fn` would always apply
the default optimization attributes to every declared function,
and then `attributes::from_fn_attrs` would have to remove the default
attributes in the case of, e.g. `#[optimize(speed)]` in a `-Os` build.

However, every relevant callsite of `declare_raw_fn` (i.e. where we
actually generate code for the function, and not e.g. a call to an
intrinsic, where optimization attributes don't [?] matter)
calls `from_fn_attrs`, so we can simply remove the attribute setting
from `declare_raw_fn`, and rely on `from_fn_attrs` to apply the correct
attributes all at once.
2022-02-28 00:02:11 -05:00
Erik Desjardins
851fcc7a54 Revert "Auto merge of #92419 - erikdesjardins:coldland, r=nagisa"
This reverts commit 4f49627c6f, reversing
changes made to 028c6f1454.
2022-02-27 23:11:03 -05:00
Erik Desjardins
fec4335407 Apply noundef metadata to loads of types that do not permit raw init
This matches the noundef attributes we apply on arguments/return types.
2022-02-27 12:16:16 -05:00
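As a rough sketch of what this covers (illustrative, not one of the PR's tests): types that do not permit raw initialization, such as `bool` or references, now get `!noundef` metadata on loads, mirroring the existing parameter/return attributes.

```rust
// Illustrative only: an uninitialized (undef/poison) bool is never a valid
// Rust value, so the underlying `load i8` here can now be tagged with
// `!noundef` (in addition to the existing `!range` metadata).
pub fn load_flag(flags: &[bool], i: usize) -> bool {
    flags[i]
}
```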
Erik Desjardins
30d3ce0674 Add LLVM attributes in batches instead of individually
This should improve performance.
2022-02-26 13:14:55 -05:00
Matthias Krüger
0bb72a2c66 Rollup merge of #91675 - ivanloz:memtagsan, r=nagisa
Add MemTagSanitizer Support

Add support for the LLVM [MemTagSanitizer](https://llvm.org/docs/MemTagSanitizer.html).

On hardware which supports it (see caveats below), the MemTagSanitizer can catch bugs similar to AddressSanitizer and HardwareAddressSanitizer, but with lower overhead.

On a tag mismatch, a SIGSEGV is signaled with code SEGV_MTESERR / SEGV_MTEAERR.

# Usage

`-Zsanitizer=memtag -C target-feature="+mte"`

# Comments/Caveats

* MemTagSanitizer is only supported on AArch64 targets with hardware support
* Requires `-C target-feature="+mte"`
* LLVM MemTagSanitizer currently only performs stack tagging.

# TODO

* Tests
* Example
2022-02-18 23:23:03 +01:00
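For illustration, a hypothetical program (not part of the PR) showing the class of bug stack tagging is meant to catch, built with something like `rustc -Zsanitizer=memtag -C target-feature=+mte` for an AArch64 target with MTE:

```rust
// Hypothetical example: an out-of-bounds write past a stack array. With
// stack tagging on MTE hardware, the adjacent granule carries a different
// memory tag, so a store like this is designed to raise SIGSEGV
// (SEGV_MTESERR) instead of silently corrupting neighbouring stack data.
fn main() {
    let mut buf = [0u8; 16];
    let p = buf.as_mut_ptr();
    unsafe {
        // One byte past the end of `buf`: undefined behaviour that the
        // sanitizer aims to trap at runtime.
        *p.add(16) = 42;
    }
    println!("{}", buf[0]);
}
```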
Ivan Lozano
568aeda9e9 MemTagSanitizer Support
Adds support for the LLVM MemTagSanitizer.
2022-02-16 09:39:03 -05:00
Augie Fackler
0958c8f4ca llvm: migrate to new parameter-bearing uwtable attr
In https://reviews.llvm.org/D114543 the uwtable attribute gained a flag
so that we can ask for sync uwtables instead of async, as the former are
much cheaper. The default is async, so that's what I've done here, but I
left a TODO that we might be able to do better.

While in here I went ahead and dropped support for removing uwtable
attributes in rustc: we never did it, so I didn't write the extra C++
bridge code to make it work. Maybe I should have done the same thing
with the `sync|async` parameter but we'll see.
2022-02-14 16:09:53 -05:00
Erik Desjardins
8cb0b6ca5b Apply noundef attribute to &T, &mut T, Box<T>, bool
This doesn't handle `char` because it's a bit awkward to distinguish it
from u32 at this point in codegen.

Note that for some types (like `&Struct` and `&mut Struct`),
we already apply `dereferenceable`, which implies `noundef`,
so the IR does not change.
2022-02-05 01:09:52 -05:00
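Roughly what this looks like in the IR (illustrative signature; the exact attribute set depends on target and rustc version):

```rust
// Illustrative: with this change the `bool` parameter and return value are
// annotated `noundef` in the generated LLVM IR, e.g. roughly
//   define noundef zeroext i1 @flip(i1 noundef zeroext %b)
// References like `&u32` were already `dereferenceable(4)`, which implies
// `noundef`, so their IR is unchanged.
#[no_mangle]
pub fn flip(b: bool) -> bool {
    !b
}
```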
Eric Huss
e1eff1b0e8 Windows: Disable LLVM crash dialog boxes. 2022-01-27 16:53:17 -08:00
Jacob Bramley
e02e9582d2 Use error-on-mismatch policy for PAuth module flags.
This agrees with Clang, and avoids an error when using LTO with mixed
C/Rust. LLVM considers different behaviour flags to be a mismatch,
even when the flag value itself is the same.

This also makes the flag setting explicit for all uses of
LLVMRustAddModuleFlag.
2022-01-24 16:50:10 +00:00
Matthias Krüger
7f02604f3d Rollup merge of #92877 - Amanieu:remove_llvm_nounwind, r=Mark-Simulacrum
Remove LLVMRustMarkAllFunctionsNounwind

This was originally introduced in #10916 as a way to remove all landing
pads when performing LTO. However this is no longer necessary today
since rustc properly marks all functions and call-sites as nounwind
where appropriate.

In fact this is incorrect in the presence of `extern "C-unwind"` which
must create a landing pad when compiled with `-C panic=abort` so that
foreign exceptions are caught and properly turned into aborts.
2022-01-17 20:07:07 +01:00
Amanieu d'Antras
606d9c0c0e Remove LLVMRustMarkAllFunctionsNounwind
This was originally introduced in #10916 as a way to remove all landing
pads when performing LTO. However this is no longer necessary today
since rustc properly marks all functions and call-sites as nounwind
where appropriate.

In fact this is incorrect in the presence of `extern "C-unwind"` which
must create a landing pad when compiled with `-C panic=abort` so that
foreign exceptions are caught and properly turned into aborts.
2022-01-14 00:36:12 +00:00
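A small example of the case the old behaviour broke (illustrative; `cpp_may_throw` is a hypothetical foreign function, and `C-unwind` was still feature-gated when this landed): under `-C panic=abort`, a call through `extern "C-unwind"` still needs a landing pad so that a foreign exception unwinding into Rust becomes an abort.

```rust
// Illustrative only. `cpp_may_throw` stands in for a foreign function that
// can unwind (e.g. a C++ function that throws). Even with -C panic=abort,
// this call site needs a landing pad so a foreign exception is caught and
// turned into an abort; blanket-marking every function and call site as
// nounwind would have removed that landing pad.
extern "C-unwind" {
    fn cpp_may_throw();
}

pub fn call_ffi() {
    unsafe { cpp_may_throw() }
}
```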
Tomasz Miąsko
000b36c505 Remove deprecated LLVM-style inline assembly 2022-01-12 18:51:31 +01:00
Krasimir Georgiev
4ce56b414d RustWrapper: adapt for an LLVM API change
No functional changes intended.

The LLVM commit
ec501f15a8
removed the signed version of `createExpression`. This adapts the Rust
LLVM wrappers accordingly.
2022-01-03 11:25:33 +01:00
bors
4f49627c6f Auto merge of #92419 - erikdesjardins:coldland, r=nagisa
Mark drop calls in landing pads `cold` instead of `noinline`

Now that deferred inlining has been disabled in LLVM (#92110), this shouldn't cause catastrophic size blowup.

I confirmed that the test cases from https://github.com/rust-lang/rust/issues/41696#issuecomment-298696944 still compile quickly (<1s) after this change. ~Although note that I wasn't able to reproduce the original issue using a recent rustc/llvm with deferred inlining enabled, so those tests may no longer be representative. I was also unable to create a modified test case that reproduced the original issue.~ (edit: I reproduced it on CI by accident--the first commit timed out on the LLVM 12 builder, because I forgot to make it conditional on LLVM version)

r? `@nagisa`
cc `@arielb1` (this effectively reverts #42771 "mark calls in the unwind path as !noinline")
cc `@RalfJung` (fixes #46515)

edit: also fixes #87055
2022-01-01 13:28:13 +00:00
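For context, a hypothetical snippet (not one of the linked test cases) showing where these drop calls come from: any value that needs drop glue and is live across a potentially-unwinding call gets a landing pad whose job is to run that drop.

```rust
// Illustrative: if `might_panic` unwinds, a landing pad runs the drop glue
// for `v` before the panic continues. After this change that drop call is
// attributed `cold` (keep it off the hot path) rather than `noinline`
// (forbid inlining it entirely).
fn might_panic(n: usize) -> usize {
    assert!(n < 100, "too big");
    n * 2
}

pub fn demo(n: usize) -> usize {
    let v = vec![0u8; 32]; // needs drop glue on the unwind path
    let r = might_panic(n);
    r + v.len()
}
```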
Erik Desjardins
e4463b2453 keep noinline for system llvm < 14 2021-12-30 00:15:51 -05:00
bors
1b3a5f29dd Auto merge of #91125 - eskarn:llvm-passes-plugin-support, r=nagisa
Allow loading LLVM plugins with both legacy and new pass manager

Opening a draft PR to get feedback and start discussion on this feature. There is already a codegen option `passes` which allows giving a list of LLVM pass names; however, we currently can't use an LLVM pass plugin (as described here: https://llvm.org/docs/WritingAnLLVMPass.html), since the only available passes are the LLVM built-in ones.

The proposed modification would be to add another codegen option `pass-plugins`, which can be set with a list of paths to shared library files. These libraries are loaded using the LLVM function `PassPlugin::Load`, which calls the expected symbol `llvmGetPassPluginInfo` and registers the pipeline parsing and optimization callbacks.

An example usage with a single plugin and 3 passes would look like this in the `.cargo/config`:

```toml
rustflags = [
    "-C", "pass-plugins=/tmp/libLLVMPassPlugin",
    "-C", "passes=pass1 pass2 pass3",
]
```
This gives the same functionality as the LLVM `opt` tool, directly integrated into the Rust build system.

Alternatively, the `passes` option can be omitted entirely, using a plugin that inserts its own passes into the optimization pipeline, as one could do with clang.
2021-12-30 02:53:09 +00:00
LegionMammal978
4937a55dfb Remove in_band_lifetimes from rustc_codegen_llvm
See #91867 for more information.
2021-12-16 14:43:32 -05:00
Axel Cohen
c4f29fa0ed Use the existing llvm-plugins option for both legacy and new pm registration 2021-12-13 10:41:43 +01:00
Axel Cohen
97cf461b8f Add a codegen option to allow loading LLVM pass plugins 2021-12-13 10:40:44 +01:00
cynecx
491dd1f387 Adjust llvm wrapper for unwinding support for inlineasm 2021-12-03 23:51:49 +01:00
Matthias Krüger
d93df5775c Rollup merge of #91207 - richkadel:rk-bump-coverage-version, r=tmandry
Add support for LLVM coverage mapping format versions 5 and 6

This PR cherry-pick's Swatinem's initial commit in unsubmitted PR #90047.

My additional commit augments Swatinem's great starting point, but adds full support for LLVM
Coverage Mapping Format version 6, conditionally, if compiling with LLVM 13.

Version 6 requires adding the compilation directory when file paths are
relative, and since Rustc coverage maps use relative paths, we should
add the expected compilation directory entry.

Note, however, that with the compilation directory, coverage reports
from `llvm-cov show` can now report file names (when the report includes
more than one file) with the full absolute path to the file.

This would be a problem for test results, but the workaround (for the
rust coverage tests) is to include an additional `llvm-cov show`
parameter: `--compilation-dir=.`
2021-12-01 10:50:20 +01:00
Matthias Krüger
67762ffe35 Rollup merge of #90833 - tmiasko:optimization-remarks, r=nikic
Emit LLVM optimization remarks when enabled with `-Cremark`

The default diagnostic handler considers all remarks to be disabled by
default unless configured otherwise through LLVM internal flags:
`-pass-remarks`, `-pass-remarks-missed`, and `-pass-remarks-analysis`.
This behaviour makes `-Cremark` ineffective on its own.

Fix this by configuring a custom diagnostic handler that enables
optimization remarks based on the value of `-Cremark` option. With
`-Cremark=all` enabling all remarks.

Fixes #90924.

r? `@nikic`
2021-11-28 23:45:17 +01:00
Arpad Borsos
566ad8da45 Update CoverageMappingFormat Support to Version6
Version 5 adds Branch Regions, which are a prerequisite for branch coverage.
Version 6 can use the zeroth filename as a prefix for other relative file paths.
2021-11-23 15:49:03 -08:00
Benjamin A. Bjørnseth
bb9dee95ed add rustc option for using LLVM stack smash protection
LLVM has built-in heuristics for adding stack canaries to functions. These
heuristics can be selected with LLVM function attributes. This patch adds a
rustc option `-Z stack-protector={none,basic,strong,all}` which controls the use
of these attributes. This gives rustc the same stack smash protection support as
clang offers through options `-fno-stack-protector`, `-fstack-protector`,
`-fstack-protector-strong`, and `-fstack-protector-all`. The protection this can
offer is demonstrated in test/ui/abi/stack-protector.rs. This fills a gap in the
current list of rustc exploit
mitigations (https://doc.rust-lang.org/rustc/exploit-mitigations.html),
originally discussed in #15179.

Stack smash protection adds runtime overhead and is therefore still off by
default, but now users have the option to trade performance for security as they
see fit. An example use case is adding Rust code in an existing C/C++ code base
compiled with stack smash protection. Without the ability to add stack smash
protection to the Rust code, the code base artifacts could be exploitable in
ways not possible if the code base remained pure C/C++.

Stack smash protection support is present in LLVM for almost all the current
tier 1/tier 2 targets: see
test/assembly/stack-protector/stack-protector-target-support.rs. The one
exception is nvptx64-nvidia-cuda. This patch follows clang's example, and adds a
warning message printed if stack smash protection is used with this target (see
test/ui/stack-protector/warn-stack-protector-unsupported.rs). Support for tier 3
targets has not been checked.

Since the heuristics are applied at the LLVM level, the heuristics are expected
to add stack smash protection to a fraction of functions comparable to C/C++.
Some experiments demonstrating how Rust code is affected by the different
heuristics can be found in
test/assembly/stack-protector/stack-protector-heuristics-effect.rs. There is
potential for better heuristics using Rust-specific safety information. For
example it might be reasonable to skip stack smash protection in functions which
transitively only use safe Rust code, or which uses only a subset of functions
the user declares safe (such as anything under `std.*`). Such alternative
heuristics could be added at a later point.

LLVM also offers a "safestack" sanitizer as an alternative way to guard against
stack smashing (see #26612). This could possibly also be included as a
stack-protection heuristic. An alternative is to add it as a sanitizer (#39699).
This is what clang does: safestack is exposed with option
`-fsanitize=safe-stack`.

The options are only supported by the LLVM backend, but as with other codegen
options it is visible in the main codegen option help menu. The heuristic names
"basic", "strong", and "all" are hopefully sufficiently generic to be usable in
other backends as well.

Reviewed-by: Nikita Popov <nikic@php.net>

Extra commits during review:

- [address-review] make the stack-protector option unstable

- [address-review] reduce detail level of stack-protector option help text

- [address-review] correct grammar in comment

- [address-review] use compiler flag to avoid merging functions in test

- [address-review] specify min LLVM version in fortanix stack-protector test

  Only for Fortanix test, since this target specifically requests the
  `--x86-experimental-lvi-inline-asm-hardening` flag.

- [address-review] specify required LLVM components in stack-protector tests

- move stack protector option enum closer to other similar option enums

- rustc_interface/tests: sort debug option list in tracking hash test

- add an explicit `none` stack-protector option

Revert "set LLVM requirements for all stack protector support test revisions"

This reverts commit a49b74f92a4e7d701d6f6cf63d207a8aff2e0f68.
2021-11-22 20:06:22 +01:00
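A quick illustration (hypothetical snippet; the actual heuristics live in LLVM): a stack byte array whose address escapes into a call is the classic pattern the `strong` heuristic protects with a canary, e.g. when built with `-Z stack-protector=strong` on nightly.

```rust
// Hypothetical example: `buf` is a stack byte array whose address is passed
// to another function, which is the pattern LLVM's `sspstrong` heuristic
// guards with a stack canary. Compile with
//   rustc -Z stack-protector=strong ...
// to request the corresponding function attribute.
fn fill(buf: &mut [u8]) {
    for (i, b) in buf.iter_mut().enumerate() {
        *b = i as u8;
    }
}

pub fn demo() -> u8 {
    let mut buf = [0u8; 64];
    fill(&mut buf);
    buf[63]
}
```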
Tomasz Miąsko
6846674c75 Emit LLVM optimization remarks when enabled with -Cremark
The default diagnostic handler considers all remarks to be disabled by
default unless configured otherwise through LLVM internal flags:
`-pass-remarks`, `-pass-remarks-missed`, and `-pass-remarks-analysis`.
This behaviour makes `-Cremark` ineffective on its own.

Fix this by configuring a custom diagnostic handler that enables
optimization remarks based on the value of `-Cremark` option. With
`-Cremark=all` enabling all remarks.
2021-11-16 08:19:20 +01:00
Tomasz Miąsko
5a09e12135 Initialize LLVM time trace profiler on each code generation thread
In LLVM 11 (https://reviews.llvm.org/D71059), the time trace profiler was
extended to support multiple threads.

`timeTraceProfilerInitialize` creates a thread local profiler instance.
When a thread finishes `timeTraceProfilerFinishThread` moves a thread
local instance into a global collection of instances. Finally when all
codegen work is complete `timeTraceProfilerWrite` writes data from the
current thread local instance and the instances in global collection
of instances.

Previously, the profiler was initialized on a single thread only. Since
this thread performs no code generation on its own, the resulting
profile was empty.

Update LLVM codegen to initialize & finish time trace profiler on each
code generation thread.
2021-11-05 17:47:11 +01:00
bors
a8f6e614f8 Auto merge of #89652 - rcvalle:rust-cfi, r=nagisa
Add LLVM CFI support to the Rust compiler

This PR adds LLVM Control Flow Integrity (CFI) support to the Rust compiler. It initially provides forward-edge control flow protection for Rust-compiled code only by aggregating function pointers in groups identified by their number of arguments.

Forward-edge control flow protection for "mixed binaries" (i.e., binaries in which C/C++- and Rust-compiled code share the same virtual address space) will be provided in later work as part of this project by defining and using compatible type identifiers (see Type metadata in the design document in the tracking issue #89653).

LLVM CFI can be enabled with -Zsanitizer=cfi and requires LTO (i.e., -Clto).

Thank you, `@eddyb` and `@pcc,` for all the help!
2021-10-27 09:19:42 +00:00
Ramon de C Valle
5d30e93189 Add LLVM CFI support to the Rust compiler
This commit adds LLVM Control Flow Integrity (CFI) support to the Rust
compiler. It initially provides forward-edge control flow protection for
Rust-compiled code only by aggregating function pointers in groups
identified by their number of arguments.

Forward-edge control flow protection for "mixed binaries" (i.e., binaries
in which C/C++- and Rust-compiled code share the same virtual address
space) will be provided in later work as part of this project by defining
and using compatible type identifiers (see Type metadata in the design
document in the tracking issue #89653).

LLVM CFI can be enabled with -Zsanitizer=cfi and requires LTO (i.e.,
-Clto).
2021-10-25 16:23:01 -07:00
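A sketch of what the initial scheme protects (illustrative code, not from the PR): indirect calls are checked against the group the target's type belongs to, and in this first version groups are keyed only by argument count.

```rust
// Illustrative: built with -Zsanitizer=cfi -Clto, the indirect call through
// `f` is preceded by a CFI check. Since functions are initially grouped by
// number of arguments, redirecting `f` to something with a different arity
// (like `two_args`) would fail the check and abort.
fn add_one(x: i32) -> i32 {
    x + 1
}

fn two_args(x: i32, y: i32) -> i32 {
    x + y
}

pub fn call_indirect(f: fn(i32) -> i32, v: i32) -> i32 {
    f(v)
}

fn main() {
    println!("{}", call_indirect(add_one, 41));
    println!("{}", two_args(1, 2));
}
```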
Matthias Krüger
2f67647606 Rollup merge of #89581 - jblazquez:master, r=Mark-Simulacrum
Add -Z no-unique-section-names to reduce ELF header bloat.

This change adds a new compiler flag that can help reduce the size of ELF binaries that contain many functions.

By default, when enabling function sections (which is the default for most targets), the LLVM backend will generate different section names for each function. For example, a function `func` would generate a section called `.text.func`. Normally this is fine because the linker will merge all those sections into a single one in the binary. However, starting with [LLVM 12](https://github.com/llvm/llvm-project/commit/ee5d1a04), the backend will also generate unique section names for exception handling, resulting in thousands of `.gcc_except_table.*` sections ending up in the final binary because some linkers like LLD don't currently merge or strip these EH sections (see discussion [here](https://reviews.llvm.org/D83655)). This can bloat the ELF headers and string table significantly in binaries that contain many functions.

The new option is analogous to Clang's `-fno-unique-section-names`, and instructs LLVM to generate the same `.text` and `.gcc_except_table` section for each function, resulting in a smaller final binary.

The motivation to add this new option was because we have a binary that ended up with so many ELF sections (over 65,000) that it broke some existing ELF tools, which couldn't handle so many sections.

Here's our old binary:

```
$ readelf --sections old.elf | head -1
There are 71746 section headers, starting at offset 0x2a246508:

$ readelf --sections old.elf | grep shstrtab
  [71742] .shstrtab      STRTAB          0000000000000000 2977204c ad44bb 00      0   0  1
```

That's an 11MB+ string table. Here's the new binary using this option:

```
$ readelf --sections new.elf | head -1
There are 43 section headers, starting at offset 0x29143ca8:

$ readelf --sections new.elf | grep shstrtab
  [40] .shstrtab         STRTAB          0000000000000000 29143acc 0001db 00      0   0  1
```

The whole binary size went down by over 20MB, which is quite significant.
2021-10-25 22:59:46 +02:00
bors
56694b0453 Auto merge of #89808 - tmiasko:llvm-multithreaded, r=nagisa
Cleanup LLVM multi-threading checks

The support for runtime multi-threading was removed from LLVM. Calls to
`LLVMStartMultithreaded` became no-ops equivalent to checking if LLVM
was compiled with support for threads http://reviews.llvm.org/D4216.
2021-10-25 05:28:07 +00:00
Tomasz Miąsko
aa3bf01889 Cleanup LLVM multi-threading checks
The support for runtime multi-threading was removed from LLVM. Calls to
`LLVMStartMultithreaded` became no-ops equivalent to checking if LLVM
was compiled with support for threads http://reviews.llvm.org/D4216.
2021-10-12 13:27:16 +02:00
Tomasz Miąsko
ce7713d6b4 Remap ssa RealPredicate to llvm RealPredicate
to avoid relying on the discriminant of the former for FFI purposes
2021-10-12 11:55:45 +02:00
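The idea in miniature (stand-in enums, not the actual rustc types): translate between the backend-independent predicate and the LLVM one with an explicit `match`, so correctness no longer depends on the two enums sharing discriminant values.

```rust
// Stand-ins for rustc_codegen_ssa's RealPredicate and the LLVM wrapper's
// RealPredicate; only the shape of the remapping matters here.
#[allow(dead_code)]
#[derive(Clone, Copy)]
enum SsaRealPredicate {
    RealOEQ,
    RealOGT,
    RealOLT,
}

#[allow(dead_code)]
#[repr(C)]
#[derive(Clone, Copy, Debug)]
enum LlvmRealPredicate {
    RealOEQ = 1,
    RealOGT = 2,
    RealOLT = 4,
}

// Explicit remapping instead of relying on matching discriminants for FFI.
fn to_llvm(p: SsaRealPredicate) -> LlvmRealPredicate {
    match p {
        SsaRealPredicate::RealOEQ => LlvmRealPredicate::RealOEQ,
        SsaRealPredicate::RealOGT => LlvmRealPredicate::RealOGT,
        SsaRealPredicate::RealOLT => LlvmRealPredicate::RealOLT,
    }
}

fn main() {
    println!("{:?}", to_llvm(SsaRealPredicate::RealOGT));
}
```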
Javier Blazquez
4ed846ad4d Add -Z no-unique-section-names to reduce ELF header bloat.
This change adds a new compiler flag that can help reduce the size of
ELF binaries that contain many functions.

By default, when enabling function sections (which is the default for most
targets), the LLVM backend will generate different section names for each
function. For example, a function "func" would generate a section called
".text.func". Normally this is fine because the linker will merge all those
sections into a single one in the binary. However, starting with LLVM 12
(llvm/llvm-project@ee5d1a0), the backend will
also generate unique section names for exception handling, resulting in
thousands of ".gcc_except_table.*" sections ending up in the final binary
because some linkers don't currently merge or strip these EH sections.
This can bloat the ELF headers and string table significantly in
binaries that contain many functions.

The new option is analogous to Clang's -fno-unique-section-names, and
instructs LLVM to generate the same ".text" and ".gcc_except_table"
section for each function, resulting in smaller object files and
potentially a smaller final binary.
2021-10-11 12:09:32 -07:00
Jubilee
6c17601a2e Rollup merge of #89025 - ricobbe:raw-dylib-link-ordinal, r=michaelwoerister
Implement `#[link_ordinal(n)]`

Allows the use of `#[link_ordinal(n)]` with `#[link(kind = "raw-dylib")]`, allowing Rust to link against DLLs that export symbols by ordinal rather than by name.  As long as the ordinal matches, the name of the function in Rust is not required to match the name of the corresponding function in the exporting DLL.

Part of #58713.
2021-10-07 20:26:11 -07:00
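Usage looks roughly like this on a Windows target (hypothetical DLL and function names; when this landed, the `raw_dylib` and `link_ordinal` feature gates were still required on nightly):

```rust
// Hypothetical example: import ordinal 15 from some_dll.dll by ordinal
// rather than by name. The Rust-side name `do_thing` does not have to
// match whatever name the DLL exports; only the ordinal has to be right.
#[link(name = "some_dll", kind = "raw-dylib")]
extern "C" {
    #[link_ordinal(15)]
    fn do_thing(arg: u32) -> u32;
}

pub fn call_it(x: u32) -> u32 {
    unsafe { do_thing(x) }
}
```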
Michael Benfield
a17193dbb9 Enable AutoFDO.
This largely involves implementing the options debug-info-for-profiling
and profile-sample-use and forwarding them on to LLVM.

AutoFDO can be used on x86-64 Linux like this:
rustc -O -Cdebug-info-for-profiling main.rs -o main
perf record -b ./main
create_llvm_prof --binary=main --out=code.prof
rustc -O -Cprofile-sample-use=code.prof main.rs -o main2

Now `main2` will have feedback directed optimization applied to it.

The create_llvm_prof tool can be obtained from this github repository:
https://github.com/google/autofdo

Fixes #64892.
2021-10-06 19:36:52 +00:00
Guillaume Gomez
759eba0a08 Fix clippy lints 2021-10-01 23:17:19 +02:00
Richard Cobbe
142f6c0b07 Implement #[link_ordinal] attribute in the context of #[link(kind = "raw-dylib")]. 2021-09-20 14:50:35 -07:00
Yilin Chen
d5de680e20 Work around invalid DWARF bugs for fat LTO
Signed-off-by: Yilin Chen <sticnarf@gmail.com>
2021-09-17 23:19:38 +08:00
Nikita Popov
621f5146c3 Handle SrcMgr diagnostics
This is how InlineAsm diagnostics with source information are
reported now. Previously a separate InlineAsm diagnostic handler
was used.
2021-08-16 18:28:17 +02:00
Josh Stone
183d79cc09 Prepare call/invoke for opaque pointers
Rather than relying on `getPointerElementType()` from LLVM function
pointers, we now pass the function type explicitly when building `call`
or `invoke` instructions.
2021-08-05 10:58:55 -07:00
Tomasz Miąsko
8e0df32ad6 Replace LLVMConstInBoundsGEP with LLVMConstInBoundsGEP2*
A custom reimplementation of LLVMConstInBoundsGEP2 is used, since LLVM
contains a declaration of LLVMConstInBoundsGEP2 but not the
implementation.
2021-08-04 15:51:30 +02:00
Tomasz Miąsko
77e5e17231 Prepare inbounds_gep for opaque pointers
Implement inbounds_gep using LLVMBuildInBoundsGEP2 which takes an
explicit type argument instead of deriving it from a pointer type.
2021-08-04 15:51:30 +02:00