XSA-216 Issue Description:
> The block interface response structure has some discontiguous fields.
> Certain backends populate the structure fields of an otherwise
> uninitialized instance of this structure on their stacks, leaking
> data through the (internal or trailing) padding field.
More: https://xenbits.xen.org/xsa/advisory-216.html
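For illustration, the bug class described above looks roughly like the following C sketch (not the actual blkif response layout or backend code; the struct and field names are assumptions for the example):
```
/* Sketch of the padding-leak pattern described above -- not Xen/blkif code.
 * The struct layout and field names are assumptions for illustration. */
#include <stdint.h>
#include <string.h>

struct resp {
    uint64_t id;
    uint8_t  operation;
    int16_t  status;     /* 1 byte of internal padding before this field,
                            plus trailing padding after it */
};

/* Leaky: only the named fields are written, so the padding bytes keep
 * whatever happened to be on the stack and get copied out. */
void make_response_leaky(struct resp *out, uint64_t id, uint8_t op, int16_t st)
{
    struct resp r;
    r.id = id;
    r.operation = op;
    r.status = st;
    *out = r;
}

/* Safe: clear the whole instance (padding included) before populating it. */
void make_response_safe(struct resp *out, uint64_t id, uint8_t op, int16_t st)
{
    struct resp r;
    memset(&r, 0, sizeof(r));
    r.id = id;
    r.operation = op;
    r.status = st;
    *out = r;
}
```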
XSA-217 Issue Description:
> Domains controlling other domains are permitted to map pages owned by
> the domain being controlled. If the controlling domain unmaps such a
> page without flushing the TLB, and if soon after the domain being
> controlled transfers this page to another PV domain (via
> GNTTABOP_transfer or, indirectly, XENMEM_exchange), and that third
> domain uses the page as a page table, the controlling domain will have
> write access to a live page table until the applicable TLB entry is
> flushed or evicted. Note that the domain being controlled is
> necessarily HVM, while the controlling domain is PV.
More: https://xenbits.xen.org/xsa/advisory-217.html
XSA-218 Issue Description:
> We have discovered two bugs in the code unmapping grant references.
>
> * When a grant had been mapped twice by a backend domain, and then
> unmapped by two concurrent unmap calls, the frontend may be informed
> that the page had no further mappings when the first call completed rather
> than when the second call completed.
>
> * A race triggerable by an unprivileged guest could cause a grant
> maptrack entry for grants to be "freed" twice. The ultimate effect of
> this would be for maptrack entries for a single domain to be re-used.
More: https://xenbits.xen.org/xsa/advisory-218.html
XSA-219 Issue Description:
> When using shadow paging, writes to guest pagetables must be trapped and
> emulated, so the shadows can be suitably adjusted as well.
>
> When emulating the write, Xen maps the guest's pagetable(s) to make the final
> adjustment and leave the guest's view of its state consistent.
>
> However, when mapping the frame, Xen drops the page reference before
> performing the write. This is a race window where the underlying frame can
> change ownership.
>
> One possible attack scenario is for the frame to change ownership and to be
> inserted into a PV guest's pagetables. At that point, the emulated write will
> be an unaudited modification to the PV pagetables whose value is under guest
> control.
More: https://xenbits.xen.org/xsa/advisory-219.html
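The reference-counting race can be pictured with a small C sketch (purely illustrative; the helpers and types below are simplified stand-ins, not Xen's actual shadow-emulation code):
```
/* Sketch of the race described above -- not Xen's shadow code; all helpers
 * and types here are stand-ins defined for the example. */
#include <stdint.h>
#include <string.h>

struct page_info { int refcount; };

static void get_page(struct page_info *pg)   { pg->refcount++; }
static void put_page(struct page_info *pg)   { pg->refcount--; }
static uint8_t frame[4096];
static void *map_frame(struct page_info *pg) { (void)pg; return frame; }

void emulate_pt_write(struct page_info *pg, unsigned int off, uint32_t val)
{
    void *va;

    get_page(pg);
    va = map_frame(pg);
    put_page(pg);   /* BUG: reference dropped before the write is performed */

    /* Race window: with no reference held, the frame can change owner and
     * may already be in use as another (PV) guest's page table.            */
    memcpy((uint8_t *)va + off, &val, sizeof(val));

    /* Fix: keep the page reference until after the write completes. */
}
```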
XSA-220 Issue Description:
> Memory Protection Extensions (MPX) and Protection Key (PKU) are features in
> newer processors, whose state is intended to be per-thread and context
> switched along with all other XSAVE state.
>
> Xen's vCPU context switch code would save and restore the state only
> if the guest had set the relevant XSTATE enable bits. However,
> surprisingly, the use of these features is not dependent (PKU) or may
> not be dependent (MPX) on having the relevant XSTATE bits enabled.
>
> VMs which use MPX or PKU, and context switch the state manually rather
> than via XSAVE, will have the state leak between vCPUs (possibly,
> between vCPUs in different guests). This in turn corrupts state in
> the destination vCPU, and hence may lead to weakened protections
>
> Experimentally, MPX appears not to make any interaction with BND*
> state if BNDCFGS.EN is set but XCR0.BND{CSR,REGS} are clear. However,
> the SDM is not clear in this case; therefore MPX is included in this
> advisory as a precaution.
More: https://xenbits.xen.org/xsa/advisory-220.html
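A rough C sketch of the conditional save/restore pattern at issue (not Xen's actual context-switch code; the bit positions, structure and helper names are assumptions for the example):
```
/* Sketch of the conditional save/restore described above -- not Xen code.
 * Bit positions, fields and helpers are assumptions for illustration. */
#include <stdint.h>

#define XSTATE_BNDREGS (1ull << 3)   /* MPX bound registers     */
#define XSTATE_BNDCSR  (1ull << 4)   /* MPX BNDCFGU/BNDSTATUS   */
#define XSTATE_PKRU    (1ull << 9)   /* protection-key rights   */

struct vcpu {
    uint64_t xcr0;   /* XSAVE components the guest has enabled */
};

static void save_mpx_state(struct vcpu *v)  { (void)v; /* ... */ }
static void save_pkru_state(struct vcpu *v) { (void)v; /* ... */ }

void ctxt_switch_from(struct vcpu *v)
{
    /* Buggy pattern: state is only saved when the guest enabled the
     * corresponding XSTATE bits, yet PKU (and possibly MPX) can be used
     * without them, so the registers leak into the next vCPU.            */
    if (v->xcr0 & (XSTATE_BNDREGS | XSTATE_BNDCSR))
        save_mpx_state(v);
    if (v->xcr0 & XSTATE_PKRU)
        save_pkru_state(v);

    /* Fix: save/restore (or sanitize) this state unconditionally. */
}
```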
XSA-221 Issue Description:
> When polling event channels, in general arbitrary port numbers can be
> specified. Specifically, there is no requirement that a polled event
> channel port has ever been created. When the code was generalised
> from an earlier implementation, introducing some intermediate
> pointers, a check should have been made that these intermediate
> pointers are non-NULL. However, that check was omitted.
More: https://xenbits.xen.org/xsa/advisory-221.html
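In C terms, the omitted check has the following shape (an illustrative sketch, not Xen's event channel code; the two-level lookup structure is an assumption for the example):
```
/* Sketch of the omitted NULL checks described above -- not Xen code.
 * The lookup structure here is an assumption for illustration. */
#include <stddef.h>

struct evtchn { int state; /* ... */ };

struct domain {
    /* group[g][b] points to an array of event channels; entries are
     * allocated lazily, only when a port in that bucket is created. */
    struct evtchn ***group;
};

struct evtchn *port_to_evtchn(struct domain *d, unsigned int g,
                              unsigned int b, unsigned int slot)
{
    /* The intermediate pointers must be checked: a guest may poll an
     * arbitrary port number whose bucket was never allocated.          */
    if (d->group == NULL || d->group[g] == NULL || d->group[g][b] == NULL)
        return NULL;

    return &d->group[g][b][slot];
}
```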
XSA-222 Issue Description:
> Certain actions require removing pages from a guest's P2M
> (Physical-to-Machine) mapping. When large pages are in use to map
> guest pages in the 2nd-stage page tables, such a removal operation may
> incur a memory allocation (to replace a large mapping with individual
> smaller ones). If this allocation fails, these errors are ignored by
> the callers, which would then continue and (for example) free the
> referenced page for reuse. This leaves the guest with a mapping to a
> page it shouldn't have access to.
>
> The allocation involved comes from a separate pool of memory created
> when the domain is created; under normal operating conditions it never
> fails, but a malicious guest may be able to engineer situations where
> this pool is exhausted.
More: https://xenbits.xen.org/xsa/advisory-222.html
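The error-handling pattern at issue looks roughly like this in C (an illustrative sketch, not Xen's p2m code; all names and the structure are assumptions):
```
/* Sketch of the ignored-allocation-failure pattern described above --
 * not Xen's p2m code; all names here are stand-ins for illustration. */
#include <errno.h>
#include <stdbool.h>

struct p2m_domain { int pool_pages; /* per-domain page-table pool */ };

/* Replacing a large (e.g. 2MB) mapping with small ones needs a fresh
 * page-table page from the domain's pool, which can be exhausted.     */
static int split_large_mapping(struct p2m_domain *p2m, unsigned long gfn)
{
    (void)gfn;
    if (p2m->pool_pages == 0)
        return -ENOMEM;
    p2m->pool_pages--;
    return 0;
}

int p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, bool large)
{
    if (large) {
        int rc = split_large_mapping(p2m, gfn);
        /* Buggy pattern: ignoring rc and freeing the page anyway leaves
         * the guest with a mapping to memory it no longer owns.         */
        if (rc)
            return rc;   /* the fix: propagate the failure to the caller */
    }
    /* ... clear the now-small mapping, then the page may be freed ... */
    return 0;
}
```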
XSA-224 Issue Description:
> We have discovered a number of bugs in the code mapping and unmapping
> grant references.
>
> * If a grant is mapped with both the GNTMAP_device_map and
> GNTMAP_host_map flags, but unmapped only with host_map, the device_map
> portion remains but the page reference counts are lowered as though it
> had been removed. This bug can be leveraged to cause a page's reference
> counts and type counts to fall to zero while retaining writeable
> mappings to the page.
>
> * Under some specific conditions, if a grant is mapped with both the
> GNTMAP_device_map and GNTMAP_host_map flags, the operation may not
> grab sufficient type counts. When the grant is then unmapped, the
> type count will be erroneously reduced. This bug can be leveraged to
> cause a page's reference counts and type counts to fall to zero while
> retaining writeable mappings to the page.
>
> * When a grant reference is given to an MMIO region (as opposed to a
> normal guest page), if the grant is mapped with only the
> GNTMAP_device_map flag set, a mapping is created at host_addr anyway.
> This does *not* cause reference counts to change, but there will be no
> record of this mapping, so it will not be considered when reporting
> whether the grant is still in use.
More: https://xenbits.xen.org/xsa/advisory-224.html
The rsync binary was previously built without iconv support, which is needed
for UTF-8 conversions on Darwin. Fixes #26864.
Additionally, rsync used to be built with bundled, outdated versions of zlib
and popt; dropping those also decreases the size of the rsync binary by ~82KB.
Removing all `config.mpv.*` options will improve readability. MPV has many
configurable options, and using the config approach is prone to confusion and
unnecessary code duplication. If needed, the user can `override` the relevant
variables in the function itself, so no functionality is lost.
Closes issue #26786
The following errors occur when you start Chromium prior to this commit:
[2534:2534:0625/202928.673160:ERROR:gl_implementation.cc(246)] Failed to
load .../libexec/chromium/swiftshader/libGLESv2.so:
../libexec/chromium/swiftshader/libGLESv2.so: cannot open shared object
file: No such file or directory
[2534:2534:0625/202928.674434:ERROR:gpu_child_thread.cc(174)] Exiting
GPU process due to errors during initialization
While in theory we do not strictly need libGLESv2.so, in practice this
means that the GPU process isn't starting up at all, which in turn leads
to crawling rendering performance on some sites.
So let's install all shared libraries in swiftshader.
I've tested this with the chromium.stable NixOS VM test and also locally
on my machine and the errors as well as the performance issues are gone.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This allows users to specify a custom registry src, because currently every
packager would need to create an outdated Cargo.lock just to be compatible
with the probably outdated rustRegistry in nixpkgs.
There is currently no easy way to convince cargo to generate such an outdated
Cargo.lock, so this change makes that workaround unnecessary.
| Warning: This package indirectly depends on multiple versions of the same
| package. This is highly likely to cause a compile failure.
| package wai-app-static-3.1.6.1 requires optparse-applicative-0.13.2.0
| package tasty-rerun-1.1.6 requires optparse-applicative-0.13.2.0
| package tasty-0.11.2.1 requires optparse-applicative-0.13.2.0
| package git-annex-6.20170520 requires optparse-applicative-0.14.0.0
Qt 5.8 is immediately removed because its support window has ended.
The qtlocation module is built with `enableParallelBuilding = false` so that the
clipper library will be built before the components which link to it.
kjs now depends directly on pcre. The dependency was previously propagated from
qtbase, which now depends on pcre2.
Use the standardized implementation of the attribute set extensibility
mechanism instead of manually re-implementing it.
Suggested by @cstrahan at https://github.com/NixOS/nixpkgs/pull/26668.
First of all, we need a newer version of Vc, because at least version
1.1.0 is required for Krita 3.1.3.
Also, qtmultimedia and qtx11extras were missing.
Built and tested successfully on my machine.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @abbradar
The merge of the version bump in
6fb9f89238 didn't take care of our patch
for the hardening mode, and thus enabling VirtualBox without also
force-disabling hardening mode will result in a build error.
While the patch is largely identical to the old version, I've removed
one particular change around the following code:
if (pFsObjState->Stat.st_mode & S_IWOTH)
return supR3HardenedSetError3(VERR_SUPLIB_WORLD_WRITABLE, pErrInfo,
"World writable: '", pszPath, "'");
In the old version of the patch we checked whether the path is within the
Nix store and suppressed the error return if that's the case.
The reason I did that in the first place was that we had a bunch of
symlinks which were writable.
In VirtualBox 5.1.22 the code specifically checks whether the file is a
symlink, so we can safely drop our change.
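For reference, the dropped part of the patch behaved roughly like this (a
from-memory sketch in the style of the snippet above, not the literal diff;
the hard-coded "/nix/store/" prefix check is an assumption):
if (pFsObjState->Stat.st_mode & S_IWOTH)
{
    /* Old patch: suppress the error for paths inside the Nix store,
     * because several of our symlinks there were world-writable. */
    if (strncmp(pszPath, "/nix/store/", sizeof("/nix/store/") - 1) != 0)
        return supR3HardenedSetError3(VERR_SUPLIB_WORLD_WRITABLE, pErrInfo,
                                      "World writable: '", pszPath, "'");
}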
Tested via all of the "virtualbox" NixOS VM subtests and they now all
succeed.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The setup hooks for many kdeFrameworks and plasma5 packages were erroneously
running before $outputDev was set. This led to .dev outputs being propagated
into the user environment.
Without this, a `#include <float.h>` resolves incorrectly. Either the
headers weren't on the include path at all, or they were only for
local, not system, imports.
What's weird is that this didn't use to be a problem. Not sure what other
change in e.g. cc-wrapper would affect this.
I think it's OK to export things which aren't wrapped. The cc-wrapper
can be thought of as responsible for all of binutils and the C
compiler, only wrapping those binaries which are necessary for
interposition---as opposed to all binaries it thinks are relevant.
Conversely, adding the setup hook to the unwrapped compilers would be
unfortunate, as hooks are ugly hacks and the compilers themselves take
a long time to rebuild. Better to wholly separate "pure packages" from
hacks.
Packages get --host and --target by default, but can explicitly request
any subset to be passed as needed. See docs for more info.
rustc: Avoid hash breakage by using the old (ignored)
dontSetConfigureCross when not cross building
Eventually we should avoid this "pre-wrapping" and just update those
files in nixpkgs. This mass-rebuild change is best done along with
those needed to reduce the disparity between native and cross (i.e.
making native the "identity cross").
We now (on cross) require per-target flag interposition by putting the
triple in the names of the relevant environment variables, e.g.:
export NIX_arm_unknown_linux_gnu_CFLAGS_COMPILE=...
The wrapper also has an `infixSalt` attribute (and "_" prefixed and
suffixed variants) to assist downstream packages.
Note how the dashes are replaced to keep the identifier valid.
Using names like this allows us to keep the settings for different
compilers separate.
I think it might be even better to use names like `NIX_{BUILD,HOST}...`
using the platform's role rather than the platform itself, but this
would be more work as the previous stages' tools would have to be re-
wrapped to take on their new role. I therefore didn't do this for now,
but that route should be thoroughly explored in the future.
Since 9c57f3b5c0 bumped the protobuf
version because the new upstream requires it, electrum now gets
protobuf3_0 *and* protobuf3_2 instead of just one version.
This leads to the following build error:
Found duplicated packages in closure for dependency 'protobuf':
protobuf 3.0.2 (...-python2.7-protobuf-3.0.2/lib/python2.7/site-packages)
protobuf 3.2.0 (...-python2.7-protobuf-3.2.0/lib/python2.7/site-packages)
Using protobuf3_2 for keepkey and electrum fixes the build.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @np
Regression introduced by 76beb08313.
With version 0.7.15 a few additional dependencies are needed by trezor,
mainly a newer version of protobuf bindings and requests.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @np
The motivation is to be able to get rid of the common configuration
when the initial packages differ, since the common configuration assumes
a very specific version set.
cc @jmitchell @peti
With newer Nix it's (fortunately) no longer possible to create a file
with setuid bits; even though the permissions are fixed later, the build
will already fail during installPhase.
I've checked whether the contents of the output path are the same as
before this change, and they match.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Upstream changes:
* Added OpenSSL.X509Store.set_time() to set a custom verification time
when verifying certificate chains. pyca/pyopenssl#567
* Added a collection of functions for working with OCSP stapling. None
of these functions make it possible to validate OCSP assertions, only
to staple them into the handshake and to retrieve the stapled
assertion if provided. Users will need to write their own code to
handle OCSP assertions. We specifically added:
Context.set_ocsp_server_callback, Context.set_ocsp_client_callback,
and Connection.request_ocsp. pyca/pyopenssl#580
* Changed the SSL module's memory allocation policy to avoid zeroing
memory it allocates when unnecessary. This reduces CPU usage and
memory allocation time by an amount proportional to the size of the
allocation. For applications that process a lot of TLS data or that
use very large allocations this can provide considerable performance
improvements. pyca/pyopenssl#578
* Automatically set SSL_CTX_set_ecdh_auto() on OpenSSL.SSL.Context.
pyca/pyopenssl#575
* Fix empty exceptions from OpenSSL.crypto.load_privatekey().
pyca/pyopenssl#581
The full upstream changelog can be found at:
https://pyopenssl.readthedocs.io/en/17.0.0/changelog.html
I've also added a patch from pyca/pyopenssl#637 in order to fix the
tests; this was the main reason for the version bump, because that patch
won't apply to 16.2.0.
According to the upstream changelog there should be no
backwards-incompatible changes, but I've tested building against some of
the packages depending on pyopenssl anyway. Regardless of this, the
build for pyopenssl fails right now anyway, so the worst that could
happen via this commit would be that we break something that's already
broken.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Escape things by default in derivation names (i.e. a digit cannot be the
first character, etc.)
Update Quicklisp (tracking upstream); list new missing dependencies
Add some minimal README about ql-to-nix
> http://opus-codec.org/release/stable/2017/06/20/libopus-1_2.html
Changes since 1.1.x include:
- Speech quality improvements especially in the 12-20 kbit/s range
- Improved VBR encoding for hybrid mode
- More aggressive use of wider speech bandwidth, including fullband speech starting at 14 kbit/s
- Music quality improvements in the 32-48 kb/s range
- Generic and SSE CELT optimizations
- Support for directly encoding packets up to 120 ms
- DTX support for CELT mode
- SILK CBR improvements
- Support for all of the fixes in draft-ietf-codec-opus-update-06 (the mono downmix and the folding fixes need --enable-update-draft)
- Many bug fixes, including integer wrap-arounds discovered through fuzzing (no security implications)
Before, one had to turn it on manually and update the preview script in
dotfiles manually whenever ranger was updated.
Now it requires zero configuration. Just run `ranger` and it works, and it
should continue to work automagically when ranger updates.
Everything can still be (de)configured via `rc.conf` in dotfiles.
This adds a more recent version of antlr to nixpkgs. Previous
versions already exist, but version 4 brings many changes
to the generated code and runtime targets.
The install location has been changed from previous versions
of antlr to make use of the set-java-classpath hook, which
is required to make use of both the runtime and the binary.
Also includes the testing rig as a script to allow graphical
inspection of parse trees.
llvm-config is a tool that outputs compile and linker flags for compiling
against LLVM. However, the tool outputs static library names even though
libLLVM is built as a shared library on NixOS. This was fixed for LLVM 3.4,
3.5 and 3.7. For LLVM 3.8 and 3.9 it printed the library extension twice
(.so.so). This was fixed in 4.0, and the patch is backported to 3.8 and 3.9
in this pull request.
```
$ for i in 34 35 37 38 39; do echo "\nllvm-$i"; nix-shell -p llvmPackages_$i.llvm --run 'llvm-config --libnames'; done
llvm-34
libLLVMInstrumentation.so libLLVMIRReader.so libLLVMAsmParser.so
...
llvm-35
libLLVMLTO.so libLLVMObjCARCOpts.so libLLVMLinker.so libLLVMipo.so
...
llvm-37
libLLVMLTO.so libLLVMObjCARCOpts.so libLLVMLinker.so libLLVMBitWriter.so
...
llvm-38
libLLVM-3.8.1.so
llvm-39
libLLVM-3.9.so
```
Fixes #26713