Without this it'll complain and abort when clicking "Take Screenshot" or
"Browse Local" when creating a new VM and looking for an CD-ROM image to boot
from:
GLib-GIO-ERROR **: Settings schema 'org.gtk.Settings.FileChooser' is not installed
Now that both self and super are available to prefFun, we can use self, where
appropriate, to access late-bound versions of most packages.
When extensions are not used, there is no difference between self and super.
I felt the existing knot-tying code was a bit incoherent, with result, finalReturn, and self referring to various forms of the "haskellPackages" value, often
different forms in the same place.
This commit instills some object-oriented discipline into the construction of haskellPackages using a small number of fundamental OO concepts:
* A class is an open recursive function of the form (self : fooBody), where fooBody is a set.
* An instance of a class is the fixed point of the class.
This value is sometimes referred to as an object, and the values in the resulting set are sometimes referred to as methods.
* A class, foo = self : fooBody, can be extended by an extension, which is a function bar = (self : super : barBody) where barBody is a set of overrides for fooBody.
The result of a class extension is a new class whose value is self : foo self // bar self (foo self).
The super parameter gives access to the original methods that barBody may be overriding.
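To make these definitions concrete, below is a minimal Nix sketch of both concepts; the fix and extend helpers are illustrative names for this explanation, not code from the commit:
```
rec {
  # Instantiating a class: tie the knot by taking the fixed point.
  fix = class: let self = class self; in self;

  # Extending a class: merge the overrides on top of the original
  # body, giving the extension access to both self and super.
  extend = class: extension:
    self: class self // extension self (class self);

  # Example: a tiny class and an extension that overrides x.
  counter = self: { x = 1; y = self.x + 1; };
  double  = self: super: { x = super.x * 2; };

  # fix (extend counter double) evaluates to { x = 2; y = 3; }:
  # x comes from the extension, and y sees the late-bound x.
  example = fix (extend counter double);
}
```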
This commit turns the haskell-packages value into a "class".
The knot-tying, a.k.a. the object instantiation, is moved into haskell-defaults. The "finalReturn" is no longer needed and is eliminated from the body of
haskell-packages. All the work done by prefFun is moved to haskell-defaults, so that parameter is eliminated from haskell-packages. Notice that the old prefFun took
two parameters named "self" and "super", but both parameters got passed the same value, "result". There seems to have been some confusion in the old code.
Inside haskell-defaults, the haskell-packages class is extended twice before instantiation. The first extension is done using the prefFun argument.
The second extension is done using the extension argument, which is a renamed version of extraPrefs.
This two-stage approach means that the extension's super gets access to the post-"prefFun" object, while previously extraPrefs only had access to the pre-"prefFun"
object. Also, the extension function has access both to the super (post-"prefFun") object and to self, the final object. With extraPrefs, one needed to use the
"finalReturn" of the haskell packages to get access to the final object. Due to the significant changes in semantics, I thought it best to replace extraPrefs with
extension so that people using extraPrefs know to update their code.
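Schematically, in terms of the fix/extend helpers sketched above, the new knot-tying in haskell-defaults looks roughly like this (names are schematic, not the literal code):
```
# First extend with prefFun, then with the "extension" argument
# (the renamed extraPrefs), and only then instantiate.
fix (extend (extend haskellPackagesClass prefFun) extension)
```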
Lastly, all the Prefs functions have renamed the "self" parameter to "super". This is because "self" was never actually a self-reference in the object-oriented sense
of the word. For example,
Cabal_1_18_1_3 = self.Cabal_1_18_1_3.override { deepseq = self.deepseq_1_3_0_2; };
doesn't actually make sense from an object-oriented standpoint because, barring further method overriding, the value of Cabal_1_18_1_3 would be trying to override its
own value, which simply causes a loop exception. Thankfully, all these uses of self were really uses of super:
Cabal_1_18_1_3 = super.Cabal_1_18_1_3.override { deepseq = super.deepseq_1_3_0_2; };
In this notation the overridden Cabal_1_18_1_3 method calls the Cabal_1_18_1_3 of the super-class, which is a well-founded notion.
Below is an example use of the "extension" parameter:
{
  packageOverrides = pkgs : {
    testHaskellPackages = pkgs.haskellPackages.override {
      extension = self : super : {
        transformers_0_4_1_0 = self.cabal.mkDerivation (pkgs: {
          pname = "transformers";
          version = "0.4.1.0";
          sha256 = "0jlnz86f87jndv4sifg1zpv5b2g2cxy1x2575x727az6vyaarwwg";
          meta = {
            description = "Concrete functor and monad transformers";
            license = pkgs.stdenv.lib.licenses.bsd3;
            platforms = pkgs.ghc.meta.platforms;
            maintainers = [ pkgs.stdenv.lib.maintainers.andres ];
          };
        });
        transformers = self.transformers_0_4_1_0;
        lensFamilyCore = super.lensFamilyCore.override { transformers = self.transformers_0_3_0_0; };
      };
    };
  };
}
Notice the use of self in the body of the override of the transformers method, which references the newly defined transformers_0_4_1_0 method.
With the previous code, one would instead have to awkwardly write
transformers = super.finalReturn.transformers_0_4_1_0;
or use a rec clause, which would prevent further overriding of transformers_0_4_1_0.
This should make it easier to enable proprietary Pepper API plugins
through the nixpkgs config, so it can be easily installed using something
like:
nix-env -i chromium-stable
With something like:
{ chromium.enablePepperFlash = true; }
in ~/.nixpkgs/config.nix to enable Pepper-API-based Flash and to avoid
the browser wrapper from Firefox entirely.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is a small wrapper around fetchzip. It allows you to say:
src = fetchGitHub {
  owner = "NixOS";
  repo = "nix";
  rev = "924e19341a5ee488634bc9ce1ea9758ac496afc3"; # or a tag
  sha256 = "1ld1jc26wy0smkg63chvdzsppfw6zy1ykf3mmc50hkx397wcbl09";
};
This function downloads and unpacks a file in one fixed-output
derivation. This is primarily useful for dynamically generated zip
files, such as GitHub's /archive URLs, where the unpacked content of
the zip file doesn't change, but the zip file itself may (e.g. due to
minor changes in the compression algorithm, or changes in timestamps).
Fetchzip is implemented by extending fetchurl with a "postFetch" hook
that is executed after the file has been downloaded. This hook can
thus perform arbitrary checks or transformations on the downloaded
file.
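As a rough sketch of that mechanism (not the literal fetchzip source; argument handling is simplified), the wrapper might look like:
```
{ fetchurl, unzip }:

{ url, sha256, name ? baseNameOf url }:

fetchurl {
  inherit url sha256 name;
  recursiveHash = true;    # hash the unpacked tree, not the zip
  downloadToTemp = true;   # keep the raw download out of $out
  nativeBuildInputs = [ unzip ];
  postFetch = ''
    mkdir $out
    cd $out
    unpackFile $downloadedFile
  '';
}
```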
COPRTHR is an excellent little SDK implementing OpenCL and related
tech for regular multicore processors, as well as things like my new
Parallella (along with remote/networked OpenCL compute support).
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This allows fonts to be installed from anywhere in an unzipped file
rather than having to cd deep into the directory and come back out in
order for e.g. `forceCopy` to work correctly.
This ensures that the intermediate machine is shut down only after the
migration has finished writing the memory dump to disk. Otherwise,
depending on how fast the migration finishes, we could end up with empty
state files because the VM was shut down too early.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This ensures that the builder isn't waiting forever if the Windows VM
drops dead while we're waiting for the controller VM to signal that a
particular command has been executed on the Windows VM. That signal will
never arrive in such cases, so it doesn't make sense to wait for the timeout.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
allows writing neat expressions like this (as we're still generating an
expression string):
```
{
  build = haskellPackages.buildLocalCabalWithArgs {
    inherit src name;
    cabalDrvArgs = {
      jailbreak = false;
      doCheck = false;
    };
  };
}
```
without resorting to weird kung-fu like darcs does:
```
darcs = haskellPackages.darcs.override {
  # A variant of the Darcs derivation that contains only the
  # executable and thus has no dependencies on other Haskell packages.
  cabal = {
    mkDerivation = x: rec {
      final = haskellPackages.cabal.mkDerivation (self: (x final) // {
        isLibrary = false;
        configureFlags = "-f-library";
      });
    }.final;
  };
};
```
While here, make `jailbreak = true;` the default `cabalDrvArgs`
option.
The whole notion of per-compiler HP-compliant environments has failed
anyway and I'll try to get rid of that ASAP, so it feels pointless to
configure that stuff for GHC 7.8.2 to begin with.
This fixes the build for version 36, which I accidentally broke in commit
f6e31fadd8.
The reason this happened was that my Hydra didn't pick up the latest
commit and I actually tested and built the parent commit instead of the
update commit.
So, this commit is the real "builds fine, tested" for all channels.
Also, the sandbox client initialization has moved into
setuid_sandbox_client.cc, so we need to move the lookup of the
CHROMIUM_SANDBOX_BINARY_PATH environment variable there.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The system attribute was already there in the function head of the
shared update helper but it actually wasn't used and thus later the
import of <nixpkgs> was done using builtins.currentSystem instead of the
system attribute inherited from the source derivation.
Now we correctly propagate the attribute, so that even when running a
64-bit kernel you can run a 32-bit Chromium with binary plugins.
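Schematically, the fix is the following one-liner (sourceDrv here is a hypothetical name for the inherited source derivation):
```
# Before (wrong): the platform of the build host leaked in via
# builtins.currentSystem. After: inherit it from the source
# derivation, so a 32-bit Chromium works from a 64-bit kernel.
pkgs = import <nixpkgs> { inherit (sourceDrv) system; };
```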
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
systemd-tmpfiles-setup.service pulls in local-fs.target, which
interferes with NixOps' send-keys feature (since sshd.service depends
indirectly on sysinit.target). Since in NixOS we don't use
systemd-tmpfiles for creating files (that's done by activation scripts
and preStart scripts), it's not a problem to start it a bit later.
Backport: 14.04
These packages come with R, but if we install them as part of this build, then
we cannot update them without re-building R as well. Instead, we add those
packages to the R environment through the r-wrapper. This means that
recommended packages can be updated in cran-packages.nix, and those updates
have an effect on the installation without re-building R itself.
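Conceptually, the wrapper does something like the following (a sketch; the actual attribute and variable names in the r-wrapper may differ):
```
# Sketch: put the recommended packages from cran-packages.nix on
# R's site library path via a wrapper, instead of baking them
# into the R derivation itself.
runCommand "R-wrapper" { buildInputs = [ makeWrapper ]; } ''
  mkdir -p $out/bin
  makeWrapper ${R}/bin/R $out/bin/R \
    --prefix R_LIBS_SITE : ${lib.concatMapStringsSep ":" (p: "${p}/library") recommendedPackages}
''
```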
This fixes the issue of Chromium not being able to load the pulseaudio
library.
We could also propagate the build inputs, but it would end up being the
same as just directly linking against the library.
Thanks to @aristidb for noticing this in #2421:
https://github.com/NixOS/nixpkgs/pull/2421#issuecomment-42113656
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
'qgis', one of the few packages in nixpkgs that depend on 'qwt', fails to build with
qwt 6. So I'm not moving the default version away from 5.x. Also, not
changing the default allows easy/safe cherry-picking to the stable
branch.
compiler/simplCore/CoreMonad.lhs:835:10:
    Non type-variable argument in the constraint: MonadPlus IO
    (Use -XFlexibleContexts to permit this)
    In the context: (MonadPlus IO)
    While checking an instance declaration
    In the instance declaration for `A.Alternative CoreM'

compiler/simplCore/CoreMonad.lhs:841:10:
    Non type-variable argument in the constraint: MonadPlus IO
    (Use -XFlexibleContexts to permit this)
    In the context: (MonadPlus IO)
    While checking an instance declaration
    In the instance declaration for `MonadPlus CoreM'
These two expressions greatly simplify using the clang-analyzer or
Coverity static analyzer on your C/C++ projects. In fact, they are
identical to nixBuild in every way out of the box, and should 'Just
Work' provided your code can be compiled with Clang already.
The trick is that when running 'make', we actually just alias it to the
appropriate scan-build tool, and add a post-build hook that bundles up
the results appropriately and unaliases it.
For Clang, we put the results in $out/analysis and add an 'analysis'
report to $out/nix-support/hydra-build-products pointing to the result
HTML - this means that if the analyzer finds any bugs, the HTML results
will automatically show up in Hydra for easy viewing.
For Coverity, it's slightly different. Instead we run the build tool and
after we're done, we tar up the results in a format that Coverity Scan's
service understands. We put the tarball in $out/tarballs under the name
'foo-cov-int.xz' and add an entry for the file to hydra-build-products
as well for easy viewing.
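Usage is meant to mirror an ordinary derivation; here is a hedged sketch (assuming the builders are exposed as clangAnalysis and coverityAnalysis, with zlib standing in for real dependencies):
```
# Sketch: same arguments as the normal build, different builder.
analysis = clangAnalysis {
  name = "foo-analysis";
  src = ./.;
  buildInputs = [ zlib ];
};
```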
Of course for Coverity you must then upload the build. A Hydra plugin to
do this is on the way, and it will automatically pick up the
cov-int.tar.xz for uploading.
Note that coverityAnalysis requires allowUnfree = true, as well as the
cov-build tools, which you can download from https://scan.coverity.com -
they're not linked to your account or anything, it's just an annoying
registration wall.
Note this is a first draft. In particular, scan-build fixes the C/C++
compiler to be Clang, and it's perfectly reasonable to want to use Clang
for the analyzer but have scan-build invoke GCC instead.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
When using scan-build, you're often going to want to use it in the
context of a Nix expression with buildInputs, and the default wrapper
scripts will put things like include locations for those inputs in
$NIX_CFLAGS_COMPILE. Thus, scan-build also needs to pass them to the
analyzer - while the link flags aren't relevant, the include flags are.
This is because the analyzer executable that gets run by scan-build is
*not* clang-wrapper, but the actual clang executable, so it doesn't
implicitly add such arguments. The build is two-stage - it runs the real
clang wrapper once, and then the analyzer once.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Copy-pasta error, and compcert doesn't really make sense on Darwin or
64-bit Linux (it's callPackage_i686 anyway).
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Personal information management application that provides integrated mail,
calendaring, and address book functionality.
https://wiki.gnome.org/Apps/Evolution
By default, we now build all the optional nginx modules, including the
out-of-band ones like moreheaders and rtmp support.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This overhauls the Datadog module a bit to be much more useful. In
particular, it adds support for nginx and postgresql monitoring
integrations to dd-agent. These have to exist in separate files under
/etc/dd-agent, so the module just exposes them as separate options. In
the future, more integrations could be added this way.
In the process of doing this, I also had to rename the dd-agent user to
datadog. Note the UIDs did not change, so this is strictly backwards
compatible. The reason for this is to make it easier to create a
'datadog' postgres user with access to pg_stats, as 'dd-agent' typically
isn't a valid username. This allows the out of the box configurations to
be used.
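In NixOS configuration terms this looks roughly like the following sketch (option names are illustrative; check the module for the real ones):
```
services.dd-agent = {
  enable = true;
  # Sketch: each integration's config ends up in its own file
  # under /etc/dd-agent.
  nginxConfig = ''
    init_config:
    instances:
      - nginx_status_url: http://localhost/nginx_status/
  '';
};
```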
Signed-off-by: Austin Seipp <aseipp@pobox.com>
This reverts commit a2a398fbda. The
issue *does* still exist in GHC 7.8.2. Compiled binaries have no -rpath
into their own install directory ("$out") and thus cannot find their own
shared libraries. To work around this issue, we pass an explicit -rpath
argument at configure time. We do that only on Linux, though, because
-rpath is known to cause trouble on Darwin, which was the reason I
originally reverted that patch.
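A hedged sketch of that kind of workaround (the exact flag spelling in the actual derivation may differ):
```
# Illustrative only: on Linux, point the linker's rpath back into
# the package's own output so binaries find their shared libraries.
configureFlags = stdenv.lib.optional stdenv.isLinux
  "LDFLAGS=-Wl,-rpath,$out/lib";
```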