{ lib, config }:

stdenv:

check-env: don't execute check-meta.nix 15,000 times
Generated from https://github.com/NixOS/nix/pull/2761:
```
ns calls ns/call
- /home/grahamc/projects/github.com/NixOS/nixpkgs/pkgs/stdenv/generic/check-meta.nix:22:5 591200 15026 39.3451
+ /home/grahamc/projects/github.com/NixOS/nixpkgs/pkgs/stdenv/generic/check-meta.nix:22:5 8744 308 28.3896
```
more, generated by:
```
$ NIX_SHOW_STATS=1 NIX_COUNT_CALLS=1 nix-instantiate ./pkgs/top-level/release.nix -A unstable > before 2>&1
$ jq -r '.functions | map((.name + ":" + .file + ":" + (.line|tostring) + ":" + (.column|tostring) + " " + (.count|tostring))) | .[]' before | sort > before.list
```
applying this patch, then:
```
$ NIX_SHOW_STATS=1 NIX_COUNT_CALLS=1 nix-instantiate ./pkgs/top-level/release.nix -A unstable > after 2>&1
$ jq -r '.functions | map((.name + ":" + .file + ":" + (.line|tostring) + ":" + (.column|tostring) + " " + (.count|tostring))) | .[]' after | sort > after.list
```
and then diffing before.list and after.list to get:
```
calls
- :/home/grahamc/projects/github.com/NixOS/nixpkgs/pkgs/stdenv/generic/check-meta.nix:4:1 7513
+ :/home/grahamc/projects/github.com/NixOS/nixpkgs/pkgs/stdenv/generic/check-meta.nix:4:1 154
- mutuallyExclusive:/home/grahamc/projects/github.com/NixOS/nixpkgs/lib/lists.nix:658:23 7513
+ mutuallyExclusive:/home/grahamc/projects/github.com/NixOS/nixpkgs/lib/lists.nix:658:23 154
- mutuallyExclusive:/home/grahamc/projects/github.com/NixOS/nixpkgs/lib/lists.nix:658:26 7513
+ mutuallyExclusive:/home/grahamc/projects/github.com/NixOS/nixpkgs/lib/lists.nix:658:26 154
- onlyLicenses:/home/grahamc/projects/github.com/NixOS/nixpkgs/pkgs/stdenv/generic/check-meta.nix:21:18 15026
+ onlyLicenses:/home/grahamc/projects/github.com/NixOS/nixpkgs/pkgs/stdenv/generic/check-meta.nix:21:18 308
```
The following information is from `NIX_SHOW_STATS=1 GC_INITIAL_HEAP_SIZE=4g nix-env -f ./outpaths.nix -qaP --no-name --out-path --arg checkMeta true`:
| stat | before | after | Δ | Δ% |
|:---------------------------|---------------:|---------------:|:----------------|--------:|
| **cpuTime** | 179.915 | 145.543 | 🡖 34.372 | -19.10% |
| **envs-bytes** | 3,900,878,824 | 3,599,483,208 | 🡖 301,395,616 | -7.73% |
| **envs-elements** | 214,426,071 | 185,881,709 | 🡖 28,544,362 | -13.31% |
| **envs-number** | 136,591,891 | 132,026,846 | 🡖 4,565,045 | -3.34% |
| **gc-heapSize** | 11,400,048,640 | 12,314,890,240 | 🡕 914,841,600 | 8.02% |
| **gc-totalBytes** | 25,976,902,560 | 24,510,740,176 | 🡖 1,466,162,384 | -5.64% |
| **list-bytes** | 1,665,290,080 | 1,665,290,080 | 0 | |
| **list-concats** | 7,264,417 | 7,264,417 | 0 | |
| **list-elements** | 208,161,260 | 208,161,260 | 0 | |
| **nrAvoided** | 191,359,386 | 179,693,661 | 🡖 11,665,725 | -6.10% |
| **nrFunctionCalls** | 119,665,062 | 116,348,547 | 🡖 3,316,515 | -2.77% |
| **nrLookups** | 80,996,257 | 76,069,825 | 🡖 4,926,432 | -6.08% |
| **nrOpUpdateValuesCopied** | 213,930,649 | 213,930,649 | 0 | |
| **nrOpUpdates** | 12,025,937 | 12,025,937 | 0 | |
| **nrPrimOpCalls** | 88,105,604 | 86,451,598 | 🡖 1,654,006 | -1.88% |
| **nrThunks** | 196,842,044 | 175,126,701 | 🡖 21,715,343 | -11.03% |
| **sets-bytes** | 7,678,425,776 | 7,285,767,928 | 🡖 392,657,848 | -5.11% |
| **sets-elements** | 310,241,340 | 294,373,227 | 🡖 15,868,113 | -5.11% |
| **sets-number** | 29,079,202 | 27,601,310 | 🡖 1,477,892 | -5.08% |
| **sizes-Attr** | 24 | 24 | 0 | |
| **sizes-Bindings** | 8 | 8 | 0 | |
| **sizes-Env** | 16 | 16 | 0 | |
| **sizes-Value** | 24 | 24 | 0 | |
| **symbols-bytes** | 16,474,666 | 16,474,676 | 🡕 10 | 0.00% |
| **symbols-number** | 376,426 | 376,427 | 🡕 1 | 0.00% |
| **values-bytes** | 6,856,506,288 | 6,316,585,560 | 🡖 539,920,728 | -7.87% |
| **values-number** | 285,687,762 | 263,191,065 | 🡖 22,496,697 | -7.87% |
The following information is from `NIX_SHOW_STATS=1 GC_INITIAL_HEAP_SIZE=4g nix-instantiate ./nixos/release-combined.nix -A tested`:
| stat | before | after | Δ | Δ% |
|:---------------------------|---------------:|---------------:|:----------------|-------:|
| **cpuTime** | 256.071 | 237.531 | 🡖 18.54 | -7.24% |
| **envs-bytes** | 7,111,004,192 | 7,041,478,520 | 🡖 69,525,672 | -0.98% |
| **envs-elements** | 346,236,940 | 339,588,487 | 🡖 6,648,453 | -1.92% |
| **envs-number** | 271,319,292 | 270,298,164 | 🡖 1,021,128 | -0.38% |
| **gc-heapSize** | 8,995,291,136 | 10,110,009,344 | 🡕 1,114,718,208 | 12.39% |
| **gc-totalBytes** | 37,172,737,408 | 36,878,391,888 | 🡖 294,345,520 | -0.79% |
| **list-bytes** | 1,886,162,656 | 1,886,163,472 | 🡕 816 | 0.00% |
| **list-concats** | 6,898,114 | 6,898,114 | 0 | |
| **list-elements** | 235,770,332 | 235,770,434 | 🡕 102 | 0.00% |
| **nrAvoided** | 328,829,821 | 326,618,157 | 🡖 2,211,664 | -0.67% |
| **nrFunctionCalls** | 240,850,845 | 239,998,495 | 🡖 852,350 | -0.35% |
| **nrLookups** | 144,849,632 | 142,126,339 | 🡖 2,723,293 | -1.88% |
| **nrOpUpdateValuesCopied** | 251,032,504 | 251,032,504 | 0 | |
| **nrOpUpdates** | 17,903,110 | 17,903,110 | 0 | |
| **nrPrimOpCalls** | 140,674,913 | 139,485,975 | 🡖 1,188,938 | -0.85% |
| **nrThunks** | 294,643,131 | 288,678,022 | 🡖 5,965,109 | -2.02% |
| **sets-bytes** | 9,464,322,192 | 9,456,172,048 | 🡖 8,150,144 | -0.09% |
| **sets-elements** | 377,474,889 | 377,134,877 | 🡖 340,012 | -0.09% |
| **sets-number** | 50,615,607 | 50,616,875 | 🡕 1,268 | 0.00% |
| **sizes-Attr** | 24 | 24 | 0 | |
| **sizes-Bindings** | 8 | 8 | 0 | |
| **sizes-Env** | 16 | 16 | 0 | |
| **sizes-Value** | 24 | 24 | 0 | |
| **symbols-bytes** | 3,147,102 | 3,147,064 | 🡖 38 | -0.00% |
| **symbols-number** | 82,819 | 82,819 | 0 | |
| **values-bytes** | 11,147,448,768 | 10,996,111,512 | 🡖 151,337,256 | -1.36% |
| **values-number** | 464,477,032 | 458,171,313 | 🡖 6,305,719 | -1.36% |
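The shape of the fix is to evaluate the shared check machinery once per stdenv rather than once per derivation — in this file, that is the `checkMeta = import ./check-meta.nix { ... };` binding in the outer `let`. A standalone miniature of the hoisting pattern (all names and values below are illustrative stand-ins, not the real check-meta.nix API):
```
let
  # Stand-in for the real lib; illustrative only.
  lib = { hasName = m: m ? name; };

  # Hoisted: evaluated once for the whole package set, analogous to the
  # `checkMeta = import ./check-meta.nix { ... };` binding in this file.
  checkMeta = { check = meta: lib.hasName meta; };

  # The per-package function only *uses* the shared checkMeta; it no longer
  # re-evaluates the checking machinery on every call.
  mkPackage = attrs: attrs // { metaOk = checkMeta.check (attrs.meta or { }); };
in
  map mkPackage [ { pname = "a"; meta = { name = "a"; }; } { pname = "b"; } ]
```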
stdenv: Improve performance
| stat | before | after | Δ | Δ% |
|------------------------|-----------------|-----------------|-----------------|---------|
| cpuTime | 513.67 | 507.77 | ↘ 5.90 | -1.15% |
| envs-bytes | 20,682,847,968 | 20,628,961,616 | ↘ 53,886,352 | -0.26% |
| envs-elements | 1,054,735,104 | 1,051,395,620 | ↘ 3,339,484 | -0.32% |
| envs-number | 765,310,446 | 763,612,291 | ↘ 1,698,155 | -0.22% |
| gc-heapSize | 53,439,602,688 | 51,711,545,344 | ↘ 1,728,057,344 | -3.23% |
| gc-totalBytes | 113,062,066,672 | 112,139,998,240 | ↘ 922,068,432 | -0.82% |
| list-bytes | 3,118,249,784 | 3,118,249,784 | 0 | |
| list-concats | 52,834,140 | 52,834,140 | 0 | |
| list-elements | 389,781,223 | 389,781,223 | 0 | |
| nrAvoided | 968,097,988 | 991,889,795 | ↗ 23,791,807 | 2.46% |
| nrFunctionCalls | 697,259,792 | 697,259,792 | 0 | |
| nrLookups | 510,257,062 | 338,275,331 | ↘ 171,981,731 | -33.70% |
| nrOpUpdateValuesCopied | 1,446,690,216 | 1,446,690,216 | 0 | |
| nrOpUpdates | 68,504,034 | 68,504,034 | 0 | |
| nrPrimOpCalls | 429,464,805 | 429,464,805 | 0 | |
| nrThunks | 1,009,240,391 | 982,109,100 | ↘ 27,131,291 | -2.69% |
| sets-bytes | 33,524,722,928 | 33,524,722,928 | 0 | |
| sets-elements | 1,938,309,212 | 1,938,309,212 | 0 | |
| sets-number | 156,985,971 | 156,985,971 | 0 | |
| sizes-Attr | 16 | 16 | 0 | |
| sizes-Bindings | 16 | 16 | 0 | |
| sizes-Env | 16 | 16 | 0 | |
| sizes-Value | 24 | 24 | 0 | |
| symbols-bytes | 2,151,298 | 2,151,298 | 0 | |
| symbols-number | 159,707 | 159,707 | 0 | |
| values-bytes | 30,218,194,248 | 29,567,043,264 | ↘ 651,150,984 | -2.15% |
| values-number | 1,259,091,427 | 1,231,960,136 | ↘ 27,131,291 | -2.15% |
> Accessing the lexical scope directly should be more efficient, yes, because it changes from a binary search (many lookups) to just two memory accesses
> correction: one short linked list + one array access
> oh and you had to do the lexical scope lookup anyway for lib itself
> so it really does save a binary search at basically no extra cost
- roberth
after seeing the stats
> Oooh nice. I did not consider that more of the maybeThunk optimization becomes effective (nrAvoided). Those lookups also caused allocations!
- roberth
Left `lib.generators` and `lib.strings` alone because they're only used
once.
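The change itself is mechanical: instead of resolving `lib.foo` with an attrset lookup at every use site, the needed names are brought into the lexical scope once with `inherit (lib) ...;`, after which each use is a plain variable access. A standalone miniature (the `lib` here is a stand-in, not the real nixpkgs lib):
```
let
  lib = {
    isDerivation = x: (x ? type) && x.type == "derivation";
    optional = cond: elem: if cond then [ elem ] else [ ];
  };

  # Before: every use performs attribute lookups on `lib`.
  before = drv: lib.optional (lib.isDerivation drv) drv;

  # After: one inherit; later uses are plain lexical-scope accesses.
  inherit (lib) isDerivation optional;
  after = drv: optional (isDerivation drv) drv;
in
  { beforeLen = builtins.length (before { type = "derivation"; });
    afterLen  = builtins.length (after { });
  }
```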
let
  # Lib attributes are inherited to the lexical scope for performance reasons.
  inherit (lib)
any
assertMsg
attrNames
boolToString
chooseDevOutputs
concatLists
concatMap
concatMapStrings
concatStringsSep
elem
elemAt
extendDerivation
filter
findFirst
flip
head
imap1
isAttrs
isBool
isDerivation
isInt
isList
isString
mapAttrs
mapNullable
optional
optionalAttrs
optionalString
optionals
remove
splitString
subtractLists
unique
;

  checkMeta = import ./check-meta.nix {
    inherit lib config;
    # Nix itself uses the `system` field of a derivation to decide where
    # to build it. This is a bit confusing for cross compilation.
    inherit (stdenv) hostPlatform;
  };

  # Based off lib.makeExtensible, with modifications:
  makeDerivationExtensible = rattrs:
    let
      # NOTE: The following is a hint that will be printed by the Nix cli when
      # encountering an infinite recursion. It must not be formatted into
      # separate lines, because Nix would only show the last line of the comment.

      # An infinite recursion here can be caused by having the attribute names of expression `e` in `.overrideAttrs(finalAttrs: previousAttrs: e)` depend on `finalAttrs`. Only the attribute values of `e` can depend on `finalAttrs`.
      args = rattrs (args // { inherit finalPackage overrideAttrs; });
      #              ^^^^

      overrideAttrs = f0:
        let
          f = self: super:
            # Convert f0 to an overlay. Legacy is:
            #   overrideAttrs (super: {})
            # We want to introduce self. We follow the convention of overlays:
            #   overrideAttrs (self: super: {})
            # Which means the first parameter can be either self or super.
            # This is surprising, but far better than the confusion that would
            # arise from flipping an overlay's parameters in some cases.
            let x = f0 super;
            in
              if builtins.isFunction x
              then
                # Can't reuse `x`, because `self` comes first.
                # Looks inefficient, but `f0 super` was a cheap thunk.
                f0 self super
              else x;
        in
          makeDerivationExtensible
            (self: let super = rattrs self; in super // (if builtins.isFunction f0 || f0 ? __functor then f self super else f0));
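      # Usage sketch (illustrative; `pkg` stands for any package produced by this
      # mkDerivation). Both call forms are accepted thanks to the conversion above:
      #   pkg.overrideAttrs (prev: { doCheck = false; })
      #   pkg.overrideAttrs (finalAttrs: prev: { pname = prev.pname + "-patched"; })
      # Note that the attribute *names* of the returned set must not depend on
      # `finalAttrs` (see the infinite-recursion hint above); only values may.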

      finalPackage =
        mkDerivationSimple overrideAttrs args;

    in finalPackage;

  #makeDerivationExtensibleConst = attrs: makeDerivationExtensible (_: attrs);
  # but pre-evaluated for a slight improvement in performance.
  makeDerivationExtensibleConst = attrs:
    mkDerivationSimple
      (f0:
        let
          f = self: super:
            let x = f0 super;
            in
              if builtins.isFunction x
              then
                f0 self super
              else x;
        in
          makeDerivationExtensible (self: attrs // (if builtins.isFunction f0 || f0 ? __functor then f self attrs else f0)))
      attrs;

  mkDerivationSimple = overrideAttrs:

# `mkDerivation` wraps the builtin `derivation` function to
# produce derivations that use this stdenv and its shell.
#
# See also:
#
# * https://nixos.org/nixpkgs/manual/#sec-using-stdenv
#   Details on how to use this mkDerivation function
#
# * https://nixos.org/manual/nix/stable/expressions/derivations.html#derivations
#   Explanation about derivations in general
{

# These types of dependencies are all exhaustively documented in
# the "Specifying Dependencies" section of the "Standard
# Environment" chapter of the Nixpkgs manual.

# TODO(@Ericson2314): Stop using legacy dep attribute names
# host offset -> target offset
depsBuildBuild ? [ ] # -1 -> -1
, depsBuildBuildPropagated ? [ ] # -1 -> -1
, nativeBuildInputs ? [ ] # -1 -> 0 N.B. Legacy name
, propagatedNativeBuildInputs ? [ ] # -1 -> 0 N.B. Legacy name
, depsBuildTarget ? [ ] # -1 -> 1
, depsBuildTargetPropagated ? [ ] # -1 -> 1
, depsHostHost ? [ ] # 0 -> 0
, depsHostHostPropagated ? [ ] # 0 -> 0
, buildInputs ? [ ] # 0 -> 1 N.B. Legacy name
, propagatedBuildInputs ? [ ] # 0 -> 1 N.B. Legacy name
, depsTargetTarget ? [ ] # 1 -> 1
, depsTargetTargetPropagated ? [ ] # 1 -> 1
, checkInputs ? [ ]
, installCheckInputs ? [ ]
, nativeCheckInputs ? [ ]
, nativeInstallCheckInputs ? [ ]

# Configure Phase
, configureFlags ? [ ]
, cmakeFlags ? [ ]
, mesonFlags ? [ ]

, # Target is not included by default because most programs don't care.
  # Including it then would cause needless mass rebuilds.
  #
  # TODO(@Ericson2314): Make [ "build" "host" ] always the default / resolve #87909
  configurePlatforms ? optionals
    (stdenv.hostPlatform != stdenv.buildPlatform || config.configurePlatformsByDefault)
    [ "build" "host" ]

# TODO(@Ericson2314): Make unconditional / resolve #33599
# Check phase
, doCheck ? config.doCheckByDefault or false

# TODO(@Ericson2314): Make unconditional / resolve #33599
# InstallCheck phase
, doInstallCheck ? config.doCheckByDefault or false

, # TODO(@Ericson2314): Make always true and remove / resolve #178468
  strictDeps ? if config.strictDepsByDefault then true else stdenv.hostPlatform != stdenv.buildPlatform

, enableParallelBuilding ? config.enableParallelBuildingByDefault

, meta ? { }
, passthru ? { }
, pos ? # position used in error messages and for meta.position
    (if attrs.meta.description or null != null
      then builtins.unsafeGetAttrPos "description" attrs.meta
      else if attrs.version or null != null
        then builtins.unsafeGetAttrPos "version" attrs
        else builtins.unsafeGetAttrPos "name" attrs)
, separateDebugInfo ? false
, outputs ? [ "out" ]
, __darwinAllowLocalNetworking ? false
, __impureHostDeps ? [ ]
, __propagatedImpureHostDeps ? [ ]
, sandboxProfile ? ""
, propagatedSandboxProfile ? ""

, hardeningEnable ? [ ]
, hardeningDisable ? [ ]

, patches ? [ ]

, __contentAddressed ?
  (! attrs ? outputHash) # Fixed-output drvs can't be content addressed too
  && config.contentAddressedByDefault

  # Experimental. For simple packages mostly just works,
  # but for anything complex, be prepared to debug if enabling.
, __structuredAttrs ? config.structuredAttrsByDefault or false

, env ? { }

, ... } @ attrs:
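
# For illustration only — a typical call through `stdenv.mkDerivation`, using
# attribute names documented above; the concrete values and helper packages
# (fetchurl, pkg-config, zlib) are hypothetical here:
#   stdenv.mkDerivation {
#     pname = "hello";
#     version = "2.12";
#     src = fetchurl { url = "mirror://gnu/hello/hello-2.12.tar.gz"; hash = "sha256-..."; };
#     nativeBuildInputs = [ pkg-config ];
#     buildInputs = [ zlib ];
#     doCheck = true;
#   }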
# Policy on acceptable hash types in nixpkgs
assert attrs ? outputHash -> (
let algo =
    attrs.outputHashAlgo or (head (splitString "-" attrs.outputHash));
  in
  if algo == "md5" then
    throw "Rejected insecure ${algo} hash '${attrs.outputHash}'"
  else
    true
);

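# Illustration of the assert above: for an SRI-style hash the algorithm is
# recovered as `head (splitString "-" "sha256-...")` == "sha256"; an explicit
# `outputHashAlgo = "md5"` (or an "md5-..." hash) is rejected, while any other
# algorithm passes through unchanged.
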
let
  # TODO(@oxij, @Ericson2314): This is here to keep the old semantics, remove when
  # no package has `doCheck = true`.
  doCheck' = doCheck && stdenv.buildPlatform.canExecute stdenv.hostPlatform;
  doInstallCheck' = doInstallCheck && stdenv.buildPlatform.canExecute stdenv.hostPlatform;

  separateDebugInfo' = separateDebugInfo && stdenv.hostPlatform.isLinux;
  outputs' = outputs ++ optional separateDebugInfo' "debug";

  # Turn a derivation into its outPath without a string context attached.
  # See the comment at the usage site.
  unsafeDerivationToUntrackedOutpath = drv:
    if isDerivation drv
    then builtins.unsafeDiscardStringContext drv.outPath
    else drv;

  noNonNativeDeps = builtins.length (depsBuildTarget ++ depsBuildTargetPropagated
                                  ++ depsHostHost ++ depsHostHostPropagated
                                  ++ buildInputs ++ propagatedBuildInputs
                                  ++ depsTargetTarget ++ depsTargetTargetPropagated) == 0;
  dontAddHostSuffix = attrs ? outputHash && !noNonNativeDeps || !stdenv.hasCC;

  hardeningDisable' = if any (x: x == "fortify") hardeningDisable
    # disabling fortify implies fortify3 should also be disabled
    then unique (hardeningDisable ++ [ "fortify3" ])
    else hardeningDisable;
  knownHardeningFlags = [
    "bindnow"
    "format"
    "fortify"
    "fortify3"
    "pic"
    "pie"
    "relro"
    "stackprotector"
    "strictoverflow"
  ];
  defaultHardeningFlags = stdenv.cc.defaultHardeningFlags or
    # fallback safe-ish set of flags
    (remove "pie" knownHardeningFlags);
  enabledHardeningOptions =
    if builtins.elem "all" hardeningDisable'
    then [ ]
    else subtractLists hardeningDisable' (defaultHardeningFlags ++ hardeningEnable);
  # hardeningDisable additionally supports "all".
  erroneousHardeningFlags = subtractLists knownHardeningFlags (hardeningEnable ++ remove "all" hardeningDisable);
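  # Worked example (illustrative values): with hardeningEnable = [ "pie" ] and
  # hardeningDisable = [ "fortify" ], hardeningDisable' becomes
  # [ "fortify" "fortify3" ] (fortify implies fortify3 above), so with the
  # fallback defaults enabledHardeningOptions evaluates to
  # [ "bindnow" "format" "pic" "relro" "stackprotector" "strictoverflow" "pie" ].
  # Flags outside knownHardeningFlags end up in erroneousHardeningFlags and
  # abort the evaluation below.
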
  checkDependencyList = checkDependencyList' [ ];
  checkDependencyList' = positions: name: deps: flip imap1 deps (index: dep:
    if isDerivation dep || dep == null || builtins.isString dep || builtins.isPath dep then dep
    else if isList dep then checkDependencyList' ([index] ++ positions) name dep
    else throw "Dependency is not of a valid type: ${concatMapStrings (ix: "element ${toString ix} of ") ([index] ++ positions)}${name} for ${attrs.name or attrs.pname}");
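  # For illustration: with buildInputs = [ [ 42 ] ], the inner 42 is neither a
  # derivation, null, string, nor path, so the check above throws
  #   "Dependency is not of a valid type: element 1 of element 1 of buildInputs for <name>".
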
in if builtins.length erroneousHardeningFlags != 0
then abort ("mkDerivation was called with unsupported hardening flags: " + lib.generators.toPretty { } {
  inherit erroneousHardeningFlags hardeningDisable hardeningEnable knownHardeningFlags;
})
else let
  doCheck = doCheck';
  doInstallCheck = doInstallCheck';

  buildInputs' = buildInputs
         ++ optionals doCheck checkInputs
         ++ optionals doInstallCheck installCheckInputs;

nativeBuildInputs' = nativeBuildInputs
         ++ optional separateDebugInfo' ../../build-support/setup-hooks/separate-debug-info.sh
         ++ optional stdenv.hostPlatform.isWindows ../../build-support/setup-hooks/win-dll-link.sh
         ++ optionals doCheck nativeCheckInputs
         ++ optionals doInstallCheck nativeInstallCheckInputs;

  outputs = outputs';

  references = nativeBuildInputs ++ buildInputs
            ++ propagatedNativeBuildInputs ++ propagatedBuildInputs;

  dependencies = map (map chooseDevOutputs) [
    [
      (map (drv: drv.__spliced.buildBuild or drv) (checkDependencyList "depsBuildBuild" depsBuildBuild))
      (map (drv: drv.__spliced.buildHost or drv) (checkDependencyList "nativeBuildInputs" nativeBuildInputs'))
      (map (drv: drv.__spliced.buildTarget or drv) (checkDependencyList "depsBuildTarget" depsBuildTarget))
    ]
    [
      (map (drv: drv.__spliced.hostHost or drv) (checkDependencyList "depsHostHost" depsHostHost))
      (map (drv: drv.__spliced.hostTarget or drv) (checkDependencyList "buildInputs" buildInputs'))
    ]
    [
      (map (drv: drv.__spliced.targetTarget or drv) (checkDependencyList "depsTargetTarget" depsTargetTarget))
    ]
  ];
  propagatedDependencies = map (map chooseDevOutputs) [
    [
      (map (drv: drv.__spliced.buildBuild or drv) (checkDependencyList "depsBuildBuildPropagated" depsBuildBuildPropagated))
      (map (drv: drv.__spliced.buildHost or drv) (checkDependencyList "propagatedNativeBuildInputs" propagatedNativeBuildInputs))
      (map (drv: drv.__spliced.buildTarget or drv) (checkDependencyList "depsBuildTargetPropagated" depsBuildTargetPropagated))
    ]
    [
      (map (drv: drv.__spliced.hostHost or drv) (checkDependencyList "depsHostHostPropagated" depsHostHostPropagated))
      (map (drv: drv.__spliced.hostTarget or drv) (checkDependencyList "propagatedBuildInputs" propagatedBuildInputs))
    ]
    [
      (map (drv: drv.__spliced.targetTarget or drv) (checkDependencyList "depsTargetTargetPropagated" depsTargetTargetPropagated))
    ]
  ];

  computedSandboxProfile =
concatMap (input: input.__propagatedSandboxProfile or [ ])
2021-08-18 17:19:30 +00:00
(stdenv.extraNativeBuildInputs
++ stdenv.extraBuildInputs
2023-11-11 17:15:20 +00:00
++ concatLists dependencies);
2021-08-18 17:19:30 +00:00
computedPropagatedSandboxProfile =
2023-11-11 17:15:20 +00:00
concatMap (input: input.__propagatedSandboxProfile or [ ])
(concatLists propagatedDependencies);
2021-08-18 17:19:30 +00:00
computedImpureHostDeps =
2023-11-11 17:15:20 +00:00
unique (concatMap (input: input.__propagatedImpureHostDeps or [ ])
2021-08-18 17:19:30 +00:00
(stdenv.extraNativeBuildInputs
++ stdenv.extraBuildInputs
2023-11-11 17:15:20 +00:00
++ concatLists dependencies));
2021-08-18 17:19:30 +00:00
computedPropagatedImpureHostDeps =
2023-11-11 17:15:20 +00:00
unique (concatMap (input: input.__propagatedImpureHostDeps or [ ])
(concatLists propagatedDependencies));
2021-08-18 17:19:30 +00:00
2023-11-11 17:15:20 +00:00
envIsExportable = isAttrs env && !isDerivation env;
2022-05-31 21:34:59 +00:00
2021-08-18 17:19:30 +00:00
derivationArg =
(removeAttrs attrs
2022-05-31 21:34:59 +00:00
( [ " m e t a " " p a s s t h r u " " p o s "
2021-08-18 17:19:30 +00:00
" c h e c k I n p u t s " " i n s t a l l C h e c k I n p u t s "
2022-12-18 12:00:00 +00:00
" n a t i v e C h e c k I n p u t s " " n a t i v e I n s t a l l C h e c k I n p u t s "
2023-05-02 21:24:57 +00:00
" _ _ c o n t e n t A d d r e s s e d "
2021-08-18 17:19:30 +00:00
" _ _ d a r w i n A l l o w L o c a l N e t w o r k i n g "
" _ _ i m p u r e H o s t D e p s " " _ _ p r o p a g a t e d I m p u r e H o s t D e p s "
2022-05-31 21:34:59 +00:00
" s a n d b o x P r o f i l e " " p r o p a g a t e d S a n d b o x P r o f i l e " ]
2023-11-11 17:15:20 +00:00
++ optional (__structuredAttrs || envIsExportable) "env"))
// (optionalAttrs (attrs ? name || (attrs ? pname && attrs ? version)) {
2021-08-18 17:19:30 +00:00
name =
let
# Indicate the host platform of the derivation if cross compiling.
# Fixed-output derivations like source tarballs shouldn't get a host
# suffix. But we have some weird ones with run-time deps that are
# just used for their side-effects. Those might as well, since the
# hash can't be the same. See #32986.
2023-11-11 17:15:20 +00:00
hostSuffix = optionalString
2021-08-18 17:19:30 +00:00
(stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix)
"-${stdenv.hostPlatform.config}";
2023-02-18 18:45:18 +00:00
2021-08-18 17:19:30 +00:00
# Disambiguate statically built packages. This was originally
# introduced as a means to prevent nix-env from getting confused between
# nix and nixStatic. This should also be achieved by moving the
# hostSuffix before the version, so we could contemplate removing
# it again.
2023-11-11 17:15:20 +00:00
staticMarker = optionalString stdenv.hostPlatform.isStatic "-static";
2021-08-18 17:19:30 +00:00
in
2022-03-30 08:20:44 +00:00
lib.strings.sanitizeDerivationName (
2021-08-18 17:19:30 +00:00
if attrs ? name
then attrs.name + hostSuffix
2023-02-18 18:45:18 +00:00
else
# we cannot coerce null to a string below
2023-11-11 17:15:20 +00:00
assert assertMsg (attrs ? version && attrs.version != null) "The ‘version’ attribute cannot be null.";
2023-02-18 18:45:18 +00:00
" ${ attrs . pname } ${ staticMarker } ${ hostSuffix } - ${ attrs . version } "
2022-03-30 08:20:44 +00:00
);
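For illustration, here is how those pieces combine for a hypothetical cross-compiled static package; the pname, version, and platform config below are made-up values, not the output of a real evaluation:

```
# Illustrative values only; hostSuffix and staticMarker are empty strings in
# the common native, non-static case, leaving the usual "pname-version" name.
let
  attrs = { pname = "hello"; version = "2.12"; };
  hostSuffix = "-aarch64-unknown-linux-musl";   # non-empty only when cross compiling
  staticMarker = "-static";                     # non-empty only for a static hostPlatform
in
  "${attrs.pname}${staticMarker}${hostSuffix}-${attrs.version}"
  # => "hello-static-aarch64-unknown-linux-musl-2.12"
```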
2023-11-11 17:15:20 +00:00
}) // optionalAttrs __structuredAttrs { env = checkedEnv; } // {
2021-08-18 17:19:30 +00:00
builder = attrs.realBuilder or stdenv.shell;
args = attrs.args or [ "-e" (attrs.builder or ./default-builder.sh) ];
inherit stdenv;
# The `system` attribute of a derivation has special meaning to Nix.
# Derivations set it to choose what sort of machine could be used to
# execute the build. The build platform entirely determines this,
# indeed more finely than Nix knows or cares about. The `system`
# attribute of `buildPlatform` matches Nix's degree of specificity
# exactly.
inherit (stdenv.buildPlatform) system;
userHook = config.stdenv.userHook or null;
__ignoreNulls = true;
2022-05-31 21:34:59 +00:00
inherit __structuredAttrs strictDeps;
2021-08-18 17:19:30 +00:00
2023-11-11 17:15:20 +00:00
depsBuildBuild = elemAt (elemAt dependencies 0) 0;
nativeBuildInputs = elemAt (elemAt dependencies 0) 1;
depsBuildTarget = elemAt (elemAt dependencies 0) 2;
depsHostHost = elemAt (elemAt dependencies 1) 0;
buildInputs = elemAt (elemAt dependencies 1) 1;
depsTargetTarget = elemAt (elemAt dependencies 2) 0;
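The `dependencies` value these `elemAt` calls pick apart is a list of lists, grouped by build/host/target offset, so a pair of indices recovers the originally named dependency attribute. A toy example of the indexing, with placeholder strings standing in for the per-offset dependency lists:

```
# Toy illustration of the elemAt indexing above; the strings are placeholders
# for the checked dependency lists, not real derivations.
let
  inherit (builtins) elemAt;
  dependencies = [
    [ "depsBuildBuild" "nativeBuildInputs" "depsBuildTarget" ]
    [ "depsHostHost" "buildInputs" ]
    [ "depsTargetTarget" ]
  ];
in
  elemAt (elemAt dependencies 0) 1  # => "nativeBuildInputs"
```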
2021-08-18 17:19:30 +00:00
2023-11-11 17:15:20 +00:00
depsBuildBuildPropagated = elemAt (elemAt propagatedDependencies 0) 0;
propagatedNativeBuildInputs = elemAt (elemAt propagatedDependencies 0) 1;
depsBuildTargetPropagated = elemAt (elemAt propagatedDependencies 0) 2;
depsHostHostPropagated = elemAt (elemAt propagatedDependencies 1) 0;
propagatedBuildInputs = elemAt (elemAt propagatedDependencies 1) 1;
depsTargetTargetPropagated = elemAt (elemAt propagatedDependencies 2) 0;
2021-08-18 17:19:30 +00:00
# This parameter is sometimes a string, sometimes null, and sometimes a list, yuck
2023-11-11 17:15:20 +00:00
configureFlags =
2023-06-22 17:24:23 +00:00
configureFlags
2021-08-18 17:19:30 +00:00
++ optional (elem "build" configurePlatforms) "--build=${stdenv.buildPlatform.config}"
++ optional (elem "host" configurePlatforms) "--host=${stdenv.hostPlatform.config}"
++ optional (elem "target" configurePlatforms) "--target=${stdenv.targetPlatform.config}";
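Concretely, when `configurePlatforms` contains `"build"` and `"host"` and the platform configs differ, the expression above contributes the usual Autotools cross triplet flags. A standalone sketch with example configs (not values from a real stdenv):

```
# Standalone sketch with made-up platform configs; `optional cond x` yields
# [ x ] when cond holds and [ ] otherwise, so disabled entries simply vanish.
let
  inherit (builtins) elem;
  optional = cond: x: if cond then [ x ] else [ ];
  configurePlatforms = [ "build" "host" ];
  buildConfig = "x86_64-unknown-linux-gnu";
  hostConfig = "aarch64-unknown-linux-gnu";
in
  optional (elem "build" configurePlatforms) "--build=${buildConfig}"
  ++ optional (elem "host" configurePlatforms) "--host=${hostConfig}"
  # => [ "--build=x86_64-unknown-linux-gnu" "--host=aarch64-unknown-linux-gnu" ]
```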
2022-07-04 01:25:04 +00:00
cmakeFlags =
2023-06-22 17:24:23 +00:00
cmakeFlags
2023-11-11 17:15:20 +00:00
++ optionals (stdenv.hostPlatform != stdenv.buildPlatform) ([
"-DCMAKE_SYSTEM_NAME=${findFirst isString "Generic" (optional (!stdenv.hostPlatform.isRedox) stdenv.hostPlatform.uname.system)}"
] ++ optionals (stdenv.hostPlatform.uname.processor != null) [
2023-06-22 17:24:23 +00:00
" - D C M A K E _ S Y S T E M _ P R O C E S S O R = ${ stdenv . hostPlatform . uname . processor } "
2023-11-11 17:15:20 +00:00
] ++ optionals (stdenv.hostPlatform.uname.release != null) [
2023-06-22 17:24:23 +00:00
" - D C M A K E _ S Y S T E M _ V E R S I O N = ${ stdenv . hostPlatform . uname . release } "
2023-11-11 17:15:20 +00:00
] ++ optionals (stdenv.hostPlatform.isDarwin) [
2023-06-22 17:24:23 +00:00
" - D C M A K E _ O S X _ A R C H I T E C T U R E S = ${ stdenv . hostPlatform . darwinArch } "
2023-11-11 17:15:20 +00:00
] ++ optionals (stdenv.buildPlatform.uname.system != null) [
2023-06-22 17:24:23 +00:00
" - D C M A K E _ H O S T _ S Y S T E M _ N A M E = ${ stdenv . buildPlatform . uname . system } "
2023-11-11 17:15:20 +00:00
] ++ optionals (stdenv.buildPlatform.uname.processor != null) [
2023-06-22 17:24:23 +00:00
" - D C M A K E _ H O S T _ S Y S T E M _ P R O C E S S O R = ${ stdenv . buildPlatform . uname . processor } "
2023-11-11 17:15:20 +00:00
] ++ optionals (stdenv.buildPlatform.uname.release != null) [
2023-06-22 17:24:23 +00:00
" - D C M A K E _ H O S T _ S Y S T E M _ V E R S I O N = ${ stdenv . buildPlatform . uname . release } "
2023-11-11 17:15:20 +00:00
] ++ optionals (stdenv.buildPlatform.canExecute stdenv.hostPlatform) [
2023-08-27 22:19:58 +00:00
" - D C M A K E _ C R O S S C O M P I L I N G _ E M U L A T O R = e n v "
pkgsStatic.stdenv: fix custom CMake LINKER_LANGUAGE
If a CMake target has a non-default LINKER_LANGUAGE set, CMake will
manually add the libraries it has detected as implicitly linked by that
language's compiler. When it does this, it'll pass -Bstatic and
-Bdynamic options based on the vibes it gets from each such detected
library. This in itself isn't a problem, because the compiler
toolchain, or our wrapper, or something, seems to be smart enough to
ignore -Bdynamic for those libraries. But it does create a problem if
the compiler adds extra libraries to the linker command line after
that final -Bdynamic, because those will be linked dynamically. Since
our compiler is static by default, CMake should reset to -Bstatic
after it's done manually specifying libraries, but CMake didn't
actually know that our compiler is static by default. The fix for
that is to tell it, like so.
Until recently, this problem was difficult to notice, because it would
result in binaries that worked, but that were dynamically linked. Since
e08ce498f03f ("cc-wrapper: Account for NIX_LDFLAGS and NIX_CFLAGS_LINK
in linkType"), though, -Wl,-dynamic-linker is no longer mistakenly
passed for executables that are supposed to be static, so they end up
created with a /lib interpreter path, and so don't run at all on
NixOS.
This fixes pkgsStatic.graphite2.
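The same flag can also be forced for a single package; a hedged sketch using the graphite2 example above (a hypothetical override, not part of this change):
```nix
# Hypothetical per-package form of the same workaround: force CMake's
# initial -Bstatic search mode for one package via an override instead of
# relying on the stdenv-wide flag added below.
pkgsStatic.graphite2.overrideAttrs (old: {
  cmakeFlags = (old.cmakeFlags or [ ]) ++ [ "-DCMAKE_LINK_SEARCH_START_STATIC=ON" ];
})
```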
2023-11-23 11:28:49 +00:00
] ++ lib . optionals stdenv . hostPlatform . isStatic [
" - D C M A K E _ L I N K _ S E A R C H _ S T A R T _ S T A T I C = O N "
2023-06-22 17:24:23 +00:00
] ) ;
2022-07-04 01:25:04 +00:00
2022-07-06 03:05:48 +00:00
mesonFlags =
let
# See https://mesonbuild.com/Reference-tables.html#cpu-families
cpuFamily = platform : with platform ;
/* */ if isAarch32 then "arm"
else if isx86_32 then "x86"
2023-01-30 18:57:37 +00:00
else platform . uname . processor ;
2022-07-06 03:05:48 +00:00
crossFile = builtins . toFile "cross-file.conf" ''
[properties]
2023-05-09 13:43:23 +00:00
bindgen_clang_arguments = ['-target', '${stdenv.targetPlatform.config}']
2023-11-11 17:15:20 +00:00
needs_exe_wrapper = ${boolToString (!stdenv.buildPlatform.canExecute stdenv.hostPlatform)}
2022-07-06 03:05:48 +00:00
[host_machine]
system = '${stdenv.targetPlatform.parsed.kernel.name}'
cpu_family = '${cpuFamily stdenv.targetPlatform}'
cpu = '${stdenv.targetPlatform.parsed.cpu.name}'
endian = ${if stdenv.targetPlatform.isLittleEndian then "'little'" else "'big'"}
[binaries]
llvm-config = 'llvm-config-native'
2023-05-09 13:43:23 +00:00
rust = ['rustc', '--target', '${stdenv.targetPlatform.rust.rustcTargetSpec}']
2022-07-06 03:05:48 +00:00
'' ;
2023-11-11 17:15:20 +00:00
crossFlags = optionals ( stdenv . hostPlatform != stdenv . buildPlatform ) [ "--cross-file=${crossFile}" ] ;
2023-06-22 17:24:23 +00:00
in crossFlags ++ mesonFlags ;
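For a concrete picture, the generated cross-file.conf for an aarch64-unknown-linux-gnu host (building from x86_64-linux) would look roughly like this; the exact values are assumptions derived from the expressions above:
```
[properties]
bindgen_clang_arguments = ['-target', 'aarch64-unknown-linux-gnu']
needs_exe_wrapper = true

[host_machine]
system = 'linux'
cpu_family = 'aarch64'
cpu = 'aarch64'
endian = 'little'

[binaries]
llvm-config = 'llvm-config-native'
rust = ['rustc', '--target', 'aarch64-unknown-linux-gnu']
```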
2022-07-06 03:05:48 +00:00
2021-08-18 17:19:30 +00:00
inherit patches ;
inherit doCheck doInstallCheck ;
inherit outputs ;
2023-11-11 17:15:20 +00:00
} // optionalAttrs ( __contentAddressed ) {
2021-08-18 17:19:30 +00:00
inherit __contentAddressed ;
# Provide default values for outputHashMode and outputHashAlgo because
# most people won't care about these anyways
outputHashAlgo = attrs . outputHashAlgo or "sha256" ;
outputHashMode = attrs . outputHashMode or "recursive" ;
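A hedged usage sketch (assuming `stdenv` is in scope, e.g. inside a package expression): opting a single derivation into content addressing and letting these defaults apply:
```nix
# Hedged example: a content-addressed derivation. outputHashAlgo and
# outputHashMode fall back to "sha256" / "recursive" as set above unless
# given explicitly in attrs.
stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  __contentAddressed = true;
}
```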
2023-11-11 17:15:20 +00:00
} // optionalAttrs ( enableParallelBuilding ) {
2022-12-17 22:21:07 +00:00
inherit enableParallelBuilding ;
2021-08-18 17:19:30 +00:00
enableParallelChecking = attrs . enableParallelChecking or true ;
2023-02-21 19:50:48 +00:00
enableParallelInstalling = attrs . enableParallelInstalling or true ;
2023-11-11 17:15:20 +00:00
} // optionalAttrs ( hardeningDisable != [ ] || hardeningEnable != [ ] || stdenv . hostPlatform . isMusl ) {
2021-08-18 17:19:30 +00:00
NIX_HARDENING_ENABLE = enabledHardeningOptions ;
2023-11-11 17:15:20 +00:00
} // optionalAttrs ( stdenv . hostPlatform . isx86_64 && stdenv . hostPlatform ? gcc . arch ) {
2021-08-18 17:19:30 +00:00
requiredSystemFeatures = attrs . requiredSystemFeatures or [ ] ++ [ "gccarch-${stdenv.hostPlatform.gcc.arch}" ] ;
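To illustrate when this branch applies (hypothetical values): instantiating nixpkgs with a CPU-tuned host platform sets `gcc.arch`, and every derivation built by that stdenv then carries the matching system feature, so it is only scheduled on builders advertising it via `system-features`:
```nix
# Hypothetical illustration: a CPU-tuned nixpkgs instantiation. Packages
# built with it get requiredSystemFeatures = [ "gccarch-znver2" ] from the
# branch above.
import <nixpkgs> {
  localSystem = {
    system = "x86_64-linux";
    gcc.arch = "znver2";
  };
}
```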
2023-11-11 17:15:20 +00:00
} // optionalAttrs ( stdenv . buildPlatform . isDarwin ) {
2021-08-18 17:19:30 +00:00
inherit __darwinAllowLocalNetworking ;
2023-11-11 17:15:20 +00:00
# TODO: remove `unique` once nix has a list canonicalization primitive
2021-08-18 17:19:30 +00:00
__sandboxProfile =
let profiles = [ stdenv . extraSandboxProfile ] ++ computedSandboxProfile ++ computedPropagatedSandboxProfile ++ [ propagatedSandboxProfile sandboxProfile ] ;
2023-11-11 17:15:20 +00:00
final = concatStringsSep "\n" ( filter ( x : x != "" ) ( unique profiles ) ) ;
2021-08-18 17:19:30 +00:00
in final ;
2023-11-11 17:15:20 +00:00
__propagatedSandboxProfile = unique ( computedPropagatedSandboxProfile ++ [ propagatedSandboxProfile ] ) ;
2021-08-18 17:19:30 +00:00
__impureHostDeps = computedImpureHostDeps ++ computedPropagatedImpureHostDeps ++ __propagatedImpureHostDeps ++ __impureHostDeps ++ stdenv . __extraImpureHostDeps ++ [
" / d e v / z e r o "
" / d e v / r a n d o m "
" / d e v / u r a n d o m "
" / b i n / s h "
2017-07-05 21:56:53 +00:00
] ;
2021-08-18 17:19:30 +00:00
__propagatedImpureHostDeps = computedPropagatedImpureHostDeps ++ __propagatedImpureHostDeps ;
2023-01-20 14:56:31 +00:00
} //
# If we use derivations directly here, they end up as build-time dependencies.
# This is especially problematic in the case of disallowed*, since the disallowed
# derivations will be built by nix as build-time dependencies, while those
# derivations might take a very long time to build, or might not even build
# successfully on the platform used.
# We can improve on this situation by instead passing only the outPath,
# without an attached string context, to nix. The out path will be a placeholder
# which will be replaced by the actual out path if the derivation in question
# is part of the final closure (and thus needs to be built). If it is not
# part of the final closure, then the placeholder will be passed along,
# but in that case we know for a fact that the derivation is not part of the closure.
# This means that passing the out path to nix does the right thing in either
# case, both for disallowed and allowed references/requisites, and we won't
# build the derivation if it wouldn't be part of the closure, saving time and resources.
# While the problem is less severe for allowed*, since we want the derivation
# to be built eventually, we would still like to get the error early and without
# having to wait while nix builds a derivation that might not be used.
# See also https://github.com/NixOS/nix/issues/4629
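A minimal sketch of a helper implementing the idea described in this comment; this is an assumption for illustration (the helper actually used below, unsafeDerivationToUntrackedOutpath, is defined elsewhere in this file), and it assumes `lib` is in scope:
```nix
# Sketch only: return a derivation's out path as a plain string with the
# string context stripped, so referencing it does not make the derivation
# a build-time dependency.
drvToUntrackedOutPath = drv:
  if lib.isDerivation drv
  then builtins.unsafeDiscardStringContext drv.outPath
  else drv;
```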
2023-11-11 17:15:20 +00:00
optionalAttrs ( attrs ? disallowedReferences ) {
2023-01-20 14:56:31 +00:00
disallowedReferences =
map unsafeDerivationToUntrackedOutpath attrs . disallowedReferences ;
} //
2023-11-11 17:15:20 +00:00
optionalAttrs ( attrs ? disallowedRequisites ) {
2023-01-20 14:56:31 +00:00
disallowedRequisites =
map unsafeDerivationToUntrackedOutpath attrs . disallowedRequisites ;
} //
2023-11-11 17:15:20 +00:00
optionalAttrs ( attrs ? allowedReferences ) {
2023-01-20 14:56:31 +00:00
allowedReferences =
2023-11-11 17:15:20 +00:00
mapNullable unsafeDerivationToUntrackedOutpath attrs . allowedReferences ;
2023-01-20 14:56:31 +00:00
} //
2023-11-11 17:15:20 +00:00
optionalAttrs ( attrs ? allowedRequisites ) {
2023-01-20 14:56:31 +00:00
allowedRequisites =
2023-11-11 17:15:20 +00:00
mapNullable unsafeDerivationToUntrackedOutpath attrs . allowedRequisites ;
2021-08-18 17:19:30 +00:00
} ;
2023-04-26 03:43:44 +00:00
meta = checkMeta . commonMeta { inherit validity attrs pos references ; } ;
validity = checkMeta . assertValidity { inherit meta attrs ; } ;
2021-08-18 17:19:30 +00:00
2022-05-31 21:34:59 +00:00
checkedEnv =
let
2023-11-11 17:15:20 +00:00
overlappingNames = attrNames ( builtins . intersectAttrs env derivationArg ) ;
2022-05-31 21:34:59 +00:00
in
2023-11-11 17:15:20 +00:00
assert assertMsg envIsExportable
2022-12-13 17:12:04 +00:00
" W h e n u s i n g s t r u c t u r e d a t t r i b u t e s , ` e n v ` m u s t b e a n a t t r i b u t e s e t o f e n v i r o n m e n t v a r i a b l e s . " ;
2023-11-11 17:15:20 +00:00
assert assertMsg ( overlappingNames == [ ] )
" T h e ‘ e n v ’ a t t r i b u t e s e t c a n n o t c o n t a i n a n y a t t r i b u t e s p a s s e d t o d e r i v a t i o n . T h e f o l l o w i n g a t t r i b u t e s a r e o v e r l a p p i n g : ${ concatStringsSep " , " overlappingNames } " ;
mapAttrs
( n : v : assert assertMsg ( isString v || isBool v || isInt v || isDerivation v )
2022-12-06 22:08:33 +00:00
" T h e ‘ e n v ’ a t t r i b u t e s e t c a n o n l y c o n t a i n d e r i v a t i o n , s t r i n g , b o o l e a n o r i n t e g e r a t t r i b u t e s . T h e ‘ ${ n } ’ a t t r i b u t e i s o f t y p e ${ builtins . typeOf v } . " ; v )
2022-05-31 21:34:59 +00:00
env ;
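For reference, a hedged example of an `env` attribute set that passes both assertions above (hypothetical values; assumes `stdenv` is in scope):
```nix
# Hedged example: only string / boolean / integer / derivation values, and
# no names that overlap with attributes already passed to `derivation`.
stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  env = {
    NIX_CFLAGS_COMPILE = "-O2";
    ENABLE_FEATURE = true;
    JOBS = 4;
  };
}
```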
2021-08-18 17:19:30 +00:00
in
2017-07-05 21:56:53 +00:00
2023-11-11 17:15:20 +00:00
extendDerivation
2021-08-18 17:19:30 +00:00
validity . handled
( {
# A derivation that always builds successfully and whose runtime
# dependencies are the original derivation's build-time dependencies.
# This allows easy building and distributing of all derivations
# needed to enter a nix-shell with
# nix-build shell.nix -A inputDerivation
inputDerivation = derivation ( derivationArg // {
# Add a name in case the original drv didn't have one
name = derivationArg . name or "inputDerivation" ;
# This always only has one output
outputs = [ "out" ] ;
# Propagate the original builder and arguments, since we override
# them and they might contain references to build inputs
_derivation_original_builder = derivationArg . builder ;
_derivation_original_args = derivationArg . args ;
builder = stdenv . shell ;
# The bash builtin `export` dumps all current environment variables,
# which is where all build input references end up (e.g. $PATH for
# binaries). By writing this to $out, Nix can find and register
# them as runtime dependencies (since Nix greps for store paths
# through $out to find them)
2023-05-08 11:55:36 +00:00
args = [ " - c " ''
export > $ out
for var in $ passAsFile ; do
pathVar = " ' ' ${ var } P a t h "
printf " % s " " $ ( < " '' ${ ! pathVar } " ) " > > $o u t
done
'' ] ;
2023-02-07 17:57:23 +00:00
# inputDerivation produces the inputs, not the outputs, so any
# restrictions on what used to be the outputs don't serve a purpose
# anymore.
2023-05-08 12:02:13 +00:00
allowedReferences = null ;
allowedRequisites = null ;
2023-02-07 17:57:23 +00:00
disallowedReferences = [ ] ;
disallowedRequisites = [ ] ;
2021-08-18 17:19:30 +00:00
} ) ;
2023-04-26 03:43:44 +00:00
inherit passthru overrideAttrs ;
inherit meta ;
2021-08-18 17:19:30 +00:00
} //
# Pass through extra attributes that are not inputs, but
# should be made available to Nix expressions using the
# derivation (e.g., in assertions).
passthru )
2023-11-11 17:15:20 +00:00
( derivation ( derivationArg // optionalAttrs envIsExportable checkedEnv ) ) ;
2021-04-20 11:46:53 +00:00
2022-06-05 11:33:35 +00:00
in
2022-06-05 11:35:04 +00:00
fnOrAttrs :
if builtins . isFunction fnOrAttrs
2022-06-05 11:36:56 +00:00
then makeDerivationExtensible fnOrAttrs
else makeDerivationExtensibleConst fnOrAttrs
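In other words, the resulting mkDerivation accepts either a plain attribute set or a function of the final attributes (the extensible, fixed-point form). A hedged usage sketch (assumes `stdenv` is in scope):
```nix
# Hedged sketch of the two call forms dispatched above.
{
  # Plain attribute set -> makeDerivationExtensibleConst
  fromAttrs = stdenv.mkDerivation {
    pname = "example";
    version = "1.0";
  };

  # Function of the final attributes -> makeDerivationExtensible
  fromFunction = stdenv.mkDerivation (finalAttrs: {
    pname = "example";
    version = "1.0";
    passthru.tests.versionMatches = finalAttrs.version;
  });
}
```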