Add 'download-buffer-size' setting

We are piping curl downloads into `unpackTarfileToSink()`, but the
latter is typically slower than the former on a fast connection. The
fixed 1 MiB buffer between them then fills up and the download thread
goes to sleep, so the download appears unnecessarily slow. (There is
even a risk that, if the Git import is *really* slow for whatever
reason, the TCP connection could time out.)

So let's make the download buffer bigger by default - 64 MiB is big
enough for the Nixpkgs tarball. Perhaps in the future, we could have
an unlimited buffer that spills data to disk beyond a certain
threshold, but that's probably overkill.
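
Once this setting lands, it should be tunable like any other Nix
option, e.g. in `nix.conf` (the plain byte-count syntax below is an
assumption based on how other integer settings are parsed):

```
# nix.conf: raise the download buffer from the default 64 MiB to 256 MiB
download-buffer-size = 268435456
```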

(cherry picked from commit 8ffea0a018)
Eelco Dolstra, 2024-07-24 20:10:45 +02:00 (committed by github-actions[bot])
parent 211b0d4e13
commit 56140d974e
2 changed files with 7 additions and 1 deletion

src/libstore/filetransfer.cc

@@ -835,7 +835,7 @@ void FileTransfer::download(
               buffer). We don't wait forever to prevent stalling the
               download thread. (Hopefully sleeping will throttle the
               sender.) */
-            if (state->data.size() > 1024 * 1024) {
+            if (state->data.size() > fileTransferSettings.downloadBufferSize) {
                 debug("download buffer is full; going to sleep");
                 state.wait_for(state->request, std::chrono::seconds(10));
             }
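
For readers unfamiliar with the surrounding code: this hunk sits in a
classic bounded-buffer producer/consumer loop. The following
self-contained sketch (the `DownloadBuffer` type and its members are
hypothetical; Nix's real implementation guards this state with its
`Sync<>` wrapper and integrates with curl's event loop) shows the
shape of the throttling logic being tuned:

```cpp
#include <chrono>
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <string>
#include <string_view>

// Hypothetical stand-in for the guarded state in FileTransfer::download().
struct DownloadBuffer {
    std::mutex m;
    std::condition_variable cv;
    std::string data;                  // received but not yet consumed
    size_t maxSize = 64 * 1024 * 1024; // cf. download-buffer-size

    // Producer side (the curl download thread): if the consumer lags and
    // the buffer is over the limit, sleep -- but for at most 10 seconds,
    // so the thread never stalls forever. Sleeping throttles the sender
    // via TCP backpressure.
    void push(std::string_view chunk) {
        std::unique_lock<std::mutex> lock(m);
        if (data.size() > maxSize)
            cv.wait_for(lock, std::chrono::seconds(10));
        data.append(chunk);
    }

    // Consumer side (e.g. the sink feeding unpackTarfileToSink()): drain
    // everything and wake the producer if it was sleeping.
    std::string pull() {
        std::lock_guard<std::mutex> lock(m);
        std::string out = std::move(data);
        data.clear();
        cv.notify_one();
        return out;
    }
};
```

With a 1 MiB cap, a fast mirror can fill the buffer in milliseconds and
the download thread then spends most of its time asleep; raising the cap
to 64 MiB lets curl run well ahead of a slow unpacker.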

src/libstore/filetransfer.hh

@@ -45,6 +45,12 @@ struct FileTransferSettings : Config
     Setting<unsigned int> tries{this, 5, "download-attempts",
         "How often Nix will attempt to download a file before giving up."};

+    Setting<size_t> downloadBufferSize{this, 64 * 1024 * 1024, "download-buffer-size",
+        R"(
+            The size of Nix's internal download buffer in bytes during `curl` transfers. If data is
+            not consumed quickly enough to keep the buffer below this size, the download stalls.
+        )"};
+
 };

 extern FileTransferSettings fileTransferSettings;
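
To make the declaration above concrete, here is a toy version of the
`Setting<T>` pattern: a typed default plus a name and doc string that
reads like a plain value at use sites. This is a mock, not Nix's real
`Config`/`Setting` classes, which additionally handle registration,
parsing, and documentation generation:

```cpp
#include <cstddef>
#include <iostream>
#include <string>

// Toy Setting<T>: stores a default plus metadata and converts implicitly
// to T, so use sites can write `size > settings.downloadBufferSize`.
template<typename T>
struct Setting {
    T value;
    std::string name;
    std::string description;
    operator const T &() const { return value; } // read like a plain T
};

struct FileTransferSettingsMock {
    Setting<size_t> downloadBufferSize{
        64 * 1024 * 1024, "download-buffer-size",
        "Size of Nix's internal download buffer, in bytes."};
};

int main() {
    FileTransferSettingsMock settings;
    size_t buffered = 128 * 1024 * 1024; // pretend 128 MiB are buffered
    // Compares against the setting exactly like a size_t, as in the
    // `state->data.size() > fileTransferSettings.downloadBufferSize` hunk.
    if (buffered > settings.downloadBufferSize)
        std::cout << settings.downloadBufferSize.name
                  << " exceeded; the download thread would sleep\n";
}
```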