R. Ryantm
9dca8ad0d2
llama-cpp: 3645 -> 3672
2024-09-06 04:58:14 +00:00
R. Ryantm
0261b36199
llama-cpp: 3620 -> 3645
2024-08-30 13:52:39 +00:00
R. Ryantm
43dd0ac922
llama-cpp: 3565 -> 3620
2024-08-24 21:41:17 +00:00
Atemu
118ab33e1d
llama-cpp: update description
...
Closes https://github.com/NixOS/nixpkgs/pull/334451
2024-08-18 01:38:13 +02:00
R. Ryantm
ccc5699686
llama-cpp: 3499 -> 3565
2024-08-10 20:56:53 +00:00
Peder Bergebakken Sundt
ec91af6409
Merge pull request #314132 from newAM/cleanup-xtask-binaries
...
treewide: cleanup xtask binaries
2024-08-03 22:26:17 +02:00
R. Ryantm
540982b4da
llama-cpp: 3423 -> 3499
2024-08-01 08:30:05 +00:00
Philip Taron
402c2115f8
Merge pull request #328522 from r-ryantm/auto-update/llama-cpp
...
llama-cpp: 3403 -> 3423
2024-07-31 14:01:06 -07:00
Sandro
64a4b158bd
Merge pull request #326131 from nwhirschfeld/lldap-cli
2024-07-28 22:40:10 +02:00
Saksham Mittal
a3748d8201
llama-cpp: add shaderc dependency for Vulkan backend
2024-07-27 18:45:59 +05:30
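The shaderc change above fits the usual nixpkgs pattern of gating an optional backend's inputs on a support flag. A minimal sketch of that shape (attribute names assumed, not copied from the actual derivation):

```nix
{ lib, stdenv, cmake, vulkan-headers, vulkan-loader, shaderc
, vulkanSupport ? false }:

stdenv.mkDerivation {
  # ...
  nativeBuildInputs = [ cmake ];
  buildInputs = lib.optionals vulkanSupport [
    vulkan-headers
    vulkan-loader
    shaderc # provides glslc, used to compile the Vulkan compute shaders
  ];
}
```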
R. Ryantm
8ed9b3f67b
llama-cpp: 3403 -> 3423
2024-07-19 20:45:08 +00:00
Redyf
4e8851bf08
llama-cpp: 3328 -> 3403
2024-07-16 17:19:53 -03:00
Niclas Hirschfeld
63f25148f4
lldap-cli: init at 0-unstable-2024-02-24
2024-07-10 17:16:17 +02:00
R. Ryantm
a231b6ea37
llama-cpp: 3260 -> 3328
2024-07-07 14:03:24 +00:00
Someone
d5491008d9
Merge pull request #323056 from SomeoneSerge/fix/cudaPackages/outputSpecified
...
cudaPackages: make getOutput work again
2024-07-03 19:19:51 +00:00
Someone Serge
82018339bd
treewide: cuda: use propagatedBuildInputs, lib.getOutput
2024-07-02 01:47:19 +00:00
Lan Tian
134743c02a
llama-cpp: 3091 -> 3260
2024-06-29 02:38:46 -07:00
Jeremy Schlatter
4a2b827c71
treewide: use cmakeCudaArchitecturesString
2024-06-23 16:51:31 -07:00
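The `cmakeCudaArchitecturesString` helper renders the configured CUDA capabilities as the semicolon-separated list CMake expects. A hedged sketch of how a package passes it through (the exact attribute path may differ across nixpkgs revisions):

```nix
cmakeFlags = lib.optionals cudaSupport [
  # renders e.g. "70;75;80;86" from the configured cudaCapabilities
  (lib.cmakeFeature "CMAKE_CUDA_ARCHITECTURES"
    cudaPackages.flags.cmakeCudaArchitecturesString)
];
```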
R. Ryantm
aee3455afd
llama-cpp: 3089 -> 3091
2024-06-09 23:44:27 +00:00
Jono Chang
b5331032eb
llama-cpp: 3070 -> 3089
...
Diff: https://github.com/ggerganov/llama.cpp/compare/b3070..b3089
Changelog: https://github.com/ggerganov/llama.cpp/releases/tag/b3089
2024-06-06 08:18:00 +10:00
R. Ryantm
4d1bc27756
llama-cpp: 3015 -> 3070
2024-06-03 04:34:54 +00:00
OTABI Tomoya
9d73dc1ae5
Merge pull request #315258 from r-ryantm/auto-update/llama-cpp
...
llama-cpp: 2953 -> 3015
2024-06-02 10:25:13 +09:00
Peder Bergebakken Sundt
aaa74081c2
Merge pull request #313525 from maxstrid/llama-cpp-rpc
...
llama-cpp: Add rpc and remove mpi support
2024-06-01 21:05:36 +02:00
R. Ryantm
fb013003f0
llama-cpp: 2953 -> 3015
2024-05-28 05:50:51 +00:00
R. Ryantm
01d3250b56
llm-ls: 0.5.2 -> 0.5.3
2024-05-24 14:49:45 +00:00
Alex Martens
dc9f57c229
llm-ls: set buildAndTestSubdir
2024-05-23 17:15:37 -07:00
Maxwell Henderson
6467f8b017
llama-cpp: Add rpc and remove mpi support
...
llama-cpp no longer supports MPI; RPC is the recommended alternative.
See: https://github.com/ggerganov/llama.cpp/pull/7395
Signed-off-by: Maxwell Henderson <mxwhenderson@gmail.com>
2024-05-21 17:43:47 -07:00
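In the derivation, swapping MPI for RPC amounts to dropping the mpi input and enabling the upstream RPC backend flag. A sketch assuming the flag names used around this llama.cpp release (`rpcSupport` is a hypothetical option name for illustration):

```nix
cmakeFlags = lib.optionals rpcSupport [
  (lib.cmakeBool "LLAMA_RPC" true) # replaces the removed LLAMA_MPI backend
];
```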
R. Ryantm
f26e8f439c
llama-cpp: 2901 -> 2953
2024-05-21 14:41:37 +00:00
R. Ryantm
12e0c85d77
llama-cpp: 2843 -> 2901
2024-05-16 10:09:34 +00:00
R. Ryantm
e94a2d47cd
llama-cpp: 2781 -> 2843
2024-05-11 03:35:19 +00:00
R. Ryantm
ab83e3fd1e
llama-cpp: 2746 -> 2781
2024-05-03 10:06:47 +00:00
Enno Richter
6aed0cc958
llama-cpp: set build_number/build_commit for version info
2024-04-30 10:21:56 +02:00
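Because the sandboxed build has no `.git` directory, upstream's build-info script cannot derive a build number or commit on its own, so the derivation has to pin them from the packaged release. One plausible shape (flag names are an assumption, not quoted from the commit):

```nix
cmakeFlags = [
  # version is e.g. "2746" for upstream tag b2746
  (lib.cmakeFeature "LLAMA_BUILD_NUMBER" version)
  (lib.cmakeFeature "LLAMA_BUILD_COMMIT" src.rev)
];
```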
R. Ryantm
27a673ef3e
llama-cpp: 2700 -> 2746
2024-04-26 18:14:07 +00:00
Roman Zakirzyanov
aedebb76de
llm-ls: 0.4.0 -> 0.5.2
2024-04-25 17:11:07 +03:00
R. Ryantm
fdcc5233d8
llama-cpp: 2674 -> 2700
2024-04-21 10:39:42 +00:00
R. Ryantm
1897af2d37
llama-cpp: 2636 -> 2674
2024-04-14 23:18:05 +00:00
R. Ryantm
ef16276bb6
llama-cpp: 2589 -> 2636
2024-04-09 21:33:46 +00:00
R. Ryantm
90891cd009
llama-cpp: 2568 -> 2589
2024-04-04 15:32:57 +00:00
Jonathan Ringer
65c4c21a2b
llama-cpp: use pkgs.autoAddDriverRunpath
2024-03-31 10:15:47 -07:00
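`autoAddDriverRunpath` is a setup hook: dropped into `nativeBuildInputs`, it patches the RUNPATH of ELF files in the outputs so the driver's `libcuda.so` can be found at run time. A sketch of its use (option name `cudaSupport` assumed):

```nix
nativeBuildInputs = [ cmake ] ++ lib.optionals cudaSupport [
  autoAddDriverRunpath # fixup hook: adds the driver lib path to RUNPATHs
];
```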
Joseph Stahl
a06a03ed7c
llama-cpp: update from b2481 to b2568
2024-03-28 20:56:55 -04:00
Joseph Stahl
e1ef3aaacc
llama-cpp: embed (don't pre-compile) metal shaders
...
port of https://github.com/ggerganov/llama.cpp/pull/6118 , although compiling the shaders with Xcode is left disabled, as it requires turning off the sandbox (and only works on macOS anyway)
2024-03-26 14:01:29 -04:00
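Embedding rather than pre-compiling means the Metal shader source is baked into the binary and compiled at run time, sidestepping the Xcode toolchain during the Nix build. A sketch using the upstream CMake switch from that PR (`metalSupport` is an assumed option name):

```nix
cmakeFlags = lib.optionals metalSupport [
  (lib.cmakeBool "LLAMA_METAL" true)
  # ship the ggml-metal.metal source inside the binary
  # instead of a prebuilt .metallib
  (lib.cmakeBool "LLAMA_METAL_EMBED_LIBRARY" true)
];
```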
Joseph Stahl
7aa588cc96
llama-cpp: rename cuBLAS to CUDA
...
Matches change from upstream 280345968d
2024-03-26 13:54:30 -04:00
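In packaging terms the rename is a one-flag change tracking upstream commit 280345968d; a sketch (option name `cudaSupport` assumed):

```nix
cmakeFlags = lib.optionals cudaSupport [
  (lib.cmakeBool "LLAMA_CUDA" true) # formerly LLAMA_CUBLAS
];
```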
Christian Kögler
2af438f836
llama-cpp: fix blasSupport ( #298567 )
...
* llama-cpp: fix blasSupport
* llama-cpp: switch from openblas to blas
2024-03-25 18:55:45 +01:00
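Switching from `openblas` to the generic `blas` attribute lets the package respect nixpkgs' switchable BLAS provider instead of hard-wiring one implementation. A hedged sketch of the resulting shape (flag values assumed):

```nix
buildInputs = lib.optionals blasSupport [ blas ];
cmakeFlags = lib.optionals blasSupport [
  (lib.cmakeBool "LLAMA_BLAS" true)
  (lib.cmakeFeature "LLAMA_BLAS_VENDOR" "Generic")
];
```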
R. Ryantm
c70ff30bde
llama-cpp: 2454 -> 2481
2024-03-21 17:34:35 +00:00
Someone
e7797267a2
Merge pull request #281576 from yannham/refactor/cuda-setup-hooks-refactor
...
cudaPackages: generalize and refactor setup hooks
2024-03-19 20:06:18 +00:00
R. Ryantm
1c2a0b6df9
llama-cpp: 2424 -> 2454
2024-03-18 12:50:17 +00:00
Yann Hamdaoui
63746cac08
cudaPackages: generalize and refactor setup hook
...
This PR refactors the CUDA setup hooks, in particular
autoAddOpenGLRunpath and autoAddCudaCompatRunpathHook, which shared a
lot of code (in fact, I introduced the latter by copy-pasting most of
the bash script of the former). This is not satisfying for
maintenance, as a recent patch showed, because we need to duplicate
changes to both hooks.
This commit abstracts the common part into a single shell script that
applies a generic patch action to every ELF file in the output. For
autoAddOpenGLRunpath the action is just addOpenGLRunpath (now
addDriverRunpath), and for autoAddCudaCompatRunpathHook it is a
few-line function.
In doing so, we also take the opportunity to use the newer
addDriverRunpath instead of the previous addOpenGLRunpath, and rename
the CUDA hook to reflect that as well.
Co-Authored-By: Connor Baker <connor.baker@tweag.io>
2024-03-15 15:54:21 +01:00
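The shared driver described in the message can be pictured as a hook factory parameterized by a per-hook patch action. A hypothetical sketch of that shape, not the actual nixpkgs implementation:

```nix
{ makeSetupHook, writeShellScript }:

# patchAction is e.g. "addDriverRunpath", or a small shell function
# for the cuda_compat case
patchAction:
makeSetupHook { name = "auto-fix-elf-files"; }
  (writeShellScript "auto-fix-elf-files.sh" ''
    autoFixElfFiles() {
      # apply the hook-specific action to every ELF file in the output
      while IFS= read -r -d "" file; do
        isELF "$file" && ${patchAction} "$file" || true
      done < <(find "$prefix" -type f -print0)
    }
    postFixupHooks+=(autoFixElfFiles)
  '')
```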
R. Ryantm
4744ccc2db
llama-cpp: 2382 -> 2424
2024-03-14 20:36:15 +00:00
R. Ryantm
bab5a87ffc
llama-cpp: 2346 -> 2382
2024-03-10 12:09:01 +00:00
R. Ryantm
821ea0b581
llama-cpp: 2294 -> 2346
2024-03-05 19:34:21 +00:00