R. Ryantm
a231b6ea37
llama-cpp: 3260 -> 3328
2024-07-07 14:03:24 +00:00
Someone
d5491008d9
Merge pull request #323056 from SomeoneSerge/fix/cudaPackages/outputSpecified
...
cudaPackages: make getOutput work again
2024-07-03 19:19:51 +00:00
Someone Serge
82018339bd
treewide: cuda: use propagatedBuildInputs, lib.getOutput
2024-07-02 01:47:19 +00:00
Lan Tian
134743c02a
llama-cpp: 3091 -> 3260
2024-06-29 02:38:46 -07:00
Jeremy Schlatter
4a2b827c71
treewide: use cmakeCudaArchitecturesString
2024-06-23 16:51:31 -07:00
R. Ryantm
aee3455afd
llama-cpp: 3089 -> 3091
2024-06-09 23:44:27 +00:00
Jono Chang
b5331032eb
llama-cpp: 3070 -> 3089
...
Diff: https://github.com/ggerganov/llama.cpp/compare/b3070..b3089
Changelog: https://github.com/ggerganov/llama.cpp/releases/tag/b3089
2024-06-06 08:18:00 +10:00
R. Ryantm
4d1bc27756
llama-cpp: 3015 -> 3070
2024-06-03 04:34:54 +00:00
OTABI Tomoya
9d73dc1ae5
Merge pull request #315258 from r-ryantm/auto-update/llama-cpp
...
llama-cpp: 2953 -> 3015
2024-06-02 10:25:13 +09:00
Peder Bergebakken Sundt
aaa74081c2
Merge pull request #313525 from maxstrid/llama-cpp-rpc
...
llama-cpp: Add rpc and remove mpi support
2024-06-01 21:05:36 +02:00
R. Ryantm
fb013003f0
llama-cpp: 2953 -> 3015
2024-05-28 05:50:51 +00:00
R. Ryantm
01d3250b56
llm-ls: 0.5.2 -> 0.5.3
2024-05-24 14:49:45 +00:00
Maxwell Henderson
6467f8b017
llama-cpp: Add rpc and remove mpi support
...
llama-cpp no longer supports MPI, and RPC is the recommended alternative.
See: https://github.com/ggerganov/llama.cpp/pull/7395
Signed-off-by: Maxwell Henderson <mxwhenderson@gmail.com>
2024-05-21 17:43:47 -07:00
R. Ryantm
f26e8f439c
llama-cpp: 2901 -> 2953
2024-05-21 14:41:37 +00:00
R. Ryantm
12e0c85d77
llama-cpp: 2843 -> 2901
2024-05-16 10:09:34 +00:00
R. Ryantm
e94a2d47cd
llama-cpp: 2781 -> 2843
2024-05-11 03:35:19 +00:00
R. Ryantm
ab83e3fd1e
llama-cpp: 2746 -> 2781
2024-05-03 10:06:47 +00:00
Enno Richter
6aed0cc958
llama-cpp: set build_number/build_commit for version info
2024-04-30 10:21:56 +02:00
R. Ryantm
27a673ef3e
llama-cpp: 2700 -> 2746
2024-04-26 18:14:07 +00:00
Roman Zakirzyanov
aedebb76de
llm-ls: 0.4.0 -> 0.5.2
2024-04-25 17:11:07 +03:00
R. Ryantm
fdcc5233d8
llama-cpp: 2674 -> 2700
2024-04-21 10:39:42 +00:00
R. Ryantm
1897af2d37
llama-cpp: 2636 -> 2674
2024-04-14 23:18:05 +00:00
R. Ryantm
ef16276bb6
llama-cpp: 2589 -> 2636
2024-04-09 21:33:46 +00:00
R. Ryantm
90891cd009
llama-cpp: 2568 -> 2589
2024-04-04 15:32:57 +00:00
Jonathan Ringer
65c4c21a2b
llama-cpp: use pkgs.autoAddDriverRunpath
2024-03-31 10:15:47 -07:00
Joseph Stahl
a06a03ed7c
llama-cpp: update from b2481 to b2568
2024-03-28 20:56:55 -04:00
Joseph Stahl
e1ef3aaacc
llama-cpp: embed (don't pre-compile) metal shaders
...
Port of https://github.com/ggerganov/llama.cpp/pull/6118 , although compiling shaders with Xcode is disabled here, as it requires disabling the sandbox (and only works on macOS anyway)
2024-03-26 14:01:29 -04:00
Joseph Stahl
7aa588cc96
llama-cpp: rename cuBLAS to CUDA
...
Matches change from upstream 280345968d
2024-03-26 13:54:30 -04:00
Christian Kögler
2af438f836
llama-cpp: fix blasSupport (#298567)
...
* llama-cpp: fix blasSupport
* llama-cpp: switch from openblas to blas
2024-03-25 18:55:45 +01:00
R. Ryantm
c70ff30bde
llama-cpp: 2454 -> 2481
2024-03-21 17:34:35 +00:00
Someone
e7797267a2
Merge pull request #281576 from yannham/refactor/cuda-setup-hooks-refactor
...
cudaPackages: generalize and refactor setup hooks
2024-03-19 20:06:18 +00:00
R. Ryantm
1c2a0b6df9
llama-cpp: 2424 -> 2454
2024-03-18 12:50:17 +00:00
Yann Hamdaoui
63746cac08
cudaPackages: generalize and refactor setup hook
...
This PR refactors the CUDA setup hooks, in particular
autoAddOpenGLRunpath and autoAddCudaCompatRunpathHook, which shared a
lot of code (in fact, the latter was introduced by copy-pasting most of
the former's bash script). This is unsatisfying for maintenance, as a
recent patch showed, because changes must be duplicated across both
hooks.
This commit abstracts the common part into a single shell script that
applies a generic patch action to every ELF file in the output. For
autoAddOpenGLRunpath the action is just addOpenGLRunpath (now
addDriverRunpath), and it is a few-line function for
autoAddCudaCompatRunpathHook.
In doing so, we also take the opportunity to use the newer
addDriverRunpath instead of the previous addOpenGLRunpath, and rename
the CUDA hook to reflect that as well.
Co-Authored-By: Connor Baker <connor.baker@tweag.io>
2024-03-15 15:54:21 +01:00
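The commit above describes a shared shell script that applies a generic patch action to every ELF file in an output. A minimal sketch of that idea, assuming a bash environment; the function names here (`patchElfFilesIn`, `exampleAction`) are illustrative placeholders, not the actual nixpkgs hook names:

```shell
# Hypothetical sketch of the generalized hook: walk an output directory,
# detect ELF files by their 4-byte magic (\x7fELF), and apply a caller-
# supplied patch action to each one. In nixpkgs the action would be
# something like addDriverRunpath; here it is a stand-in.
patchElfFilesIn() {
    local outputDir="$1" action="$2" f
    while IFS= read -r -d '' f; do
        # Compare the first four bytes against the ELF magic number.
        if [ "$(head -c 4 "$f" 2>/dev/null)" = "$(printf '\x7fELF')" ]; then
            "$action" "$f"
        fi
    done < <(find "$outputDir" -type f -print0)
}

# Stand-in patch action: just report which file would be patched.
exampleAction() { echo "patched: $1"; }
```

Factoring the ELF walk out this way means each hook only has to supply its own per-file action, which is the maintenance win the commit message describes.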
R. Ryantm
4744ccc2db
llama-cpp: 2382 -> 2424
2024-03-14 20:36:15 +00:00
R. Ryantm
bab5a87ffc
llama-cpp: 2346 -> 2382
2024-03-10 12:09:01 +00:00
R. Ryantm
821ea0b581
llama-cpp: 2294 -> 2346
2024-03-05 19:34:21 +00:00
happysalada
03173009d5
llama-cpp: 2249 -> 2294; bring upstream flake
2024-02-28 19:44:34 -05:00
R. Ryantm
33294caa77
llama-cpp: 2212 -> 2249
2024-02-23 15:04:25 +00:00
R. Ryantm
3e909c9e10
llama-cpp: 2167 -> 2212
2024-02-20 05:27:18 +00:00
R. Ryantm
efa55a0426
llama-cpp: 2135 -> 2167
2024-02-16 21:01:12 +00:00
R. Ryantm
aee2928614
llama-cpp: 2105 -> 2135
2024-02-13 10:34:27 +00:00
R. Ryantm
e5a9f1c720
llama-cpp: 2074 -> 2105
2024-02-09 03:07:40 +00:00
R. Ryantm
27398a3fe2
llama-cpp: 2050 -> 2074
2024-02-06 00:29:01 +00:00
R. Ryantm
3a67f01b7f
llama-cpp: 1892 -> 2050
2024-02-02 19:34:54 +00:00
happysalada
aaa2d4b738
llama-cpp: 1848->1892; add static build mode
2024-01-19 08:42:56 -05:00
Alex Martens
49309c0d27
llama-cpp: 1742 -> 1848
2024-01-12 19:01:51 -08:00
Weijia Wang
1f54e5e2a6
Merge pull request #278120 from r-ryantm/auto-update/llama-cpp
...
llama-cpp: 1710 -> 1742
2024-01-13 02:57:28 +01:00
R. Ryantm
b63bdd46cd
llama-cpp: 1710 -> 1742
2024-01-01 18:52:11 +00:00
happysalada
47fc482e58
llama-cpp: fix cuda support; integrate upstream
2023-12-31 16:57:28 +01:00
Nick Cao
e51a04fa37
Merge pull request #277451 from accelbread/llama-cpp-update
...
llama-cpp: 1671 -> 1710
2023-12-29 10:42:00 -05:00