Commit History
opencl: add initial mxfp4 support via mv (llama/15270)
1a0281c
lhez
shawngu-quic
vulkan : fix out-of-bounds access in argmax kernel (llama/15342)
78a1865
vulkan : fix compile warnings on macos (llama/15340)
e3107ff
ggml: initial IBM zDNN backend (llama/14975)
449e1a4
CUDA: fix negative KV_max values in FA (llama/15321)
6e3a7b6
HIP: Cleanup hipification header (llama/15285)
7cdf9cd
vulkan: perf_logger improvements (llama/15246)
d48d508
ggml: fix ggml_conv_1d_dw bug (ggml/1323)
4496862
cuda : fix GGML_CUDA_GRAPHS=OFF (llama/15300)
59c694d
Sigbjørn Skjæret
finetune: SGD optimizer, more CLI args (llama/13873)
f585fe7
HIP: bump requirement to rocm 6.1 (llama/15296)
58a3802
ggml : update `ggml_rope_multi` (llama/12665)
b4896dc
ggml : repack block_iq4_nlx8 (llama/14904)
db4407f
CUDA: Optimize `reduce_rows_f32` kernel, leading up to 25x perf improvement on kernel-level and 10% perf increase for Gemma3n (llama/15132)
c768824
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others) (llama/15188)
c8284f2
HIP: disable sync warp shuffle operators from clr amd_warp_sync_functions.h (llama/15273)
8fca6dd
sycl: Fix and disable more configurations of mul_mat (llama/15151)
7b868ed
Romain Biessy
opencl: allow mixed f16/f32 `add` (llama/15140)
345810b
CUDA cmake: add `-lineinfo` for easier debug (llama/15260)
008e169
CANN: GGML_OP_CPY optimization (llama/15070)
73e90ff
Chenguang Li
musa: fix failures in test-backend-ops for mul_mat_id op (llama/15236)
4168dda
CANN: Add broadcast for softmax and FA (llama/15208)
db87c9d
kleidiai: fix unsigned overflow bug (llama/15150)
9d5f58c
Charles Xu
cuda: refactored ssm_scan and use CUB (llama/13291)
7a187d1
David Zhao
CUDA: add attention sinks for tile and wmma (llama/15178)
46e7c87
gguf-py : add Numpy MXFP4 de/quantization support (llama/15111)
324f3bd
ggml : fix field name when new ggml_backend (llama/14944)
685748d
AN Long
CUDA: attention sinks for mma FlashAttention (llama/15157)
0ab9aba
opencl: support sink in `soft_max` (attn sinks) (llama/15152)
d8664e4
lhez
vulkan: support fattn sinks (llama/15126)
d7e9115
vulkan: Add env var to disable host visible vidmem (llama/15109)
5ec4382
HIP: add cmake option to enable compiler output of kernel resource usage metrics (llama/15103)
577f7e4
ggml: Skip backend library linking code when GGML_BACKEND_DL=ON (llama/15094)
f84562e
Christian Kastner
CUDA: GEMM for FP32/FP16/BF16 and ne11 <= 16 (llama/15131)
1d24833
fix profiling crash (llama/15072)
67ec576
opencl: add `swiglu_oai` and `add_id` (llama/15121)
1c97db6
lhez
ggml : fix fallback to CPU for unsupported ops (llama/15118)
2b7ae5e
Diego Devesa
CANN: add support for ACL Graph (llama/15065)
137a0dc
Chenguang Li
sycl: fix mul_mat selection (llama/15092)
344310a
Romain Biessy
cmake: Add GGML_BACKEND_DIR option (llama/15074)
6e460b6
Christian Kastner
vulkan: fix build when using glslang that does not support coopmat2 (llama/15062)
863e083
vulkan: Use coopmat2 for conv2d (llama/14982)
6df82f4
opencl: fix adreno compiler detection logic (llama/15029)
e6a209e
lhez