Commit History
finetune: SGD optimizer, more CLI args (llama/13873)
f585fe7
ggml : update `ggml_rope_multi` (llama/12665)
b4896dc
ggml : remove old kompute, cann (skip) (#3349)
d321914
ggml: Add initial WebGPU backend (llama/14521)
0dd208f
Reese Levine
sync : resolve conflicts (ggml/0)
497add0
ggml : add ggml_scale_bias (llama/14417)
573d50a
CUDA: add bilinear interpolation for upscale (llama/14563)
68ded09
ggml : implement GEGLU_ERF and GEGLU_QUICK ops (llama/14445)
f798922
Sigbjørn Skjæret
ggml : fix FA mask dim 2 and 3 (llama/14505)
a89dc81
llama : initial Mamba-2 support (llama/9126)
1b4087e
ggml : support bcast ggml_soft_max_ext, ggml_flash_attn_ext (llama/14435)
ebacb3e
ggml : Callback before abort (llama/14481)
ccee17d
Add Conv2d for CPU (llama/14388)
68eb27a
ggml : implement REGLU/GEGLU/SWIGLU ops (llama/14158)
add5c0f
vulkan: Add fusion support for RMS_NORM+MUL (llama/14366)
737f12d
ggml-cpu: enable IBM NNPA Vector Intrinsics (llama/14317)
fea8f94
ggml-cpu : "align corners" for bilinear upscale/downscale (ggml/1285)
88e7829
Add `ggml_roll` (ggml/1274)
71923e5
threading: support for GGML_SCHED_PRIO_LOW, update thread info on Windows to avoid throttling (llama/12995)
d5d55f2
Max Krasnyansky
Diego Devesa
ggml : add ggml_repeat_4d (llama/13824)
3fe8af8
ggml : remove ggml_graph_import and ggml_graph_export declarations (ggml/1247)
3c9a1d2
ggml : fix the order of ggml_unary_op (llama/13718)
bdae2b3
ggml : add ggml_gelu_erf() (llama/13667)
6c9cd9a
mnist: fix segmentation fault (ggml/1227)
341f451
llama/ggml: add LLM training support (llama/10544)
8d3b3c1
Add `--no-op-offload` to improve `-ot` pp perf in MoE models like llama4 400B (llama/13386)
418769d
David Huang
CUDA: fix bad asserts for partial offload (llama/13337)
23e676b
CUDA: fix logic for clearing padding with -ngl 0 (llama/13320)
c3e51a2
CUDA: fix q_nope_absorbed prec for DS 2 Lite f16 (llama/13137)
e9c9d4b
ggml: move fp16/bf16 conversion optimizations to CPU backend + export conversion APIs (llama/13107)
c47823e
rpc : do not wait for response when sending RPC_CMD_SET_TENSOR (llama/12943)
691c071
ggml : fix ggml_gallocr_ptr type (ggml/1205)
cf46d5c
Diego Devesa
rpc : add RPC_CMD_HELLO (llama/12955)
ff22836
ggml : Depthwise 2D convolution (ggml/1152)
0c950d5
ggml : add bilinear upscale support (ggml/1185)
4c5e449
Diego Devesa
ggml : add more generic custom op, remove deprecated custom ops (ggml/1183)
ba7a5f8
Diego Devesa
metal : improve FA + improve MoE (llama/12612)
04a3389
rpc : send hash when tensor data is above some fixed threshold (llama/12496)
c39f9c4
llama: Add support for RWKV v7 architecture (llama/12412)
727de7e
ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions (llama/12154)
05466a9
Rémy O
ggml : portability fixes for VS 2017 (llama/12150)
49e3343
Marcus Groeber (mgroeber9110)
ggml : upgrade init_tensor API to return a ggml_status (llama/11854)
d6b6852
William Tambellini
slaren
ggml-cpu: Support s390x SIMD Instruction Set (llama/12019)
4aa54ec
Aaron Teo
Jinyang He
junchao-zhao
ggml-cpu: Add CPU backend support for KleidiAI library (llama/11390)
9de6d81
Charles Xu
repo : update links to new url (llama/11886)
9705bb5
cleanup: fix compile warnings associated with gnu_printf (llama/11811)
ef6a968
bandoti