This model was built from the Kimi-K2.6 model by applying AMD-Quark for MXFP4 quantization.

The model was quantized from a BF16-decompressed version of moonshotai/Kimi-K2.6 using AMD-Quark. The original checkpoint ships with native INT4 (compressed-tensors) quantization, so it was first decompressed to BF16 before MXFP4 quantization was applied. Both weights and activations are quantized to MXFP4.
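For context, MXFP4 is the OCP microscaling FP4 format: each block of 32 elements shares one power-of-two (E8M0) scale, and each element is stored as FP4 (E2M1). The NumPy sketch below fake-quantizes a single block to illustrate the idea; it is a simplified illustration, not AMD-Quark's actual implementation (rounding and scale-selection details may differ).

```python
import numpy as np

# Representable magnitudes of the FP4 (E2M1) element format.
FP4_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_fake_quantize_block(block: np.ndarray) -> np.ndarray:
    """Fake-quantize one 32-element block to MXFP4 (illustrative sketch only)."""
    amax = np.abs(block).max()
    if amax == 0.0:
        return np.zeros_like(block)
    # Shared E8M0 scale: a power of two chosen so the block's largest
    # element lands near the FP4 maximum magnitude of 6 (= 1.5 * 2^2).
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2)
    scaled = block / scale
    # Round each element to the nearest representable FP4 magnitude,
    # keep its sign, then rescale back to the original range.
    idx = np.abs(np.abs(scaled)[:, None] - FP4_VALUES[None, :]).argmin(axis=1)
    return np.sign(scaled) * FP4_VALUES[idx] * scale

x = np.random.randn(32).astype(np.float32)
xq = mxfp4_fake_quantize_block(x)
print("max abs quantization error:", np.abs(x - xq).max())
```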
Quantization script:
```bash
cd Quark/examples/torch/language_modeling/llm_ptq/
exclude_layers="*self_attn* *mlp.gate *mlp.gate.linear *lm_head *mlp.gate_proj *mlp.up_proj *mlp.down_proj *mm_projector* *vision_tower*"
python quantize_quark.py \
    --model_dir /path/to/Kimi-K2.6-bf16 \
    --quant_scheme mxfp4 \
    --exclude_layers $exclude_layers \
    --output_dir amd/Kimi-K2.6-MXFP4 \
    --model_export hf_format \
    --file2file_quantization
```
This model can be deployed efficiently using the vLLM backend.
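For example, a minimal offline-inference sketch using vLLM's Python API; the prompt and tensor_parallel_size value are illustrative, so match the latter to your GPU count:

```python
from vllm import LLM, SamplingParams

# Load the MXFP4 checkpoint (tensor_parallel_size shown here is an assumption).
llm = LLM(
    model="amd/Kimi-K2.6-MXFP4",
    tensor_parallel_size=8,
    trust_remote_code=True,
)

outputs = llm.generate(
    ["Explain MXFP4 quantization in one sentence."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)
```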
The model was evaluated on the GSM8K benchmark.
| Benchmark | Kimi-K2.6 | Kimi-K2.6-MXFP4 (this model) | Recovery |
|---|---|---|---|
| GSM8K (flexible-extract) | 0.9393 | 0.9318 | 99.2% |
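Recovery here is presumably the quantized score as a fraction of the baseline: 0.9318 / 0.9393 ≈ 0.992, i.e. about 99.2%.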
The GSM8K results were obtained using the lm-evaluation-harness framework, with vLLM, lm-eval, and amd-quark built and installed from source.
```bash
lm_eval \
    --model vllm \
    --model_args pretrained=amd/Kimi-K2.6-MXFP4,trust_remote_code=True,tensor_parallel_size=4 \
    --tasks gsm8k \
    --batch_size auto
```
Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.