Quark Quantized MXFP4 models
This model was quantized from QuixiAI/MiniMax-M2.1-bf16 using AMD-Quark. Both the weights and the activations are quantized to MXFP4.
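For intuition, here is a toy NumPy sketch of the MXFP4 layout: blocks of 32 FP4 (E2M1) elements sharing one power-of-two (E8M0) scale. It illustrates the number format only; `mxfp4_quant_dequant` and its rounding scheme are illustrative, not Quark's actual implementation.

```python
import numpy as np

# The representable FP4 (E2M1) values: +/- {0, 0.5, 1, 1.5, 2, 3, 4, 6}.
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_GRID = np.concatenate([FP4_E2M1, -FP4_E2M1])

def mxfp4_quant_dequant(block: np.ndarray) -> np.ndarray:
    """Fake-quantize one 32-element block to MXFP4 and back (toy sketch)."""
    assert block.size == 32
    amax = np.abs(block).max()
    # Shared E8M0 scale: the smallest power of two that brings the
    # block maximum within the FP4 range (largest magnitude 6.0).
    scale = 2.0 ** np.ceil(np.log2(amax / 6.0)) if amax > 0 else 1.0
    # Round each scaled element to the nearest FP4 grid point.
    idx = np.abs(block[:, None] / scale - FP4_GRID[None, :]).argmin(axis=1)
    return FP4_GRID[idx] * scale

x = np.random.default_rng(0).normal(size=32)
print("max abs error:", np.abs(x - mxfp4_quant_dequant(x)).max())
```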
Quantization script:

```bash
cd Quark/examples/torch/language_modeling/llm_ptq/
export exclude_layers="lm_head *block_sparse_moe.gate* *self_attn*"
python3 quantize_quark.py --model_dir $MODEL_DIR \
    --quant_scheme mxfp4 \
    --num_calib_data 128 \
    --exclude_layers $exclude_layers \
    --skip_evaluation \
    --multi_gpu \
    --trust_remote_code \
    --model_export hf_format \
    --output_dir $output_dir
```
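After export, the `hf_format` checkpoint can be sanity-checked by dumping the quantization metadata it records in `config.json`. A minimal sketch, assuming the export writes a `quantization_config` section there (the exact key names can vary across Quark releases):

```python
import json
import pathlib

# Hypothetical path: replace with the $output_dir used above.
output_dir = pathlib.Path("output_dir")
config = json.loads((output_dir / "config.json").read_text())
# Dump whatever quantization metadata the export recorded.
print(json.dumps(config.get("quantization_config", {}), indent=2))
```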
For further details or issues, please refer to the AMD-Quark documentation or contact the respective developers.
The model was evaluated on the GSM8K benchmark using the vLLM framework.
| Benchmark | QuixiAI/MiniMax-M2.1-bf16 | amd/MiniMax-M2.1-MXFP4 (this model) | Recovery |
|-----------|---------------------------|-------------------------------------|----------|
| gsm8k (flexible-extract) | 0.9356 | 0.9348 | 99.91% |
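The Recovery column is simply the quantized score divided by the BF16 baseline:

```python
# Recovery = quantized score / BF16 baseline score
bf16, mxfp4 = 0.9356, 0.9348
print(f"{mxfp4 / bf16:.2%}")  # 99.91%
```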
The GSM8K results were obtained with the vLLM framework, starting from the Docker image rocm/vllm:rocm7.0.0_vllm_0.11.2_20251210; vLLM was then reinstalled from source inside the container with fixes applied for model support.
```bash
# Reinstall vLLM from source inside the container
pip uninstall vllm -y
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout v0.13.0
pip install -r requirements/rocm.txt
python setup.py develop  # build in-place against the ROCm requirements
cd ..
```
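Before serving, it is worth confirming that Python picks up the freshly built vLLM rather than a leftover wheel:

```python
# Quick check that the source build is the one being imported.
import vllm
print(vllm.__version__)  # should correspond to the v0.13.0 checkout
```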
Serve the model:

```bash
VLLM_ROCM_USE_AITER=1 \
VLLM_DISABLE_COMPILE_CACHE=1 \
vllm serve "$MODEL" \
    --tensor-parallel-size 4 \
    --trust-remote-code \
    --max-model-len 32768 \
    --port 8899
```
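Once the server is up, a quick smoke test can be sent to vLLM's OpenAI-compatible endpoint. The snippet below is illustrative and assumes the `openai` Python package is installed; the served model id is whatever `$MODEL` resolved to:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API on the port chosen above.
client = OpenAI(base_url="http://127.0.0.1:8899/v1", api_key="EMPTY")
model_id = client.models.list().data[0].id  # the single served model
resp = client.chat.completions.create(
    model=model_id,
    messages=[{"role": "user", "content": "What is 12 * 7?"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```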
Then run the GSM8K evaluation against the running server:

```bash
python vllm/tests/evals/gsm8k/gsm8k_eval.py \
    --host http://127.0.0.1 --port 8899 \
    --num-questions 1000 --save-results logs
```
Modifications Copyright (c) 2026 Advanced Micro Devices, Inc. All rights reserved.
Base model: MiniMaxAI/MiniMax-M2.1