MiniMaxAI/SynLogic
How to use MiniMaxAI/SynLogic-7B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="MiniMaxAI/SynLogic-7B")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/SynLogic-7B")
model = AutoModelForCausalLM.from_pretrained("MiniMaxAI/SynLogic-7B")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
How to use MiniMaxAI/SynLogic-7B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "MiniMaxAI/SynLogic-7B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MiniMaxAI/SynLogic-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
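The same endpoint can also be called from Python. The sketch below uses only the standard library and assumes the `vllm serve` server above is listening on localhost:8000; `build_payload` and `chat` are illustrative helper names, not part of vLLM.

```python
# Minimal Python client for the OpenAI-compatible vLLM server above.
# Assumes the server started with `vllm serve` is running on localhost:8000.
import json
import urllib.request

def build_payload(prompt: str) -> dict:
    # Same JSON body as the curl example
    return {
        "model": "MiniMaxAI/SynLogic-7B",
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible response: the reply is in the first choice
    return body["choices"][0]["message"]["content"]

# Usage (requires the running server):
# print(chat("What is the capital of France?"))
```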
How to use MiniMaxAI/SynLogic-7B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "MiniMaxAI/SynLogic-7B" \
--host 0.0.0.0 \
--port 30000
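The server takes a while to load the weights before it accepts requests. A hedged readiness check, assuming the OpenAI-compatible `/v1/models` listing route is up once the server is ready (`wait_for_server` is an illustrative helper name, not part of SGLang):

```python
# Poll the OpenAI-compatible /v1/models route until the SGLang server
# on port 30000 answers, or give up after `timeout` seconds.
import time
import urllib.request
import urllib.error

def wait_for_server(url: str = "http://localhost:30000/v1/models",
                    timeout: float = 300.0) -> bool:
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            # Server not up yet; retry shortly
            time.sleep(2)
    return False
```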
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MiniMaxAI/SynLogic-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Alternatively, run the SGLang server via Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "MiniMaxAI/SynLogic-7B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MiniMaxAI/SynLogic-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
How to use MiniMaxAI/SynLogic-7B with Docker Model Runner:
docker model run hf.co/MiniMaxAI/SynLogic-7B
SynLogic-7B is a logical reasoning model built on Qwen2.5-7B-Base and trained using reinforcement learning on our SynLogic dataset. Despite its smaller size, the model demonstrates strong logical reasoning capabilities and effective generalization to mathematical domains.
Logical reasoning benchmarks:
| Model | KOR-Bench | BBH | BBEH |
|---|---|---|---|
| Qwen2.5-7B-Instruct | 38.6 | 62.7 | 12.4 |
| SynLogic-7B | 48.1 | 66.5 | 8.0 |
Mathematical benchmarks:
| Model | AIME 2024 | MATH 500 | AMC 2023 |
|---|---|---|---|
| Qwen2.5-7B-Base | 0.3 | 64.6 | 30.0 |
| Qwen2.5-7B-Instruct | 6.3 | 76.4 | 52.5 |
| SynLogic-7B | 10.0 | 71.8 | 55.0 |
If you find SynLogic useful, please cite:
@misc{liu2025synlogic,
title={SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond},
author={Junteng Liu and Yuanxiang Fan and Zhuo Jiang and Han Ding and Yongyi Hu and Chi Zhang and Yiqi Shi and Shitong Weng and Aili Chen and Shiqi Chen and Yunan Huang and Mozhi Zhang and Pengyu Zhao and Junjie Yan and Junxian He},
year={2025},
eprint={2505.19641},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2505.19641},
}