Read-only mirror. The primary source for these tasks is GitHub at
benchflow-ai/skillsbench; open issues and pull requests there.
Leaderboard / trajectories: benchflow/skillsbench-leaderboard.
# SkillsBench
Agent Skills are structured packages of procedural knowledge that augment LLM agents at inference time. SkillsBench measures how well agents put those skills to work on realistic professional tasks across many domains.
- Paper: arXiv:2602.12670
- Code & tasks: github.com/benchflow-ai/skillsbench
- Runner (BenchFlow SDK): github.com/benchflow-ai/benchflow
- Discord: discord.gg/G9dg3EfSva
## Quickstart
The canonical install and run instructions live in the GitHub repo's README
and CONTRIBUTING.md. At the time of this snapshot the runner is invoked as
`bench eval create`; check the GitHub repo for the current install command
before copying any line below.
```bash
# Run the oracle agent (no API key required) to confirm tasks load:
bench eval create -t tasks --agent oracle

# Run a real agent with skills:
bench eval create \
  -t tasks --agent claude-agent-acp -m claude-opus-4-7 \
  -s tasks/<task-id>/environment/skills \
  -o jobs/with-skills

# … and again without skills, to measure the skill delta:
bench eval create \
  -t tasks --agent claude-agent-acp -m claude-opus-4-7 \
  -o jobs/no-skills
```
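Once both runs finish, the skill delta is simply the with-skills pass rate minus the no-skills pass rate. A minimal sketch of that arithmetic, assuming you have already extracted per-task pass/fail outcomes from the two job directories (the `passed_with` / `passed_without` lists below are hypothetical; consult the runner's output format for the real layout):

```python
# Hypothetical per-task outcomes pulled from jobs/with-skills and jobs/no-skills.
passed_with = [True, True, False, True]
passed_without = [True, False, False, False]

def pass_rate(outcomes: list[bool]) -> float:
    return sum(outcomes) / len(outcomes)

skill_delta = pass_rate(passed_with) - pass_rate(passed_without)
print(f"skill delta: {skill_delta:+.1%}")  # +50.0% on this toy data
```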
To submit a run to the leaderboard, copy `jobs/<run-timestamp>/` into a fork of
benchflow/skillsbench-leaderboard under
`submissions/skillsbench/v0.1/<agent>__<model>/` and open a PR. See that repo's
README for the full submission protocol.
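The copy itself is mechanical; here is a minimal sketch, assuming a local clone of your fork (the run timestamp, agent, and model names below are illustrative placeholders, not the required values):

```python
import shutil
from pathlib import Path

# Illustrative names: substitute your own run directory, agent, and model.
run_dir = Path("jobs") / "2026-01-15T12-00-00"          # your jobs/<run-timestamp>/
fork = Path("skillsbench-leaderboard")                   # local clone of your fork
dest = fork / "submissions/skillsbench/v0.1" / "claude-agent-acp__claude-opus-4-7"
dest.mkdir(parents=True, exist_ok=True)
shutil.copytree(run_dir, dest / run_dir.name, dirs_exist_ok=True)
```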
## What's in this dataset
A single Parquet file bundling the full task corpus from upstream `main` at commit
05f263429c5033604da64e8de568ce654fcbe6e6 (91 tasks). One row per task, with
`task.toml` metadata flattened to top-level columns, key text files inlined
(`instruction`, `dockerfile`, `solve_sh`, `test_sh`, `test_outputs`), and every
other file (environment data, solution artefacts, test fixtures, skill assets) in
`files: list[struct{path, size_bytes, sha256, is_text, content}]`, with binary
content stored as Parquet binary.
| Field | Type | Description |
|---|---|---|
| `task_id` | string | directory name under `tasks/` |
| `category`, `difficulty` | string | from `task.toml` `[metadata]` |
| `difficulty_explanation` | string | author's prose on why the task is hard |
| `tags` | list[string] | `[metadata].tags` |
| `allow_internet` | bool | `[environment].allow_internet` |
| `instruction`, `task_toml`, `dockerfile`, `solve_sh`, `test_sh`, `test_outputs` | string | inlined source files |
| `skills` | list[struct] | `{name, description, skill_md}` per skill in `environment/skills/` |
| `files` | list[struct] | every other file: `{path, size_bytes, sha256, is_text, content}` |
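For example, to pull one task out of the parquet and write its files back to disk: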
```python
from pathlib import Path

from huggingface_hub import hf_hub_download
import pyarrow.compute as pc
import pyarrow.parquet as pq

path = hf_hub_download("benchflow/skillsbench",
                       "skillsbench-tasks.parquet", repo_type="dataset")
t = pq.read_table(path)
row = t.filter(pc.equal(t["task_id"], "shock-analysis-demand")).to_pylist()[0]

for f in row["files"]:
    out = Path(f["path"])
    out.parent.mkdir(parents=True, exist_ok=True)
    # content is bytes for binary files; text content may come back as str
    data = f["content"] if isinstance(f["content"], bytes) else f["content"].encode()
    out.write_bytes(data)
```
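Each `files` entry also carries a `sha256`. Assuming that field hashes the raw file bytes (a natural reading of the manifest, though the card does not say so explicitly), you can check what you just wrote, continuing from the snippet above:

```python
import hashlib

def verify(entry: dict) -> None:
    # Compare the written file's bytes against its manifest hash.
    data = Path(entry["path"]).read_bytes()
    assert hashlib.sha256(data).hexdigest() == entry["sha256"], entry["path"]

for f in row["files"]:
    verify(f)
```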
## Anti-contamination
Do not train models on this dataset. Solutions and ground-truth tests are inlined for reproducibility; training on them contaminates the benchmark and voids any numbers reported against SkillsBench.
## Citing SkillsBench
```bibtex
@misc{skillsbench_2026,
  title={SkillsBench: Benchmarking How Well Agent Skills Work Across Diverse Tasks},
  author={Xiangyi Li and Wenbo Chen and Yimin Liu and Shenghan Zheng and Xiaokun Chen and Yifeng He and Yubo Li and Bingran You and Haotian Shen and Jiankai Sun and Shuyi Wang and Binxu Li and Qunhong Zeng and Di Wang and Xuandong Zhao and Yuanli Wang and Roey Ben Chaim and Zonglin Di and Yipeng Gao and Junwei He and Yizhuo He and Liqiang Jing and Luyang Kong and Xin Lan and Jiachen Li and Songlin Li and Yijiang Li and Yueqian Lin and Xinyi Liu and Xuanqing Liu and Haoran Lyu and Ze Ma and Bowei Wang and Runhui Wang and Tianyu Wang and Wengao Ye and Yue Zhang and Hanwen Xing and Yiqi Xue and Steven Dillmann and Han-chung Lee},
  year={2026},
  eprint={2602.12670},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2602.12670}
}
```