- Vision-DeepResearch: Incentivizing DeepResearch Capability in Multimodal Large Language Models (arXiv:2601.22060, published Jan 29, 155 upvotes)
- Vision-DeepResearch Benchmark: Rethinking Visual and Textual Search for Multimodal Large Language Models (arXiv:2602.02185, published Feb 2, 118 upvotes)
- SpecEyes: Accelerating Agentic Multimodal LLMs via Speculative Perception and Planning (arXiv:2603.23483, published 17 days ago, 61 upvotes)
- WorldAgents: Can Foundation Image Models be Agents for 3D World Models? (arXiv:2603.19708, published 22 days ago, 13 upvotes)
- GameplayQA: A Benchmarking Framework for Decision-Dense POV-Synced Multi-Video Understanding of 3D Virtual Agents (arXiv:2603.24329, published 16 days ago, 27 upvotes)
- GEMS: Agent-Native Multimodal Generation with Memory and Skills (arXiv:2603.28088, published 12 days ago, 85 upvotes)
- Unify-Agent: A Unified Multimodal Agent for World-Grounded Image Synthesis (arXiv:2603.29620, published 11 days ago, 46 upvotes)
- Story2Proposal: A Scaffold for Structured Scientific Paper Writing (arXiv:2603.27065, published 14 days ago, 22 upvotes)
- GBQA: A Game Benchmark for Evaluating LLMs as Quality Assurance Engineers (arXiv:2604.02648, published 8 days ago, 41 upvotes)
- Agentic-MME: What Agentic Capability Really Brings to Multimodal Intelligence? (arXiv:2604.03016, published 8 days ago, 32 upvotes)
- Experience Transfer for Multimodal LLM Agents in Minecraft Game (arXiv:2604.05533, published 4 days ago, 9 upvotes)