arxiv:2604.05846

AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning

Published on Apr 7
Submitted by Yuanfu Sun on Apr 9
Abstract

AgentGL is a reinforcement learning-driven framework that enables large language models to navigate and reason over complex relational data by integrating graph-native tools and curriculum learning strategies.

AI-generated summary

Large Language Models (LLMs) increasingly rely on agentic capabilities (iterative retrieval, tool use, and decision-making) to overcome the limits of static, parametric knowledge. Yet existing agentic frameworks treat external information as unstructured text and fail to leverage the topological dependencies inherent in real-world data. To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference. Specifically, we propose AgentGL, the first reinforcement learning (RL)-driven framework for AGL. AgentGL equips an LLM agent with graph-native tools for multi-scale exploration, regulates tool usage via search-constrained thinking to balance accuracy and efficiency, and employs a graph-conditioned curriculum RL strategy to stabilize long-horizon policy learning without step-wise supervision. Across diverse Text-Attributed Graph (TAG) benchmarks and multiple LLM backbones, AgentGL substantially outperforms strong GraphLLMs and GraphRAG baselines, achieving absolute improvements of up to 17.5% in node classification and 28.4% in link prediction. These results demonstrate that AGL is a promising frontier for enabling LLMs to autonomously navigate and reason over complex relational environments. The code is publicly available at https://github.com/sunyuanfu/AgentGL.
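The "graph-native tools for multi-scale exploration" mentioned above can be pictured as a small toolset the agent calls over a text-attributed graph: reading a node's text, listing its one-hop neighbors, and expanding a k-hop subgraph. The sketch below is an illustrative assumption; the tool names (`get_node_text`, `get_neighbors`, `k_hop_subgraph`) and the toy graph are hypothetical, not the paper's actual API (see the AgentGL repository for that).

```python
from collections import deque

# Toy text-attributed graph (TAG): adjacency list plus node text attributes.
GRAPH = {
    "p1": ["p2", "p3"],
    "p2": ["p1", "p4"],
    "p3": ["p1"],
    "p4": ["p2"],
}
TEXT = {
    "p1": "Graph neural networks for citation analysis.",
    "p2": "Reinforcement learning for agents.",
    "p3": "Curriculum learning strategies.",
    "p4": "LLM tool use and retrieval.",
}

def get_node_text(node: str) -> str:
    """Finest scale: read the text attribute of a single node."""
    return TEXT[node]

def get_neighbors(node: str) -> list[str]:
    """Local scale: the one-hop structural neighborhood."""
    return GRAPH[node]

def k_hop_subgraph(node: str, k: int) -> set[str]:
    """Coarser scale: breadth-first expansion up to k hops."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        cur, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in GRAPH[cur]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return seen

print(sorted(k_hop_subgraph("p1", 2)))  # ['p1', 'p2', 'p3', 'p4']
```

An RL-trained policy would decide which of these calls to issue at each step, trading off exploration breadth against the search budget.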

Community

Paper author, paper submitter

[๐Ÿ–๏ธ ACL 2026 Main Conference] What if your LLM could stop getting lost in graphs and start cruising through them like it just turned on Google Maps for agentic reasoning? Meet AgentGL โ€” our new paper on Agentic Graph Learning with LLMs via Reinforcement Learning!

In this work, we introduce Agentic Graph Learning (AGL), a new paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based reasoning. Building on this idea, we propose AgentGL, the first RL-driven framework for AGL.

๐Ÿ” AgentGL equips LLMs with graph-native search tools
๐Ÿง  Encourages search-constrained thinking to avoid redundant exploration
๐Ÿ“ˆ Leverages graph-conditioned curriculum RL to stabilize long-horizon policy learning
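The third point, graph-conditioned curriculum RL, can be sketched as ordering training queries by a structural difficulty signal so early episodes see easy instances. A minimal sketch, assuming a hypothetical difficulty proxy (2-hop neighborhood size) and a simple easy-to-hard staging; the paper's actual curriculum design may differ.

```python
from collections import deque

# Toy graph: adjacency list. Nodes with larger multi-hop neighborhoods
# are treated as harder training instances (an illustrative assumption).
GRAPH = {
    "a": ["b"],
    "b": ["a", "c", "d"],
    "c": ["b"],
    "d": ["b", "e"],
    "e": ["d"],
}

def neighborhood_size(graph: dict, node: str, k: int = 2) -> int:
    """Difficulty proxy: number of nodes reachable within k hops."""
    seen, frontier = {node}, deque([(node, 0)])
    while frontier:
        cur, depth = frontier.popleft()
        if depth == k:
            continue
        for nb in graph[cur]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return len(seen)

def curriculum_stages(graph: dict, nodes: list, n_stages: int = 2) -> list:
    """Split nodes into easy-to-hard stages for staged RL training."""
    ranked = sorted(nodes, key=lambda n: neighborhood_size(graph, n))
    size = -(-len(ranked) // n_stages)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

stages = curriculum_stages(GRAPH, list(GRAPH))
print(stages)  # [['e', 'a', 'c'], ['b', 'd']]
```

Training would then sample episodes from stage 0 first and move to later stages as the policy stabilizes, which is one common way to make long-horizon RL tractable without step-wise supervision.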

Across diverse text-attributed graph benchmarks and multiple LLM backbones, AgentGL consistently outperforms strong GraphLLM and GraphRAG baselines in both in-domain and zero-shot transfer settings, with gains of up to 17.5% on node classification and 28.4% on link prediction.

๐ŸŒ We hope this work can inspire a new direction for LLMs: not just reading static context during inference time, but actively exploring, navigating, and reasoning over complex relational worlds.


Get this paper in your agent:

    hf papers read 2604.05846

Don't have the latest CLI?

    curl -LsSf https://hf.co/cli/install.sh | bash
