arxiv:2603.08648

CAST: Modeling Visual State Transitions for Consistent Video Retrieval

Published on Mar 9
· Submitted by Liuyanqing on Mar 10
Abstract

The paper formalizes Consistent Video Retrieval (CVR), a task targeting the context-agnostic limitations of current retrieval formulations in video content composition, and introduces CAST, a context-aware state transition adapter that improves narrative coherence by predicting state-conditioned updates from visual history.

AI-generated summary

As video content creation shifts toward long-form narratives, composing short clips into coherent storylines becomes increasingly important. However, prevailing retrieval formulations remain context-agnostic at inference time, prioritizing local semantic alignment while neglecting state and identity consistency. To address this structural limitation, we formalize the task of Consistent Video Retrieval (CVR) and introduce a diagnostic benchmark spanning YouCook2, COIN, and CrossTask. We propose CAST (Context-Aware State Transition), a lightweight, plug-and-play adapter compatible with diverse frozen vision-language embedding spaces. By predicting a state-conditioned residual update (Δ) from visual history, CAST introduces an explicit inductive bias for latent state evolution. Extensive experiments show that CAST improves performance on YouCook2 and CrossTask, remains competitive on COIN, and consistently outperforms zero-shot baselines across diverse foundation backbones. Furthermore, CAST provides a useful reranking signal for black-box video generation candidates (e.g., from Veo), promoting more temporally coherent continuations.
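The abstract describes CAST as a lightweight adapter that predicts a state-conditioned residual update (Δ) from visual history and applies it on top of frozen embeddings. The paper's actual architecture and training objective are not given here, so the following is only a minimal sketch under stated assumptions: the adapter is taken to be a tiny two-layer MLP with random (untrained) weights, visual history is pooled by a simple mean, and `CASTAdapter`, `retrieve`, and all parameter names are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize embeddings so dot products equal cosine similarity.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

class CASTAdapter:
    """Hypothetical sketch of a residual state-transition adapter.

    Predicts a residual update (delta) to a query embedding from mean-pooled
    visual-history embeddings via a small two-layer MLP. Weights here are
    random for illustration; per the abstract, the real adapter is trained
    while the vision-language backbone stays frozen.
    """
    def __init__(self, dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.02, (dim, hidden))
        self.w2 = rng.normal(0.0, 0.02, (hidden, dim))

    def delta(self, history):
        # history: (T, dim) frozen clip embeddings of the story so far.
        h = history.mean(axis=0)                 # pool visual history
        return np.tanh(h @ self.w1) @ self.w2    # residual update (Δ)

    def update_query(self, query, history):
        # Apply the residual update, then renormalize.
        return l2_normalize(query + self.delta(history))

def retrieve(query, history, candidates, adapter):
    """Rank candidate clip embeddings against the state-updated query."""
    q = adapter.update_query(query, history)
    scores = l2_normalize(candidates) @ q        # cosine similarities
    return np.argsort(-scores), scores
```

Because the adapter only adds a residual in the frozen embedding space, it can in principle sit on top of any backbone that produces fixed-size clip embeddings, which matches the plug-and-play claim in the abstract.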

Community

Video retrieval systems often return clips that are semantically relevant but inconsistent with the ongoing procedural state or identity in a multi-step activity.

We formalize the task of Consistent Video Retrieval (CVR) and introduce a diagnostic benchmark, spanning YouCook2, COIN, and CrossTask, designed to surface these failures.

We also propose CAST, a lightweight adapter that models latent visual state transitions on top of frozen vision-language embeddings.

CAST consistently improves retrieval consistency across datasets and backbones, and can even help select more coherent candidates in video generation pipelines.
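The reranking use case above can be sketched in a few lines. The paper's actual scoring signal is not specified here, so this is a stand-in heuristic, not CAST itself: each candidate continuation is scored by cosine similarity to a predicted next-state embedding, which a trained adapter would produce but which is approximated below by the normalized mean of the history embeddings; `rerank_by_consistency` is a hypothetical name.

```python
import numpy as np

def l2n(x):
    # Unit-normalize along the last axis.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def rerank_by_consistency(history, candidates):
    """Hypothetical reranking sketch for black-box generation candidates.

    history:    (T, dim) embeddings of the clips generated so far.
    candidates: (N, dim) embeddings of candidate continuations.
    Scores each candidate by cosine similarity to a crude next-state
    prediction (the normalized mean of the history); a trained CAST
    adapter would replace this heuristic.
    """
    predicted_state = l2n(history.mean(axis=0))
    scores = l2n(candidates) @ predicted_state
    return np.argsort(-scores), scores
```

Since this only reads candidate embeddings, it treats the generator (e.g. Veo, per the abstract) as a black box: generate several continuations, embed them with the frozen backbone, and keep the highest-scoring one.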

