Tags:
- Video-Text-to-Text
- Transformers
- Safetensors
- English
- qwen2_5_vl
- image-text-to-text
- multimodal
- text-generation-inference
Instructions to use OpenGVLab/VideoChat-R1_5-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OpenGVLab/VideoChat-R1_5-7B with Transformers:

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("OpenGVLab/VideoChat-R1_5-7B")
model = AutoModelForImageTextToText.from_pretrained("OpenGVLab/VideoChat-R1_5-7B")
```

- Notebooks
- Google Colab
- Kaggle
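Loading the processor and model is only the first step; to actually ask a question about a video, you pass a chat-style message list through the processor's chat template and decode the generated tokens. The sketch below assumes the Qwen2.5-VL message convention (a `"video"` content entry alongside `"text"`) and a recent Transformers version whose `apply_chat_template` accepts video paths; the video filename is a placeholder, and `run()` is not called here since it downloads several GB of weights.

```python
# Hedged sketch of video question answering with VideoChat-R1_5-7B.
# Assumes the Qwen2.5-VL chat-message convention; "clip.mp4" is a placeholder.

MODEL_ID = "OpenGVLab/VideoChat-R1_5-7B"


def build_messages(video_path: str, question: str) -> list:
    """Build a chat-template message list with one video and one question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "video", "path": video_path},
                {"type": "text", "text": question},
            ],
        }
    ]


def run(video_path: str, question: str) -> str:
    """Load the model, apply the chat template, and generate an answer."""
    # Imported lazily so the sketch can be inspected without the heavy deps.
    from transformers import AutoProcessor, AutoModelForImageTextToText

    processor = AutoProcessor.from_pretrained(MODEL_ID)
    model = AutoModelForImageTextToText.from_pretrained(MODEL_ID, device_map="auto")
    inputs = processor.apply_chat_template(
        build_messages(video_path, question),
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=256)
    # Strip the prompt tokens so only the model's answer is decoded.
    answer_ids = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(answer_ids, skip_special_tokens=True)[0]


messages = build_messages("clip.mp4", "What happens in this video?")
```

For example, `run("clip.mp4", "What happens in this video?")` would return the model's textual answer; the exact video-frame sampling options depend on the processor configuration shipped with the checkpoint.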