Use with the Transformers library:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("city96/mt5-xl-fp16")
model = AutoModelForSeq2SeqLM.from_pretrained("city96/mt5-xl-fp16")
```
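Note that `from_pretrained` upcasts weights to fp32 by default; passing `torch_dtype=torch.float16` keeps them in half precision. The sketch below is a minimal smoke test, not a recipe: mT5 is pretrained on span corruption only (no supervised finetuning), so the prompt and generation output are purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Keep the checkpoint in half precision to match the stored fp16 weights
# (assumption: enough RAM/VRAM for the ~4B-parameter model)
tokenizer = AutoTokenizer.from_pretrained("city96/mt5-xl-fp16")
model = AutoModelForSeq2SeqLM.from_pretrained("city96/mt5-xl-fp16", torch_dtype=torch.float16)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# mT5 is pretrained with sentinel tokens, so a span-filling prompt is the natural smoke test
inputs = tokenizer("The capital of France is <extra_id_0>.", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```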
This is an fp16 safetensors conversion of Google's mT5-xl model, intended for downstream inference tasks.

This repository contains both the encoder and decoder parts of the model. If you only need the encoder, use city96/mt5-xl-encoder-fp16 instead, as sketched below.
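A minimal sketch of loading just the encoder with `MT5EncoderModel`, assuming the encoder-only repository ships a standard Transformers checkpoint (the tokenizer is loaded from this full repository to be safe):

```python
import torch
from transformers import AutoTokenizer, MT5EncoderModel

# Tokenizer from the full repo; encoder weights from the encoder-only repo
# (assumption: city96/mt5-xl-encoder-fp16 stores a standard MT5EncoderModel checkpoint)
tokenizer = AutoTokenizer.from_pretrained("city96/mt5-xl-fp16")
encoder = MT5EncoderModel.from_pretrained("city96/mt5-xl-encoder-fp16", torch_dtype=torch.float16)

inputs = tokenizer("A photo of a cat", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # shape: (batch, seq_len, d_model)
```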

Model size: 4B parameters · Tensor type: F16 (safetensors)