# Model Card for Gemma 4 LiteRT-LM

A tiny, randomly initialized Gemma 4 LiteRT-LM model for testing. It was created using the following code:
```python
# pip install litert-torch
from litert_torch.generative.export_hf.export import export

export(
    model="optimum-intel-internal-testing/tiny-random-gemma4",
    output_dir="output",
    externalize_embedder=True,
    # An empty string disables quantization; passing None would still
    # fall back to a default quantization recipe.
    quantization_recipe="",
    use_jinja_template=False,
)
```