Instructions to use pruna-test/tiny_llama_hqq with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use pruna-test/tiny_llama_hqq with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="pruna-test/tiny_llama_hqq")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pruna-test/tiny_llama_hqq")
model = AutoModelForCausalLM.from_pretrained("pruna-test/tiny_llama_hqq")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Pruna AI
How to use pruna-test/tiny_llama_hqq with Pruna AI:
```python
# Load the model and call it directly on chat messages
from pruna_pro import PrunaProModel

pipe = PrunaProModel.from_pretrained("pruna-test/tiny_llama_hqq")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from pruna_pro import PrunaProModel
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pruna-test/tiny_llama_hqq")
model = PrunaProModel.from_pretrained("pruna-test/tiny_llama_hqq")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use pruna-test/tiny_llama_hqq with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "pruna-test/tiny_llama_hqq"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pruna-test/tiny_llama_hqq",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
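Since the server exposes an OpenAI-compatible API, you can also query it from Python instead of curl. A minimal sketch, assuming the `openai` client package is installed and the server is running on the default local port shown above (the dummy `api_key` is only there because the client requires one):

```python
# Minimal sketch: call the local vLLM server through the OpenAI-compatible API.
from openai import OpenAI

# base_url and api_key are assumptions matching the default local setup above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="pruna-test/tiny_llama_hqq",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```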
- SGLang
How to use pruna-test/tiny_llama_hqq with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "pruna-test/tiny_llama_hqq" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pruna-test/tiny_llama_hqq",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "pruna-test/tiny_llama_hqq" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "pruna-test/tiny_llama_hqq",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
- Docker Model Runner
How to use pruna-test/tiny_llama_hqq with Docker Model Runner:
```bash
docker model run hf.co/pruna-test/tiny_llama_hqq
```
Model Card for loulou2/tiny_llama_hqq
This model was created using the pruna library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
Usage
First things first, you need to install the pruna_pro library:
```bash
pip install pruna_pro
```
You can use the transformers library to load the model, but this might not include all optimizations by default.
To ensure that all optimizations are applied, load the model with the pruna_pro library using the following code:
```python
from pruna_pro import PrunaProModel

loaded_model = PrunaProModel.from_pretrained(
    "loulou2/tiny_llama_hqq"
)

# we can then run inference using the methods supported by the base model
```
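For a causal language model like this one, that typically means calling generate on the loaded model, mirroring the chat snippet shown earlier on this page. A minimal sketch, continuing from the block above and assuming the tokenizer is stored in the same repository:

```python
from transformers import AutoTokenizer

# Assumption: the tokenizer lives in the same repository as the compressed model.
tokenizer = AutoTokenizer.from_pretrained("loulou2/tiny_llama_hqq")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(loaded_model.device)

outputs = loaded_model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```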
Alternatively, you can visit the Pruna documentation for more information.
Smash Configuration
The compression configuration of the model is stored in the smash_config.json file, which describes the optimization methods that were applied to the model.
```json
{
  "batcher": null,
  "cacher": null,
  "compiler": null,
  "distiller": null,
  "distributer": null,
  "enhancer": null,
  "factorizer": null,
  "kernel": null,
  "pruner": null,
  "quantizer": "hqq",
  "recoverer": null,
  "hqq_backend": "torchao_int4",
  "hqq_compute_dtype": "torch.bfloat16",
  "hqq_force_hf_implementation": true,
  "hqq_group_size": 64,
  "hqq_use_torchao_kernels": false,
  "hqq_weight_bits": 4,
  "batch_size": 1,
  "device": "cuda",
  "device_map": null,
  "save_fns": [],
  "load_fns": [
    "transformers"
  ],
  "reapply_after_load": {
    "factorizer": null,
    "pruner": null,
    "quantizer": null,
    "distiller": null,
    "kernel": null,
    "cacher": null,
    "recoverer": null,
    "distributer": null,
    "compiler": null,
    "batcher": null,
    "enhancer": null
  }
}
```
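If you want to inspect this configuration programmatically, one option is to pull the file with the huggingface_hub client. A minimal sketch, assuming smash_config.json sits at the root of the model repository:

```python
import json

from huggingface_hub import hf_hub_download

# Download smash_config.json from the model repository and read it locally.
config_path = hf_hub_download(
    repo_id="loulou2/tiny_llama_hqq", filename="smash_config.json"
)
with open(config_path) as f:
    smash_config = json.load(f)

print(smash_config["quantizer"])        # "hqq"
print(smash_config["hqq_weight_bits"])  # 4
```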
🌍 Join the Pruna AI community!