Hi, I'm attempting to train a CodeLlama-34b-v2 model on a custom dataset of front-end code. I've tried to do this with GCP's Vertex AI, but the integration between Hugging Face and GCP resources isn't very intuitive. Does anyone have experience training this model on larger datasets of code?
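For context, this is roughly how I'm packaging the front-end files into a JSONL file that `datasets.load_dataset("json", data_files=...)` can consume before uploading it for training. The directory layout, extension list, and the `"text"` field name are my own choices, not anything Vertex AI or Hugging Face requires:

```python
import json
from pathlib import Path

# Front-end file types I want in the training set (my own selection).
EXTENSIONS = {".html", ".css", ".js", ".jsx", ".ts", ".tsx", ".vue"}

def build_jsonl(source_dir: str, out_path: str) -> int:
    """Collect front-end source files under source_dir into a JSONL file
    with one {"text": ...} record per file, the layout that
    datasets.load_dataset("json") expects. Returns the record count."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for path in sorted(Path(source_dir).rglob("*")):
            if not path.is_file() or path.suffix.lower() not in EXTENSIONS:
                continue
            text = path.read_text(encoding="utf-8", errors="ignore")
            if not text.strip():
                continue  # skip empty files
            out.write(json.dumps({"text": text}) + "\n")
            count += 1
    return count
```

From there I load it with `load_dataset("json", data_files="data.jsonl")` locally, but getting the same flow working inside a Vertex AI training job is where I'm stuck.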