Paper: [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099)
Update: Getting surprisingly good results at 16384 context, which is unexpected given that this context range should be untouched by the other Mistral models in the merge, which work around 8192.
Thanks to @Lewdiculous for the quants: https://huggingface.co/Lewdiculous/Prima-LelantaclesV5-7b-GGUF
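For anyone running the GGUF files locally, here is a minimal sketch using llama-cpp-python; the quant filename is a placeholder, so substitute an actual file from the repo above:

```python
from llama_cpp import Llama

# Placeholder filename: substitute an actual quant from the repo linked above.
llm = Llama(
    model_path="Prima-LelantaclesV5-7b.Q4_K_M.gguf",
    n_ctx=16384,  # the extended context window noted in the update above
)

out = llm("Write one sentence about merged language models.", max_tokens=64)
print(out["choices"][0]["text"])
```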
This model was merged using the DARE TIES merge method.
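For intuition: DARE operates on task vectors (each fine-tune's parameter delta from the base model), randomly dropping most of each delta and rescaling the survivors so the expected update is preserved; TIES-style sign election then resolves conflicts when the deltas are combined. A minimal, illustrative PyTorch sketch of the drop-and-rescale step (not mergekit's actual implementation):

```python
import torch

def dare_step(base: torch.Tensor, finetuned: torch.Tensor, drop_rate: float = 0.9) -> torch.Tensor:
    """Illustrative DARE step: Drop And REscale a task vector.

    drop_rate is the fraction of delta parameters discarded; survivors are
    scaled by 1 / (1 - drop_rate) so the expected update is unchanged.
    """
    delta = finetuned - base                            # task vector
    keep = (torch.rand_like(delta) >= drop_rate).to(delta.dtype)
    return base + delta * keep / (1.0 - drop_rate)      # drop, then rescale

# With normalize: true and equal weights (as in the config below), the
# processed task vectors are weight-normalized before being summed back
# onto the base model.
```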
The following models were included in the merge:
* Test157t/Pasta-Lake-7b
* Test157t/Prima-LelantaclesV4-7b-16k
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
  normalize: true
models:
  - model: Test157t/Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Prima-LelantaclesV4-7b-16k
    parameters:
      weight: 1
dtype: float16
```
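Configs like this are normally run through mergekit, typically via the `mergekit-yaml` CLI. A rough equivalent using mergekit's Python interface is sketched below; this assumes a recent mergekit version, and the API has shifted between releases:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML config shown above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged model to ./merged.
run_merge(
    merge_config,
    "./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use GPU if available
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)
```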
Open LLM Leaderboard evaluation results (detailed results can be found on the leaderboard):
| Metric | Value |
|---|---|
| Avg. | 73.09 |
| AI2 Reasoning Challenge (25-Shot) | 70.65 |
| HellaSwag (10-Shot) | 87.87 |
| MMLU (5-Shot) | 64.52 |
| TruthfulQA (0-shot) | 68.26 |
| Winogrande (5-shot) | 82.40 |
| GSM8k (5-shot) | 64.82 |