| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | sub_issues_summary | active_lock_reason | draft | pull_request | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/peft/issues/2415
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2415/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2415/comments
|
https://api.github.com/repos/huggingface/peft/issues/2415/events
|
https://github.com/huggingface/peft/issues/2415
| 2,905,929,237
|
I_kwDOIf9iDM6tNPYV
| 2,415
|
size mismatch for lm_head when finetuning QWEN2.5
|
{
"login": "minmie",
"id": 40080081,
"node_id": "MDQ6VXNlcjQwMDgwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmie",
"html_url": "https://github.com/minmie",
"followers_url": "https://api.github.com/users/minmie/followers",
"following_url": "https://api.github.com/users/minmie/following{/other_user}",
"gists_url": "https://api.github.com/users/minmie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmie/subscriptions",
"organizations_url": "https://api.github.com/users/minmie/orgs",
"repos_url": "https://api.github.com/users/minmie/repos",
"events_url": "https://api.github.com/users/minmie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-03-10T02:45:29
| 2025-03-10T02:45:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers version: 4.49.0
Platform: Linux-6.6.0-72.0.0.64.oe2403.x86_64-x86_64-with-glibc2.38
Python version: 3.10.16
Huggingface_hub version: 0.29.1
Safetensors version: 0.5.3
Accelerate version: 1.4.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.2.2+cu121 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?:
Using GPU in script?:
GPU type: NVIDIA L4
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
I load an adapter for Qwen/Qwen2.5-0.5B using the following code, and an error occurs:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
# peft_model_id = args.output_dir
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
peft_model_id,
device_map="auto",
torch_dtype=torch.float16
)
```
Error info as follows:
```python
Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered.
Traceback (most recent call last):
File "/home/chenjq/.pycharm_helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/chenjq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/chenjq/pythonWork/nlp/test14.py", line 11, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 581, in from_pretrained
load_result = model.load_adapter(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 1239, in load_adapter
load_result = set_peft_model_state_dict(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 451, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False)
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.lm_head.modules_to_save.default.weight: copying a param with shape torch.Size([151936, 896]) from checkpoint, the shape in current model is torch.Size([151665, 896]).
Process finished with exit code 1
```
However, if I use the following code to load the model, everything works fine:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_name ='/home/models/qwen/Qwen2.5-0.5B'
adapter_model_name = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(model, adapter_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
```
Some info from [here](https://github.com/huggingface/transformers/issues/36550#issuecomment-2708336059) that may help:
Hi everyone! I did some research and found out that the error occurs because the len(tokenizer)(151665) and the embedding size (151936) of Qwen/Qwen2.5-0.5B do not match. _BaseAutoPeftModel.from_pretrained resizes the base model embeddings to match with the tokenizer ([here](https://github.com/huggingface/peft/blob/8edaae9460e4b76bce9431dc187402178ff7b689/src/peft/auto.py#L137)) and as a result, it is unable to load the saved weights. I think a possible solution might be to only resize base model embeddings if the tokenizer size differs from the base tokenizer size. What do you think?
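For reference, a minimal sketch that makes the described mismatch visible (the adapter path is the one from this report and is illustrative only):
```python
# Illustrative check of the vocab-size mismatch described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "Qwen/Qwen2.5-0.5B"
adapter_path = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"

model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(adapter_path)

# AutoPeftModelForCausalLM resizes the base embeddings to len(tokenizer) before loading,
# while the adapter checkpoint stores lm_head/embed_tokens weights with the original vocab size.
print(len(tokenizer))                                # 151665 in this report
print(model.get_input_embeddings().weight.shape[0])  # 151936 in this report
```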
The adapter was trained using the following code:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
dataset = load_dataset("trl-lib/Capybara", split="train")
dataset = dataset.select(range(500))
MODEL_ID = 'Qwen/Qwen2.5-0.5B'
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules="all-linear",
modules_to_save=["lm_head", "embed_token"],
task_type="CAUSAL_LM",
)
args = SFTConfig(
output_dir="Qwen2.5-0.5B-SFT-Capybara", # directory to save and repository id
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=4, # batch size per device during training
gradient_accumulation_steps=4, # number of steps before performing a backward/update pass
gradient_checkpointing=True, # use gradient checkpointing to save memory
optim="adamw_torch_fused", # use fused adamw optimizer
logging_steps=10, # log every 10 steps
save_strategy="epoch", # save checkpoint every epoch
bf16=True, # use bfloat16 precision
tf32=True, # use tf32 precision
learning_rate=2e-4, # learning rate, based on QLoRA paper
max_grad_norm=0.3, # max gradient norm based on QLoRA paper
warmup_ratio=0.03, # warmup ratio based on QLoRA paper
lr_scheduler_type="constant", # use constant learning rate scheduler
push_to_hub=False, # push model to hub
# report_to="tensorboard", # report metrics to tensorboard
)
trainer = SFTTrainer(
MODEL_ID,
train_dataset=dataset,
args=args,
peft_config=peft_config
)
trainer.train()
print('end')
```
### Expected behavior
Hope the model can predict normally.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2415/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2413
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2413/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2413/comments
|
https://api.github.com/repos/huggingface/peft/issues/2413/events
|
https://github.com/huggingface/peft/issues/2413
| 2,901,962,025
|
I_kwDOIf9iDM6s-G0p
| 2,413
|
`LoraConfig` multiple properties should be unified
|
{
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 9
| 2025-03-07T04:14:24
| 2025-03-10T14:59:51
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
@BenjaminBossan I am trying to add dynamic Lora support to both vLLM and SGLang as LoraConfig already supports this dynamic control via the following variables:
- `rank_pattern`: regex matching that controls which modules get different `r`/`rank` values
- `exclude_modules`: regex controlling which modules are excluded from LoRA completely
- `alpha_pattern`: regex matching for `alpha` overrides; exactly the same as `rank_pattern` but for a different property
Nothing is wrong with them individually, but together they become unnecessarily detached, which has a negative impact not only on code cost but also on dynamic-control efficiency.
GPTQModel uses a single `dynamic`: Dict[str, Dict[...]] where the `str` key is a regex with an optional `+:` (positive) prefix or `-:` (negative) prefix.
The dict value is the property override in `name: value` format.
Example as applied to PEFT (Proposal):
```
# implicit +: prefix if not used
# prefixes are stripped before the regex is applied
"mlp\.down_proj": { "r": 128 } # implicit positive
"+:mlp\.down_proj": { "r": 256 } # explicit positive
"-:mlp\.gate_proj": {} # negative
```
This simple control allows 3 states.
- Positive match == override any property values in the base config (LoraConfig).
- Negative match == skip this module for LoRA (no LoraConfig at all).
- No match == no module matched, so the base LoraConfig is used.
This single control replaces all existing PEFT controls with the same functionality while allowing ALL properties to be dynamically overridden (if necessary) without any additional APIs/LoraConfig vars. As it stands, you need to add code and logic for every LoraConfig property that participates in dynamic override/control.
Basically I want the PEFT LoraConfig to be the clean standard for vLLM and SGLang when it comes to dynamic control. Having a unified `dynamic` override system makes everyone's life much easier and at the same time eliminates the need to write new code each time a new LoraConfig property comes into play.
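For illustration, a minimal sketch of the *proposed* resolution semantics only; this `dynamic` dict is not an existing PEFT (or vLLM/SGLang) API, and the module names and override values are made up:
```python
# Sketch of the proposed +:/-: prefix matching and override lookup.
import re
from typing import Optional

dynamic = {
    r"mlp\.down_proj": {"r": 128},   # implicit positive prefix
    r"+:mlp\.up_proj": {"r": 256},   # explicit positive prefix
    r"-:mlp\.gate_proj": {},         # negative prefix: skip LoRA entirely
}

def resolve_override(module_name: str) -> Optional[dict]:
    """Return overrides for a positive match, None for a negative match, {} for no match."""
    for pattern, override in dynamic.items():
        negative = pattern.startswith("-:")
        stripped = pattern[2:] if pattern[:2] in ("+:", "-:") else pattern
        if re.search(stripped, module_name):
            return None if negative else override
    return {}  # no match: the base LoraConfig applies unchanged

print(resolve_override("model.layers.0.mlp.down_proj"))    # {'r': 128}
print(resolve_override("model.layers.0.mlp.gate_proj"))    # None -> no LoRA on this module
print(resolve_override("model.layers.0.self_attn.q_proj")) # {} -> base config
```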
Let me know what you think. I am willing to spend time working on it. You can also reach me at [email protected] and on [X: qubitium](https://x.com/qubitium). I really would love to chat with you for like 15 minutes to ping-pong this idea with you.
CC: @SunMarc @MekkCyber
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2413/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2412
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2412/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2412/comments
|
https://api.github.com/repos/huggingface/peft/issues/2412/events
|
https://github.com/huggingface/peft/issues/2412
| 2,901,275,403
|
I_kwDOIf9iDM6s7fML
| 2,412
|
Lora_B weight becomes 0 when using AutoModel
|
{
"login": "makcedward",
"id": 36614806,
"node_id": "MDQ6VXNlcjM2NjE0ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36614806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makcedward",
"html_url": "https://github.com/makcedward",
"followers_url": "https://api.github.com/users/makcedward/followers",
"following_url": "https://api.github.com/users/makcedward/following{/other_user}",
"gists_url": "https://api.github.com/users/makcedward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makcedward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makcedward/subscriptions",
"organizations_url": "https://api.github.com/users/makcedward/orgs",
"repos_url": "https://api.github.com/users/makcedward/repos",
"events_url": "https://api.github.com/users/makcedward/events{/privacy}",
"received_events_url": "https://api.github.com/users/makcedward/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-03-06T19:45:29
| 2025-03-06T19:45:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers version: 4.49.0
peft version: 0.14.0
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModel, AutoModelForCausalLM
from peft import PeftModel
base_model_id = "meta-llama/Llama-3.2-1B"
adapter_id = "makcedward/Llama-3.2-1B-Instruct-LoRA-Adapter"
auto_model = PeftModel.from_pretrained(
AutoModel.from_pretrained(
base_model_id,
),
adapter_id
)
auto_casual_model = PeftModel.from_pretrained(
AutoModelForCausalLM.from_pretrained(
base_model_id,
),
adapter_id
)
print("Auto Model")
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[-0.0168, 0.0056, -0.0009, ..., 0.0149, -0.0161, -0.0064],
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[0., 0., 0., ..., 0., 0., 0.],
print("AutoModelForCausalLM")
print(auto_casual_model.base_model.model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[ 1.5867e-02, 2.7307e-02, -1.8503e-02, ..., -1.2035e-02,
print(auto_casual_model.base_model.model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[-7.1123e-04, -4.3834e-03, -1.7415e-03, ..., 4.3514e-03,
```
### Expected behavior
Able to load LoRA weights by using AutoModel
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2412/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2410
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2410/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2410/comments
|
https://api.github.com/repos/huggingface/peft/issues/2410/events
|
https://github.com/huggingface/peft/issues/2410
| 2,899,373,069
|
I_kwDOIf9iDM6s0OwN
| 2,410
|
running forward loop using get_peft_model disables requires_grad on output
|
{
"login": "Hamidreza3252",
"id": 27887474,
"node_id": "MDQ6VXNlcjI3ODg3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/27887474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hamidreza3252",
"html_url": "https://github.com/Hamidreza3252",
"followers_url": "https://api.github.com/users/Hamidreza3252/followers",
"following_url": "https://api.github.com/users/Hamidreza3252/following{/other_user}",
"gists_url": "https://api.github.com/users/Hamidreza3252/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hamidreza3252/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hamidreza3252/subscriptions",
"organizations_url": "https://api.github.com/users/Hamidreza3252/orgs",
"repos_url": "https://api.github.com/users/Hamidreza3252/repos",
"events_url": "https://api.github.com/users/Hamidreza3252/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hamidreza3252/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-03-06T05:12:42
| 2025-03-06T15:35:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I would like to report a recent issue I have been facing, but I am not sure if it is a bug or if I am doing something wrong in the process. The steps to reproduce it are simple. The issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the model using the sample code in https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct and convert it to a PEFT model using a typical **8bit** LoraConfig with just `target_modules=["q_proj", "v_proj"]`. Then run a forward call on the model using a dummy input, such as `input_ids = torch.zeros((4, 1247)).to(device)`. When I inspect the `requires_grad` of the `logits` attribute of the output, it is False, meaning that I cannot run backward based on that output. This issue has been puzzling me for a while. I would appreciate it if you could help me with a solution or advice on how to address it properly.
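A minimal sketch of the steps described above (unverified); the 8-bit quantization config and the dtype of the dummy input are assumptions, the rest follows the report:
```python
import torch
from transformers import Qwen2VLForConditionalGeneration, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the model in 8-bit and attach a typical LoRA config targeting q_proj/v_proj.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
peft_config = LoraConfig(target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, peft_config)

# Dummy forward call as described in the report.
input_ids = torch.zeros((4, 1247), dtype=torch.long).to(model.device)
output = model(input_ids=input_ids)
print(output.logits.requires_grad)  # reported as False in this issue
```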
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2410/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2407
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2407/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2407/comments
|
https://api.github.com/repos/huggingface/peft/issues/2407/events
|
https://github.com/huggingface/peft/issues/2407
| 2,895,061,583
|
I_kwDOIf9iDM6sjyJP
| 2,407
|
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
{
"login": "maxliang114514",
"id": 196797831,
"node_id": "U_kgDOC7rlhw",
"avatar_url": "https://avatars.githubusercontent.com/u/196797831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxliang114514",
"html_url": "https://github.com/maxliang114514",
"followers_url": "https://api.github.com/users/maxliang114514/followers",
"following_url": "https://api.github.com/users/maxliang114514/following{/other_user}",
"gists_url": "https://api.github.com/users/maxliang114514/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxliang114514/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxliang114514/subscriptions",
"organizations_url": "https://api.github.com/users/maxliang114514/orgs",
"repos_url": "https://api.github.com/users/maxliang114514/repos",
"events_url": "https://api.github.com/users/maxliang114514/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxliang114514/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 6
| 2025-03-04T18:09:43
| 2025-03-10T11:17:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**When I attempted to swap out the LoRA configuration in QLoRA (see qlora.py in _https://github.com/artidoro/qlora_) for VeRA, I ran into the following error:**
Traceback (most recent call last):
File "qvera.py", line 859, in <module>
train()
File "qvera.py", line 821, in train
train_result = trainer.train()
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
loss = self.compute_loss(model, inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
outputs = model(**inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/peft_model.py", line 1644, in forward
return self.base_model(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 197, in forward
return self.model.forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward
outputs = self.model(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 685, in forward
layer_outputs = torch.utils.checkpoint.checkpoint(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 107, in forward
outputs = run_function(*args)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 681, in custom_forward
return module(*inputs, output_attentions, None)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 305, in forward
query_states = self.q_proj(hidden_states)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/tuners/vera/layer.py", line 287, in forward
result = result + lambda_b * F.linear(lambda_d * F.linear(dropout(x), sliced_A), sliced_B)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
**However, with the original settings, everything was trainable. My GPU specs are as follows:**
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.135 Driver Version: 550.135 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:02:00.0 Off | N/A |
| 22% 19C P8 11W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 2080 Ti Off | 00000000:03:00.0 Off | N/A |
| 22% 19C P8 21W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA GeForce RTX 2080 Ti Off | 00000000:82:00.0 Off | N/A |
| 22% 20C P8 17W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA GeForce RTX 2080 Ti Off | 00000000:83:00.0 Off | N/A |
| 22% 19C P8 8W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
**Is this an issue specific to Vera's unique characteristics? Given the scarcity of resources on Vera, I'd greatly appreciate any help with this problem, thank you!**
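For context, a minimal sketch of the kind of config swap described above; the parameter values are hypothetical, not the ones from qlora.py:
```python
from peft import VeraConfig, get_peft_model

# Hypothetical VeRA config that would replace the LoraConfig passed to get_peft_model.
vera_config = VeraConfig(
    r=256,
    target_modules=["q_proj", "v_proj"],
    vera_dropout=0.05,
    task_type="CAUSAL_LM",
)
# model = get_peft_model(model, vera_config)  # instead of get_peft_model(model, lora_config)
```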
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2407/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2405
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2405/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2405/comments
|
https://api.github.com/repos/huggingface/peft/issues/2405/events
|
https://github.com/huggingface/peft/issues/2405
| 2,890,200,666
|
I_kwDOIf9iDM6sRPZa
| 2,405
|
SafetensorError when Merging LoRA Weights
|
{
"login": "Nothern-ai",
"id": 143473220,
"node_id": "U_kgDOCI06RA",
"avatar_url": "https://avatars.githubusercontent.com/u/143473220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nothern-ai",
"html_url": "https://github.com/Nothern-ai",
"followers_url": "https://api.github.com/users/Nothern-ai/followers",
"following_url": "https://api.github.com/users/Nothern-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/Nothern-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nothern-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nothern-ai/subscriptions",
"organizations_url": "https://api.github.com/users/Nothern-ai/orgs",
"repos_url": "https://api.github.com/users/Nothern-ai/repos",
"events_url": "https://api.github.com/users/Nothern-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nothern-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-03-03T05:22:05
| 2025-03-03T10:11:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Original Working Environment: Python 3.8, transformers==4.46.0.dev0, safetensors==0.4.4, peft==0.12.0, trl==0.10.1
New Environment with Issue: transformers==4.45.2, safetensors==0.4.4, peft==0.12.0, trl==0.10.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
When migrating from the original environment to a new machine with slightly different package versions, I encountered an error during the model merging process.
My workflow involves:
1. Saving LoRA weights
2. Merging these weights with the base model
The error occurs specifically during the loading of the safetensors files after merging.
Reproduction steps:
1. No training needed; directly save the LoRA weights (this step succeeds)
2. Attempt to merge the saved weights with the original model
3. The merge fails with error 1 shown below
```python
# train_critic.py
import os
import time
import shutil
import argparse
import torch
import torch.distributed as dist
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
BitsAndBytesConfig,
)
from datasets import load_dataset
from trl import DPOTrainer, DPOConfig
from peft import LoraConfig, PeftModel
import wandb
from datetime import datetime
def print_rank_0(message):
if dist.get_rank() == 0:
print(message)
def main():
# ------------- Parse Arguments -------------
parser = argparse.ArgumentParser()
parser.add_argument("--epoch", type=int, required=True, help="Current outer training iteration (which round)")
parser.add_argument("--pref_dir", type=str, required=True, help="Folder for storing the preference dataset")
parser.add_argument("--weights_dir", type=str, required=True, help="Folder for saving and loading weights")
parser.add_argument("--train_epochs", type=int, default=1, help="Number of epochs to run in this DPO fine-tuning")
parser.add_argument("--beta", type=float, default=0.2, help="Beta hyperparameter for DPO")
parser.add_argument("--learning_rate", type=float, default=5e-6, help="Learning rate")
parser.add_argument("--batch_size", type=int, default=1, help="Batch Size")
args = parser.parse_args()
# ------------- Distributed Initialization -------------
local_rank = int(os.environ.get("LOCAL_RANK", -1))
if local_rank >= 0:
torch.cuda.set_device(local_rank)
dist.init_process_group(
backend='nccl',
init_method='env://',
world_size=int(os.environ.get("WORLD_SIZE", 1)),
rank=int(os.environ.get("RANK", 0))
)
print_rank_0(f"CUDA_VISIBLE_DEVICES: {os.environ.get('CUDA_VISIBLE_DEVICES')}")
print_rank_0(f"LOCAL_RANK: {os.environ.get('LOCAL_RANK')}")
print_rank_0(f"WORLD_SIZE: {os.environ.get('WORLD_SIZE')}")
# ------------- config -------------
epoch = args.epoch
weights_dir = args.weights_dir
pref_dir = args.pref_dir
batch_size = args.batch_size
base_model_path = "meta-llama/Llama-3.1-8B-Instruct"
print("base_model_path:", base_model_path)
data_path = os.path.join(pref_dir, f"critic_{epoch}.jsonl")
output_model_path = os.path.join(weights_dir, f"critic_{epoch}")
os.makedirs(output_model_path, exist_ok=True)
print_rank_0(f"Loading base model from: {base_model_path}")
model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map={'': torch.cuda.current_device()}
# device_map={'': torch.cuda.current_device()} if local_rank >= 0 else "auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_path, use_fast=False)
model.generation_config = GenerationConfig(
max_new_tokens=512,
temperature=0.7,
do_sample=True,
)
# padding_side/pad_token
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.padding_side = 'right'
tokenizer.pad_token = '[PAD]'
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id
with torch.no_grad():
model.resize_token_embeddings(len(tokenizer))
print_rank_0(f"Loading dataset from: {data_path}")
dataset = load_dataset('json', data_files=data_path)['train']
def convert_format(example):
messages = example['messages']
formatted = "<|begin_of_text|>"
# system
system_msg = messages[0]
formatted += f"<|start_header_id|>system<|end_header_id|>\n\n{system_msg['content']}<|eot_id|>"
# user
user_msg = messages[1]
formatted += f"<|start_header_id|>user<|end_header_id|>\n\n{user_msg['content']}<|eot_id|>"
# assistant
formatted += "<|start_header_id|>assistant<|end_header_id|>\n\n"
chosen_response = example['chosen'] + tokenizer.eos_token
rejected_response = example['rejected'] + tokenizer.eos_token
return {
"prompt": formatted,
"chosen": chosen_response,
"rejected": rejected_response
}
train_dataset = dataset.map(
convert_format,
remove_columns=dataset.column_names,
load_from_cache_file=False
)
base_lr = args.learning_rate
scaled_lr = base_lr * dist.get_world_size() * batch_size
warmup_steps = 100
dpo_config = DPOConfig(
beta=args.beta,
warmup_steps=warmup_steps,
weight_decay=0.01,
learning_rate=scaled_lr,
rpo_alpha=1.0,
# lr_scheduler_type="cosine",
output_dir=output_model_path,
num_train_epochs=args.train_epochs,
per_device_train_batch_size=batch_size,
fp16=False,
bf16=True,
logging_steps=10,
save_strategy="no",
save_total_limit=1,
report_to="none",
ddp_backend='nccl',
remove_unused_columns=False,
dataloader_drop_last=True,
max_length=2048,
max_prompt_length=2048,
local_rank=local_rank,
)
# LoRA
peft_config = LoraConfig(
r=256,
lora_alpha=32,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_dropout=0.0,
bias="none",
task_type="CAUSAL_LM",
)
trainer = DPOTrainer(
model=model,
args=dpo_config,
train_dataset=train_dataset,
tokenizer=tokenizer,
peft_config=peft_config,
)
trainer.train()
# ------------- merge LoRA -------------
if dist.get_rank() == 0:
lora_weights_path = os.path.join(output_model_path, "lora_weights")
trainer.model.save_pretrained(lora_weights_path)
# print("lora weight saved")
# trainer.model.save_pretrained(lora_weights_path, safe_serialization=False)
print("lora weight saved")
base_merged_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
device_map=None,
low_cpu_mem_usage=False,
)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token = '[PAD]'
base_merged_model.config.pad_token_id = tokenizer.pad_token_id
base_merged_model.config.eos_token_id = tokenizer.eos_token_id
with torch.no_grad():
base_merged_model.resize_token_embeddings(len(tokenizer))
peft_model = PeftModel.from_pretrained(
base_merged_model,
lora_weights_path,
device_map=None,
)
merged_model = peft_model.merge_and_unload()
# save
print_rank_0(f"Saving merged model to: {output_model_path}")
merged_model.save_pretrained(output_model_path)
print_rank_0("Model saved successfully")
tokenizer.save_pretrained(output_model_path)
# delete lora weights
shutil.rmtree(lora_weights_path)
dist.barrier(device_ids=[local_rank] if local_rank >= 0 else None)
print_rank_0("DPO Training complete.")
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
When I skip saving the LoRA weights and merge them directly, the merge operation succeeds:
```python
peft_model = trainer.model
merged_model = peft_model.merge_and_unload()
print_rank_0(f"Saving merged model to: {output_model_path}")
merged_model.save_pretrained(output_model_path)
tokenizer.save_pretrained(output_model_path)
print_rank_0("Merged model saved successfully")
```
However, attempting to load the merged safetensors weights later with `AutoModelForCausalLM.from_pretrained` results in error 2.
### Expected behavior
Error 1 (save LoRA weights and merge):
> 100%|██████████| 1/1 [00:01<00:00, 1.91s/it]
> 100%|██████████| 1/1 [00:01<00:00, 1.92s/it]
> /home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/utils/save_and_load.py:232: UserWarning: Setting `save_embedding_layers` to `True` as the embedding layer has been resized during finetuning.
> warnings.warn(
> lora weight saved
>
> Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
> Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:02, 1.28it/s]
> Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.32it/s]
> Loading checkpoint shards: 75%|███████▌ | 3/4 [00:02<00:00, 1.31it/s]
> Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.74it/s]
> Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.55it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/users/w/ac/train/train_critic.py", line 249, in <module>
> [rank0]: main()
> [rank0]: File "/users/w/ac/train/train_critic.py", line 225, in main
> [rank0]: peft_model = PeftModel.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/peft_model.py", line 545, in from_pretrained
> [rank0]: model.load_adapter(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/peft_model.py", line 1113, in load_adapter
> [rank0]: adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 486, in load_peft_weights
> [rank0]: adapters_weights = safe_load_file(filename, device=device)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/safetensors/torch.py", line 311, in load_file
> [rank0]: with safe_open(filename, framework="pt", device=device) as f:
> [rank0]: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
> E0302 21:17:38.377842 2650981 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2651079) of binary: /home//miniconda3/envs/py39env/bin/python
> Traceback (most recent call last):
> File "/home//miniconda3/envs/py39env/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
> return f(*args, **kwargs)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 919, in main
> run(args)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 910, in run
> elastic_launch(
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Error 2 (directly merge, and load the model after merging):
> CUDA_VISIBLE_DEVICES: 1
> LOCAL_RANK: 0
> WORLD_SIZE: 1
> base_model_path: /train/runs/301_wd/weights/_1
> Loading base model from: /train/runs/301_wd/weights/_1
>
> Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
> Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/train/train_.py", line 216, in <module>
> [rank0]: main()
> [rank0]: File "/train/train_.py", line 91, in main
> [rank0]: model = AutoModelForCausalLM.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
> [rank0]: return model_class.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4014, in from_pretrained
> [rank0]: ) = cls._load_pretrained_model(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4482, in _load_pretrained_model
> [rank0]: state_dict = load_state_dict(shard_file, is_quantized=is_quantized)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 549, in load_state_dict
> [rank0]: with safe_open(checkpoint_file, framework="pt") as f:
> [rank0]: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
> E0302 20:39:06.398025 2565872 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2566031) of binary: /home//miniconda3/envs/py39env/bin/python
> Traceback (most recent call last):
> File "/home//miniconda3/envs/py39env/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
> return f(*args, **kwargs)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 919, in main
> run(args)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 910, in run
> elastic_launch(
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
> ============================================================
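If it helps the triage, a hedged diagnostic sketch (assumption: `MetadataIncompleteBuffer` usually means the .safetensors file on disk is shorter than its header declares, i.e. a truncated or partially written save):
```python
# List the shard files and their sizes to spot a truncated .safetensors file.
# output_model_path is the variable from the script above.
import glob
import os

for f in sorted(glob.glob(os.path.join(output_model_path, "*.safetensors"))):
    print(f, os.path.getsize(f), "bytes")
```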
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2405/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2400
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2400/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2400/comments
|
https://api.github.com/repos/huggingface/peft/issues/2400/events
|
https://github.com/huggingface/peft/issues/2400
| 2,881,481,036
|
I_kwDOIf9iDM6rv-lM
| 2,400
|
processing_class and tokenizer arguments on SFTTrainer()
|
{
"login": "ErikKankaTrea",
"id": 18656607,
"node_id": "MDQ6VXNlcjE4NjU2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18656607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErikKankaTrea",
"html_url": "https://github.com/ErikKankaTrea",
"followers_url": "https://api.github.com/users/ErikKankaTrea/followers",
"following_url": "https://api.github.com/users/ErikKankaTrea/following{/other_user}",
"gists_url": "https://api.github.com/users/ErikKankaTrea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErikKankaTrea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErikKankaTrea/subscriptions",
"organizations_url": "https://api.github.com/users/ErikKankaTrea/orgs",
"repos_url": "https://api.github.com/users/ErikKankaTrea/repos",
"events_url": "https://api.github.com/users/ErikKankaTrea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErikKankaTrea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-26T12:48:33
| 2025-02-27T03:39:02
| 2025-02-27T03:39:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi!!!
I got an unexpected error on my side when running the example train.py with DeepSpeed [(link)](https://github.com/huggingface/peft/tree/main/examples/sft):
The argument "**tokenizer**" should now be "**processing_class**".
Could anyone please let me know whether, for the example provided (link above), changing the argument name on SFTTrainer() for passing the tokenizer is enough?
I am worried that if I switch the argument name, the example scripts will stop making sense.
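For reference, a minimal sketch of the rename being asked about, assuming a recent TRL where `SFTTrainer` takes `processing_class`; the variable names are the ones from the example script and are illustrative:
```python
from trl import SFTTrainer

trainer = SFTTrainer(
    model=model,                  # model prepared as in the example script
    args=training_args,           # the existing training arguments / SFTConfig
    train_dataset=train_dataset,
    processing_class=tokenizer,   # previously passed as tokenizer=tokenizer
    peft_config=peft_config,
)
```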
Thanks in advance!
|
{
"login": "ErikKankaTrea",
"id": 18656607,
"node_id": "MDQ6VXNlcjE4NjU2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18656607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErikKankaTrea",
"html_url": "https://github.com/ErikKankaTrea",
"followers_url": "https://api.github.com/users/ErikKankaTrea/followers",
"following_url": "https://api.github.com/users/ErikKankaTrea/following{/other_user}",
"gists_url": "https://api.github.com/users/ErikKankaTrea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErikKankaTrea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErikKankaTrea/subscriptions",
"organizations_url": "https://api.github.com/users/ErikKankaTrea/orgs",
"repos_url": "https://api.github.com/users/ErikKankaTrea/repos",
"events_url": "https://api.github.com/users/ErikKankaTrea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErikKankaTrea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2400/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2394
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2394/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2394/comments
|
https://api.github.com/repos/huggingface/peft/issues/2394/events
|
https://github.com/huggingface/peft/issues/2394
| 2,874,191,172
|
I_kwDOIf9iDM6rUK1E
| 2,394
|
TP + DP training error
|
{
"login": "iMountTai",
"id": 35353688,
"node_id": "MDQ6VXNlcjM1MzUzNjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/35353688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iMountTai",
"html_url": "https://github.com/iMountTai",
"followers_url": "https://api.github.com/users/iMountTai/followers",
"following_url": "https://api.github.com/users/iMountTai/following{/other_user}",
"gists_url": "https://api.github.com/users/iMountTai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iMountTai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iMountTai/subscriptions",
"organizations_url": "https://api.github.com/users/iMountTai/orgs",
"repos_url": "https://api.github.com/users/iMountTai/repos",
"events_url": "https://api.github.com/users/iMountTai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iMountTai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 7
| 2025-02-24T08:30:53
| 2025-02-27T16:50:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft: 0.14.1.dev0
transformers: 4.50.dev0
accelerate: 1.4.0.dev0
python: 3.11
linux
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
After adding the LoRA module to the model, an error occurred:
NotImplementedError: ColwiseParallel currently only support nn.Linear and nn.Embedding
### Expected behavior
lora module training with TP
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2394/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2390
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2390/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2390/comments
|
https://api.github.com/repos/huggingface/peft/issues/2390/events
|
https://github.com/huggingface/peft/issues/2390
| 2,866,034,838
|
I_kwDOIf9iDM6q1DiW
| 2,390
|
Bug: Using 2 LoRA configs with `target_modules='all-linear'` leads to nested LoRA layers
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806417,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTkQ",
"url": "https://api.github.com/repos/huggingface/peft/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 4838806434,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTog",
"url": "https://api.github.com/repos/huggingface/peft/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 0
| 2025-02-20T12:34:35
| 2025-03-04T16:16:16
| 2025-03-04T16:16:16
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)
config0 = LoraConfig(target_modules="all-linear")
config1 = LoraConfig(target_modules="all-linear")
model = get_peft_model(model, config0)#, adapter_name="default")
model.add_adapter("adapter1", config1)
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)
```
prints:
```
lora.Linear(
(base_layer): lora.Linear(
(base_layer): Linear(in_features=16, out_features=16, bias=True)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(lora_dropout): ModuleDict(
(default): Identity()
)
(lora_A): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=16, out_features=8, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_B): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=8, out_features=16, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
```
### Expected behavior
Instead of getting nested LoRA layers, the linear layers belonging to a LoRA layer should not be targeted by `all-linear`.
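Until this is fixed, a possible workaround sketch (assumption: listing the Linear module names explicitly for the second adapter avoids re-matching the Linear layers inside the first adapter's LoRA modules; the names below are the OPT attention/MLP projections):
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)

config0 = LoraConfig(target_modules="all-linear")
# Explicit module names instead of "all-linear" for the second adapter, so the
# Linear layers inside the existing LoRA modules are not matched again.
config1 = LoraConfig(target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"])

model = get_peft_model(model, config0)
model.add_adapter("adapter1", config1)
# k_proj now holds both adapters side by side instead of nesting them.
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)
```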
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2390/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2388
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2388/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2388/comments
|
https://api.github.com/repos/huggingface/peft/issues/2388/events
|
https://github.com/huggingface/peft/issues/2388
| 2,863,639,986
|
I_kwDOIf9iDM6qr62y
| 2,388
|
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported.
|
{
"login": "samuellimabraz",
"id": 115582014,
"node_id": "U_kgDOBuOkPg",
"avatar_url": "https://avatars.githubusercontent.com/u/115582014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuellimabraz",
"html_url": "https://github.com/samuellimabraz",
"followers_url": "https://api.github.com/users/samuellimabraz/followers",
"following_url": "https://api.github.com/users/samuellimabraz/following{/other_user}",
"gists_url": "https://api.github.com/users/samuellimabraz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuellimabraz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuellimabraz/subscriptions",
"organizations_url": "https://api.github.com/users/samuellimabraz/orgs",
"repos_url": "https://api.github.com/users/samuellimabraz/repos",
"events_url": "https://api.github.com/users/samuellimabraz/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuellimabraz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-02-19T15:09:17
| 2025-03-06T16:30:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
## Context
I'm fine-tuning the Qwen2.5-VL model with Swift for data extraction using LoRA. I'm not sure what the correct way is to save and upload the adapter so that it can be reloaded correctly.
In short, I followed these steps:
```python
# load model
model, processor = get_model_tokenizer(
'Qwen/Qwen2.5-VL-3B-Instruct',
torch_dtype=torch.bfloat16,
use_hf=True,
attn_impl="flash_attn",
)
# get lora
...
model = Swift.prepare_model(model, lora_config)
# training config and run
...
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=template.data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
template=template,
callbacks= [
EarlyStoppingCallback(
early_stopping_patience=6,
early_stopping_threshold=0.001
)
]
)
stats = trainer.train()
# push adapter
model.push_to_hub(f"tech4humans/{model_name}", private=True)
```
While debugging, I saw that the PEFT model was loaded with the class `PeftModelForCausalLM`.
## Problem
Afterwards, I tried to reload the adapter and got an error from PEFT:
```python
from transformers import Qwen2_5_VLForConditionalGeneration
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto")
model.load_adapter("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
```
```python
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs)
345 if new_module is None:
346 # no module could be matched
--> 347 raise ValueError(
348 f"Target module {target} is not supported. Currently, only the following modules are supported: "
349 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, ".
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel(
(patch_embed): Qwen2_5_VisionPatchEmbed(
(proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
)
(rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding()
(blocks): ModuleList(
(0-31): 32 x Qwen2_5_VLVisionBlock(
(norm1): Qwen2RMSNorm((1280,), eps=1e-06)
(norm2): Qwen2RMSNorm((1280,), eps=1e-06)
(attn): Qwen2_5_VLVisionSdpaAttention(
(qkv): Linear(in_features=1280, out_features=3840, bias=True)
(proj): Linear(in_features=1280, out_features=1280, bias=True)
)
(mlp): Qwen2_5_VLMLP(
(gate_proj): Linear(in_features=1280, out_features=3420, bias=True)
(up_proj): Linear(in_features=1280, out_features=3420, bias=True)
(down_proj): Linear(in_features=3420, out_features=1280, bias=True)
(act_fn): SiLU()
)
)
)
(merger): Qwen2_5_VLPatchMerger(
(ln_q): Qwen2RMSNorm((1280,), eps=1e-06)
(mlp): Sequential(
(0): Linear(in_features=5120, out_features=5120, bias=True)
(1): GELU(approximate='none')
(2): Linear(in_features=5120, out_features=2048, bias=True)
)
)
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`.
```
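For reference, a minimal sketch of how the adapter's `target_modules` can be inspected before loading (assuming `adapter_config.json` is present in the adapter repo on the Hub):
```python
import json
from huggingface_hub import hf_hub_download

# Download just the adapter config and check which modules the adapter targets.
config_path = hf_hub_download("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned", "adapter_config.json")
with open(config_path) as f:
    adapter_config = json.load(f)

print(adapter_config.get("target_modules"))
print(adapter_config.get("modules_to_save"))
```
If the whole `Qwen2_5_VisionTransformerPretrainedModel` shows up there, that would explain the error above.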
## System info
```
transformers 4.50.0.dev0
peft 0.14.1.dev0
ms-swift 3.2.0.dev0
Python 3.10.12
CUDA Version: 12.6
```
Am I missing something or doing something wrong? Any pointers would be appreciated. Thanks!
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2388/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2381
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2381/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2381/comments
|
https://api.github.com/repos/huggingface/peft/issues/2381/events
|
https://github.com/huggingface/peft/issues/2381
| 2,857,556,037
|
I_kwDOIf9iDM6qUthF
| 2,381
|
Bug when deleting adapters of a model with modules_to_save
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806417,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTkQ",
"url": "https://api.github.com/repos/huggingface/peft/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-02-17T11:22:34
| 2025-02-20T12:35:13
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
All PEFT versions.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
model_id = "facebook/opt-125m"
config = LoraConfig(task_type="SEQ_CLS")
model = AutoModelForSequenceClassification.from_pretrained(model_id)
adapter_to_delete = "delete_me"
model = get_peft_model(model, config)
model.add_adapter(adapter_to_delete, config)
# sanity check
assert "delete_me" in model.base_model.model.score.modules_to_save
model.delete_adapter(adapter_to_delete)
assert "delete_me" not in model.base_model.model.score.modules_to_save
```
### Expected behavior
When adding, say, a LoRA adapter with `modules_to_save` and then deleting that adapter, the LoRA part is correctly removed, but the `modules_to_save` part is not. Expected: the `modules_to_save` copy for the deleted adapter is removed as well (which is what the final assert above checks).
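Until this is fixed, a minimal workaround sketch (assuming the wrapper exposes its copies in a `modules_to_save` `ModuleDict`; the import path may differ between PEFT versions):
```python
from peft.utils.other import ModulesToSaveWrapper

def purge_modules_to_save(model, adapter_name):
    # delete_adapter currently leaves the modules_to_save copy behind, so drop it manually
    wrappers = [m for m in model.modules() if isinstance(m, ModulesToSaveWrapper)]
    for wrapper in wrappers:
        if adapter_name in wrapper.modules_to_save:
            del wrapper.modules_to_save[adapter_name]

purge_modules_to_save(model, adapter_to_delete)
assert "delete_me" not in model.base_model.model.score.modules_to_save
```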
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2381/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2379
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2379/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2379/comments
|
https://api.github.com/repos/huggingface/peft/issues/2379/events
|
https://github.com/huggingface/peft/issues/2379
| 2,854,940,754
|
I_kwDOIf9iDM6qKvBS
| 2,379
|
prompt_tuning_peft tutorial raises cache layer error
|
{
"login": "jakerobers",
"id": 1840629,
"node_id": "MDQ6VXNlcjE4NDA2Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1840629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakerobers",
"html_url": "https://github.com/jakerobers",
"followers_url": "https://api.github.com/users/jakerobers/followers",
"following_url": "https://api.github.com/users/jakerobers/following{/other_user}",
"gists_url": "https://api.github.com/users/jakerobers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakerobers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakerobers/subscriptions",
"organizations_url": "https://api.github.com/users/jakerobers/orgs",
"repos_url": "https://api.github.com/users/jakerobers/repos",
"events_url": "https://api.github.com/users/jakerobers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakerobers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-02-15T00:10:11
| 2025-02-19T10:21:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Following the prompt tuning guide leads to an error when executing in a local environment:
- https://huggingface.co/learn/cookbook/en/prompt_tuning_peft
When executing, an exception is raised when calling `model.generate()` with the prompt-tuned model. Everything up to that point seems to work as expected (i.e. the `peft_outputs_prompt` and `peft_outputs_sentences` directories contain prompt-tuning checkpoints).
Having a look at the stacktrace, it looks like `model_kwargs["past_key_values"]` is being referenced in `peft/peft_model.py`. I'm curious if this is possibly related to https://github.com/huggingface/peft/issues/1962.
```
Traceback (most recent call last):
File "/main.py", line 148, in <module>
loaded_model_prompt_outputs = get_outputs(loaded_model_prompt, input_prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./main.py", line 17, in get_outputs
outputs = model.generate(
^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/peft/peft_model.py", line 1140, in generate
outputs = self.base_model.generate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/transformers/generation/utils.py", line 2255, in generate
result = self._sample(
^^^^^^^^^^^^^
File "lib/python3.11/site-packages/transformers/generation/utils.py", line 3247, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/peft/peft_model.py", line 1169, in prepare_inputs_for_generation
if model_kwargs["past_key_values"][0][0].shape[-2] >= model_kwargs["input_ids"].shape[1]:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
File "lib/python3.11/site-packages/transformers/cache_utils.py", line 390, in __getitem__
raise KeyError(f"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}")
KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'
```
cc @BenjaminBossan since you have some context around how `past_key_values` [works with transformers](https://github.com/huggingface/peft/pull/2096/files)
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
This is the code provided in the article https://huggingface.co/learn/cookbook/en/prompt_tuning_peft, condensed into a single script.
```
#!/usr/bin/env python
# TODO: https://huggingface.co/learn/cookbook/en/prompt_tuning_peft
# TODO: https://huggingface.co/docs/peft/en/package_reference/prompt_tuning
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "bigscience/bloomz-560m"
# model_name="bigscience/bloom-1b1"
NUM_VIRTUAL_TOKENS = 4
NUM_EPOCHS = 6
tokenizer = AutoTokenizer.from_pretrained(model_name)
foundational_model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
def get_outputs(model, inputs, max_new_tokens=100):
outputs = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_new_tokens=max_new_tokens,
# temperature=0.2,
# top_p=0.95,
# do_sample=True,
repetition_penalty=1.5, # Avoid repetition.
        early_stopping=True,  # The model can stop before reaching the max_length
eos_token_id=tokenizer.eos_token_id,
)
return outputs
input_prompt = tokenizer("I want you to act as a motivational coach. ", return_tensors="pt")
foundational_outputs_prompt = get_outputs(foundational_model, input_prompt, max_new_tokens=50)
print(tokenizer.batch_decode(foundational_outputs_prompt, skip_special_tokens=True))
import os
from IPython.display import display
# os.environ["TOKENIZERS_PARALLELISM"] = "false"
from datasets import load_dataset
dataset_prompt = "fka/awesome-chatgpt-prompts"
# Create the Dataset to create prompts.
#
data_prompt = load_dataset(dataset_prompt)
data_prompt = data_prompt.map(lambda samples: tokenizer(samples["prompt"]), batched=True)
train_sample_prompt = data_prompt["train"].select(range(50))
display(train_sample_prompt)
print(train_sample_prompt[:1])
dataset_sentences = load_dataset("Abirate/english_quotes")
data_sentences = dataset_sentences.map(lambda samples: tokenizer(samples["quote"]), batched=True)
train_sample_sentences = data_sentences["train"].select(range(25))
train_sample_sentences = train_sample_sentences.remove_columns(["author", "tags"])
display(train_sample_sentences)
print(train_sample_sentences[:1])
from peft import get_peft_model, PromptTuningConfig, TaskType, PromptTuningInit
generation_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM, # This type indicates the model will generate text.
    prompt_tuning_init=PromptTuningInit.RANDOM,  # The added virtual tokens are initialized with random numbers
num_virtual_tokens=NUM_VIRTUAL_TOKENS, # Number of virtual tokens to be added and trained.
tokenizer_name_or_path=model_name, # The pre-trained model.
)
peft_model_prompt = get_peft_model(foundational_model, generation_config)
print(peft_model_prompt.print_trainable_parameters())
peft_model_sentences = get_peft_model(foundational_model, generation_config)
print(peft_model_sentences.print_trainable_parameters())
from transformers import TrainingArguments
def create_training_arguments(path, learning_rate=0.0035, epochs=6):
training_args = TrainingArguments(
output_dir=path, # Where the model predictions and checkpoints will be written
use_cpu=True, # This is necessary for CPU clusters.
auto_find_batch_size=True, # Find a suitable batch size that will fit into memory automatically
learning_rate=learning_rate, # Higher learning rate than full Fine-Tuning
num_train_epochs=epochs,
)
return training_args
import os
working_dir = "./"
# It is best to store the models in separate folders.
# Create the name of the directories where to store the models.
output_directory_prompt = os.path.join(working_dir, "peft_outputs_prompt")
output_directory_sentences = os.path.join(working_dir, "peft_outputs_sentences")
# Just create the directories if they don't exist.
if not os.path.exists(working_dir):
os.mkdir(working_dir)
if not os.path.exists(output_directory_prompt):
os.mkdir(output_directory_prompt)
if not os.path.exists(output_directory_sentences):
os.mkdir(output_directory_sentences)
training_args_prompt = create_training_arguments(output_directory_prompt, 0.003, NUM_EPOCHS)
training_args_sentences = create_training_arguments(output_directory_sentences, 0.003, NUM_EPOCHS)
from transformers import Trainer, DataCollatorForLanguageModeling
def create_trainer(model, training_args, train_dataset):
trainer = Trainer(
model=model, # We pass in the PEFT version of the foundation model, bloomz-560M
args=training_args, # The args for the training.
        train_dataset=train_dataset,  # The dataset used to train the model.
data_collator=DataCollatorForLanguageModeling(
tokenizer, mlm=False
), # mlm=False indicates not to use masked language modeling
)
return trainer
trainer_prompt = create_trainer(peft_model_prompt, training_args_prompt, train_sample_prompt)
trainer_prompt.train()
trainer_sentences = create_trainer(peft_model_sentences, training_args_sentences, train_sample_sentences)
trainer_sentences.train()
trainer_prompt.model.save_pretrained(output_directory_prompt)
trainer_sentences.model.save_pretrained(output_directory_sentences)
from peft import PeftModel
loaded_model_prompt = PeftModel.from_pretrained(
foundational_model,
output_directory_prompt,
# device_map='auto',
is_trainable=False,
)
loaded_model_prompt_outputs = get_outputs(loaded_model_prompt, input_prompt)
print(tokenizer.batch_decode(loaded_model_prompt_outputs, skip_special_tokens=True))
loaded_model_prompt.load_adapter(output_directory_sentences, adapter_name="quotes")
loaded_model_prompt.set_adapter("quotes")
input_sentences = tokenizer("I want to start with ", return_tensors="pt")  # NOTE: missing in the original condensed script; any tokenized prompt works here
loaded_model_sentences_outputs = get_outputs(loaded_model_prompt, input_sentences)
print(tokenizer.batch_decode(loaded_model_sentences_outputs, skip_special_tokens=True))
# Notes:
# - https://github.com/huggingface/peft/issues/1962
# - https://github.com/huggingface/peft/issues/869#issuecomment-2263322623
```
### Expected behavior
The `loaded_model_prompt` should be able to execute `generate` and return a prompt-tuned response.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2379/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2377
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2377/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2377/comments
|
https://api.github.com/repos/huggingface/peft/issues/2377/events
|
https://github.com/huggingface/peft/issues/2377
| 2,853,540,672
|
I_kwDOIf9iDM6qFZNA
| 2,377
|
Contributing new model merging method to PEFT
|
{
"login": "SpeeeedLee",
"id": 132431571,
"node_id": "U_kgDOB-S-0w",
"avatar_url": "https://avatars.githubusercontent.com/u/132431571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpeeeedLee",
"html_url": "https://github.com/SpeeeedLee",
"followers_url": "https://api.github.com/users/SpeeeedLee/followers",
"following_url": "https://api.github.com/users/SpeeeedLee/following{/other_user}",
"gists_url": "https://api.github.com/users/SpeeeedLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpeeeedLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpeeeedLee/subscriptions",
"organizations_url": "https://api.github.com/users/SpeeeedLee/orgs",
"repos_url": "https://api.github.com/users/SpeeeedLee/repos",
"events_url": "https://api.github.com/users/SpeeeedLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpeeeedLee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-02-14T12:17:46
| 2025-02-14T15:57:51
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Hi all,
I noticed that several model merging methods, such as TIES and DARE, have been implemented in this library, as mentioned [here](https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md).
I was wondering if there is a way for me to contribute a recently accepted model merging method to this repo.
I would really appreciate any guidance or suggestions on how to proceed.
Thanks in advance!
### Motivation
Enhance the diversity of model merging methods supported in this library.
### Your contribution
I can submit a PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2377/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2368
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2368/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2368/comments
|
https://api.github.com/repos/huggingface/peft/issues/2368/events
|
https://github.com/huggingface/peft/issues/2368
| 2,838,153,330
|
I_kwDOIf9iDM6pKshy
| 2,368
|
[FSDP] After training embed_tokens in modules_to_save model has hallucinations
|
{
"login": "DmitryDiTy",
"id": 90377536,
"node_id": "MDQ6VXNlcjkwMzc3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90377536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DmitryDiTy",
"html_url": "https://github.com/DmitryDiTy",
"followers_url": "https://api.github.com/users/DmitryDiTy/followers",
"following_url": "https://api.github.com/users/DmitryDiTy/following{/other_user}",
"gists_url": "https://api.github.com/users/DmitryDiTy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DmitryDiTy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DmitryDiTy/subscriptions",
"organizations_url": "https://api.github.com/users/DmitryDiTy/orgs",
"repos_url": "https://api.github.com/users/DmitryDiTy/repos",
"events_url": "https://api.github.com/users/DmitryDiTy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DmitryDiTy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 17
| 2025-02-07T13:23:07
| 2025-02-14T08:23:35
| 2025-02-14T08:21:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
### Libs
```
absl-py==2.1.0
accelerate==1.3.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.10
aiosignal==1.3.2
annotated-types==0.7.0
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1733250440834/work
async-timeout==5.0.1
attrs==24.3.0
beartype==0.14.1
bert-score==0.3.13
better-abc==0.0.3
certifi==2024.12.14
charset-normalizer==3.4.0
circuitsvis @ git+https://github.com/callummcdougall/CircuitsVis.git@1e6129d08cae7af9242d9ab5d3ed322dd44b4dd3#subdirectory=python
click==8.1.7
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1733502965406/work
contourpy==1.3.1
cycler==0.12.1
datasets==3.2.0
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1734158947252/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1733236420667/work
dill==0.3.8
docker-pycreds==0.4.0
einops==0.8.0
evaluate==0.4.3
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1733208806608/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1733569351617/work
fancy-einsum==0.0.3
filelock==3.16.1
fonttools==4.55.6
frozenlist==1.5.0
fsspec==2024.9.0
gitdb==4.0.11
GitPython==3.1.43
huggingface-hub==0.27.0
idna==3.10
importlib-metadata==5.2.0
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1719845459717/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1732896932739/work
ipywidgets==8.1.5
jaxtyping==0.2.36
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1733300866624/work
Jinja2==3.1.4
joblib==1.4.2
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1733440914442/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1727163409502/work
jupyterlab_widgets==3.0.13
kiwisolver==1.4.8
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.10.0
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1733416936468/work
mdurl==0.1.2
mpmath==1.3.0
multidict==6.1.0
multiprocess==0.70.16
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1733325553580/work
networkx==3.4.2
nltk==3.9.1
numpy==1.26.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1733203243479/work
pandas==2.2.3
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1733271261340/work
peft==0.14.0
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1733301927746/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1733327343728/work
pillow==11.1.0
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1733232627818/work
prompt_toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1733302527033/work
propcache==0.2.1
protobuf==5.29.1
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1729847040822/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1733302279685/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl#sha256=92c32ff62b5fd8cf325bec5ab90d7be3d2a8ca8c8a3813ff487a8d2002630d1f
pure_eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1733569405015/work
pyarrow==18.1.0
pydantic==2.10.3
pydantic_core==2.27.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1733221634316/work
pyparsing==3.2.1
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1733215673016/work
pytz==2024.2
PyYAML==6.0.2
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1728642224099/work
regex==2024.11.6
requests==2.32.3
rich==13.9.4
rouge_score==0.1.2
safetensors==0.4.5
scikit-learn==1.6.1
scipy==1.15.1
sentence-transformers==3.3.1
sentencepiece==0.2.0
sentry-sdk==2.19.2
setproctitle==1.3.4
six @ file:///home/conda/feedstock_root/build_artifacts/six_1733380938961/work
smmap==5.0.1
stack_data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1733569443808/work
sympy==1.13.1
threadpoolctl==3.5.0
tokenizers==0.21.0
torch==2.5.1
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1732615898999/work
tqdm==4.67.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1733367359838/work
transformer-lens==2.10.0
transformers==4.48.2
triton==3.1.0
trl==0.14.0
typeguard==4.4.1
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1733188668063/work
tzdata==2024.2
urllib3==2.2.3
wandb==0.19.1
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1733231326287/work
widgetsnbextension==4.0.13
xxhash==3.5.0
yarl==1.18.3
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1732827521216/work
```
### Cuda
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX 6000 Ada Gene... Off | 00000000:01:00.0 Off | Off |
| 30% 40C P8 27W / 300W | 43531MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX 6000 Ada Gene... Off | 00000000:25:00.0 Off | Off |
| 30% 34C P8 23W / 300W | 3021MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA RTX 6000 Ada Gene... Off | 00000000:41:00.0 Off | Off |
| 30% 37C P8 29W / 300W | 6MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA RTX 6000 Ada Gene... Off | 00000000:61:00.0 Off | Off |
| 30% 40C P8 30W / 300W | 10881MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA RTX 6000 Ada Gene... Off | 00000000:81:00.0 Off | Off |
| 30% 34C P8 24W / 300W | 1319MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA RTX 6000 Ada Gene... Off | 00000000:A1:00.0 Off | Off |
| 40% 59C P2 71W / 300W | 5763MiB / 49140MiB | 6% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA RTX 6000 Ada Gene... Off | 00000000:C1:00.0 Off | Off |
| 30% 47C P2 91W / 300W | 43307MiB / 49140MiB | 74% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
## Context
I train my model for text generation (completion-only LM) on my own dataset (long dialogues with system/user/assistant turns). I added new tokens to my model and tokenizer using:
```python
tokenizer.add_tokens(
[
AddedToken("<|start_thinking|>", normalized=False, special=False),
AddedToken("<|end_thinking|>", normalized=False, special=False),
AddedToken("<tool_response>", normalized=False, special=False),
AddedToken("</tool_response>", normalized=False, special=False),
AddedToken("<|start_response|>", normalized=False, special=False),
AddedToken("<|end_response|>", normalized=False, special=False),
]
)
model.resize_token_embeddings(len(tokenizer))
```
and I have saved it before training.
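Concretely, that save looks roughly like this (a minimal sketch; I'm assuming both the model and tokenizer were saved to the `MODEL_PATH` the training script below loads from):
```python
# Persist the extended model and tokenizer together so the vocab sizes stay in sync.
model.save_pretrained("/home/raid/models/extended_qwen")
tokenizer.save_pretrained("/home/raid/models/extended_qwen")
```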
After that I wanted to train my extended model with PEFT + TRL + FSDP.
The model I used as the base:
```
Qwen2ForCausalLM(
(model): Qwen2Model(
(embed_tokens): Embedding(151671, 3584)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2Attention(
(q_proj): Linear(in_features=3584, out_features=3584, bias=True)
(k_proj): Linear(in_features=3584, out_features=512, bias=True)
(v_proj): Linear(in_features=3584, out_features=512, bias=True)
(o_proj): Linear(in_features=3584, out_features=3584, bias=False)
)
(mlp): Qwen2MLP(
(gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
(up_proj): Linear(in_features=3584, out_features=18944, bias=False)
(down_proj): Linear(in_features=18944, out_features=3584, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((3584,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(lm_head): Linear(in_features=3584, out_features=151671, bias=False)
)
```
## Code
### Accelerate config
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Training script
```python
import warnings
warnings.filterwarnings("ignore")
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0, 1, 2, 3'
os.environ['TOKENIZERS_PARALLELISM'] = 'true'
import wandb
import numpy as np
import torch
import json
from typing import List, Optional, Union, Any, Literal
from datasets import load_dataset, Dataset
import evaluate
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
EarlyStoppingCallback,
DataCollatorForLanguageModeling,
AddedToken,
)
from peft import (
LoraConfig,
get_peft_model,
TaskType,
PeftModelForCausalLM
)
from trl import (
SFTConfig,
SFTTrainer,
DataCollatorForCompletionOnlyLM
)
from special_utils import DataCollatorForMultiCompletionOnlyLM, CustomLossTrainer
##################################
# Enviroments and configurations #
##################################
CHECKPOINT_PATH = None
DATA_CACHE_DIR = "/home/raid/datasets/"
MODEL_CACHE_DIR = "/home/raid/hf_cache/"
MODEL_PATH = "/home/raid/models/extended_qwen"
METRICS_CACHE = "/home/raid/metrics_cache"
MAX_PROMPT_LENGTH = 5000
LR = 1e-5
STEP_SIZE = 10
BATCH_SIZE = 2
GA_SIZE = 4
TRAIN_EPOCHS = 1
REPORT_TO = ['none', 'wandb'][0]
LORA_R = 48
LORA_ALPHA = 96
TARGET_MODULES = [
"self_attn.q_proj",
"self_attn.k_proj",
"self_attn.v_proj",
"self_attn.o_proj",
"mlp.gate_proj",
"mlp.up_proj",
"mlp.down_proj",
]
MODULES_TO_SAVE = [
"embed_tokens",
"lm_head"
]
REVISION_NAME = f"TEST_qwen-tp-({LR})LR-({BATCH_SIZE})BATCH_SIZE-({GA_SIZE})GA_SIZE-({TRAIN_EPOCHS})TRAIN_EPOCHS-({LORA_R})LORA_R-({LORA_ALPHA})LORA_ALPHA"
LOGS_PATH = f"/home/raid/models/{REVISION_NAME}/logs"
print(REVISION_NAME)
def main():
#####################
# Model & Tokenizer #
#####################
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
# cache_dir=MODEL_CACHE_DIR,
torch_dtype=torch.bfloat16,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
# cache_dir=MODEL_CACHE_DIR,
)
tokenizer.padding_side = 'right'
### FREEZING ###
for param in model.parameters():
param.requires_grad = False
print(tokenizer.added_tokens_decoder)
###########
# Dataset #
###########
dataset = load_dataset(
"my/dataset",
"train",
cache_dir=DATA_CACHE_DIR
)
def prepare_texts(example):
example['text'] = tokenizer.apply_chat_template(
conversation=json.loads(example['conversation']),
tools=json.loads(example['tools']),
tokenize=False
)
return example
dataset = dataset.map(prepare_texts)
dataset_vvalid = Dataset.from_dict(dataset['train'][:100]) # For tests
print(dataset)
########
# PEFT #
########
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=TARGET_MODULES,
modules_to_save=MODULES_TO_SAVE,
lora_dropout=0.1,
bias="none",
)
##################
# Trainer & Args #
##################
bertscore = evaluate.load(
"bertscore",
cache_dir=METRICS_CACHE
)
rouge = evaluate.load(
"rouge",
cache_dir=METRICS_CACHE
)
def preprocess_logits_for_metrics(logits, labels):
pred_ids = torch.argmax(logits, dim=-1)
return pred_ids, labels
def compute_metrics(eval_pred):
pred_ids = torch.tensor(eval_pred.predictions[0])
label_ids = torch.tensor(eval_pred.label_ids)
preds = tokenizer.batch_decode(torch.where(label_ids == -100, tokenizer.eos_token_id, pred_ids), skip_special_tokens=True)
labels = tokenizer.batch_decode(torch.where(label_ids == -100, tokenizer.eos_token_id, label_ids), skip_special_tokens=True)
if not os.path.exists(LOGS_PATH):
os.makedirs(LOGS_PATH, exist_ok=True)
with open(LOGS_PATH + "/data", "w") as f:
f.write(json.dumps([preds, labels]))
print("PREDS:", preds[0], "###")
print("LABELS:", labels[0], "###")
bertscore_results = bertscore.compute(
predictions=preds,
references=labels,
lang='en'
)
rouge_results = rouge.compute(
predictions=preds,
references=labels,
)
return {
"bert_score_f1": np.mean(bertscore_results['f1']),
"bert_score_recall": np.mean(bertscore_results['recall']),
"bert_score_precision": np.mean(bertscore_results['precision']),
"rouge1": rouge_results['rouge1'],
'rouge2': rouge_results['rouge2'],
'rougeL': rouge_results['rougeL'],
}
data_collator = DataCollatorForMultiCompletionOnlyLM(
tokenizer=tokenizer,
response_template="<|im_start|>assistant\n",
end_response_template="<|im_end|>",
mlm=False
)
special_token_ids = [151665, 151666, 151667, 151668, 151669, 151670]
special_token_weight = 1.2
training_args = SFTConfig(
## SFT Arguments ##
max_seq_length=MAX_PROMPT_LENGTH,
## Standard Arguments ##
do_train=True,
do_eval=True,
output_dir=f"/home/raid/checkpoints/{REVISION_NAME}",
overwrite_output_dir=True,
eval_strategy="steps",
eval_steps=STEP_SIZE,
torch_empty_cache_steps=STEP_SIZE,
num_train_epochs=TRAIN_EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
gradient_accumulation_steps=GA_SIZE,
optim="adamw_torch",
save_steps=STEP_SIZE,
save_total_limit=4,
logging_steps=STEP_SIZE,
learning_rate=LR,
lr_scheduler_type="cosine",
bf16=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs = {"use_reentrant": True},
load_best_model_at_end=True,
metric_for_best_model="eval_rougeL",
greater_is_better=True,
report_to=REPORT_TO,
run_name=REVISION_NAME,
resume_from_checkpoint=True if CHECKPOINT_PATH else False,
)
trainer = CustomLossTrainer(
model=model,
args=training_args,
peft_config=lora_config,
train_dataset=dataset_vvalid,#dataset['train'],
eval_dataset=dataset_vvalid,#dataset['valid'],
processing_class=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
preprocess_logits_for_metrics=preprocess_logits_for_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=100)],
special_token_ids=special_token_ids,
special_token_weight=special_token_weight,
)
print("MODEL DTYPE: ", trainer.model.dtype)
# handle PEFT+FSDP case
trainer.model.print_trainable_parameters()
if getattr(trainer.accelerator.state, "fsdp_plugin", None):
from peft.utils.other import fsdp_auto_wrap_policy
fsdp_plugin = trainer.accelerator.state.fsdp_plugin
fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
# Training
if CHECKPOINT_PATH is not None:
trainer.train(resume_from_checkpoint=CHECKPOINT_PATH)
else:
trainer.train()
if trainer.is_fsdp_enabled:
trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model(f"/home/raid/models/{REVISION_NAME}/adapter")
if __name__ == "__main__":
main()
```
### Custom Collator & Trainer (special_utils.py)
```python
import torch
from transformers import DataCollatorForLanguageModeling
from typing import List, Optional, Union, Any, Literal
from trl import SFTTrainer
import numpy as np
# Adding weights to new tokens
class CustomLossTrainer(SFTTrainer):
def __init__(self, *args, special_token_ids, special_token_weight=1.2, **kwargs):
super().__init__(*args, **kwargs)
self.special_token_ids = special_token_ids
self.special_token_weight = special_token_weight
self.weights = None
def _init_weights(self, model):
self.weights = torch.ones(model.config.vocab_size, device=model.device)
for token_id in self.special_token_ids:
self.weights[token_id] = self.special_token_weight
self.cross_entropy = torch.nn.CrossEntropyLoss(weight=self.weights)
def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
if self.weights is None:
self._init_weights(model)
labels = inputs.pop("labels").to(model.device)
outputs = model(**inputs)
logits = outputs.get("logits").to(model.device)
loss = self.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
if return_outputs:
return loss, outputs
return loss
# For Completion with many different instruction templates
class DataCollatorForMultiCompletionOnlyLM(DataCollatorForLanguageModeling):
def __init__(
self,
response_template: Union[str, list[int]],
end_response_template: Union[str, list[int]],
instruction_template: Optional[Union[str, list[int]]] = None,
*args,
mlm: bool = False,
ignore_index: int = -100,
padding_free: bool = False,
**kwargs,
):
super().__init__(*args, mlm=mlm, **kwargs)
self.instruction_template = instruction_template
if isinstance(instruction_template, str):
# The user provides a string, must tokenize
self.instruction_token_ids = self.tokenizer.encode(self.instruction_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.instruction_token_ids = instruction_template
self.response_template = response_template
if isinstance(response_template, str):
# The user provides a string, must tokenize
self.response_token_ids = self.tokenizer.encode(self.response_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.response_token_ids = response_template
self.end_response_template = end_response_template
if isinstance(end_response_template, str):
# The user provides a string, must tokenize
self.end_response_token_ids = self.tokenizer.encode(self.end_response_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.end_response_token_ids = end_response_template
if not self.mlm and self.instruction_template and self.tokenizer.pad_token_id == self.tokenizer.eos_token_id:
warnings.warn(
"The pad_token_id and eos_token_id values of this tokenizer are identical. "
"If you are planning for multi-turn training, "
"it can result in the model continuously generating questions and answers without eos token. "
"To avoid this, set the pad_token_id to a different value.",
UserWarning,
)
self.ignore_index = ignore_index
self.padding_free = padding_free
def torch_call(self, examples: list[Union[list[int], Any, dict[str, Any]]]) -> dict[str, Any]:
batch = super().torch_call(examples)
for i in range(len(examples)):
batch["labels"][i] = torch.where(batch["labels"][i] == 0, 999999, batch["labels"][i])
response_token_ids_start_ids = []
for idx in np.where(batch["labels"][i] == self.response_token_ids[0])[0]:
# `response_token_ids` is `'### Response:\n'`, here we are just making sure that the token IDs match
if (
self.response_token_ids
== batch["labels"][i][idx : idx + len(self.response_token_ids)].tolist()
):
response_token_ids_start_ids.append(idx)
if len(response_token_ids_start_ids) == 0:
warnings.warn(
f"Could not find response key `{self.response_template}` in the following instance: "
f"{self.tokenizer.decode(batch['input_ids'][i])}. This instance will be ignored in loss "
"calculation. Note, if this happens often, consider increasing the `max_seq_length`.",
UserWarning,
)
batch["labels"][i, :] = self.ignore_index
else:
response_token_ids_end_ids = [response_token_ids_start_idx + len(self.response_token_ids) for response_token_ids_start_idx in response_token_ids_start_ids]
end_response_token_ids_idxs = []
for idx in np.where(batch["labels"][i] == self.end_response_token_ids[0])[0]:
# `response_token_ids` is `'### Response:\n'`, here we are just making sure that the token IDs match
if (
self.end_response_token_ids
== batch["labels"][i][idx : idx + len(self.end_response_token_ids)].tolist()
):
end_response_token_ids_idxs.append(idx)
if len(end_response_token_ids_idxs) == 0:
warnings.warn(
f"Could not find end response key `{self.response_template}` in the following instance: "
f"{self.tokenizer.decode(batch['input_ids'][i])}. This instance will be ignored in loss "
"calculation. Note, if this happens often, consider increasing the `max_seq_length`.",
UserWarning,
)
batch["labels"][i, :] = self.ignore_index
assistant_end_idxs = []
for assistant_start_idx in response_token_ids_end_ids:
for assistant_end_idx in end_response_token_ids_idxs:
if assistant_start_idx < assistant_end_idx:
assistant_end_idxs.append(assistant_end_idx)
break
assert len(response_token_ids_end_ids) == len(assistant_end_idxs), "Error, need count assistant replics == count after assistant end suffixes"
mask = torch.ones_like(batch['labels'][i, :]) * -1
mask = torch.where(batch['labels'][i, :] == self.ignore_index, 1, mask)
for start_id, end_id in zip(response_token_ids_end_ids, assistant_end_idxs):
mask[start_id : end_id + 1] = 1
labels = mask * batch['labels'][i, :]
batch['labels'][i, :] = torch.where(labels < 0, self.ignore_index, labels)
batch["labels"][i] = torch.where(batch["labels"][i] == 999999, 0, batch["labels"][i])
if self.padding_free:
# remove padding, `attention_mask` and add `position_ids`
attn_mask = batch.pop("attention_mask")
batch["input_ids"] = batch["input_ids"][attn_mask.bool()].unsqueeze(0)
batch["position_ids"] = attn_mask.cumsum(1)[attn_mask.bool()].unsqueeze(0) - 1
batch["labels"] = batch["labels"][attn_mask.bool()].unsqueeze(0)
batch["labels"][batch["position_ids"] == 0] = self.ignore_index
# Calculate cumulative sequence lengths for queries and keys to prevent graph breaks during further computations.
flattened_position_ids = batch["position_ids"].flatten()
indices_q = torch.arange(
flattened_position_ids.size(0), device=flattened_position_ids.device, dtype=torch.int32
)
batch["cu_seq_lens_q"] = torch.cat(
(
indices_q[flattened_position_ids == 0],
torch.tensor(
flattened_position_ids.size(), device=flattened_position_ids.device, dtype=torch.int32
),
)
)
batch["cu_seq_lens_k"] = batch["cu_seq_lens_q"]
# Determine maximum sequence lengths to prevent graph breaks during further computations.
batch["max_length_k"] = flattened_position_ids.max().item() + 1
batch["max_length_q"] = batch["max_length_k"]
return batch
```
## During training
To be as sure as possible that this error does not come from the training process itself, I additionally save the validation examples to a separate file and log the metrics.
Metrics from wandb:

I inspected the raw text saved during validation, and everything looked fine.
## After training
After the training process I tried to load the model to check autoregressive inference:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_CACHE_DIR = "/home/raid/hf_cache"
DATA_CACHE_DIR = "/home/raid/datasets"
MODEL_PATH = "/home/raid/models/extended_qwen"
lora_path = "/home/raid/models/tool-plannings/qwen-tp-(1e-05)LR-(2)BATCH_SIZE-(4)GA_SIZE-(6)TRAIN_EPOCHS-(48)LORA_R-(96)LORA_ALPHA/adapter"
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
)
from peft import PeftModelForCausalLM
model = PeftModelForCausalLM.from_pretrained(
model,
lora_path # This contains adapter_model.safetensors, adapter_config.json, etc.
)
model
```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): Qwen2ForCausalLM(
(model): Qwen2Model(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(151671, 3584)
(modules_to_save): ModuleDict(
(default): Embedding(151671, 3584)
)
)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2Attention(
(q_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=3584, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(k_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=512, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=512, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(v_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=512, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=512, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(o_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=3584, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(mlp): Qwen2MLP(
(gate_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=18944, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=18944, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(up_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=18944, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=18944, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(down_proj): lora.Linear(
(base_layer): Linear(in_features=18944, out_features=3584, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=18944, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((3584,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(lm_head): ModulesToSaveWrapper(
(original_module): Linear(in_features=3584, out_features=151671, bias=False)
(modules_to_save): ModuleDict(
(default): Linear(in_features=3584, out_features=151671, bias=False)
)
)
)
)
)
```
And during inference I got something like this:
```python
outputs = model.generate(
**inputs_tokens,
max_new_tokens=20,
)[0]
print(tokenizer.decode(outputs, skip_special_tokens=False))
```
```
...ngle stepA journey of a thousand miles'.<|im_end|>
<|im_start|>assistant # here start new tokens
write write write write write write write write write write write write write write write write write write write...
```
## Problem
I thought there might be a mistake in saving the adapter, so instead of saving it, I tried to merge the model and the adapter immediately after training in the script, like this:
```python
merged_model = trainer.model.merge_and_unload(safe_merge=True)
merged_model.save_pretrained(f"/home/raid/models/{REVISION_NAME}")
```
and I ran into this error:
```
MODEL DTYPE: torch.bfloat16
trainable params: 1,107,362,816 || all params: 8,720,162,304 || trainable%: 12.6989
{'train_runtime': 79.4632, 'train_samples_per_second': 1.258, 'train_steps_per_second': 0.038, 'train_loss': 108.3709716796875, 'epoch': 0.92}
100%|██████████████████████████████████████████████████████████████| 3/3 [01:19<00:00, 26.51s/it]
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank2]: main()
[rank2]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank2]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank2]: return self._unload_and_optionally_merge(
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank2]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank2]: delta_weight = self.get_delta_weight(active_adapter)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank2]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank2]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank1]: main()
[rank1]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank1]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank1]: return self._unload_and_optionally_merge(
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank1]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank1]: delta_weight = self.get_delta_weight(active_adapter)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank1]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank1]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank0]: main()
[rank0]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank0]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank0]: return self._unload_and_optionally_merge(
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank0]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank0]: delta_weight = self.get_delta_weight(active_adapter)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank0]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank0]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank3]: Traceback (most recent call last):
[rank3]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank3]: main()
[rank3]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank3]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank3]: return self._unload_and_optionally_merge(
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank3]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank3]: delta_weight = self.get_delta_weight(active_adapter)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank3]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank3]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
```
Besides, I also tried to load the adapter manually with a safetensors script, something like this:
```python
from safetensors import safe_open
lora_state_dict = {}
with safe_open(lora_path, framework="pt", device="cpu") as f:
for key in f.keys():
new_key = key.replace("lora_A.", "lora_A.default.").replace("lora_B.", "lora_B.default.")
new_key = new_key.replace("embed_tokens.weight", "embed_tokens.original_module.weight")
new_key = new_key.replace("lm_head.weight", "lm_head.modules_to_save.default.weight")
lora_state_dict[new_key] = f.get_tensor(key)
m, u = model.load_state_dict(lora_state_dict, strict=False)
```
I was able to load the adapter into my model, but I was still getting catastrophic hallucinations like:
```
...<|im_start|>assistant
# generated spaces
```
I assume the error lies in the adapter merge and may be related to floating-point precision (bf16/fp16) or something similar.
P.S. BTW, I also tried training the model with fp16 and had the same problem.
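In case it helps with debugging, a minimal sketch along these lines (the adapter filename is an assumption) prints the shape of every tensor in the saved adapter, so a mismatch like [1024] vs [7168] can be spotted before calling merge_and_unload:
```python
# Minimal sketch: dump the shape of every tensor in the saved LoRA adapter.
# "adapter_model.safetensors" is a hypothetical path to the adapter checkpoint.
from safetensors import safe_open

with safe_open("adapter_model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        print(key, tuple(f.get_tensor(key).shape))
```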
### Expected behavior
Normal generation behavior after merging the adapter into my model.
|
{
"login": "DmitryDiTy",
"id": 90377536,
"node_id": "MDQ6VXNlcjkwMzc3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90377536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DmitryDiTy",
"html_url": "https://github.com/DmitryDiTy",
"followers_url": "https://api.github.com/users/DmitryDiTy/followers",
"following_url": "https://api.github.com/users/DmitryDiTy/following{/other_user}",
"gists_url": "https://api.github.com/users/DmitryDiTy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DmitryDiTy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DmitryDiTy/subscriptions",
"organizations_url": "https://api.github.com/users/DmitryDiTy/orgs",
"repos_url": "https://api.github.com/users/DmitryDiTy/repos",
"events_url": "https://api.github.com/users/DmitryDiTy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DmitryDiTy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2368/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2367
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2367/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2367/comments
|
https://api.github.com/repos/huggingface/peft/issues/2367/events
|
https://github.com/huggingface/peft/issues/2367
| 2,838,045,820
|
I_kwDOIf9iDM6pKSR8
| 2,367
|
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.3 and are newly initialized: ['score.weight']
|
{
"login": "amritansh6",
"id": 46628209,
"node_id": "MDQ6VXNlcjQ2NjI4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/46628209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amritansh6",
"html_url": "https://github.com/amritansh6",
"followers_url": "https://api.github.com/users/amritansh6/followers",
"following_url": "https://api.github.com/users/amritansh6/following{/other_user}",
"gists_url": "https://api.github.com/users/amritansh6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amritansh6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amritansh6/subscriptions",
"organizations_url": "https://api.github.com/users/amritansh6/orgs",
"repos_url": "https://api.github.com/users/amritansh6/repos",
"events_url": "https://api.github.com/users/amritansh6/events{/privacy}",
"received_events_url": "https://api.github.com/users/amritansh6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-02-07T12:29:22
| 2025-02-10T11:01:57
| 2025-02-10T11:01:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
I have been trying to fine-tune Mistral 7B v0.3 for a downstream task using LoRA, and I get the following warning while running inference.
```python
base_model = AutoModelForSequenceClassification.from_pretrained(
model_id, use_auth_token="hf_***",
num_labels=2,
problem_type="single_label_classification"
)
base_model.config.pad_token_id = tokenizer.pad_token_id
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="SEQ_CLS",
modules_to_save=["score"]
)
model_with_lora = get_peft_model(base_model, lora_config)
model_with_lora.print_trainable_parameters()
training_args = TrainingArguments(
output_dir="./results_4",
evaluation_strategy="epoch",
save_strategy="steps",
save_steps=0.1,
logging_dir="./logs",
learning_rate=5e-5,
per_device_train_batch_size=2,
num_train_epochs=2,
weight_decay=0.01,
report_to="wandb",
save_total_limit=2,
logging_steps=10,
)
trainer = Trainer(
model=model_with_lora,
args=training_args,
train_dataset=hf_dataset,
eval_dataset=hf_eval_dataset,
tokenizer=tokenizer,
compute_metrics=None,
)
```
This is my training script, and while loading the model for inference I get the following warning:
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.3 and are newly initialized: ['score.weight']
Can someone check this?
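For reference, a minimal sketch of the inference-side loading I would expect here (the checkpoint path is hypothetical): the base model re-initializes `score.weight` (hence the warning), and `PeftModel.from_pretrained` should then restore the trained score head saved via `modules_to_save=["score"]`.
```python
# Minimal sketch, assuming the adapter was saved under ./results_4/checkpoint-best
# (hypothetical path). The warning comes from the freshly initialized score head
# of the base model; loading the PEFT adapter should overwrite it with the
# trained weights saved via modules_to_save=["score"].
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

base_model = AutoModelForSequenceClassification.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    num_labels=2,
    problem_type="single_label_classification",
)
model = PeftModel.from_pretrained(base_model, "./results_4/checkpoint-best")
model.eval()
```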
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
base_model = AutoModelForSequenceClassification.from_pretrained(
model_id, use_auth_token="hf_***",
num_labels=2,
problem_type="single_label_classification"
)
base_model.config.pad_token_id = tokenizer.pad_token_id
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="SEQ_CLS",
modules_to_save=["score"]
)
model_with_lora = get_peft_model(base_model, lora_config)
model_with_lora.print_trainable_parameters()
training_args = TrainingArguments(
output_dir="./results_4",
evaluation_strategy="epoch",
save_strategy="steps",
save_steps=0.1,
logging_dir="./logs",
learning_rate=5e-5,
per_device_train_batch_size=2,
num_train_epochs=2,
weight_decay=0.01,
report_to="wandb",
save_total_limit=2,
logging_steps=10,
)
trainer = Trainer(
model=model_with_lora,
args=training_args,
train_dataset=hf_dataset,
eval_dataset=hf_eval_dataset,
tokenizer=tokenizer,
compute_metrics=None,
)
```
### Expected behavior
Ideally, this warning should not appear.
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2367/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2364
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2364/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2364/comments
|
https://api.github.com/repos/huggingface/peft/issues/2364/events
|
https://github.com/huggingface/peft/issues/2364
| 2,835,746,171
|
I_kwDOIf9iDM6pBg17
| 2,364
|
docs: broken links to boft
|
{
"login": "makelinux",
"id": 2335185,
"node_id": "MDQ6VXNlcjIzMzUxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makelinux",
"html_url": "https://github.com/makelinux",
"followers_url": "https://api.github.com/users/makelinux/followers",
"following_url": "https://api.github.com/users/makelinux/following{/other_user}",
"gists_url": "https://api.github.com/users/makelinux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makelinux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makelinux/subscriptions",
"organizations_url": "https://api.github.com/users/makelinux/orgs",
"repos_url": "https://api.github.com/users/makelinux/repos",
"events_url": "https://api.github.com/users/makelinux/events{/privacy}",
"received_events_url": "https://api.github.com/users/makelinux/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-06T14:48:16
| 2025-02-07T10:14:44
| 2025-02-07T10:14:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
Snippet:
Take a look at the following step-by-step guides on how to finetune a model with BOFT:
[Dreambooth finetuning with BOFT](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_dreambooth)
[Controllable generation finetuning with BOFT (ControlNet)](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_controlnet)
### Expected behavior
Perhaps the links should lead to:
https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md
https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2364/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2362
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2362/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2362/comments
|
https://api.github.com/repos/huggingface/peft/issues/2362/events
|
https://github.com/huggingface/peft/issues/2362
| 2,833,885,059
|
I_kwDOIf9iDM6o6aeD
| 2,362
|
Import error
|
{
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ikamensh/followers",
"following_url": "https://api.github.com/users/ikamensh/following{/other_user}",
"gists_url": "https://api.github.com/users/ikamensh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikamensh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikamensh/subscriptions",
"organizations_url": "https://api.github.com/users/ikamensh/orgs",
"repos_url": "https://api.github.com/users/ikamensh/repos",
"events_url": "https://api.github.com/users/ikamensh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikamensh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-05T20:19:35
| 2025-02-05T20:38:50
| 2025-02-05T20:38:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Successfully installed accelerate-1.3.0 aiohappyeyeballs-2.4.4 aiohttp-3.11.11 aiosignal-1.3.2 bitsandbytes-0.45.1 datasets-3.2.0 dill-0.3.8 frozenlist-1.5.0 huggingface_hub-0.28.1 multidict-6.1.0 multiprocess-0.70.16 pandas-2.2.3 peft-0.14.0 propcache-0.2.1 pyarrow-19.0.0 pytz-2025.1 regex-2024.11.6 safetensors-0.5.2 tokenizers-0.13.3 tqdm-4.67.1 transformers-4.30.2 tzdata-2025.1 xxhash-3.5.0 yarl-1.18.3
root@77c297c83b18:/workspace# python qlora.py
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1086, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[...]
File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 212, in <module>
from peft import PeftModel
File "/usr/local/lib/python3.11/dist-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/lib/python3.11/dist-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/usr/local/lib/python3.11/dist-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/usr/local/lib/python3.11/dist-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/usr/local/lib/python3.11/dist-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'Cache' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/qlora.py", line 17, in <module>
from transformers import (
File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1076, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1088, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'Cache' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
`pip install peft==0.14.0 transformers==4.30.2` on Linux + Python 3.11
Run the following:
```python
from transformers import (
LlamaForCausalLM,
LlamaTokenizer,
Trainer,
TrainingArguments,
DataCollatorForLanguageModeling,
)
```
### Expected behavior
imports work (or crash outside peft)
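For context, a minimal fail-fast check along these lines would surface the mismatch before transformers' lazy import machinery raises; the version floor used here is an assumption, not a documented requirement:
```python
# Minimal sketch: fail early if the installed transformers is too old for peft.
# peft 0.14.0 imports Cache/DynamicCache/EncoderDecoderCache from transformers,
# none of which exist in transformers 4.30.2. The "4.45.0" floor is an assumption.
import importlib.metadata as metadata
from packaging.version import Version

installed = Version(metadata.version("transformers"))
if installed < Version("4.45.0"):
    raise RuntimeError(f"transformers {installed} is too old for peft 0.14.0")
```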
|
{
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ikamensh/followers",
"following_url": "https://api.github.com/users/ikamensh/following{/other_user}",
"gists_url": "https://api.github.com/users/ikamensh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikamensh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikamensh/subscriptions",
"organizations_url": "https://api.github.com/users/ikamensh/orgs",
"repos_url": "https://api.github.com/users/ikamensh/repos",
"events_url": "https://api.github.com/users/ikamensh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikamensh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2362/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2359
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2359/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2359/comments
|
https://api.github.com/repos/huggingface/peft/issues/2359/events
|
https://github.com/huggingface/peft/issues/2359
| 2,829,346,186
|
I_kwDOIf9iDM6opGWK
| 2,359
|
Inconsistent documentation
|
{
"login": "makelinux",
"id": 2335185,
"node_id": "MDQ6VXNlcjIzMzUxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makelinux",
"html_url": "https://github.com/makelinux",
"followers_url": "https://api.github.com/users/makelinux/followers",
"following_url": "https://api.github.com/users/makelinux/following{/other_user}",
"gists_url": "https://api.github.com/users/makelinux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makelinux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makelinux/subscriptions",
"organizations_url": "https://api.github.com/users/makelinux/orgs",
"repos_url": "https://api.github.com/users/makelinux/repos",
"events_url": "https://api.github.com/users/makelinux/events{/privacy}",
"received_events_url": "https://api.github.com/users/makelinux/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2025-02-04T07:25:29
| 2025-03-06T15:03:57
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Content of https://huggingface.co/docs/peft/index is not synchronised with ToC.
"How-to guides" is already "PEFT method guides".
"PEFT method guides" are under directory `task_guides`.

### Expected behavior
Consistent documentation.
Clear unambiguous names.
Links match titles and the content.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2359/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2355
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2355/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2355/comments
|
https://api.github.com/repos/huggingface/peft/issues/2355/events
|
https://github.com/huggingface/peft/issues/2355
| 2,823,704,539
|
I_kwDOIf9iDM6oTk_b
| 2,355
|
dataclass config handling
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-31T14:48:29
| 2025-03-10T15:04:18
| 2025-03-10T15:04:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] torchtune==0.5.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.52 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.52 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-forecasting 1.2.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtune 0.5.0 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
See PR
### Expected behavior
See PR
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2355/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2354
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2354/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2354/comments
|
https://api.github.com/repos/huggingface/peft/issues/2354/events
|
https://github.com/huggingface/peft/issues/2354
| 2,823,156,387
|
I_kwDOIf9iDM6oRfKj
| 2,354
|
Commented PeftConfig
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-31T11:33:50
| 2025-03-10T15:04:20
| 2025-03-10T15:04:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
The line `# from .config import PeftConfig, PeftType, PromptLearningConfig, TaskType` is commented out in `./peft/utils/__init__.py`.
Why?
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
from peft.utils import PeftConfig
### Expected behavior
Being able to access PeftConfig from peft.utils.
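For what it's worth, the class is still exported at the package top level, so a minimal working import looks like this:
```python
# PeftConfig is re-exported from the top-level peft package in current releases,
# even though peft.utils no longer exposes it.
from peft import PeftConfig

print(PeftConfig)
```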
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2354/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2348
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2348/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2348/comments
|
https://api.github.com/repos/huggingface/peft/issues/2348/events
|
https://github.com/huggingface/peft/issues/2348
| 2,811,752,952
|
I_kwDOIf9iDM6nl_H4
| 2,348
|
Incorrect Magnitude Calculation for DoRA Linear Layers (Violates DoRA Paper Methodology)
|
{
"login": "arcteryox",
"id": 195980235,
"node_id": "U_kgDOC65ryw",
"avatar_url": "https://avatars.githubusercontent.com/u/195980235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcteryox",
"html_url": "https://github.com/arcteryox",
"followers_url": "https://api.github.com/users/arcteryox/followers",
"following_url": "https://api.github.com/users/arcteryox/following{/other_user}",
"gists_url": "https://api.github.com/users/arcteryox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arcteryox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arcteryox/subscriptions",
"organizations_url": "https://api.github.com/users/arcteryox/orgs",
"repos_url": "https://api.github.com/users/arcteryox/repos",
"events_url": "https://api.github.com/users/arcteryox/events{/privacy}",
"received_events_url": "https://api.github.com/users/arcteryox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2025-01-26T19:43:50
| 2025-01-30T18:56:52
| 2025-01-30T18:41:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### **Description**
The current `DoraLinearLayer` incorrectly computes weight magnitude norms **per input channel** instead of **per output channel**, violating the methodology outlined in the [DoRA paper (Section 3.1)](https://arxiv.org/abs/2402.09353). This leads to degraded performance for linear layers (e.g., in LLMs).
---
### **Issue Details**
#### **Affected Code**:
`peft/tuners/lora/dora.py` → `DoraLinearLayer.get_weight_norm`
```python
def get_weight_norm(self, weight, lora_weight, scaling):
weight = transpose(weight, self.fan_in_fan_out) # ❌ Transposes to [in_features, out_features]
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1) # Norm over input channels (dim=1)
return weight_norm
```
#### **Problem**:
- For a linear layer with weight shape `[out_features, in_features]`, transposing to `[in_features, out_features]` causes `dim=1` to represent **input channels**, not output channels.
- This contradicts the DoRA paper’s requirement to compute magnitude **per output channel** (rows of the weight matrix).
---
### **Steps to Reproduce**
1. Initialize a DoRA-linear layer:
```python
base_layer = nn.Linear(10, 5) # out_features=5, in_features=10
dora_layer = DoraLinearLayer(fan_in_fan_out=False)
```
2. Check weight norm dimensions:
```python
weight = base_layer.weight # Shape [5, 10]
lora_weight = torch.randn(5, 10) # Simulate LoRA delta
norm = dora_layer.get_weight_norm(weight, lora_weight, scaling=1.0)
print(norm.shape) # Outputs [10] (input channels) instead of [5] (output channels)
```
---
### **Expected vs Actual Behavior**
| Expected (Per Paper) | Actual (Current Code) |
|-----------------------|-----------------------|
| Norms computed over **output channels** (`out_features`). | Norms computed over **input channels** (`in_features`). |
---
### **Proposed Fix**
Remove the transpose and compute norms over `dim=1` directly:
```python
def get_weight_norm(self, weight, lora_weight, scaling):
# Remove transpose - work directly with [out_features, in_features]
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1) # ✅ Norm over output channels (dim=1)
return weight_norm
```
#### **Impact of Fix**:
- Aligns with DoRA paper’s methodology for linear layers.
- Convolutional layers (e.g., `DoraConv2dLayer`) are unaffected and already correct.
---
### **Additional Context**
1. **Paper Reference**:
- Section 3.1 defines magnitude as the L2 norm of **rows** (output channels) for linear layers.
- Example: For weight matrix `W ∈ ℝ^{d×k}`, magnitude `m_j = ||W_j||_2` (row-wise norm).
2. **Why This Matters**:
- Magnitude scaling is critical for DoRA’s ability to decouple direction and magnitude updates.
- Incorrect scaling invalidates the method’s theoretical guarantees and reduces performance (e.g., on LLM fine-tuning tasks).
---
### **Verification**
After applying the fix:
```python
print(norm.shape) # Now outputs [5] (correct for out_features=5)
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
### **Steps to Reproduce**
1. Initialize a DoRA-linear layer:
```python
base_layer = nn.Linear(10, 5) # out_features=5, in_features=10
dora_layer = DoraLinearLayer(fan_in_fan_out=False)
```
2. Check weight norm dimensions:
```python
weight = base_layer.weight # Shape [5, 10]
lora_weight = torch.randn(5, 10) # Simulate LoRA delta
norm = dora_layer.get_weight_norm(weight, lora_weight, scaling=1.0)
print(norm.shape) # Outputs [10] (input channels) instead of [5] (output channels)
```
### Expected behavior
### **Expected vs Actual Behavior**
| Expected (Per Paper) | Actual (Current Code) |
|-----------------------|-----------------------|
| Norms computed over **output channels** (`out_features`). | Norms computed over **input channels** (`in_features`). |
|
{
"login": "arcteryox",
"id": 195980235,
"node_id": "U_kgDOC65ryw",
"avatar_url": "https://avatars.githubusercontent.com/u/195980235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcteryox",
"html_url": "https://github.com/arcteryox",
"followers_url": "https://api.github.com/users/arcteryox/followers",
"following_url": "https://api.github.com/users/arcteryox/following{/other_user}",
"gists_url": "https://api.github.com/users/arcteryox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arcteryox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arcteryox/subscriptions",
"organizations_url": "https://api.github.com/users/arcteryox/orgs",
"repos_url": "https://api.github.com/users/arcteryox/repos",
"events_url": "https://api.github.com/users/arcteryox/events{/privacy}",
"received_events_url": "https://api.github.com/users/arcteryox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2348/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2344
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2344/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2344/comments
|
https://api.github.com/repos/huggingface/peft/issues/2344/events
|
https://github.com/huggingface/peft/issues/2344
| 2,807,348,808
|
I_kwDOIf9iDM6nVL5I
| 2,344
|
FSDP2 and peft
|
{
"login": "psinger",
"id": 1677826,
"node_id": "MDQ6VXNlcjE2Nzc4MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1677826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psinger",
"html_url": "https://github.com/psinger",
"followers_url": "https://api.github.com/users/psinger/followers",
"following_url": "https://api.github.com/users/psinger/following{/other_user}",
"gists_url": "https://api.github.com/users/psinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psinger/subscriptions",
"organizations_url": "https://api.github.com/users/psinger/orgs",
"repos_url": "https://api.github.com/users/psinger/repos",
"events_url": "https://api.github.com/users/psinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/psinger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-01-23T16:20:47
| 2025-03-03T15:04:06
| 2025-03-03T15:04:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey, sorry if this is the wrong place. Feel free to move it to a discussion.
I am trying to get peft working with FSDP2 and am wondering if someone else has attempted that already.
The issue is that I'm always getting errors along the lines of:
`RuntimeError: aten.mm.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`
Happy for any pointers.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2344/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2342
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2342/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2342/comments
|
https://api.github.com/repos/huggingface/peft/issues/2342/events
|
https://github.com/huggingface/peft/issues/2342
| 2,806,843,497
|
I_kwDOIf9iDM6nTQhp
| 2,342
|
CI: Add gptqmodel to the CI
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5192585063,
"node_id": "LA_kwDOIf9iDM8AAAABNYCPZw",
"url": "https://api.github.com/repos/huggingface/peft/labels/wip",
"name": "wip",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 4
| 2025-01-23T12:57:29
| 2025-02-28T10:35:25
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This issue is to track the TODO from [this comment](https://github.com/huggingface/peft/pull/2247#pullrequestreview-2569656574). Once optimum 1.24.0 and transformers 4.49.0 are released, we should enable gptqmodel in the CI (and remove auto-gptq).
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2342/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2339
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2339/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2339/comments
|
https://api.github.com/repos/huggingface/peft/issues/2339/events
|
https://github.com/huggingface/peft/issues/2339
| 2,802,697,166
|
I_kwDOIf9iDM6nDcPO
| 2,339
|
Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named 'peft.utils.config'" error
|
{
"login": "incchar",
"id": 184541983,
"node_id": "U_kgDOCv_jHw",
"avatar_url": "https://avatars.githubusercontent.com/u/184541983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/incchar",
"html_url": "https://github.com/incchar",
"followers_url": "https://api.github.com/users/incchar/followers",
"following_url": "https://api.github.com/users/incchar/following{/other_user}",
"gists_url": "https://api.github.com/users/incchar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/incchar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/incchar/subscriptions",
"organizations_url": "https://api.github.com/users/incchar/orgs",
"repos_url": "https://api.github.com/users/incchar/repos",
"events_url": "https://api.github.com/users/incchar/events{/privacy}",
"received_events_url": "https://api.github.com/users/incchar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-21T20:00:07
| 2025-03-02T15:03:46
| 2025-03-02T15:03:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Hello,
I'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.
This new version does not support the 0.4.0 version of peft, so we have upgraded to 0.14.0 along with a compatible diffusers version. The sagemaker endpoint deploys correctly with these new versions, but once it's run, we receive the following error:
`No module named 'peft.utils.config'`
I dug around and found that there's no usage of peft.utils.config in our inference code. The only reference I could find is here, in the peft code itself: https://github.com/huggingface/peft/blob/main/src/peft/config.py. However, in this code, it looks like utils.config does not exist at all.
Here's what I'm currently using:
diffusers==0.32.2
peft==0.14.0
Is the peft library somehow breaking itself by looking for a peft.utils.config that doesn't exist? Have I missed a step that would create the utils.config file? Or is there another hidden dependency using peft.utils.config?
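If the culprit turns out to be a hidden dependency (for example, an object pickled with peft 0.4.0) that still imports the old module path, one possible workaround is to alias it before anything tries to resolve it. This is only a sketch of the idea, not an official peft API, and the set of re-exported names is an assumption about what the old peft.utils.config contained:
```python
# Minimal sketch of a compatibility alias: map the removed module path
# peft.utils.config onto the names that now live at the package top level.
# This is a hypothetical workaround, not an official peft API.
import sys
import types

from peft import PeftConfig, PeftType, PromptLearningConfig, TaskType

shim = types.ModuleType("peft.utils.config")
shim.PeftConfig = PeftConfig
shim.PeftType = PeftType
shim.PromptLearningConfig = PromptLearningConfig
shim.TaskType = TaskType
sys.modules["peft.utils.config"] = shim
```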
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Create a sagemaker endpoint using the new `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` huggingface DLC image.
Use a requirements.txt that looks like the following:
diffusers==0.32.2
peft==0.14.0
Observe that all requests to the sagemaker endpoint respond with 500 errors.
### Expected behavior
The Sagemaker endpoint should continue to process requests as it did before the version upgrade (using peft 0.4.0)
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2339/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2337
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2337/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2337/comments
|
https://api.github.com/repos/huggingface/peft/issues/2337/events
|
https://github.com/huggingface/peft/issues/2337
| 2,800,325,334
|
I_kwDOIf9iDM6m6ZLW
| 2,337
|
AdaLora kthvalue(): selected number k out of range for dimension 0
|
{
"login": "PKaralupov",
"id": 152442722,
"node_id": "U_kgDOCRYXYg",
"avatar_url": "https://avatars.githubusercontent.com/u/152442722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PKaralupov",
"html_url": "https://github.com/PKaralupov",
"followers_url": "https://api.github.com/users/PKaralupov/followers",
"following_url": "https://api.github.com/users/PKaralupov/following{/other_user}",
"gists_url": "https://api.github.com/users/PKaralupov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PKaralupov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PKaralupov/subscriptions",
"organizations_url": "https://api.github.com/users/PKaralupov/orgs",
"repos_url": "https://api.github.com/users/PKaralupov/repos",
"events_url": "https://api.github.com/users/PKaralupov/events{/privacy}",
"received_events_url": "https://api.github.com/users/PKaralupov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2025-01-20T21:56:43
| 2025-01-23T05:25:02
| 2025-01-23T05:25:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Using docker image pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
transformers 4.48.0
accelerate 1.2.1
peft 0.14.0
torch 2.5.1+cu124
Python 3.11.10
### Who can help?
@sayakpaul, @benjaminbossan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Using peft AdaLoRA for fine-tuning Whisper large v3
```python
model = prepare_model_for_kbit_training(model)
target_modules=["q_proj", "v_proj", "k_proj"]
t_modules = []
for id, (name, param) in enumerate(model.named_modules()):
if 'model.decoder' in name and any([module in name for module in target_modules]):
t_modules.append(name)
target_modules=t_modules
config = AdaLoraConfig(
init_r= 96,
target_r=64,
beta1=0.85,
beta2=0.85,
tinit=6000,
tfinal=11000,
deltaT=100,
lora_alpha=128,
lora_dropout=0.1,
target_modules=target_modules,
orth_reg_weight=0.5,
total_step= 13500
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```
Using trainer callback for update_and_allocate
```python
class OptimizerStepCllback(TrainerCallback):
def on_optimizer_step(self, args, state, control, **kwargs):
model.update_and_allocate(state.global_step)
```
```python
training_args = Seq2SeqTrainingArguments(
output_dir=args.output_dir,
per_device_train_batch_size=args.train_batchsize,
gradient_accumulation_steps=1,
learning_rate=args.learning_rate,
warmup_steps=args.warmup,
gradient_checkpointing=gradient_checkpointing,
fp16 = not torch.cuda.is_bf16_supported(),
bf16 = torch.cuda.is_bf16_supported(),
evaluation_strategy="epoch",
save_strategy="epoch",
num_train_epochs=args.num_epochs,
per_device_eval_batch_size=args.eval_batchsize,
predict_with_generate=True,
generation_max_length=256,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="eval_librispeech_asr_wer",
greater_is_better=False,
optim="adamw_bnb_8bit",
remove_unused_columns=False,
dataloader_num_workers=args.num_proc
)
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=raw_dataset["train"],
eval_dataset=raw_dataset["eval"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor
)
trainer.add_callback(OptimizerStepCllback)
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
```
Error after 2500 steps:
```
ERROR 2025-01-18T20:36:17.740476732Z [resource.labels.taskName: workerpool0-0] trainer.train(resume_from_checkpoint=resume_from_checkpoint)
ERROR 2025-01-18T20:36:17.740483350Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2171, in train
ERROR 2025-01-18T20:36:17.740489895Z [resource.labels.taskName: workerpool0-0] return inner_training_loop(
ERROR 2025-01-18T20:36:17.740496256Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740502909Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2586, in _inner_training_loop
ERROR 2025-01-18T20:36:17.740509254Z [resource.labels.taskName: workerpool0-0] self.control = self.callback_handler.on_optimizer_step(args, self.state, self.control)
ERROR 2025-01-18T20:36:17.740515900Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740522460Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer_callback.py", line 491, in on_optimizer_step
ERROR 2025-01-18T20:36:17.740529629Z [resource.labels.taskName: workerpool0-0] return self.call_event("on_optimizer_step", args, state, control)
ERROR 2025-01-18T20:36:17.740535418Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740541637Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer_callback.py", line 519, in call_event
ERROR 2025-01-18T20:36:17.740547789Z [resource.labels.taskName: workerpool0-0] result = getattr(callback, event)(
ERROR 2025-01-18T20:36:17.740554197Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740560199Z [resource.labels.taskName: workerpool0-0] File "/workspace/task.py", line 752, in on_optimizer_step
ERROR 2025-01-18T20:36:17.740566453Z [resource.labels.taskName: workerpool0-0] model.update_and_allocate(state.global_step)
ERROR 2025-01-18T20:36:17.740572647Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/peft/tuners/adalora/model.py", line 343, in update_and_allocate
ERROR 2025-01-18T20:36:17.740578651Z [resource.labels.taskName: workerpool0-0] _, rank_pattern = self.rankallocator.update_and_allocate(self.model, global_step, force_mask=True)
ERROR 2025-01-18T20:36:17.740589951Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740596643Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/peft/tuners/adalora/layer.py", line 342, in update_and_allocate
ERROR 2025-01-18T20:36:17.740605933Z [resource.labels.taskName: workerpool0-0] rank_pattern = self.mask_to_budget(model, budget)
ERROR 2025-01-18T20:36:17.740612342Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740618182Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/peft/tuners/adalora/layer.py", line 321, in mask_to_budget
ERROR 2025-01-18T20:36:17.740627268Z [resource.labels.taskName: workerpool0-0] mask_threshold = torch.kthvalue(
ERROR 2025-01-18T20:36:17.740634138Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740640759Z [resource.labels.taskName: workerpool0-0] RuntimeError: kthvalue(): selected number k out of range for dimension 0
```
### Expected behavior
I believe something is wrong with my configuration, as this error was not raised with other PEFT config parameters.
However, I am not sure why it happened.
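An editorial sketch, not part of the original report: one common cause of this `kthvalue` error is a rank-budget schedule that does not match the number of optimizer steps actually taken. The values and target modules below are hypothetical and would need to be adapted to the real training run.
```python
from peft import AdaLoraConfig, get_peft_model

total_steps = 10_000  # must match the number of optimizer steps the Trainer will take

adalora_config = AdaLoraConfig(
    init_r=12,
    target_r=4,
    tinit=200,            # steps before rank pruning starts
    tfinal=1_000,         # steps at the end with the final, fixed budget
    deltaT=10,            # prune every deltaT steps
    total_step=total_steps,
    target_modules=["q_proj", "v_proj"],  # hypothetical target modules
)
peft_model = get_peft_model(base_model, adalora_config)  # base_model: the model loaded earlier in the script

# Only update the budget while the schedule is active, e.g. inside the optimizer-step callback:
def maybe_update_budget(global_step):
    if global_step < total_steps:
        peft_model.base_model.update_and_allocate(global_step)
```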
|
{
"login": "PKaralupov",
"id": 152442722,
"node_id": "U_kgDOCRYXYg",
"avatar_url": "https://avatars.githubusercontent.com/u/152442722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PKaralupov",
"html_url": "https://github.com/PKaralupov",
"followers_url": "https://api.github.com/users/PKaralupov/followers",
"following_url": "https://api.github.com/users/PKaralupov/following{/other_user}",
"gists_url": "https://api.github.com/users/PKaralupov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PKaralupov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PKaralupov/subscriptions",
"organizations_url": "https://api.github.com/users/PKaralupov/orgs",
"repos_url": "https://api.github.com/users/PKaralupov/repos",
"events_url": "https://api.github.com/users/PKaralupov/events{/privacy}",
"received_events_url": "https://api.github.com/users/PKaralupov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2337/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2336
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2336/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2336/comments
|
https://api.github.com/repos/huggingface/peft/issues/2336/events
|
https://github.com/huggingface/peft/issues/2336
| 2,799,925,050
|
I_kwDOIf9iDM6m43c6
| 2,336
|
After using peft, the performance indicators decreased.
|
{
"login": "KQDtianxiaK",
"id": 92998962,
"node_id": "U_kgDOBYsNMg",
"avatar_url": "https://avatars.githubusercontent.com/u/92998962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KQDtianxiaK",
"html_url": "https://github.com/KQDtianxiaK",
"followers_url": "https://api.github.com/users/KQDtianxiaK/followers",
"following_url": "https://api.github.com/users/KQDtianxiaK/following{/other_user}",
"gists_url": "https://api.github.com/users/KQDtianxiaK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KQDtianxiaK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KQDtianxiaK/subscriptions",
"organizations_url": "https://api.github.com/users/KQDtianxiaK/orgs",
"repos_url": "https://api.github.com/users/KQDtianxiaK/repos",
"events_url": "https://api.github.com/users/KQDtianxiaK/events{/privacy}",
"received_events_url": "https://api.github.com/users/KQDtianxiaK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2025-01-20T17:04:33
| 2025-03-09T15:04:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Sorry, I just finished the previous question and already have to ask a new one. I use the DNABERT-2 model, whose original structure is as follows:
```
BertForSequenceClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(4096, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertUnpadAttention(
(self): BertUnpadSelfAttention(
(dropout): Dropout(p=0.0, inplace=False)
(Wqkv): Linear(in_features=768, out_features=2304, bias=True)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(mlp): BertGatedLinearUnitMLP(
(gated_layers): Linear(in_features=768, out_features=6144, bias=False)
(act): GELU(approximate='none')
(wo): Linear(in_features=3072, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(layernorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
......
(11): BertLayer(
(attention): BertUnpadAttention(
(self): BertUnpadSelfAttention(
(dropout): Dropout(p=0.0, inplace=False)
(Wqkv): Linear(in_features=768, out_features=2304, bias=True)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(mlp): BertGatedLinearUnitMLP(
(gated_layers): Linear(in_features=768, out_features=6144, bias=False)
(act): GELU(approximate='none')
(wo): Linear(in_features=3072, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(layernorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=157, bias=True)
)
```
On three different classification tasks, I used OFT, LNTuning and other methods to add fine-tuning modules to linear layers such as ['Wqkv'/'wo'/'gated_layers'], or to parts such as ['LayerNorm'], for supervised training. During training, the logs printed at each logging_steps look normal: the loss keeps decreasing and the performance metrics keep rising. However, when the saved model weights are loaded and evaluated on the independent test set, the performance is very poor, basically equivalent to having no training at all. The following warning is reported when loading:
```
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at model/DNABERT2-117M and are newly initialized: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight', 'classifier.bias', 'classifier.weight']
```
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Here is the code I use to load the model from each path that holds the best model weights:
```
import os
import numpy as np
from peft import AutoPeftModelForSequenceClassification
from transformers import Trainer, DataCollatorWithPadding

def load_best_model_for_test(checkpoint_dir, fold_number):
    fold_dir = os.path.join(checkpoint_dir)
    checkpoint_folders = [d for d in os.scandir(fold_dir) if d.is_dir() and d.name.startswith('checkpoint')]
    # pick the most recently modified checkpoint directory as the "best" model
    best_model_dir = max(checkpoint_folders, key=lambda d: os.path.getmtime(d.path), default=None)
    best_model_path = best_model_dir.path
    model = AutoPeftModelForSequenceClassification.from_pretrained(best_model_path, trust_remote_code=True, num_labels=2)
    return model

def evaluate_on_test_set(models, test_dataset):
    # training_args, tokenizer and eval_predict are defined elsewhere in the script
    test_results = []
    for model in models:
        trainer = Trainer(
            model=model,
            args=training_args,
            eval_dataset=test_dataset,
            data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
            compute_metrics=eval_predict
        )
        metrics = trainer.evaluate()
        test_results.append(metrics)
    average_metrics = {key: np.mean([result[key] for result in test_results]) for key in test_results[0].keys()}
    return average_metrics
```
However, when I did full-parameter supervised fine-tuning without PEFT, the final results on the independent test set were all normal. I tried different tasks, different PEFT methods, and fine-tuning different parts of the model, and I used the latest version of PEFT, but I still can't solve the problem.
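An editorial sketch, not part of the original report: a quick way to check whether the trained classification head is actually restored from the adapter checkpoint (`best_model_path` is the checkpoint directory selected in the snippet above).
```python
from peft import PeftConfig, AutoPeftModelForSequenceClassification

# best_model_path is the checkpoint directory selected above
peft_config = PeftConfig.from_pretrained(best_model_path)
print("modules_to_save:", getattr(peft_config, "modules_to_save", None))  # should list "classifier"

model = AutoPeftModelForSequenceClassification.from_pretrained(
    best_model_path, trust_remote_code=True, num_labels=2
)
# if the head was saved and restored, it shows up wrapped under "modules_to_save"
for name, _ in model.named_modules():
    if "modules_to_save" in name and "classifier" in name.lower():
        print("trained head restored from adapter checkpoint:", name)
```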
### Expected behavior
Find out the cause and fix the problem
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2336/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2330
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2330/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2330/comments
|
https://api.github.com/repos/huggingface/peft/issues/2330/events
|
https://github.com/huggingface/peft/issues/2330
| 2,789,282,442
|
I_kwDOIf9iDM6mQRKK
| 2,330
|
MoELoRA
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-15T09:29:58
| 2025-02-23T15:03:30
| 2025-02-23T15:03:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
The paper "MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models" introduced MoLoRA, a Mixture-of-Experts approach using LoRA adapters. I am using it to conduct some research for my MSc thesis, and have implemented it in PEFT. I was wondering if this method is interesting and whether it would be worth cleaning up my code and submitting a PR.
### Motivation
The motivation is to include more PEFT methods that the community can benefit from.
### Your contribution
I can contribute a PR with the implementation of MoLoRA.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2330/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2330/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2329
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2329/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2329/comments
|
https://api.github.com/repos/huggingface/peft/issues/2329/events
|
https://github.com/huggingface/peft/issues/2329
| 2,788,385,643
|
I_kwDOIf9iDM6mM2Nr
| 2,329
|
Request to integrate Structure Sparsity-based PEFT (S2FT)
|
{
"login": "Hanyuezhuohua",
"id": 58478765,
"node_id": "MDQ6VXNlcjU4NDc4NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58478765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hanyuezhuohua",
"html_url": "https://github.com/Hanyuezhuohua",
"followers_url": "https://api.github.com/users/Hanyuezhuohua/followers",
"following_url": "https://api.github.com/users/Hanyuezhuohua/following{/other_user}",
"gists_url": "https://api.github.com/users/Hanyuezhuohua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hanyuezhuohua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hanyuezhuohua/subscriptions",
"organizations_url": "https://api.github.com/users/Hanyuezhuohua/orgs",
"repos_url": "https://api.github.com/users/Hanyuezhuohua/repos",
"events_url": "https://api.github.com/users/Hanyuezhuohua/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hanyuezhuohua/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-01-14T22:18:53
| 2025-02-14T15:29:31
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
This request proposes to integrate S2FT, a pure structured-sparsity-based PEFT method that concurrently achieves state-of-the-art fine-tuning performance, training efficiency, and inference scalability. More information about our NeurIPS paper, of which I'm the first author, can be found here: https://infini-ai-lab.github.io/S2FT-Page/. Here is our code for the implementation: https://github.com/Infini-AI-Lab/S2FT.
### Motivation
As far as I know, S2FT is the first method to offer efficient and flexible sparsity-based PEFT for LLMs (previous work only adds sparsity to LoRA or uses layer-wise freezing). Here, we'd like to mention several important features of S2FT:
- Model Versatility: The design of our structured sparsity is based on coupled structures that commonly exist in LLMs, VLMs, CNNs, and GNNs. Therefore, our method should work for many different architectures.
- Generalization Ability: When evaluated on more recent models such as LLaMA-3-8B, we observe that our method can outperform both LoRA and full fine-tuning, because we only modify a small fraction of the original parameters. Therefore, we can retain most of the advanced abilities acquired during pre-training.
<img width="806" alt="Image" src="https://github.com/user-attachments/assets/ce046f07-5f0a-4ef3-a17f-13b836cf9473" />
- Training Efficiency: Instead of focusing only on parameter efficiency, S2FT provides practical acceleration for model training. In our experiments, we show that S2FT surpasses LoRA in both training memory and time by 10%, which is important in resource-limited settings.
<img width="794" alt="Image" src="https://github.com/user-attachments/assets/39122dce-f948-421e-936e-592b08463bc6" />
- Scalable Serving: Finally, S2FT also shows good serving ability in comparison with LoRA, where we consider adapter fusion, switching, and parallelism. In these settings, S2FT consistently outperforms LoRA in both efficiency and performance.
<img width="809" alt="Image" src="https://github.com/user-attachments/assets/29dcc747-8cdf-42d9-9474-7b8c7b77c052" />
- Controllability: The model parameters to be updated in S2FT can be selected with user-specified functions, which LoRA cannot do.
Based on this information, although S2FT has just been released, we think it is a new kind of PEFT method with very good potential, and integrating it should benefit future sparsity-based PEFT methods.
### Your contribution
I will try to write most of the code for this new PEFT method based on the current PEFT codebase.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2329/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2326
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2326/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2326/comments
|
https://api.github.com/repos/huggingface/peft/issues/2326/events
|
https://github.com/huggingface/peft/issues/2326
| 2,784,601,999
|
I_kwDOIf9iDM6l-aeP
| 2,326
|
AttributeError: ModulesToSaveWrapper has no attribute `dense`
|
{
"login": "KQDtianxiaK",
"id": 92998962,
"node_id": "U_kgDOBYsNMg",
"avatar_url": "https://avatars.githubusercontent.com/u/92998962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KQDtianxiaK",
"html_url": "https://github.com/KQDtianxiaK",
"followers_url": "https://api.github.com/users/KQDtianxiaK/followers",
"following_url": "https://api.github.com/users/KQDtianxiaK/following{/other_user}",
"gists_url": "https://api.github.com/users/KQDtianxiaK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KQDtianxiaK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KQDtianxiaK/subscriptions",
"organizations_url": "https://api.github.com/users/KQDtianxiaK/orgs",
"repos_url": "https://api.github.com/users/KQDtianxiaK/repos",
"events_url": "https://api.github.com/users/KQDtianxiaK/events{/privacy}",
"received_events_url": "https://api.github.com/users/KQDtianxiaK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2025-01-13T16:49:37
| 2025-01-20T16:29:05
| 2025-01-20T16:29:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
**Original model architecture:**
```
EsmForSequenceClassification(
(esm): EsmModel(
(embeddings): EsmEmbeddings(
(word_embeddings): Embedding(33, 640, padding_idx=1)
(dropout): Dropout(p=0.0, inplace=False)
(position_embeddings): Embedding(1026, 640, padding_idx=1)
)
(encoder): EsmEncoder(
(layer): ModuleList(
(0-29): 30 x EsmLayer(
(attention): EsmAttention(
(self): EsmSelfAttention(
...
**(output): EsmSelfOutput(
(dense): Linear(in_features=640, out_features=640, bias=True)**
(dropout): Dropout(p=0.0, inplace=False)
)
...
**(intermediate): EsmIntermediate(
(dense): Linear(in_features=640, out_features=2560, bias=True)
)**
**(output): EsmOutput(
(dense): Linear(in_features=2560, out_features=640, bias=True)**
...
**(classifier): EsmClassificationHead(
(dense): Linear(in_features=640, out_features=640, bias=True)**
...
```
**my code:**
```
from transformers import AutoModelForSequenceClassification
from peft import OFTConfig, TaskType, get_peft_model

model_name = "model/esm2_35M"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)
config = OFTConfig(task_type=TaskType.SEQ_CLS, target_modules=['dense'])
model_OFT = get_peft_model(model, config)
```
**Peft model architecture:**
```
PeftModelForSequenceClassification(
(base_model): OFTModel(
(model): EsmForSequenceClassification(
(esm): EsmModel(
(embeddings): EsmEmbeddings(
(word_embeddings): Embedding(33, 640, padding_idx=1)
(dropout): Dropout(p=0.0, inplace=False)
(position_embeddings): Embedding(1026, 640, padding_idx=1)
)
(encoder): EsmEncoder(
(layer): ModuleList(
(0-29): 30 x EsmLayer(
(attention): EsmAttention(
(self): EsmSelfAttention(
(query): Linear(in_features=640, out_features=640, bias=True)
(key): Linear(in_features=640, out_features=640, bias=True)
(value): Linear(in_features=640, out_features=640, bias=True)
...
**(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
**(intermediate): EsmIntermediate(
(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=2560, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x320x320])
)**
)
**(output): EsmOutput(
(dense): oft.Linear(
(base_layer): Linear(in_features=2560, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
**(classifier): ModulesToSaveWrapper(
(original_module): EsmClassificationHead(
(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
(modules_to_save): ModuleDict(
(default): EsmClassificationHead(
**(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
```
**adapter_config.json:**
```
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "model/esm2_35M",
"block_share": false,
"coft": false,
"eps": 6e-05,
"inference_mode": true,
"init_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"module_dropout": 0.0,
"modules_to_save": [
"classifier",
"score"
],
"peft_type": "OFT",
"r": 8,
"rank_pattern": {},
"revision": null,
"target_modules": [
"dense"
],
"task_type": "SEQ_CLS"
}
```
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
**After training, I load the model from the saved checkpoint using the following code:**
```
best_model_path = best_model_dir.path
model_peft = AutoPeftModelForSequenceClassification.from_pretrained(best_model_path, num_labels=7)
```
**Got this error:**
```
Traceback (most recent call last):
File "/root/autodl-tmp/PEFT-PLM/ESM2_scop_OFT.py", line 213, in <module>
best_model = load_best_model_for_test(training_args.output_dir, i+1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/autodl-tmp/PEFT-PLM/ESM2_scop_OFT.py", line 189, in load_best_model_for_test
model_peft = AutoPeftModelForSequenceClassification.from_pretrained(best_model_path, num_labels=7)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 541, in from_pretrained
model = MODEL_TYPE_TO_PEFT_MODEL_MAPPING[config.task_type](
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 1311, in __init__
super().__init__(model, peft_config, adapter_name, **kwargs)
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 155, in __init__
self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/tuners/lycoris_utils.py", line 196, in __init__
super().__init__(model, config, adapter_name)
File "/root/miniconda3/lib/python3.12/site-packages/peft/tuners/tuners_utils.py", line 175, in __init__
self.inject_adapter(self.model, adapter_name)
File "/root/miniconda3/lib/python3.12/site-packages/peft/tuners/tuners_utils.py", line 430, in inject_adapter
parent, target, target_name = _get_submodules(model, key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/utils/other.py", line 313, in _get_submodules
target = model.get_submodule(key)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 717, in get_submodule
raise AttributeError(
AttributeError: ModulesToSaveWrapper has no attribute `dense`
```
### Expected behavior
Find out the cause and solve the problem
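An editorial sketch, offered as an assumption about the cause rather than a confirmed fix: the config targets every module named "dense", including the one inside the classifier head that is also listed in `modules_to_save`, so the head ends up wrapped twice. Restricting `target_modules` with a regex keeps OFT inside the encoder only.
```python
from peft import OFTConfig, TaskType, get_peft_model

config = OFTConfig(
    task_type=TaskType.SEQ_CLS,
    # a regex string instead of a bare name: only match "dense" layers inside the ESM encoder
    target_modules=r".*esm\.encoder.*\.dense",
)
model_OFT = get_peft_model(model, config)
```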
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2326/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2322
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2322/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2322/comments
|
https://api.github.com/repos/huggingface/peft/issues/2322/events
|
https://github.com/huggingface/peft/issues/2322
| 2,782,367,731
|
I_kwDOIf9iDM6l14_z
| 2,322
|
model merge and unload feature for AdaLora
|
{
"login": "DaehanKim",
"id": 20675681,
"node_id": "MDQ6VXNlcjIwNjc1Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20675681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaehanKim",
"html_url": "https://github.com/DaehanKim",
"followers_url": "https://api.github.com/users/DaehanKim/followers",
"following_url": "https://api.github.com/users/DaehanKim/following{/other_user}",
"gists_url": "https://api.github.com/users/DaehanKim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaehanKim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaehanKim/subscriptions",
"organizations_url": "https://api.github.com/users/DaehanKim/orgs",
"repos_url": "https://api.github.com/users/DaehanKim/repos",
"events_url": "https://api.github.com/users/DaehanKim/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaehanKim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-01-12T09:20:01
| 2025-01-14T12:47:35
| 2025-01-14T12:47:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Unlike the LoRA or IA3 adapter types, AdaLoRA does not provide a method to merge the adapter weights into the original weights so that the result can be used as a standalone model. I built that feature for a personal use case and want to open a PR to make it accessible to everyone.
### Motivation
This feature lets people easily merge AdaLoRA adapter weights into the original weights, which makes further fine-tuning possible (e.g. when one wants to resume AdaLoRA training from checkpoints that were already trained with AdaLoRA, resuming is not possible with unmerged weights).
### Your contribution
I'll submit a PR. I followed the example of the IA3 `merge_and_unload`.
The following is an overview of the change:
```
def _unload_and_optionally_merge(
self,
merge: bool = True,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
eps: float = 1e-5
) -> torch.nn.Module:
"""
This method unloads the AdaLoRA adapter modules and optionally merges them into the base model weights.
Args:
merge (`bool`, defaults to `True`):
If True, merges the adapter weights into base model weights.
If False, it will only unload the adapters without merging.
safe_merge (`bool`, defaults to `False`):
If True, performs the merge operation with extra safety checks.
adapter_names (`List[str]`, *optional*):
The list of adapter names to merge. If None, all active adapters will be merged.
eps (`float`, defaults to 1e-5):
Small constant for numerical stability when dividing by ranknum.
Returns:
model (`torch.nn.Module`):
The resulting PyTorch model.
"""
if getattr(self.model, "is_loaded_in_8bit", False):
raise ValueError("Cannot merge adalora layers when the model is loaded in 8-bit mode")
if getattr(self.model, "is_loaded_in_4bit", False):
raise ValueError("Cannot merge adalora layers when the model is loaded in 4-bit mode")
if adapter_names is not None:
raise ValueError("AdaLoRA does not support merging specific adapters. Got adapter_names={adapter_names}")
# Create a copy of the base model state dict to modify
original_state_dict = self.model.state_dict()
if merge:
for name, module in self.model.named_modules():
if hasattr(module, "base_layer") and hasattr(module, "lora_A"):
# Extract base layer weight name
layer_name = name.replace(".lora_A", "")
layer_name = layer_name.replace("base_model.model.", "")
base_weight_name = f"{layer_name}.weight"
# Get SVD parameters
lora_A = module.lora_A["default"] # [r x d_in]
lora_B = module.lora_B["default"] # [d_out x r]
lora_E = module.lora_E["default"] # [r x 1]
# Calculate active ranks
ranknum = (lora_E != 0).sum()
scaling = module.scaling["default"] if hasattr(module, "scaling") else 16
# Safety check if requested
if safe_merge and (torch.isnan(lora_A).any() or torch.isnan(lora_B).any() or torch.isnan(lora_E).any()):
raise ValueError(f"NaN detected in adapter weights for layer {name}")
# Scale A with E: A' = AE
scaled_A = lora_A * lora_E # [r x d_in]
# Compute update: ΔW = BA'
if ranknum > 0:
update = (lora_B @ scaled_A) * scaling / (ranknum + eps)
else:
update = torch.zeros_like(original_state_dict[base_weight_name])
# Update base weights
if base_weight_name in original_state_dict:
original_state_dict[base_weight_name] += update
# Load the merged state dict back into a clean version of the model
self.model.load_state_dict(original_state_dict)
return self.model
def merge_and_unload(
self,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
eps: float = 1e-5
) -> torch.nn.Module:
"""
Merge the active adapters into the base model and unload the adapters.
Args:
safe_merge (`bool`, defaults to `False`):
If True, performs the merge operation with extra safety checks.
adapter_names (`List[str]`, *optional*):
List of adapter names to merge. If None, merges all active adapters.
eps (`float`, defaults to 1e-5):
Small constant for numerical stability when dividing by ranknum.
Returns:
`torch.nn.Module`: The merged model.
"""
return self._unload_and_optionally_merge(
safe_merge=safe_merge,
adapter_names=adapter_names,
eps=eps
)
def unload(self) -> torch.nn.Module:
"""
Unload the adapters without merging them into the base model.
Returns:
`torch.nn.Module`: The unloaded model.
"""
return self._unload_and_optionally_merge(merge=False)
```
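A hypothetical usage example of the proposed API (the method names follow the sketch above and are not part of a released PEFT interface): merge the AdaLoRA updates into the base weights and save a standalone model.
```python
merged_model = peft_model.merge_and_unload(safe_merge=True)
merged_model.save_pretrained("adalora-merged")
```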
|
{
"login": "DaehanKim",
"id": 20675681,
"node_id": "MDQ6VXNlcjIwNjc1Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20675681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaehanKim",
"html_url": "https://github.com/DaehanKim",
"followers_url": "https://api.github.com/users/DaehanKim/followers",
"following_url": "https://api.github.com/users/DaehanKim/following{/other_user}",
"gists_url": "https://api.github.com/users/DaehanKim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaehanKim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaehanKim/subscriptions",
"organizations_url": "https://api.github.com/users/DaehanKim/orgs",
"repos_url": "https://api.github.com/users/DaehanKim/repos",
"events_url": "https://api.github.com/users/DaehanKim/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaehanKim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2322/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2321
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2321/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2321/comments
|
https://api.github.com/repos/huggingface/peft/issues/2321/events
|
https://github.com/huggingface/peft/issues/2321
| 2,782,134,190
|
I_kwDOIf9iDM6l0_-u
| 2,321
|
[Warning] `Merge lora module to 4-bit linear may get different generations`
|
{
"login": "steveepreston",
"id": 175405060,
"node_id": "U_kgDOCnR4BA",
"avatar_url": "https://avatars.githubusercontent.com/u/175405060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steveepreston",
"html_url": "https://github.com/steveepreston",
"followers_url": "https://api.github.com/users/steveepreston/followers",
"following_url": "https://api.github.com/users/steveepreston/following{/other_user}",
"gists_url": "https://api.github.com/users/steveepreston/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steveepreston/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steveepreston/subscriptions",
"organizations_url": "https://api.github.com/users/steveepreston/orgs",
"repos_url": "https://api.github.com/users/steveepreston/repos",
"events_url": "https://api.github.com/users/steveepreston/events{/privacy}",
"received_events_url": "https://api.github.com/users/steveepreston/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 15
| 2025-01-11T20:27:54
| 2025-03-06T15:30:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft 0.14.0
transformers 4.48.0
bitsandbytes 0.45.0
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
code:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "gemma-2-27b-it"
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=quantization_config,
    attn_implementation="sdpa",
    torch_dtype=torch.bfloat16,
    use_cache=True,
)
peft_model = PeftModel.from_pretrained(base_model, adapter_path)
merged_model = peft_model.merge_and_unload()  # <-- this line triggers the warning below
```
Warning:
```
UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
```
### Expected behavior
`merge_and_unload()` should complete correctly and without a warning.
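An editorial sketch of a commonly suggested workaround, not part of the original report: merge the adapter into an unquantized copy of the base model so the merge does not suffer from 4-bit rounding, then save the result (it can be re-quantized at load time if needed). `base_model_id` and `adapter_path` are the same values as above.
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model_fp = AutoModelForCausalLM.from_pretrained(
    base_model_id,               # same base model as above
    torch_dtype=torch.bfloat16,  # full-precision (unquantized) weights
)
peft_model_fp = PeftModel.from_pretrained(base_model_fp, adapter_path)
merged = peft_model_fp.merge_and_unload()  # no 4-bit rounding warning here
merged.save_pretrained("gemma-2-27b-it-merged")
```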
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2321/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2319
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2319/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2319/comments
|
https://api.github.com/repos/huggingface/peft/issues/2319/events
|
https://github.com/huggingface/peft/issues/2319
| 2,779,143,092
|
I_kwDOIf9iDM6lplu0
| 2,319
|
Import error, is it a version issue?
|
{
"login": "zhangyangniubi",
"id": 157886832,
"node_id": "U_kgDOCWkpcA",
"avatar_url": "https://avatars.githubusercontent.com/u/157886832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangyangniubi",
"html_url": "https://github.com/zhangyangniubi",
"followers_url": "https://api.github.com/users/zhangyangniubi/followers",
"following_url": "https://api.github.com/users/zhangyangniubi/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangyangniubi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangyangniubi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangyangniubi/subscriptions",
"organizations_url": "https://api.github.com/users/zhangyangniubi/orgs",
"repos_url": "https://api.github.com/users/zhangyangniubi/repos",
"events_url": "https://api.github.com/users/zhangyangniubi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangyangniubi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2025-01-10T02:34:52
| 2025-01-13T10:13:18
| 2025-01-13T10:13:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
When I execute the finetune.py file, an error occurs as follows: cannot import name 'prepare_model_for_int8_training'. Is it a version issue? My version is 0.14.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
cannot import name 'prepare_model_for_int8_training' from 'peft' (/path/python3.10/site-packages/peft/__init__.py)
### Expected behavior
Who can help me answer this question? Thanks.
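Editorial note, not part of the original report: in recent PEFT releases the old int8 helper was removed in favor of `prepare_model_for_kbit_training`, so scripts written for older versions need a small change along these lines (a sketch; `model` is whatever quantized model the script loads).
```python
# from peft import prepare_model_for_int8_training   # removed in newer PEFT versions
from peft import prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
```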
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2319/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2318
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2318/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2318/comments
|
https://api.github.com/repos/huggingface/peft/issues/2318/events
|
https://github.com/huggingface/peft/issues/2318
| 2,779,069,108
|
I_kwDOIf9iDM6lpTq0
| 2,318
|
Issue merging a Lora model to a SANA transformer
|
{
"login": "frutiemax92",
"id": 142428698,
"node_id": "U_kgDOCH1KGg",
"avatar_url": "https://avatars.githubusercontent.com/u/142428698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frutiemax92",
"html_url": "https://github.com/frutiemax92",
"followers_url": "https://api.github.com/users/frutiemax92/followers",
"following_url": "https://api.github.com/users/frutiemax92/following{/other_user}",
"gists_url": "https://api.github.com/users/frutiemax92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frutiemax92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frutiemax92/subscriptions",
"organizations_url": "https://api.github.com/users/frutiemax92/orgs",
"repos_url": "https://api.github.com/users/frutiemax92/repos",
"events_url": "https://api.github.com/users/frutiemax92/events{/privacy}",
"received_events_url": "https://api.github.com/users/frutiemax92/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 13
| 2025-01-10T01:24:35
| 2025-03-06T18:39:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft=0.14.0
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```
from diffusers import SanaPipeline, SanaPAGPipeline, SanaTransformer2DModel
from peft import PeftModel
transformer = SanaTransformer2DModel.from_pretrained("frutiemax/twistedreality-sana-1600m-1024px")
print(transformer)
peft_model = PeftModel.from_pretrained(transformer, '0')
model = peft_model.merge_and_unload()
```
### Expected behavior
I've trained a LoRA model with PEFT on a SANA checkpoint. I can train and run inference using the PEFT model. However, when I try to merge the LoRA into the base checkpoint, I encounter a shape mismatch. I've attached the LoRA model with rank 4.

[0.zip](https://github.com/user-attachments/files/18369238/0.zip)
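An editorial diagnostic sketch, not part of the original report: before merging, it can help to print the LoRA and base-layer weight shapes to locate the module where they disagree (this only narrows down the mismatch, it is not a fix). It assumes `peft_model` from the reproduction snippet above.
```python
for name, module in peft_model.named_modules():
    if hasattr(module, "lora_A") and hasattr(module, "base_layer"):
        a = module.lora_A["default"].weight   # [r, in_features]
        b = module.lora_B["default"].weight   # [out_features, r]
        w = module.base_layer.weight          # [out_features, in_features]
        if b.shape[0] != w.shape[0] or a.shape[1] != w.shape[1]:
            print(f"shape mismatch in {name}: base {tuple(w.shape)}, "
                  f"lora_A {tuple(a.shape)}, lora_B {tuple(b.shape)}")
```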
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2318/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2317
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2317/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2317/comments
|
https://api.github.com/repos/huggingface/peft/issues/2317/events
|
https://github.com/huggingface/peft/issues/2317
| 2,777,004,984
|
I_kwDOIf9iDM6lhbu4
| 2,317
|
Issue with finetuning with Corda
|
{
"login": "sirluk",
"id": 58826757,
"node_id": "MDQ6VXNlcjU4ODI2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/58826757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirluk",
"html_url": "https://github.com/sirluk",
"followers_url": "https://api.github.com/users/sirluk/followers",
"following_url": "https://api.github.com/users/sirluk/following{/other_user}",
"gists_url": "https://api.github.com/users/sirluk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sirluk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sirluk/subscriptions",
"organizations_url": "https://api.github.com/users/sirluk/orgs",
"repos_url": "https://api.github.com/users/sirluk/repos",
"events_url": "https://api.github.com/users/sirluk/events{/privacy}",
"received_events_url": "https://api.github.com/users/sirluk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 13
| 2025-01-09T07:12:18
| 2025-02-10T10:22:03
| 2025-02-10T10:22:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft master branch (commit 8d3039b6cb724522625bff26988418cac5759ffa)
### Who can help?
@BenjaminBossan @5eqn
Hi, I would like to try out CorDA for my fine-tuning use case, but looking at the loss curves something seems to be going wrong, so I just wanted to verify that I implemented CorDA correctly.
This is the relevant code snippet from my script. I have a tokenized dataset which I wrap in a dataloader with a batch size of 1 to pass to the `preprocess_corda` function. Once `preprocess_corda` has finished computing, I can just instantiate the PEFT model as usual with the required config, correct?
Would greatly appreciate some feedback.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
# imports
import torch
from functools import partial
from datasets import load_dataset, interleave_datasets, DatasetDict
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
from peft import get_peft_model, LoraConfig
from peft.tuners.lora.corda import preprocess_corda
from peft.tuners.lora.config import CordaConfig
# functions
def _tokenize_fn(prompts, completions, tokenizer):
prompt_tokens = tokenizer(prompts, add_special_tokens=False)["input_ids"]
input_tokens = tokenizer([x+y for x, y in zip(prompts, completions)], add_special_tokens=False)["input_ids"]
input_tokens = [[tokenizer.bos_token_id]+x+[tokenizer.eos_token_id] for x in input_tokens]
prompt_length = [len(x)+1 for x in prompt_tokens] # +1 for the bos token
input_length = [len(x) for x in input_tokens]
return {"input_ids": input_tokens, "prompt_length": prompt_length, "input_length": input_length}
class _TokenizerPromptSource:
def __init__(self, tokenizer_path, space_after_prompt=True):
# import promptsource
from promptsource_custom.templates import DatasetTemplates
self.dataset_templates = DatasetTemplates
self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
self.space_after_prompt = space_after_prompt
def __call__(self, examples):
examples = [dict(zip(examples.keys(), e)) for e in zip(*examples.values())]
prompts, completions = zip(*[self.prompt.apply(e) for e in examples])
if self.space_after_prompt:
prompts = [p + " " for p in prompts]
return _tokenize_fn(prompts, completions, self.tokenizer)
class TokenizerWinogrande(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("winogrande", "winogrande_xl")["multiple_choice_simple"]
class TokenizerHellaswag(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("hellaswag")["multiple_choice_simple"]
class TokenizerArcChallenge(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("ai2_arc", "ARC-Challenge")["multiple_choice_simple"]
class TokenizerArcEasy(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("ai2_arc", "ARC-Easy")["multiple_choice_simple"]
class TokenizerPIQA(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("piqa")["multiple_choice_simple"]
class TokenizerSIQA(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("social_i_qa")["multiple_choice_simple"]
class TokenizerOpenBookQA(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("openbookqa", "main")["multiple_choice_simple"]
class TokenizerBoolQ(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("super_glue", "boolq")["multiple_choice_simple"]
class DataCollator:
def __init__(self, eos_token_id, max_length = None):
self.eos_token_id = eos_token_id
self.max_length = max_length
def __call__(self, batch):
batch = {k: [item[k] for item in batch] for k in batch[0]}
input_lengths = torch.stack(batch["input_length"])
prompt_lengths = torch.stack(batch["prompt_length"])
input_ids = torch.nn.utils.rnn.pad_sequence(batch["input_ids"], batch_first=True, padding_value=self.eos_token_id)
col_indices = torch.arange(input_ids.size(1)).unsqueeze(0)
attention_mask = col_indices < input_lengths.unsqueeze(1)
label_mask = torch.logical_or(col_indices < prompt_lengths.unsqueeze(1), ~attention_mask)
labels = input_ids.masked_fill(label_mask, -100)
if self.max_length is not None:
input_ids = input_ids[:, :self.max_length]
attention_mask = attention_mask[:, :self.max_length]
labels = labels[:, :self.max_length]
return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
# constants
CORDA = False
SEED = 0
BATCH_SIZE = 4
NUM_EPOCHS = 1
LEARNING_RATE = 5e-4
GRADIENT_ACCUMULATION_STEPS = 8
MODEL_NAME = "meta-llama/Llama-2-7b-hf"
MODEL_MAX_LENGTH = 1024
QA_DATASETS = [
"Rowan/hellaswag",
"allenai/winogrande",
"allenai/ai2_arc_challenge",
"allenai/ai2_arc_easy",
"ybisk/piqa",
"allenai/social_i_qa",
"allenai/openbookqa",
"boolq"
]
LOAD_DATASET_KWARGS = {
"Rowan/hellaswag": {"path": "Rowan/hellaswag"},
"allenai/winogrande": {"path": "allenai/winogrande", "name": "winogrande_xl"},
"allenai/ai2_arc_challenge": {"path": "allenai/ai2_arc", "name": "ARC-Challenge"},
"allenai/ai2_arc_easy": {"path": "allenai/ai2_arc", "name": "ARC-Easy"},
"ybisk/piqa": {"path": "ybisk/piqa"},
"allenai/social_i_qa": {"path": "allenai/social_i_qa"},
"allenai/openbookqa": {"path": "allenai/openbookqa", "name": "main"},
"boolq": {"path": "aps/super_glue", "name": "boolq"}
}
TOKENIZE_MAP = {
"Rowan/hellaswag": TokenizerHellaswag,
"allenai/winogrande": TokenizerWinogrande,
"allenai/ai2_arc_challenge": TokenizerArcChallenge,
"allenai/ai2_arc_easy": TokenizerArcEasy,
"ybisk/piqa": TokenizerPIQA,
"allenai/social_i_qa": TokenizerSIQA,
"allenai/openbookqa": TokenizerOpenBookQA,
"boolq": TokenizerBoolQ
}
# load model
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.cuda()
# load dataset
datasets = []
for dataset_name in QA_DATASETS:
tokenizer_cls = TOKENIZE_MAP[dataset_name]
tokenizer_wrapper = tokenizer_cls(tokenizer_path=MODEL_NAME)
load_dataset_kwargs = LOAD_DATASET_KWARGS[dataset_name]
if load_dataset_kwargs["path"] is not None:
load_dataset_kwargs["path"] = load_dataset_kwargs["path"]
datasets.append(load_dataset(**load_dataset_kwargs, trust_remote_code=True))
datasets[-1] = datasets[-1].map(tokenizer_wrapper, batched=True, remove_columns=datasets[-1]["train"].column_names)
datasets[-1].set_format(type="torch")
datasets[-1] = datasets[-1].shuffle(seed=SEED)
all_splits = set([n for ds in datasets for n in ds.keys()])
datasets = DatasetDict({split: interleave_datasets([ds[split] for ds in datasets if split in ds]) for split in all_splits})
data_collator = DataCollator(tokenizer_wrapper.tokenizer.eos_token_id, MODEL_MAX_LENGTH)
# get peft config
target_modules = [n for n, m in model.named_modules() if isinstance(m, torch.nn.Linear)]
if CORDA:
corda_config = CordaConfig(corda_method="ipm")
lora_config = LoraConfig(
init_lora_weights="corda",
target_modules=target_modules,
lora_alpha=1,
lora_dropout=0,
r=16,
corda_config=corda_config
)
sampled_dataset = datasets["train"].select(list(range(256)))
corda_data_loader = torch.utils.data.DataLoader(
sampled_dataset,
batch_size=1,
collate_fn=data_collator,
shuffle=True
)
def run_model(model, corda_data_loader):
for batch in corda_data_loader:
input_ids = batch["input_ids"]
input_ids = input_ids.to(model.device)
with torch.no_grad():
model(input_ids)
run_model = partial(run_model, model=model, corda_data_loader=corda_data_loader)
preprocess_corda(model, lora_config, run_model=run_model)
else:
lora_config = LoraConfig(
init_lora_weights=True,
target_modules=target_modules,
lora_alpha=1,
lora_dropout=0,
r=16
)
model = get_peft_model(model, lora_config)
training_args = TrainingArguments(
output_dir="output",
num_train_epochs=NUM_EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
seed=SEED,
learning_rate=LEARNING_RATE,
remove_unused_columns=False,
gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
report_to=[]
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=datasets["train"],
eval_dataset=datasets["validation"] if "validation" in datasets else None,
data_collator=data_collator
)
trainer.train()
```
### Expected behavior
I tried to follow the CorDA example in the documentation and thought it should work like this.
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2317/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2316
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2316/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2316/comments
|
https://api.github.com/repos/huggingface/peft/issues/2316/events
|
https://github.com/huggingface/peft/issues/2316
| 2,776,718,486
|
I_kwDOIf9iDM6lgVyW
| 2,316
|
peft with DinoV2 and tasktype feature extraction
|
{
"login": "createdaccountbecauseIwantgithubcopilot",
"id": 109659313,
"node_id": "U_kgDOBolEsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109659313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot",
"html_url": "https://github.com/createdaccountbecauseIwantgithubcopilot",
"followers_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/followers",
"following_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/following{/other_user}",
"gists_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/subscriptions",
"organizations_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/orgs",
"repos_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/repos",
"events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/events{/privacy}",
"received_events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-09T02:48:36
| 2025-01-09T14:11:54
| 2025-01-09T14:11:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
irrelevant.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoImageProcessor, Dinov2WithRegistersModel
from peft import LoraConfig, get_peft_model, TaskType
def setup_peft_model(model_name="facebook/dinov2-with-registers-large",
lora_r=8,
lora_alpha=32,
lora_dropout=0.1):
base_model = Dinov2WithRegistersModel.from_pretrained(model_name)
image_processor = AutoImageProcessor.from_pretrained(model_name)
peft_config = LoraConfig(
task_type=TaskType.FEATURE_EXTRACTION,
inference_mode=False,
r=lora_r,
lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
target_modules=["query", "key", "value"]
)
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()
return peft_model, image_processor
def process_image(model, processor, image_size=(518, 518)):
sample_input = torch.randn(1, 3, *image_size)
with torch.no_grad():
outputs = model(sample_input)
return outputs
def main():
model, processor = setup_peft_model()
outputs = process_image(model, processor)
print(f"Output shape: {outputs.last_hidden_state.shape}")
if __name__ == "__main__":
main()
```
Error: TypeError: Dinov2WithRegistersModel.forward() got an unexpected keyword argument 'input_ids'
### Expected behavior
I expect the forward pass to work.
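For reference, a minimal workaround sketch (my assumption of a workaround, not an official fix): dropping `task_type` makes `get_peft_model` return the generic `PeftModel` wrapper, which simply forwards keyword arguments such as `pixel_values` to the vision backbone instead of injecting the text-style `input_ids` that `PeftModelForFeatureExtraction` expects.
```python
import torch
from transformers import Dinov2WithRegistersModel
from peft import LoraConfig, get_peft_model

base_model = Dinov2WithRegistersModel.from_pretrained("facebook/dinov2-with-registers-large")
# No task_type: the generic PeftModel wrapper passes kwargs straight through to the backbone
peft_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "key", "value"],
)
peft_model = get_peft_model(base_model, peft_config)

sample_input = torch.randn(1, 3, 518, 518)
with torch.no_grad():
    outputs = peft_model(pixel_values=sample_input)
print(outputs.last_hidden_state.shape)
```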
|
{
"login": "createdaccountbecauseIwantgithubcopilot",
"id": 109659313,
"node_id": "U_kgDOBolEsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109659313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot",
"html_url": "https://github.com/createdaccountbecauseIwantgithubcopilot",
"followers_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/followers",
"following_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/following{/other_user}",
"gists_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/subscriptions",
"organizations_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/orgs",
"repos_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/repos",
"events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/events{/privacy}",
"received_events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2316/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2315
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2315/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2315/comments
|
https://api.github.com/repos/huggingface/peft/issues/2315/events
|
https://github.com/huggingface/peft/issues/2315
| 2,776,494,295
|
I_kwDOIf9iDM6lffDX
| 2,315
|
Prefix Tuning dimension error with Qwen2 and missing vocab_size for PaliGemma2
|
{
"login": "Florian-Dreyer",
"id": 64322175,
"node_id": "MDQ6VXNlcjY0MzIyMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/64322175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Florian-Dreyer",
"html_url": "https://github.com/Florian-Dreyer",
"followers_url": "https://api.github.com/users/Florian-Dreyer/followers",
"following_url": "https://api.github.com/users/Florian-Dreyer/following{/other_user}",
"gists_url": "https://api.github.com/users/Florian-Dreyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Florian-Dreyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Florian-Dreyer/subscriptions",
"organizations_url": "https://api.github.com/users/Florian-Dreyer/orgs",
"repos_url": "https://api.github.com/users/Florian-Dreyer/repos",
"events_url": "https://api.github.com/users/Florian-Dreyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Florian-Dreyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 15
| 2025-01-08T22:52:17
| 2025-02-25T15:04:15
| 2025-02-25T15:04:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
PEFT: 0.14.0
Transformers: 4.48.0.dev0
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
For Qwen we get the following error:
IndexError: Caught IndexError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 84, in _worker
output = module(*input, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/peft/peft_model.py", line 1755, in forward
return self.base_model(input_ids=input_ids, inputs_embeds=inputs_embeds, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1682, in forward
position_ids, rope_deltas = self.get_rope_index(
File "/home/{user_name}/venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1486, in get_rope_index
input_ids = input_ids[attention_mask[i] == 1]
IndexError: The shape of the mask [172] at index 0 does not match the shape of the indexed tensor [122] at index 0
And for PaliGemma2 this one:
AttributeError Traceback (most recent call last)
Cell In[68], line 8
6 tokenizer = processor.tokenizer
7 # Apply PEFT model adaptation
----> 8 peft_model = get_peft_model(model, peft_config)
10 # Print trainable parameters
11 peft_model.print_trainable_parameters()
File ~/venv/lib/python3.10/site-packages/peft/mapping.py:222, in get_peft_model(model, peft_config, adapter_name, mixed, autocast_adapter_dtype, revision, low_cpu_mem_usage)
220 if peft_config.is_prompt_learning:
221 peft_config = _prepare_prompt_learning_config(peft_config, model_config)
--> 222 return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
223 model,
224 peft_config,
225 adapter_name=adapter_name,
226 autocast_adapter_dtype=autocast_adapter_dtype,
227 low_cpu_mem_usage=low_cpu_mem_usage,
228 )
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:1684, in PeftModelForCausalLM.__init__(self, model, peft_config, adapter_name, **kwargs)
1681 def __init__(
1682 self, model: torch.nn.Module, peft_config: PeftConfig, adapter_name: str = "default", **kwargs
1683 ) -> None:
-> 1684 super().__init__(model, peft_config, adapter_name, **kwargs)
1685 self.base_model_prepare_inputs_for_generation = self.base_model.prepare_inputs_for_generation
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:170, in PeftModel.__init__(self, model, peft_config, adapter_name, autocast_adapter_dtype, low_cpu_mem_usage)
168 self._peft_config = {adapter_name: peft_config}
169 self.base_model = model
--> 170 self.add_adapter(adapter_name, peft_config, low_cpu_mem_usage=low_cpu_mem_usage)
171 else:
172 self._peft_config = None
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:958, in PeftModel.add_adapter(self, adapter_name, peft_config, low_cpu_mem_usage)
955 dict_config = self.config
957 peft_config = _prepare_prompt_learning_config(peft_config, dict_config)
--> 958 self._setup_prompt_encoder(adapter_name)
959 elif peft_config.is_adaption_prompt:
960 self.base_model.add_adapter(adapter_name, peft_config)
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:642, in PeftModel._setup_prompt_encoder(self, adapter_name)
635 for named_param, value in list(transformer_backbone.named_parameters()):
636 # for ZeRO-3, the tensor is sharded across accelerators and deepspeed modifies it to a tensor with shape
637 # [0] the actual unsharded shape is stored in "ds_shape" attribute special handling is needed in case
638 # the model is initialized in deepspeed.zero.Init() context or HfDeepSpeedConfig has been called before
639 # For reference refer to issue: https://github.com/huggingface/peft/issues/996
640 deepspeed_distributed_tensor_shape = getattr(value, "ds_shape", None)
--> 642 if value.shape[0] == self.base_model.config.vocab_size or (
643 deepspeed_distributed_tensor_shape is not None
644 and deepspeed_distributed_tensor_shape[0] == self.base_model.config.vocab_size
645 ):
646 word_embeddings = transformer_backbone.get_submodule(named_param.replace(".weight", ""))
647 break
File ~/venv/lib/python3.10/site-packages/transformers/configuration_utils.py:211, in PretrainedConfig.__getattribute__(self, key)
209 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
210 key = super().__getattribute__("attribute_map")[key]
--> 211 return super().__getattribute__(key)
AttributeError: 'PaliGemmaConfig' object has no attribute 'vocab_size'
You can find the notebook to replicate the errors here:
https://github.com/Florian-Dreyer/PEFT_BUG/blob/main/prefix_tuning_peft.ipynb
Just execute the cells to get the errors.
### Expected behavior
We would expect the models to be able to process the input. We tried just calling model(**inputs) but ran into the same error with Qwen. Note: The dimension difference is exactly the prefix length.
So the question is, how can we get the models to run? Is PaliGemma even supported?
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2315/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2415
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2415/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2415/comments
|
https://api.github.com/repos/huggingface/peft/issues/2415/events
|
https://github.com/huggingface/peft/issues/2415
| 2,905,929,237
|
I_kwDOIf9iDM6tNPYV
| 2,415
|
size mismatch for lm_head when finetuning Qwen2.5
|
{
"login": "minmie",
"id": 40080081,
"node_id": "MDQ6VXNlcjQwMDgwMDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minmie",
"html_url": "https://github.com/minmie",
"followers_url": "https://api.github.com/users/minmie/followers",
"following_url": "https://api.github.com/users/minmie/following{/other_user}",
"gists_url": "https://api.github.com/users/minmie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minmie/subscriptions",
"organizations_url": "https://api.github.com/users/minmie/orgs",
"repos_url": "https://api.github.com/users/minmie/repos",
"events_url": "https://api.github.com/users/minmie/events{/privacy}",
"received_events_url": "https://api.github.com/users/minmie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-03-10T02:45:29
| 2025-03-10T02:45:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers version: 4.49.0
Platform: Linux-6.6.0-72.0.0.64.oe2403.x86_64-x86_64-with-glibc2.38
Python version: 3.10.16
Huggingface_hub version: 0.29.1
Safetensors version: 0.5.3
Accelerate version: 1.4.0
Accelerate config: not found
DeepSpeed version: not installed
PyTorch version (GPU?): 2.2.2+cu121 (True)
Tensorflow version (GPU?): not installed (NA)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using distributed or parallel set-up in script?:
Using GPU in script?:
GPU type: NVIDIA L4
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
I load an adapter for Qwen/Qwen2.5-0.5B using the following code and an error occurs:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
# peft_model_id = args.output_dir
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
peft_model_id,
device_map="auto",
torch_dtype=torch.float16
)
```
Error info as follows:
```python
Sliding Window Attention is enabled but not implemented for `sdpa`; unexpected results may be encountered.
Traceback (most recent call last):
File "/home/chenjq/.pycharm_helpers/pydev/pydevd.py", line 1500, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/home/chenjq/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/chenjq/pythonWork/nlp/test14.py", line 11, in <module>
model = AutoPeftModelForCausalLM.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 581, in from_pretrained
load_result = model.load_adapter(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/peft_model.py", line 1239, in load_adapter
load_result = set_peft_model_state_dict(
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 451, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False)
File "/home/chenjq/miniconda3/envs/nlp/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PeftModelForCausalLM:
size mismatch for base_model.model.lm_head.modules_to_save.default.weight: copying a param with shape torch.Size([151936, 896]) from checkpoint, the shape in current model is torch.Size([151665, 896]).
Process finished with exit code 1
```
However, if I use the following code to load the model, everything works fine:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_name ='/home/models/qwen/Qwen2.5-0.5B'
adapter_model_name = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"
model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(model, adapter_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
```
Some info from [here](https://github.com/huggingface/transformers/issues/36550#issuecomment-2708336059) that may help:
Hi everyone! I did some research and found out that the error occurs because len(tokenizer) (151665) and the embedding size (151936) of Qwen/Qwen2.5-0.5B do not match. _BaseAutoPeftModel.from_pretrained resizes the base model embeddings to match the tokenizer ([here](https://github.com/huggingface/peft/blob/8edaae9460e4b76bce9431dc187402178ff7b689/src/peft/auto.py#L137)) and as a result, it is unable to load the saved weights. I think a possible solution might be to only resize the base model embeddings if the tokenizer size differs from the base tokenizer size. What do you think?
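As a sketch of that idea (my assumption of what the check could look like, not current PEFT code), the resize would only happen when the adapter's saved tokenizer actually differs in size from the base model's own tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "Qwen/Qwen2.5-0.5B"
adapter_path = "/home/chenjq/pythonWork/nlp/Qwen2.5-0.5B-SFT-Capybara/checkpoint-31"

base_tokenizer = AutoTokenizer.from_pretrained(base_model_name)
adapter_tokenizer = AutoTokenizer.from_pretrained(adapter_path)

model = AutoModelForCausalLM.from_pretrained(base_model_name)
if len(adapter_tokenizer) != len(base_tokenizer):
    # only resize when fine-tuning actually changed the tokenizer,
    # instead of unconditionally shrinking the embeddings to len(tokenizer)
    model.resize_token_embeddings(len(adapter_tokenizer))

model = PeftModel.from_pretrained(model, adapter_path)
```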
The adapter was trained using the following code:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from peft import LoraConfig
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
dataset = load_dataset("trl-lib/Capybara", split="train")
dataset = dataset.select(range(500))
MODEL_ID = 'Qwen/Qwen2.5-0.5B'
peft_config = LoraConfig(
r=16,
lora_alpha=32,
lora_dropout=0.05,
target_modules="all-linear",
modules_to_save=["lm_head", "embed_token"],
task_type="CAUSAL_LM",
)
args = SFTConfig(
output_dir="Qwen2.5-0.5B-SFT-Capybara", # directory to save and repository id
num_train_epochs=1, # number of training epochs
per_device_train_batch_size=4, # batch size per device during training
gradient_accumulation_steps=4, # number of steps before performing a backward/update pass
gradient_checkpointing=True, # use gradient checkpointing to save memory
optim="adamw_torch_fused", # use fused adamw optimizer
logging_steps=10, # log every 10 steps
save_strategy="epoch", # save checkpoint every epoch
bf16=True, # use bfloat16 precision
tf32=True, # use tf32 precision
learning_rate=2e-4, # learning rate, based on QLoRA paper
max_grad_norm=0.3, # max gradient norm based on QLoRA paper
warmup_ratio=0.03, # warmup ratio based on QLoRA paper
lr_scheduler_type="constant", # use constant learning rate scheduler
push_to_hub=False, # push model to hub
# report_to="tensorboard", # report metrics to tensorboard
)
trainer = SFTTrainer(
MODEL_ID,
train_dataset=dataset,
args=args,
peft_config=peft_config
)
trainer.train()
print('end')
```
### Expected behavior
Hope the model can predict normally.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2415/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2413
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2413/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2413/comments
|
https://api.github.com/repos/huggingface/peft/issues/2413/events
|
https://github.com/huggingface/peft/issues/2413
| 2,901,962,025
|
I_kwDOIf9iDM6s-G0p
| 2,413
|
`LoraConfig` multiple properties should be unified
|
{
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 9
| 2025-03-07T04:14:24
| 2025-03-10T14:59:51
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
@BenjaminBossan I am trying to add dynamic Lora support to both vLLM and SGLang as LoraConfig already supports this dynamic control via the following variables:
- `rank_pattern`: regex matching that controls which modules get different `r`/`rank` values
- `exclude_modules`: regex that controls which modules are excluded from LoRA entirely
- `alpha_pattern`: regex matching for `alpha` overrides; exactly the same as `rank_pattern` but a different property
Nothing is wrong with them individually, but together they become unnecessarily detached, which not only adds code cost but also hurts dynamic control efficiency.
GPTQModel uses a single `dynamic`: Dict[str, Dict[]] where the `str` is a regex with an optional `+:` (positive) prefix or `-:` (negative) prefix.
The dict value is the property override in `property: value` format.
Example as applied to PEFT (Proposal):
```
# implicit +: prefix if not used
# prefixes are stripped before the regex is applied
"mlp\.down_proj": { "r": 128 } # implicit positive
"+:mlp\.down_proj": { "r": 256 } # explicit positive
"-:mlp\.gate_proj": {} # negative
```
This simple control allows 3 states.
- Positive match == override any property values in base config (LoraConfig).
- Negative match == skip this module for LoRA (no LoraConfig at all)
- No match == no pattern matched, so the base LoraConfig is used.
This single control replaces all existing PEFT controls with the same functionality while allowing ALL properties to be dynamically overridden (if necessary) without any additional APIs/LoraConfig vars. As it exists, you need to add code and logic to every LoraConfig property that participates in dynamic override/control.
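As a rough sketch (my own illustration of how such a unified override could be resolved per module; none of this is existing PEFT or GPTQModel code), the resolution logic stays tiny even though it covers every property:
```python
import re
from typing import Any, Dict, Optional

# illustrative dynamic dict; module names and r values are placeholders
dynamic: Dict[str, Dict[str, Any]] = {
    "mlp\\.down_proj": {"r": 128},   # implicit positive: override r for matching modules
    "-:mlp\\.gate_proj": {},          # negative: skip LoRA for matching modules entirely
}

def resolve(module_name: str) -> Optional[Dict[str, Any]]:
    """None -> module excluded from LoRA; {} -> base LoraConfig unchanged;
    non-empty dict -> properties to override on top of the base LoraConfig."""
    for pattern, overrides in dynamic.items():
        negative = pattern.startswith("-:")
        cleaned = pattern.removeprefix("+:").removeprefix("-:")
        if re.search(cleaned, module_name):
            return None if negative else overrides
    return {}

print(resolve("model.layers.0.mlp.down_proj"))    # {'r': 128}
print(resolve("model.layers.0.mlp.gate_proj"))    # None (no LoRA)
print(resolve("model.layers.0.self_attn.q_proj")) # {} (base LoraConfig)
```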
Basically I want PEFT's LoraConfig to be the clean standard for vLLM and SGLang when it comes to dynamic control. Having a unified `dynamic` override system makes everyone's life much easier and at the same time eliminates having to write code each time a new LoraConfig property comes into play.
Let me know what you think. I am willing to spend time working on it. You can also reach me at [email protected] and on [X: qubitium](https://x.com/qubitium). I really would love to chat with you for like 15 minutes to ping-pong this idea with you.
CC: @SunMarc @MekkCyber
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2413/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2412
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2412/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2412/comments
|
https://api.github.com/repos/huggingface/peft/issues/2412/events
|
https://github.com/huggingface/peft/issues/2412
| 2,901,275,403
|
I_kwDOIf9iDM6s7fML
| 2,412
|
Lora_B weight becomes 0 when using AutoModel
|
{
"login": "makcedward",
"id": 36614806,
"node_id": "MDQ6VXNlcjM2NjE0ODA2",
"avatar_url": "https://avatars.githubusercontent.com/u/36614806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makcedward",
"html_url": "https://github.com/makcedward",
"followers_url": "https://api.github.com/users/makcedward/followers",
"following_url": "https://api.github.com/users/makcedward/following{/other_user}",
"gists_url": "https://api.github.com/users/makcedward/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makcedward/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makcedward/subscriptions",
"organizations_url": "https://api.github.com/users/makcedward/orgs",
"repos_url": "https://api.github.com/users/makcedward/repos",
"events_url": "https://api.github.com/users/makcedward/events{/privacy}",
"received_events_url": "https://api.github.com/users/makcedward/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 0
| 2025-03-06T19:45:29
| 2025-03-06T19:45:29
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers version: 4.49.0
peft version: 0.14.0
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModel, AutoModelForCausalLM
from peft import PeftModel
base_model_id = "meta-llama/Llama-3.2-1B"
adapter_id = "makcedward/Llama-3.2-1B-Instruct-LoRA-Adapter"
auto_model = PeftModel.from_pretrained(
AutoModel.from_pretrained(
base_model_id,
),
adapter_id
)
auto_casual_model = PeftModel.from_pretrained(
AutoModelForCausalLM.from_pretrained(
base_model_id,
),
adapter_id
)
print("Auto Model")
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[-0.0168, 0.0056, -0.0009, ..., 0.0149, -0.0161, -0.0064],
print(auto_model.base_model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[0., 0., 0., ..., 0., 0., 0.],
print("AutoModelForCausalLM")
print(auto_casual_model.base_model.model.model.layers[0].self_attn.q_proj.lora_A.default.weight)
# tensor([[ 1.5867e-02, 2.7307e-02, -1.8503e-02, ..., -1.2035e-02,
print(auto_casual_model.base_model.model.model.layers[0].self_attn.q_proj.lora_B.default.weight)
# tensor([[-7.1123e-04, -4.3834e-03, -1.7415e-03, ..., 4.3514e-03,
```
### Expected behavior
Able to load LoRA weights by using AutoModel
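My guess (an assumption, not confirmed) is that the adapter was trained on the `AutoModelForCausalLM` wrapper, so its keys carry an extra `model.` prefix that the bare `AutoModel` backbone does not have; `load_state_dict(..., strict=False)` then silently skips the LoRA weights and `lora_B` keeps its zero initialization. A small diagnostic sketch to compare the key layouts (assuming the adapter file uses the PEFT default name `adapter_model.safetensors`):
```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from transformers import AutoModel, AutoModelForCausalLM

adapter_id = "makcedward/Llama-3.2-1B-Instruct-LoRA-Adapter"
adapter_file = hf_hub_download(adapter_id, "adapter_model.safetensors")
print(next(iter(load_file(adapter_file))))  # key layout stored in the adapter checkpoint

# compare against the parameter names of the two base model wrappers
print(next(iter(AutoModel.from_pretrained("meta-llama/Llama-3.2-1B").state_dict())))
print(next(iter(AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B").state_dict())))
```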
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2412/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2410
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2410/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2410/comments
|
https://api.github.com/repos/huggingface/peft/issues/2410/events
|
https://github.com/huggingface/peft/issues/2410
| 2,899,373,069
|
I_kwDOIf9iDM6s0OwN
| 2,410
|
running forward loop using get_peft_model disables requires_grad on output
|
{
"login": "Hamidreza3252",
"id": 27887474,
"node_id": "MDQ6VXNlcjI3ODg3NDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/27887474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hamidreza3252",
"html_url": "https://github.com/Hamidreza3252",
"followers_url": "https://api.github.com/users/Hamidreza3252/followers",
"following_url": "https://api.github.com/users/Hamidreza3252/following{/other_user}",
"gists_url": "https://api.github.com/users/Hamidreza3252/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hamidreza3252/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hamidreza3252/subscriptions",
"organizations_url": "https://api.github.com/users/Hamidreza3252/orgs",
"repos_url": "https://api.github.com/users/Hamidreza3252/repos",
"events_url": "https://api.github.com/users/Hamidreza3252/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hamidreza3252/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-03-06T05:12:42
| 2025-03-06T15:35:13
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi,
I would like to report a recent issue I have been facing, but I am not sure whether it is a bug or I am doing something wrong in the process. The steps to reproduce it are simple. The issue happens when I try to convert the **Qwen2-VL-2B-Instruct** model into a PEFT model using the `get_peft_model` method. Simply load the model using the sample code in https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct and convert it to a PEFT model using a typical **8bit** LoraConfig with just `target_modules=["q_proj", "v_proj"]`. Then run a forward call on the model with a dummy input, such as `input_ids = torch.zeros((4, 1247)).to(device)`. When I inspect `requires_grad` on the `logits` attribute of the output, it is False, meaning that I cannot run backward from that output. This issue has been puzzling me for a while. I would appreciate it if you could help me with a solution or advice on how to address it properly.
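A minimal sketch of the setup described above, with the commonly recommended `prepare_model_for_kbit_training` call added before `get_peft_model` (an assumption on my part, not something verified to fix this particular issue):
```python
import torch
from transformers import Qwen2VLForConditionalGeneration, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)  # casts norms to fp32 and enables input grads
model = get_peft_model(
    model,
    LoraConfig(r=8, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

input_ids = torch.zeros((4, 1247), dtype=torch.long, device=model.device)
outputs = model(input_ids=input_ids)
print(outputs.logits.requires_grad)
```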
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2410/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2407
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2407/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2407/comments
|
https://api.github.com/repos/huggingface/peft/issues/2407/events
|
https://github.com/huggingface/peft/issues/2407
| 2,895,061,583
|
I_kwDOIf9iDM6sjyJP
| 2,407
|
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
|
{
"login": "maxliang114514",
"id": 196797831,
"node_id": "U_kgDOC7rlhw",
"avatar_url": "https://avatars.githubusercontent.com/u/196797831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxliang114514",
"html_url": "https://github.com/maxliang114514",
"followers_url": "https://api.github.com/users/maxliang114514/followers",
"following_url": "https://api.github.com/users/maxliang114514/following{/other_user}",
"gists_url": "https://api.github.com/users/maxliang114514/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxliang114514/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxliang114514/subscriptions",
"organizations_url": "https://api.github.com/users/maxliang114514/orgs",
"repos_url": "https://api.github.com/users/maxliang114514/repos",
"events_url": "https://api.github.com/users/maxliang114514/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxliang114514/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 6
| 2025-03-04T18:09:43
| 2025-03-10T11:17:16
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
**When I attempted to swap out the LoRA configuration in Q-LoRA (see qlora.py in _https://github.com/artidoro/qlora_) for VeRA, I ran into the following error:**
Traceback (most recent call last):
File "qvera.py", line 859, in <module>
train()
File "qvera.py", line 821, in train
train_result = trainer.train()
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 1539, in train
return inner_training_loop(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 1809, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 2654, in training_step
loss = self.compute_loss(model, inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/trainer.py", line 2679, in compute_loss
outputs = model(**inputs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/peft_model.py", line 1644, in forward
return self.base_model(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 197, in forward
return self.model.forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 806, in forward
outputs = self.model(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 685, in forward
layer_outputs = torch.utils.checkpoint.checkpoint(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 107, in forward
outputs = run_function(*args)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 681, in custom_forward
return module(*inputs, output_attentions, None)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 305, in forward
query_states = self.q_proj(hidden_states)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/data/lnj/miniconda3/envs/qlora/lib/python3.8/site-packages/peft/tuners/vera/layer.py", line 287, in forward
result = result + lambda_b * F.linear(lambda_d * F.linear(dropout(x), sliced_A), sliced_B)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:3! (when checking argument for argument mat2 in method wrapper_CUDA_mm)
**However, with the original settings, everything was trainable. My GPU specs are as follows:**
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.135 Driver Version: 550.135 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:02:00.0 Off | N/A |
| 22% 19C P8 11W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 2080 Ti Off | 00000000:03:00.0 Off | N/A |
| 22% 19C P8 21W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 2 NVIDIA GeForce RTX 2080 Ti Off | 00000000:82:00.0 Off | N/A |
| 22% 20C P8 17W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 3 NVIDIA GeForce RTX 2080 Ti Off | 00000000:83:00.0 Off | N/A |
| 22% 19C P8 8W / 250W | 1MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
**Is this an issue specific to Vera's unique characteristics? Given the scarcity of resources on Vera, I'd greatly appreciate any help with this problem, thank you!**
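My current reading (an assumption, not verified against the VeRA implementation in detail) is that VeRA shares one pair of projection matrices (`vera_A`/`vera_B`) across all adapted layers, so when accelerate spreads the base model over the four GPUs those shared matrices sit on one device while some `q_proj` layers sit on another. A sketch of a workaround is to keep the whole quantized model on a single GPU:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # placeholder; whatever base model qlora.py actually loads
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map={"": 0},     # pin every module to cuda:0 instead of device_map="auto"
)
```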
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2407/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2405
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2405/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2405/comments
|
https://api.github.com/repos/huggingface/peft/issues/2405/events
|
https://github.com/huggingface/peft/issues/2405
| 2,890,200,666
|
I_kwDOIf9iDM6sRPZa
| 2,405
|
SafetensorError when Merging LoRA Weights
|
{
"login": "Nothern-ai",
"id": 143473220,
"node_id": "U_kgDOCI06RA",
"avatar_url": "https://avatars.githubusercontent.com/u/143473220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nothern-ai",
"html_url": "https://github.com/Nothern-ai",
"followers_url": "https://api.github.com/users/Nothern-ai/followers",
"following_url": "https://api.github.com/users/Nothern-ai/following{/other_user}",
"gists_url": "https://api.github.com/users/Nothern-ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nothern-ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nothern-ai/subscriptions",
"organizations_url": "https://api.github.com/users/Nothern-ai/orgs",
"repos_url": "https://api.github.com/users/Nothern-ai/repos",
"events_url": "https://api.github.com/users/Nothern-ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nothern-ai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-03-03T05:22:05
| 2025-03-03T10:11:44
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Original Working Environment: Python 3.8, transformers==4.46.0.dev0, safetensors==0.4.4, peft==0.12.0, trl==0.10.1
New Environment with Issue: transformers==4.45.2, safetensors==0.4.4, peft==0.12.0, trl==0.10.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
When migrating from the original environment to a new machine with slightly different package versions, I encountered an error during the model merging process.
My workflow involves:
1. Saving LoRA weights
2. Merging these weights with the base model
The error occurs specifically when loading the safetensors files after merging.
Reproduction steps:
1. Directly save the LoRA weights without training (this step succeeds)
2. Attempt to merge the saved weights with the original model
3. The merge fails with the error below
```
# train_critic.py
import os
import time
import shutil
import argparse
import torch
import torch.distributed as dist
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
BitsAndBytesConfig,
)
from datasets import load_dataset
from trl import DPOTrainer, DPOConfig
from peft import LoraConfig, PeftModel
import wandb
from datetime import datetime
def print_rank_0(message):
if dist.get_rank() == 0:
print(message)
def main():
# ------------- Parse Arguments -------------
parser = argparse.ArgumentParser()
parser.add_argument("--epoch", type=int, required=True, help="Current outer training iteration (which round)")
parser.add_argument("--pref_dir", type=str, required=True, help="Folder for storing the preference dataset")
parser.add_argument("--weights_dir", type=str, required=True, help="Folder for saving and loading weights")
parser.add_argument("--train_epochs", type=int, default=1, help="Number of epochs to run in this DPO fine-tuning")
parser.add_argument("--beta", type=float, default=0.2, help="Beta hyperparameter for DPO")
parser.add_argument("--learning_rate", type=float, default=5e-6, help="Learning rate")
parser.add_argument("--batch_size", type=int, default=1, help="Batch Size")
args = parser.parse_args()
# ------------- Distributed Initialization -------------
local_rank = int(os.environ.get("LOCAL_RANK", -1))
if local_rank >= 0:
torch.cuda.set_device(local_rank)
dist.init_process_group(
backend='nccl',
init_method='env://',
world_size=int(os.environ.get("WORLD_SIZE", 1)),
rank=int(os.environ.get("RANK", 0))
)
print_rank_0(f"CUDA_VISIBLE_DEVICES: {os.environ.get('CUDA_VISIBLE_DEVICES')}")
print_rank_0(f"LOCAL_RANK: {os.environ.get('LOCAL_RANK')}")
print_rank_0(f"WORLD_SIZE: {os.environ.get('WORLD_SIZE')}")
# ------------- config -------------
epoch = args.epoch
weights_dir = args.weights_dir
pref_dir = args.pref_dir
batch_size = args.batch_size
base_model_path = "meta-llama/Llama-3.1-8B-Instruct"
print("base_model_path:", base_model_path)
data_path = os.path.join(pref_dir, f"critic_{epoch}.jsonl")
output_model_path = os.path.join(weights_dir, f"critic_{epoch}")
os.makedirs(output_model_path, exist_ok=True)
print_rank_0(f"Loading base model from: {base_model_path}")
model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map={'': torch.cuda.current_device()}
# device_map={'': torch.cuda.current_device()} if local_rank >= 0 else "auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model_path, use_fast=False)
model.generation_config = GenerationConfig(
max_new_tokens=512,
temperature=0.7,
do_sample=True,
)
# padding_side/pad_token
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.padding_side = 'right'
tokenizer.pad_token = '[PAD]'
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id
with torch.no_grad():
model.resize_token_embeddings(len(tokenizer))
print_rank_0(f"Loading dataset from: {data_path}")
dataset = load_dataset('json', data_files=data_path)['train']
def convert_format(example):
messages = example['messages']
formatted = "<|begin_of_text|>"
# system
system_msg = messages[0]
formatted += f"<|start_header_id|>system<|end_header_id|>\n\n{system_msg['content']}<|eot_id|>"
# user
user_msg = messages[1]
formatted += f"<|start_header_id|>user<|end_header_id|>\n\n{user_msg['content']}<|eot_id|>"
# assistant
formatted += "<|start_header_id|>assistant<|end_header_id|>\n\n"
chosen_response = example['chosen'] + tokenizer.eos_token
rejected_response = example['rejected'] + tokenizer.eos_token
return {
"prompt": formatted,
"chosen": chosen_response,
"rejected": rejected_response
}
train_dataset = dataset.map(
convert_format,
remove_columns=dataset.column_names,
load_from_cache_file=False
)
base_lr = args.learning_rate
scaled_lr = base_lr * dist.get_world_size() * batch_size
warmup_steps = 100
dpo_config = DPOConfig(
beta=args.beta,
warmup_steps=warmup_steps,
weight_decay=0.01,
learning_rate=scaled_lr,
rpo_alpha=1.0,
# lr_scheduler_type="cosine",
output_dir=output_model_path,
num_train_epochs=args.train_epochs,
per_device_train_batch_size=batch_size,
fp16=False,
bf16=True,
logging_steps=10,
save_strategy="no",
save_total_limit=1,
report_to="none",
ddp_backend='nccl',
remove_unused_columns=False,
dataloader_drop_last=True,
max_length=2048,
max_prompt_length=2048,
local_rank=local_rank,
)
# LoRA
peft_config = LoraConfig(
r=256,
lora_alpha=32,
target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
lora_dropout=0.0,
bias="none",
task_type="CAUSAL_LM",
)
trainer = DPOTrainer(
model=model,
args=dpo_config,
train_dataset=train_dataset,
tokenizer=tokenizer,
peft_config=peft_config,
)
trainer.train()
# ------------- merge LoRA -------------
if dist.get_rank() == 0:
lora_weights_path = os.path.join(output_model_path, "lora_weights")
trainer.model.save_pretrained(lora_weights_path)
# print("lora weight saved")
# trainer.model.save_pretrained(lora_weights_path, safe_serialization=False)
print("lora weight saved")
base_merged_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
device_map=None,
low_cpu_mem_usage=False,
)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
tokenizer.pad_token = '[PAD]'
base_merged_model.config.pad_token_id = tokenizer.pad_token_id
base_merged_model.config.eos_token_id = tokenizer.eos_token_id
with torch.no_grad():
base_merged_model.resize_token_embeddings(len(tokenizer))
peft_model = PeftModel.from_pretrained(
base_merged_model,
lora_weights_path,
device_map=None,
)
merged_model = peft_model.merge_and_unload()
# save
print_rank_0(f"Saving merged model to: {output_model_path}")
merged_model.save_pretrained(output_model_path)
print_rank_0("Model saved successfully")
tokenizer.save_pretrained(output_model_path)
# delete lora weights
shutil.rmtree(lora_weights_path)
dist.barrier(device_ids=[local_rank] if local_rank >= 0 else None)
print_rank_0("DPO Training complete.")
dist.destroy_process_group()
if __name__ == "__main__":
main()
```
When trying to skip saving the LoRA weights and directly merging them, the merge operation succeeds
```
peft_model = trainer.model
merged_model = peft_model.merge_and_unload()
print_rank_0(f"Saving merged model to: {output_model_path}")
merged_model.save_pretrained(output_model_path)
tokenizer.save_pretrained(output_model_path)
print_rank_0("Merged model saved successfully")
```
However, attempting to load the merged safetensors weights later with AutoModelForCausalLM.from_pretrained results in error 2.
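As a diagnostic sketch (my assumption: `MetadataIncompleteBuffer` typically means the `.safetensors` file on disk is truncated, e.g. the write was interrupted or another rank read it too early), the expected file size can be recomputed from the header and compared with what is actually on disk:
```python
import json
import os
import struct

def check_safetensors(path):
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))   # 8-byte little-endian header length
        header = json.loads(f.read(header_len))
    last_offset = max(
        (v["data_offsets"][1] for k, v in header.items() if k != "__metadata__"),
        default=0,
    )
    expected = 8 + header_len + last_offset
    print(f"{path}: {size} bytes on disk, expected at least {expected} bytes")

# check_safetensors("critic_1/model-00001-of-00007.safetensors")  # hypothetical shard name
```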
### Expected behavior
Error 1 (save LoRA weights and merge):
> 100%|██████████| 1/1 [00:01<00:00, 1.91s/it]
> 100%|██████████| 1/1 [00:01<00:00, 1.92s/it]
> /home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/utils/save_and_load.py:232: UserWarning: Setting `save_embedding_layers` to `True` as the embedding layer has been resized during finetuning.
> warnings.warn(
> lora weight saved
>
> Loading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]
> Loading checkpoint shards: 25%|██▌ | 1/4 [00:00<00:02, 1.28it/s]
> Loading checkpoint shards: 50%|█████ | 2/4 [00:01<00:01, 1.32it/s]
> Loading checkpoint shards: 75%|███████▌ | 3/4 [00:02<00:00, 1.31it/s]
> Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.74it/s]
> Loading checkpoint shards: 100%|██████████| 4/4 [00:02<00:00, 1.55it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/users/w/ac/train/train_critic.py", line 249, in <module>
> [rank0]: main()
> [rank0]: File "/users/w/ac/train/train_critic.py", line 225, in main
> [rank0]: peft_model = PeftModel.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/peft_model.py", line 545, in from_pretrained
> [rank0]: model.load_adapter(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/peft_model.py", line 1113, in load_adapter
> [rank0]: adapters_weights = load_peft_weights(model_id, device=torch_device, **hf_hub_download_kwargs)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/peft/utils/save_and_load.py", line 486, in load_peft_weights
> [rank0]: adapters_weights = safe_load_file(filename, device=device)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/safetensors/torch.py", line 311, in load_file
> [rank0]: with safe_open(filename, framework="pt", device=device) as f:
> [rank0]: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
> E0302 21:17:38.377842 2650981 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2651079) of binary: /home//miniconda3/envs/py39env/bin/python
> Traceback (most recent call last):
> File "/home//miniconda3/envs/py39env/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
> return f(*args, **kwargs)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 919, in main
> run(args)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 910, in run
> elastic_launch(
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
Error 2 (directly merge, then load the model after merging):
> CUDA_VISIBLE_DEVICES: 1
> LOCAL_RANK: 0
> WORLD_SIZE: 1
> base_model_path: /train/runs/301_wd/weights/_1
> Loading base model from: /train/runs/301_wd/weights/_1
>
> Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
> Loading checkpoint shards: 0%| | 0/7 [00:00<?, ?it/s]
> [rank0]: Traceback (most recent call last):
> [rank0]: File "/train/train_.py", line 216, in <module>
> [rank0]: main()
> [rank0]: File "/train/train_.py", line 91, in main
> [rank0]: model = AutoModelForCausalLM.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
> [rank0]: return model_class.from_pretrained(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4014, in from_pretrained
> [rank0]: ) = cls._load_pretrained_model(
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 4482, in _load_pretrained_model
> [rank0]: state_dict = load_state_dict(shard_file, is_quantized=is_quantized)
> [rank0]: File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/transformers/modeling_utils.py", line 549, in load_state_dict
> [rank0]: with safe_open(checkpoint_file, framework="pt") as f:
> [rank0]: safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
> E0302 20:39:06.398025 2565872 site-packages/torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 0 (pid: 2566031) of binary: /home//miniconda3/envs/py39env/bin/python
> Traceback (most recent call last):
> File "/home//miniconda3/envs/py39env/bin/torchrun", line 8, in <module>
> sys.exit(main())
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
> return f(*args, **kwargs)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 919, in main
> run(args)
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/run.py", line 910, in run
> elastic_launch(
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 138, in __call__
> return launch_agent(self._config, self._entrypoint, list(args))
> File "/home//miniconda3/envs/py39env/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
> raise ChildFailedError(
> torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
> ============================================================
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2405/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2400
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2400/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2400/comments
|
https://api.github.com/repos/huggingface/peft/issues/2400/events
|
https://github.com/huggingface/peft/issues/2400
| 2,881,481,036
|
I_kwDOIf9iDM6rv-lM
| 2,400
|
processing_class and tokenizer arguments on SFTTrainer()
|
{
"login": "ErikKankaTrea",
"id": 18656607,
"node_id": "MDQ6VXNlcjE4NjU2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18656607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErikKankaTrea",
"html_url": "https://github.com/ErikKankaTrea",
"followers_url": "https://api.github.com/users/ErikKankaTrea/followers",
"following_url": "https://api.github.com/users/ErikKankaTrea/following{/other_user}",
"gists_url": "https://api.github.com/users/ErikKankaTrea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErikKankaTrea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErikKankaTrea/subscriptions",
"organizations_url": "https://api.github.com/users/ErikKankaTrea/orgs",
"repos_url": "https://api.github.com/users/ErikKankaTrea/repos",
"events_url": "https://api.github.com/users/ErikKankaTrea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErikKankaTrea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-26T12:48:33
| 2025-02-27T03:39:02
| 2025-02-27T03:39:00
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hi!!!
I got an unexpected error when running the example train.py with DeepSpeed [(link)](https://github.com/huggingface/peft/tree/main/examples/sft).
Argument "**tokenizer**" should be now "**processing_class**".
Could anyone please let me know whether, for the example provided (link above), changing the argument name on SFTTrainer() for passing the tokenizer is enough?
I am worried that if I switch the argument, the example script will no longer make sense.
Thanks in advance!
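In the meantime, a small compatibility shim I sketched (an assumption on how to stay compatible with both trl versions, not part of the official example) picks whichever parameter name the installed SFTTrainer exposes:
```python
import inspect
from trl import SFTTrainer

def tokenizer_kwarg(tokenizer):
    # choose "processing_class" on newer trl, fall back to "tokenizer" on older trl
    params = inspect.signature(SFTTrainer.__init__).parameters
    name = "processing_class" if "processing_class" in params else "tokenizer"
    return {name: tokenizer}

# trainer = SFTTrainer(model=model, args=training_args,
#                      train_dataset=train_dataset, **tokenizer_kwarg(tokenizer))
```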
|
{
"login": "ErikKankaTrea",
"id": 18656607,
"node_id": "MDQ6VXNlcjE4NjU2NjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18656607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ErikKankaTrea",
"html_url": "https://github.com/ErikKankaTrea",
"followers_url": "https://api.github.com/users/ErikKankaTrea/followers",
"following_url": "https://api.github.com/users/ErikKankaTrea/following{/other_user}",
"gists_url": "https://api.github.com/users/ErikKankaTrea/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ErikKankaTrea/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ErikKankaTrea/subscriptions",
"organizations_url": "https://api.github.com/users/ErikKankaTrea/orgs",
"repos_url": "https://api.github.com/users/ErikKankaTrea/repos",
"events_url": "https://api.github.com/users/ErikKankaTrea/events{/privacy}",
"received_events_url": "https://api.github.com/users/ErikKankaTrea/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2400/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2394
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2394/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2394/comments
|
https://api.github.com/repos/huggingface/peft/issues/2394/events
|
https://github.com/huggingface/peft/issues/2394
| 2,874,191,172
|
I_kwDOIf9iDM6rUK1E
| 2,394
|
TP + DP training error
|
{
"login": "iMountTai",
"id": 35353688,
"node_id": "MDQ6VXNlcjM1MzUzNjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/35353688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iMountTai",
"html_url": "https://github.com/iMountTai",
"followers_url": "https://api.github.com/users/iMountTai/followers",
"following_url": "https://api.github.com/users/iMountTai/following{/other_user}",
"gists_url": "https://api.github.com/users/iMountTai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iMountTai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iMountTai/subscriptions",
"organizations_url": "https://api.github.com/users/iMountTai/orgs",
"repos_url": "https://api.github.com/users/iMountTai/repos",
"events_url": "https://api.github.com/users/iMountTai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iMountTai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 7
| 2025-02-24T08:30:53
| 2025-02-27T16:50:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft: 0.14.1.dev0
transformers: 4.50.dev0
accelerate: 1.4.0.dev0
python: 3.11
linux
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
After adding the LoRA module to the model, an error occurred:
NotImplementedError: ColwiseParallel currently only support nn.Linear and nn.Embedding
### Expected behavior
LoRA module training with TP should work.
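For context, a minimal sketch of the kind of setup that hits this error (model, layer index, and TP plan are illustrative rather than the exact script; launch with `torchrun --nproc_per_node=2`):
```python
import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import ColwiseParallel, RowwiseParallel, parallelize_module
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B", torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(target_modules=["q_proj", "o_proj"]))

mesh = init_device_mesh("cuda", (2,))
plan = {
    # after get_peft_model these modules are lora.Linear wrappers, not plain nn.Linear,
    # which is presumably where ColwiseParallel/RowwiseParallel raise NotImplementedError
    "base_model.model.model.layers.0.self_attn.q_proj": ColwiseParallel(),
    "base_model.model.model.layers.0.self_attn.o_proj": RowwiseParallel(),
}
parallelize_module(model, mesh, plan)
```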
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2394/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2390
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2390/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2390/comments
|
https://api.github.com/repos/huggingface/peft/issues/2390/events
|
https://github.com/huggingface/peft/issues/2390
| 2,866,034,838
|
I_kwDOIf9iDM6q1DiW
| 2,390
|
Bug: Using 2 LoRA configs with `target_modules='all-linear'` leads to nested LoRA layers
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806417,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTkQ",
"url": "https://api.github.com/repos/huggingface/peft/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 4838806434,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTog",
"url": "https://api.github.com/repos/huggingface/peft/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null | 0
| 2025-02-20T12:34:35
| 2025-03-04T16:16:16
| 2025-03-04T16:16:16
|
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model
model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)
config0 = LoraConfig(target_modules="all-linear")
config1 = LoraConfig(target_modules="all-linear")
model = get_peft_model(model, config0)#, adapter_name="default")
model.add_adapter("adapter1", config1)
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)
```
prints:
```
lora.Linear(
(base_layer): lora.Linear(
(base_layer): Linear(in_features=16, out_features=16, bias=True)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(lora_dropout): ModuleDict(
(default): Identity()
)
(lora_A): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=16, out_features=8, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=16, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_B): ModuleDict(
(default): lora.Linear(
(base_layer): Linear(in_features=8, out_features=16, bias=False)
(lora_dropout): ModuleDict(
(adapter1): Identity()
)
(lora_A): ModuleDict(
(adapter1): Linear(in_features=8, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(adapter1): Linear(in_features=8, out_features=16, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
```
### Expected behavior
Instead of getting nested LoRA layers, the linear layers belonging to a LoRA layer should not be targeted by `all-linear`.
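A possible interim workaround (just a sketch, not the fix): give the second config explicit target module names so the LoRA-internal linears are not matched:
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)

config0 = LoraConfig(target_modules="all-linear")
# explicit suffixes instead of "all-linear" for the second adapter
config1 = LoraConfig(target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"])

model = get_peft_model(model, config0)
model.add_adapter("adapter1", config1)
# both adapters now live on the same lora.Linear, no nesting
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)
```
With explicit suffixes, the second `add_adapter` call should attach `adapter1` to the existing LoRA layers instead of wrapping them again.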
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2390/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2388
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2388/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2388/comments
|
https://api.github.com/repos/huggingface/peft/issues/2388/events
|
https://github.com/huggingface/peft/issues/2388
| 2,863,639,986
|
I_kwDOIf9iDM6qr62y
| 2,388
|
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported.
|
{
"login": "samuellimabraz",
"id": 115582014,
"node_id": "U_kgDOBuOkPg",
"avatar_url": "https://avatars.githubusercontent.com/u/115582014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samuellimabraz",
"html_url": "https://github.com/samuellimabraz",
"followers_url": "https://api.github.com/users/samuellimabraz/followers",
"following_url": "https://api.github.com/users/samuellimabraz/following{/other_user}",
"gists_url": "https://api.github.com/users/samuellimabraz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samuellimabraz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samuellimabraz/subscriptions",
"organizations_url": "https://api.github.com/users/samuellimabraz/orgs",
"repos_url": "https://api.github.com/users/samuellimabraz/repos",
"events_url": "https://api.github.com/users/samuellimabraz/events{/privacy}",
"received_events_url": "https://api.github.com/users/samuellimabraz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-02-19T15:09:17
| 2025-03-06T16:30:36
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
## Context
I'm fine-tuning the Qwen2.5-VL model with swift (ms-swift) for data extraction using LoRA. I'm not sure what the correct way is to save and upload the adapter so that it can be reloaded correctly.
In short, I followed these steps:
```python
# load model
model, processor = get_model_tokenizer(
'Qwen/Qwen2.5-VL-3B-Instruct',
torch_dtype=torch.bfloat16,
use_hf=True,
attn_impl="flash_attn",
)
# get lora
...
model = Swift.prepare_model(model, lora_config)
# training config and run
...
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=template.data_collator,
train_dataset=train_dataset,
eval_dataset=val_dataset,
template=template,
callbacks= [
EarlyStoppingCallback(
early_stopping_patience=6,
early_stopping_threshold=0.001
)
]
)
stats = trainer.train()
# push adapter
model.push_to_hub(f"tech4humans/{model_name}", private=True)
```
Debugging shows that the PEFT model was loaded with the `PeftModelForCausalLM` class.
## Problem
Then, afterwards, I tried to reload the adapter and got an error from PEFT:
```python
from transformers import Qwen2_5_VLForConditionalGeneration
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto")
model.load_adapter("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
```
```python
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs)
345 if new_module is None:
346 # no module could be matched
--> 347 raise ValueError(
348 f"Target module {target} is not supported. Currently, only the following modules are supported: "
349 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, ".
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel(
(patch_embed): Qwen2_5_VisionPatchEmbed(
(proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
)
(rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding()
(blocks): ModuleList(
(0-31): 32 x Qwen2_5_VLVisionBlock(
(norm1): Qwen2RMSNorm((1280,), eps=1e-06)
(norm2): Qwen2RMSNorm((1280,), eps=1e-06)
(attn): Qwen2_5_VLVisionSdpaAttention(
(qkv): Linear(in_features=1280, out_features=3840, bias=True)
(proj): Linear(in_features=1280, out_features=1280, bias=True)
)
(mlp): Qwen2_5_VLMLP(
(gate_proj): Linear(in_features=1280, out_features=3420, bias=True)
(up_proj): Linear(in_features=1280, out_features=3420, bias=True)
(down_proj): Linear(in_features=3420, out_features=1280, bias=True)
(act_fn): SiLU()
)
)
)
(merger): Qwen2_5_VLPatchMerger(
(ln_q): Qwen2RMSNorm((1280,), eps=1e-06)
(mlp): Sequential(
(0): Linear(in_features=5120, out_features=5120, bias=True)
(1): GELU(approximate='none')
(2): Linear(in_features=5120, out_features=2048, bias=True)
)
)
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`.
```
## System info
```
transformers 4.50.0.dev0
peft 0.14.1.dev0
ms-swift 3.2.0.dev0
Python 3.10.12
CUDA Version: 12.6
```
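One check that might narrow this down (a diagnostic sketch, not a known fix): load only the adapter config and inspect which modules it targets, since the error comes from PEFT trying to wrap a module type it cannot handle:
```python
from peft import PeftConfig

# the repo id is the adapter pushed above
cfg = PeftConfig.from_pretrained("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
print(cfg.peft_type)
print(cfg.target_modules)   # what the adapter tries to wrap on load
print(cfg.modules_to_save)
```
If one of those entries resolves to the whole vision tower rather than to individual linear layers, that could explain the failure.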
Am I missing something or doing something wrong? Any pointers would be appreciated. Thanks!
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2388/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2381
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2381/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2381/comments
|
https://api.github.com/repos/huggingface/peft/issues/2381/events
|
https://github.com/huggingface/peft/issues/2381
| 2,857,556,037
|
I_kwDOIf9iDM6qUthF
| 2,381
|
Bug when deleting adapters of a model with modules_to_save
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806417,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTkQ",
"url": "https://api.github.com/repos/huggingface/peft/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
open
| false
| null |
[] | null | 0
| 2025-02-17T11:22:34
| 2025-02-20T12:35:13
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
All PEFT versions.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
model_id = "facebook/opt-125m"
config = LoraConfig(task_type="SEQ_CLS")
model = AutoModelForSequenceClassification.from_pretrained(model_id)
adapter_to_delete = "delete_me"
model = get_peft_model(model, config)
model.add_adapter(adapter_to_delete, config)
# sanity check
assert "delete_me" in model.base_model.model.score.modules_to_save
model.delete_adapter(adapter_to_delete)
assert "delete_me" not in model.base_model.model.score.modules_to_save
```
### Expected behavior
When adding, say, a LoRA adapter with `modules_to_save`, then deleting the adapter, the LoRA part is correctly removed but the `modules_to_save` part is not removed.
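Until this is fixed, a rough manual cleanup sketch (continuing from the reproducer above; the attribute path is specific to this model, and this only removes the stored module copy):
```python
wrapper = model.base_model.model.score  # the ModulesToSaveWrapper around the classification head
if adapter_to_delete in wrapper.modules_to_save:
    del wrapper.modules_to_save[adapter_to_delete]
assert adapter_to_delete not in wrapper.modules_to_save
```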
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2381/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2379
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2379/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2379/comments
|
https://api.github.com/repos/huggingface/peft/issues/2379/events
|
https://github.com/huggingface/peft/issues/2379
| 2,854,940,754
|
I_kwDOIf9iDM6qKvBS
| 2,379
|
prompt_tuning_peft tutorial raises cache layer error
|
{
"login": "jakerobers",
"id": 1840629,
"node_id": "MDQ6VXNlcjE4NDA2Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1840629?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jakerobers",
"html_url": "https://github.com/jakerobers",
"followers_url": "https://api.github.com/users/jakerobers/followers",
"following_url": "https://api.github.com/users/jakerobers/following{/other_user}",
"gists_url": "https://api.github.com/users/jakerobers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jakerobers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jakerobers/subscriptions",
"organizations_url": "https://api.github.com/users/jakerobers/orgs",
"repos_url": "https://api.github.com/users/jakerobers/repos",
"events_url": "https://api.github.com/users/jakerobers/events{/privacy}",
"received_events_url": "https://api.github.com/users/jakerobers/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-02-15T00:10:11
| 2025-02-19T10:21:15
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Following the prompt tuning guide leads to an error when executing in a local environment:
- https://huggingface.co/learn/cookbook/en/prompt_tuning_peft
When executing, an exception is raised when calling `model.generate()` with the prompt-tuned model. Everything up to that point seems to be working as expected (i.e. the `peft_outputs_prompt` and `peft_outputs_sentences` directories containing the prompt-tunings have checkpoints).
Having a look at the stacktrace, it looks like `model_kwargs["past_key_values"]` is being referenced in `peft/peft_model.py`. I'm curious if this is possibly related to https://github.com/huggingface/peft/issues/1962.
```
Traceback (most recent call last):
File "/main.py", line 148, in <module>
loaded_model_prompt_outputs = get_outputs(loaded_model_prompt, input_prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./main.py", line 17, in get_outputs
outputs = model.generate(
^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/peft/peft_model.py", line 1140, in generate
outputs = self.base_model.generate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/transformers/generation/utils.py", line 2255, in generate
result = self._sample(
^^^^^^^^^^^^^
File "lib/python3.11/site-packages/transformers/generation/utils.py", line 3247, in _sample
model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib/python3.11/site-packages/peft/peft_model.py", line 1169, in prepare_inputs_for_generation
if model_kwargs["past_key_values"][0][0].shape[-2] >= model_kwargs["input_ids"].shape[1]:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
File "lib/python3.11/site-packages/transformers/cache_utils.py", line 390, in __getitem__
raise KeyError(f"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}")
KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'
```
cc @BenjaminBossan since you have some context around how `past_key_values` [works with transformers](https://github.com/huggingface/peft/pull/2096/files)
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
This is the code provided in the article https://huggingface.co/learn/cookbook/en/prompt_tuning_peft, condensed into a single script.
```
#!/usr/bin/env python
# TODO: https://huggingface.co/learn/cookbook/en/prompt_tuning_peft
# TODO: https://huggingface.co/docs/peft/en/package_reference/prompt_tuning
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "bigscience/bloomz-560m"
# model_name="bigscience/bloom-1b1"
NUM_VIRTUAL_TOKENS = 4
NUM_EPOCHS = 6
tokenizer = AutoTokenizer.from_pretrained(model_name)
foundational_model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)
def get_outputs(model, inputs, max_new_tokens=100):
outputs = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
max_new_tokens=max_new_tokens,
# temperature=0.2,
# top_p=0.95,
# do_sample=True,
repetition_penalty=1.5, # Avoid repetition.
early_stopping=True, # The model can stop before reaching max_length
eos_token_id=tokenizer.eos_token_id,
)
return outputs
input_prompt = tokenizer("I want you to act as a motivational coach. ", return_tensors="pt")
foundational_outputs_prompt = get_outputs(foundational_model, input_prompt, max_new_tokens=50)
print(tokenizer.batch_decode(foundational_outputs_prompt, skip_special_tokens=True))
import os
from IPython.display import display
# os.environ["TOKENIZERS_PARALLELISM"] = "false"
from datasets import load_dataset
dataset_prompt = "fka/awesome-chatgpt-prompts"
# Create the Dataset to create prompts.
#
data_prompt = load_dataset(dataset_prompt)
data_prompt = data_prompt.map(lambda samples: tokenizer(samples["prompt"]), batched=True)
train_sample_prompt = data_prompt["train"].select(range(50))
display(train_sample_prompt)
print(train_sample_prompt[:1])
dataset_sentences = load_dataset("Abirate/english_quotes")
data_sentences = dataset_sentences.map(lambda samples: tokenizer(samples["quote"]), batched=True)
train_sample_sentences = data_sentences["train"].select(range(25))
train_sample_sentences = train_sample_sentences.remove_columns(["author", "tags"])
display(train_sample_sentences)
print(train_sample_sentences[:1])
from peft import get_peft_model, PromptTuningConfig, TaskType, PromptTuningInit
generation_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM, # This type indicates the model will generate text.
prompt_tuning_init=PromptTuningInit.RANDOM, # The added virtual tokens are initialized with random numbers
num_virtual_tokens=NUM_VIRTUAL_TOKENS, # Number of virtual tokens to be added and trained.
tokenizer_name_or_path=model_name, # The pre-trained model.
)
peft_model_prompt = get_peft_model(foundational_model, generation_config)
print(peft_model_prompt.print_trainable_parameters())
peft_model_sentences = get_peft_model(foundational_model, generation_config)
print(peft_model_sentences.print_trainable_parameters())
from transformers import TrainingArguments
def create_training_arguments(path, learning_rate=0.0035, epochs=6):
training_args = TrainingArguments(
output_dir=path, # Where the model predictions and checkpoints will be written
use_cpu=True, # This is necessary for CPU clusters.
auto_find_batch_size=True, # Find a suitable batch size that will fit into memory automatically
learning_rate=learning_rate, # Higher learning rate than full Fine-Tuning
num_train_epochs=epochs,
)
return training_args
import os
working_dir = "./"
# It is best to store the models in separate folders.
# Create the name of the directories where to store the models.
output_directory_prompt = os.path.join(working_dir, "peft_outputs_prompt")
output_directory_sentences = os.path.join(working_dir, "peft_outputs_sentences")
# Just creating the directories if they don't exist.
if not os.path.exists(working_dir):
os.mkdir(working_dir)
if not os.path.exists(output_directory_prompt):
os.mkdir(output_directory_prompt)
if not os.path.exists(output_directory_sentences):
os.mkdir(output_directory_sentences)
training_args_prompt = create_training_arguments(output_directory_prompt, 0.003, NUM_EPOCHS)
training_args_sentences = create_training_arguments(output_directory_sentences, 0.003, NUM_EPOCHS)
from transformers import Trainer, DataCollatorForLanguageModeling
def create_trainer(model, training_args, train_dataset):
trainer = Trainer(
model=model, # We pass in the PEFT version of the foundation model, bloomz-560M
args=training_args, # The args for the training.
train_dataset=train_dataset, # The dataset used to train the model.
data_collator=DataCollatorForLanguageModeling(
tokenizer, mlm=False
), # mlm=False indicates not to use masked language modeling
)
return trainer
trainer_prompt = create_trainer(peft_model_prompt, training_args_prompt, train_sample_prompt)
trainer_prompt.train()
trainer_sentences = create_trainer(peft_model_sentences, training_args_sentences, train_sample_sentences)
trainer_sentences.train()
trainer_prompt.model.save_pretrained(output_directory_prompt)
trainer_sentences.model.save_pretrained(output_directory_sentences)
from peft import PeftModel
loaded_model_prompt = PeftModel.from_pretrained(
foundational_model,
output_directory_prompt,
# device_map='auto',
is_trainable=False,
)
loaded_model_prompt_outputs = get_outputs(loaded_model_prompt, input_prompt)
print(tokenizer.batch_decode(loaded_model_prompt_outputs, skip_special_tokens=True))
loaded_model_prompt.load_adapter(output_directory_sentences, adapter_name="quotes")
loaded_model_prompt.set_adapter("quotes")
input_sentences = tokenizer("I have two nice quotes for you: ", return_tensors="pt")  # note: this definition is missing in the condensed script; any test prompt works
loaded_model_sentences_outputs = get_outputs(loaded_model_prompt, input_sentences)
print(tokenizer.batch_decode(loaded_model_sentences_outputs, skip_special_tokens=True))
# Notes:
# - https://github.com/huggingface/peft/issues/1962
# - https://github.com/huggingface/peft/issues/869#issuecomment-2263322623
```
### Expected behavior
The `loaded_model_prompt` should be able to execute `generate` and return a prompt-tuned response.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2379/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2377
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2377/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2377/comments
|
https://api.github.com/repos/huggingface/peft/issues/2377/events
|
https://github.com/huggingface/peft/issues/2377
| 2,853,540,672
|
I_kwDOIf9iDM6qFZNA
| 2,377
|
Contributing new model merging method to PEFT
|
{
"login": "SpeeeedLee",
"id": 132431571,
"node_id": "U_kgDOB-S-0w",
"avatar_url": "https://avatars.githubusercontent.com/u/132431571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpeeeedLee",
"html_url": "https://github.com/SpeeeedLee",
"followers_url": "https://api.github.com/users/SpeeeedLee/followers",
"following_url": "https://api.github.com/users/SpeeeedLee/following{/other_user}",
"gists_url": "https://api.github.com/users/SpeeeedLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpeeeedLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpeeeedLee/subscriptions",
"organizations_url": "https://api.github.com/users/SpeeeedLee/orgs",
"repos_url": "https://api.github.com/users/SpeeeedLee/repos",
"events_url": "https://api.github.com/users/SpeeeedLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpeeeedLee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 1
| 2025-02-14T12:17:46
| 2025-02-14T15:57:51
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Hi all,
I noticed that several model merging methods, such as TIES and DARE, have been implemented in this library, as mentioned [here](https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md).
I was wondering if there is a way for me to contribute a recently accepted model merging method to this repo.
I would really appreciate any guidance or suggestions on how to proceed.
Thanks in advance!
### Motivation
Enhance the diversity of model merging methods supported in this library.
### Your contribution
I can submit a PR.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2377/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2368
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2368/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2368/comments
|
https://api.github.com/repos/huggingface/peft/issues/2368/events
|
https://github.com/huggingface/peft/issues/2368
| 2,838,153,330
|
I_kwDOIf9iDM6pKshy
| 2,368
|
[FSDP] After training embed_tokens in modules_to_save model has hallucinations
|
{
"login": "DmitryDiTy",
"id": 90377536,
"node_id": "MDQ6VXNlcjkwMzc3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90377536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DmitryDiTy",
"html_url": "https://github.com/DmitryDiTy",
"followers_url": "https://api.github.com/users/DmitryDiTy/followers",
"following_url": "https://api.github.com/users/DmitryDiTy/following{/other_user}",
"gists_url": "https://api.github.com/users/DmitryDiTy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DmitryDiTy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DmitryDiTy/subscriptions",
"organizations_url": "https://api.github.com/users/DmitryDiTy/orgs",
"repos_url": "https://api.github.com/users/DmitryDiTy/repos",
"events_url": "https://api.github.com/users/DmitryDiTy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DmitryDiTy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 17
| 2025-02-07T13:23:07
| 2025-02-14T08:23:35
| 2025-02-14T08:21:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
### Libs
```
absl-py==2.1.0
accelerate==1.3.0
aiohappyeyeballs==2.4.4
aiohttp==3.11.10
aiosignal==1.3.2
annotated-types==0.7.0
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1733250440834/work
async-timeout==5.0.1
attrs==24.3.0
beartype==0.14.1
bert-score==0.3.13
better-abc==0.0.3
certifi==2024.12.14
charset-normalizer==3.4.0
circuitsvis @ git+https://github.com/callummcdougall/CircuitsVis.git@1e6129d08cae7af9242d9ab5d3ed322dd44b4dd3#subdirectory=python
click==8.1.7
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1733502965406/work
contourpy==1.3.1
cycler==0.12.1
datasets==3.2.0
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1734158947252/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1733236420667/work
dill==0.3.8
docker-pycreds==0.4.0
einops==0.8.0
evaluate==0.4.3
exceptiongroup @ file:///home/conda/feedstock_root/build_artifacts/exceptiongroup_1733208806608/work
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1733569351617/work
fancy-einsum==0.0.3
filelock==3.16.1
fonttools==4.55.6
frozenlist==1.5.0
fsspec==2024.9.0
gitdb==4.0.11
GitPython==3.1.43
huggingface-hub==0.27.0
idna==3.10
importlib-metadata==5.2.0
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1719845459717/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1732896932739/work
ipywidgets==8.1.5
jaxtyping==0.2.36
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1733300866624/work
Jinja2==3.1.4
joblib==1.4.2
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1733440914442/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1727163409502/work
jupyterlab_widgets==3.0.13
kiwisolver==1.4.8
markdown-it-py==3.0.0
MarkupSafe==3.0.2
matplotlib==3.10.0
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1733416936468/work
mdurl==0.1.2
mpmath==1.3.0
multidict==6.1.0
multiprocess==0.70.16
nest_asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1733325553580/work
networkx==3.4.2
nltk==3.9.1
numpy==1.26.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
packaging @ file:///home/conda/feedstock_root/build_artifacts/packaging_1733203243479/work
pandas==2.2.3
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1733271261340/work
peft==0.14.0
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1733301927746/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1733327343728/work
pillow==11.1.0
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1733232627818/work
prompt_toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1733302527033/work
propcache==0.2.1
protobuf==5.29.1
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1729847040822/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1733302279685/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl#sha256=92c32ff62b5fd8cf325bec5ab90d7be3d2a8ca8c8a3813ff487a8d2002630d1f
pure_eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1733569405015/work
pyarrow==18.1.0
pydantic==2.10.3
pydantic_core==2.27.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1733221634316/work
pyparsing==3.2.1
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1733215673016/work
pytz==2024.2
PyYAML==6.0.2
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1728642224099/work
regex==2024.11.6
requests==2.32.3
rich==13.9.4
rouge_score==0.1.2
safetensors==0.4.5
scikit-learn==1.6.1
scipy==1.15.1
sentence-transformers==3.3.1
sentencepiece==0.2.0
sentry-sdk==2.19.2
setproctitle==1.3.4
six @ file:///home/conda/feedstock_root/build_artifacts/six_1733380938961/work
smmap==5.0.1
stack_data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1733569443808/work
sympy==1.13.1
threadpoolctl==3.5.0
tokenizers==0.21.0
torch==2.5.1
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1732615898999/work
tqdm==4.67.1
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1733367359838/work
transformer-lens==2.10.0
transformers==4.48.2
triton==3.1.0
trl==0.14.0
typeguard==4.4.1
typing_extensions @ file:///home/conda/feedstock_root/build_artifacts/typing_extensions_1733188668063/work
tzdata==2024.2
urllib3==2.2.3
wandb==0.19.1
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1733231326287/work
widgetsnbextension==4.0.13
xxhash==3.5.0
yarl==1.18.3
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1732827521216/work
```
### Cuda
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
```
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA RTX 6000 Ada Gene... Off | 00000000:01:00.0 Off | Off |
| 30% 40C P8 27W / 300W | 43531MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA RTX 6000 Ada Gene... Off | 00000000:25:00.0 Off | Off |
| 30% 34C P8 23W / 300W | 3021MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA RTX 6000 Ada Gene... Off | 00000000:41:00.0 Off | Off |
| 30% 37C P8 29W / 300W | 6MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA RTX 6000 Ada Gene... Off | 00000000:61:00.0 Off | Off |
| 30% 40C P8 30W / 300W | 10881MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA RTX 6000 Ada Gene... Off | 00000000:81:00.0 Off | Off |
| 30% 34C P8 24W / 300W | 1319MiB / 49140MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA RTX 6000 Ada Gene... Off | 00000000:A1:00.0 Off | Off |
| 40% 59C P2 71W / 300W | 5763MiB / 49140MiB | 6% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA RTX 6000 Ada Gene... Off | 00000000:C1:00.0 Off | Off |
| 30% 47C P2 91W / 300W | 43307MiB / 49140MiB | 74% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
```
### Who can help?
@benjaminbossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
## Context
I train my model for text generation, completion-only (CompletionOnlyLM), on my own dataset (long dialogues with system/user/assistant turns). I added new tokens to my model and tokenizer using:
```python
tokenizer.add_tokens(
[
AddedToken("<|start_thinking|>", normalized=False, special=False),
AddedToken("<|end_thinking|>", normalized=False, special=False),
AddedToken("<tool_response>", normalized=False, special=False),
AddedToken("</tool_response>", normalized=False, special=False),
AddedToken("<|start_response|>", normalized=False, special=False),
AddedToken("<|end_response|>", normalized=False, special=False),
]
)
model.resize_token_embeddings(len(tokenizer))
```
and I have saved it before training.
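Concretely, the saving step was along these lines (a sketch; the path matches the MODEL_PATH used in the training script below):
```python
# persist the extended base model and tokenizer before training
model.save_pretrained("/home/raid/models/extended_qwen")
tokenizer.save_pretrained("/home/raid/models/extended_qwen")
```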
After that, I just wanted to train my extended model with PEFT + TRL + FSDP.
The model I used as a base:
```
Qwen2ForCausalLM(
(model): Qwen2Model(
(embed_tokens): Embedding(151671, 3584)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2Attention(
(q_proj): Linear(in_features=3584, out_features=3584, bias=True)
(k_proj): Linear(in_features=3584, out_features=512, bias=True)
(v_proj): Linear(in_features=3584, out_features=512, bias=True)
(o_proj): Linear(in_features=3584, out_features=3584, bias=False)
)
(mlp): Qwen2MLP(
(gate_proj): Linear(in_features=3584, out_features=18944, bias=False)
(up_proj): Linear(in_features=3584, out_features=18944, bias=False)
(down_proj): Linear(in_features=18944, out_features=3584, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((3584,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(lm_head): Linear(in_features=3584, out_features=151671, bias=False)
)
```
## Code
### Accelerate config
```yaml
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch: BACKWARD_PRE
fsdp_cpu_ram_efficient_loading: true
fsdp_forward_prefetch: false
fsdp_offload_params: false
fsdp_sharding_strategy: FULL_SHARD
fsdp_state_dict_type: SHARDED_STATE_DICT
fsdp_sync_module_states: true
fsdp_use_orig_params: true
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
### Training script
```python
import warnings
warnings.filterwarnings("ignore")
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0, 1, 2, 3'
os.environ['TOKENIZERS_PARALLELISM'] = 'true'
import wandb
import numpy as np
import torch
import json
from typing import List, Optional, Union, Any, Literal
from datasets import load_dataset, Dataset
import evaluate
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
EarlyStoppingCallback,
DataCollatorForLanguageModeling,
AddedToken,
)
from peft import (
LoraConfig,
get_peft_model,
TaskType,
PeftModelForCausalLM
)
from trl import (
SFTConfig,
SFTTrainer,
DataCollatorForCompletionOnlyLM
)
from special_utils import DataCollatorForMultiCompletionOnlyLM, CustomLossTrainer
##################################
# Environments and configurations #
##################################
CHECKPOINT_PATH = None
DATA_CACHE_DIR = "/home/raid/datasets/"
MODEL_CACHE_DIR = "/home/raid/hf_cache/"
MODEL_PATH = "/home/raid/models/extended_qwen"
METRICS_CACHE = "/home/raid/metrics_cache"
MAX_PROMPT_LENGTH = 5000
LR = 1e-5
STEP_SIZE = 10
BATCH_SIZE = 2
GA_SIZE = 4
TRAIN_EPOCHS = 1
REPORT_TO = ['none', 'wandb'][0]
LORA_R = 48
LORA_ALPHA = 96
TARGET_MODULES = [
"self_attn.q_proj",
"self_attn.k_proj",
"self_attn.v_proj",
"self_attn.o_proj",
"mlp.gate_proj",
"mlp.up_proj",
"mlp.down_proj",
]
MODULES_TO_SAVE = [
"embed_tokens",
"lm_head"
]
REVISION_NAME = f"TEST_qwen-tp-({LR})LR-({BATCH_SIZE})BATCH_SIZE-({GA_SIZE})GA_SIZE-({TRAIN_EPOCHS})TRAIN_EPOCHS-({LORA_R})LORA_R-({LORA_ALPHA})LORA_ALPHA"
LOGS_PATH = f"/home/raid/models/{REVISION_NAME}/logs"
print(REVISION_NAME)
def main():
#####################
# Model & Tokenizer #
#####################
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
# cache_dir=MODEL_CACHE_DIR,
torch_dtype=torch.bfloat16,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
# cache_dir=MODEL_CACHE_DIR,
)
tokenizer.padding_side = 'right'
### FREEZING ###
for param in model.parameters():
param.requires_grad = False
print(tokenizer.added_tokens_decoder)
###########
# Dataset #
###########
dataset = load_dataset(
"my/dataset",
"train",
cache_dir=DATA_CACHE_DIR
)
def prepare_texts(example):
example['text'] = tokenizer.apply_chat_template(
conversation=json.loads(example['conversation']),
tools=json.loads(example['tools']),
tokenize=False
)
return example
dataset = dataset.map(prepare_texts)
dataset_vvalid = Dataset.from_dict(dataset['train'][:100]) # For tests
print(dataset)
########
# PEFT #
########
lora_config = LoraConfig(
task_type=TaskType.CAUSAL_LM,
r=LORA_R,
lora_alpha=LORA_ALPHA,
target_modules=TARGET_MODULES,
modules_to_save=MODULES_TO_SAVE,
lora_dropout=0.1,
bias="none",
)
##################
# Trainer & Args #
##################
bertscore = evaluate.load(
"bertscore",
cache_dir=METRICS_CACHE
)
rouge = evaluate.load(
"rouge",
cache_dir=METRICS_CACHE
)
def preprocess_logits_for_metrics(logits, labels):
pred_ids = torch.argmax(logits, dim=-1)
return pred_ids, labels
def compute_metrics(eval_pred):
pred_ids = torch.tensor(eval_pred.predictions[0])
label_ids = torch.tensor(eval_pred.label_ids)
preds = tokenizer.batch_decode(torch.where(label_ids == -100, tokenizer.eos_token_id, pred_ids), skip_special_tokens=True)
labels = tokenizer.batch_decode(torch.where(label_ids == -100, tokenizer.eos_token_id, label_ids), skip_special_tokens=True)
if not os.path.exists(LOGS_PATH):
os.makedirs(LOGS_PATH, exist_ok=True)
with open(LOGS_PATH + "/data", "w") as f:
f.write(json.dumps([preds, labels]))
print("PREDS:", preds[0], "###")
print("LABELS:", labels[0], "###")
bertscore_results = bertscore.compute(
predictions=preds,
references=labels,
lang='en'
)
rouge_results = rouge.compute(
predictions=preds,
references=labels,
)
return {
"bert_score_f1": np.mean(bertscore_results['f1']),
"bert_score_recall": np.mean(bertscore_results['recall']),
"bert_score_precision": np.mean(bertscore_results['precision']),
"rouge1": rouge_results['rouge1'],
'rouge2': rouge_results['rouge2'],
'rougeL': rouge_results['rougeL'],
}
data_collator = DataCollatorForMultiCompletionOnlyLM(
tokenizer=tokenizer,
response_template="<|im_start|>assistant\n",
end_response_template="<|im_end|>",
mlm=False
)
special_token_ids = [151665, 151666, 151667, 151668, 151669, 151670]
special_token_weight = 1.2
training_args = SFTConfig(
## SFT Arguments ##
max_seq_length=MAX_PROMPT_LENGTH,
## Standard Arguments ##
do_train=True,
do_eval=True,
output_dir=f"/home/raid/checkpoints/{REVISION_NAME}",
overwrite_output_dir=True,
eval_strategy="steps",
eval_steps=STEP_SIZE,
torch_empty_cache_steps=STEP_SIZE,
num_train_epochs=TRAIN_EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
gradient_accumulation_steps=GA_SIZE,
optim="adamw_torch",
save_steps=STEP_SIZE,
save_total_limit=4,
logging_steps=STEP_SIZE,
learning_rate=LR,
lr_scheduler_type="cosine",
bf16=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs = {"use_reentrant": True},
load_best_model_at_end=True,
metric_for_best_model="eval_rougeL",
greater_is_better=True,
report_to=REPORT_TO,
run_name=REVISION_NAME,
resume_from_checkpoint=True if CHECKPOINT_PATH else False,
)
trainer = CustomLossTrainer(
model=model,
args=training_args,
peft_config=lora_config,
train_dataset=dataset_vvalid,#dataset['train'],
eval_dataset=dataset_vvalid,#dataset['valid'],
processing_class=tokenizer,
data_collator=data_collator,
compute_metrics=compute_metrics,
preprocess_logits_for_metrics=preprocess_logits_for_metrics,
callbacks=[EarlyStoppingCallback(early_stopping_patience=100)],
special_token_ids=special_token_ids,
special_token_weight=special_token_weight,
)
print("MODEL DTYPE: ", trainer.model.dtype)
# handle PEFT+FSDP case
trainer.model.print_trainable_parameters()
if getattr(trainer.accelerator.state, "fsdp_plugin", None):
from peft.utils.other import fsdp_auto_wrap_policy
fsdp_plugin = trainer.accelerator.state.fsdp_plugin
fsdp_plugin.auto_wrap_policy = fsdp_auto_wrap_policy(trainer.model)
# Training
if CHECKPOINT_PATH is not None:
trainer.train(resume_from_checkpoint=CHECKPOINT_PATH)
else:
trainer.train()
if trainer.is_fsdp_enabled:
trainer.accelerator.state.fsdp_plugin.set_state_dict_type("FULL_STATE_DICT")
trainer.save_model(f"/home/raid/models/{REVISION_NAME}/adapter")
if __name__ == "__main__":
main()
```
### Custom Collator & Trainer (special_utils.py)
```python
import torch
from transformers import DataCollatorForLanguageModeling
from typing import List, Optional, Union, Any, Literal
from trl import SFTTrainer
import numpy as np
# Adding weights to new tokens
class CustomLossTrainer(SFTTrainer):
def __init__(self, *args, special_token_ids, special_token_weight=1.2, **kwargs):
super().__init__(*args, **kwargs)
self.special_token_ids = special_token_ids
self.special_token_weight = special_token_weight
self.weights = None
def _init_weights(self, model):
self.weights = torch.ones(model.config.vocab_size, device=model.device)
for token_id in self.special_token_ids:
self.weights[token_id] = self.special_token_weight
self.cross_entropy = torch.nn.CrossEntropyLoss(weight=self.weights)
def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
if self.weights is None:
self._init_weights(model)
labels = inputs.pop("labels").to(model.device)
outputs = model(**inputs)
logits = outputs.get("logits").to(model.device)
loss = self.cross_entropy(logits.view(-1, logits.size(-1)), labels.view(-1))
if return_outputs:
return loss, outputs
return loss
# For Completion with many different instruction templates
class DataCollatorForMultiCompletionOnlyLM(DataCollatorForLanguageModeling):
def __init__(
self,
response_template: Union[str, list[int]],
end_response_template: Union[str, list[int]],
instruction_template: Optional[Union[str, list[int]]] = None,
*args,
mlm: bool = False,
ignore_index: int = -100,
padding_free: bool = False,
**kwargs,
):
super().__init__(*args, mlm=mlm, **kwargs)
self.instruction_template = instruction_template
if isinstance(instruction_template, str):
# The user provides a string, must tokenize
self.instruction_token_ids = self.tokenizer.encode(self.instruction_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.instruction_token_ids = instruction_template
self.response_template = response_template
if isinstance(response_template, str):
# The user provides a string, must tokenize
self.response_token_ids = self.tokenizer.encode(self.response_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.response_token_ids = response_template
self.end_response_template = end_response_template
if isinstance(end_response_template, str):
# The user provides a string, must tokenize
self.end_response_token_ids = self.tokenizer.encode(self.end_response_template, add_special_tokens=False)
else:
# The user already provides the token ids
self.end_response_token_ids = end_response_template
if not self.mlm and self.instruction_template and self.tokenizer.pad_token_id == self.tokenizer.eos_token_id:
warnings.warn(
"The pad_token_id and eos_token_id values of this tokenizer are identical. "
"If you are planning for multi-turn training, "
"it can result in the model continuously generating questions and answers without eos token. "
"To avoid this, set the pad_token_id to a different value.",
UserWarning,
)
self.ignore_index = ignore_index
self.padding_free = padding_free
def torch_call(self, examples: list[Union[list[int], Any, dict[str, Any]]]) -> dict[str, Any]:
batch = super().torch_call(examples)
for i in range(len(examples)):
batch["labels"][i] = torch.where(batch["labels"][i] == 0, 999999, batch["labels"][i])
response_token_ids_start_ids = []
for idx in np.where(batch["labels"][i] == self.response_token_ids[0])[0]:
# `response_token_ids` is `'### Response:\n'`, here we are just making sure that the token IDs match
if (
self.response_token_ids
== batch["labels"][i][idx : idx + len(self.response_token_ids)].tolist()
):
response_token_ids_start_ids.append(idx)
if len(response_token_ids_start_ids) == 0:
warnings.warn(
f"Could not find response key `{self.response_template}` in the following instance: "
f"{self.tokenizer.decode(batch['input_ids'][i])}. This instance will be ignored in loss "
"calculation. Note, if this happens often, consider increasing the `max_seq_length`.",
UserWarning,
)
batch["labels"][i, :] = self.ignore_index
else:
response_token_ids_end_ids = [response_token_ids_start_idx + len(self.response_token_ids) for response_token_ids_start_idx in response_token_ids_start_ids]
end_response_token_ids_idxs = []
for idx in np.where(batch["labels"][i] == self.end_response_token_ids[0])[0]:
# `response_token_ids` is `'### Response:\n'`, here we are just making sure that the token IDs match
if (
self.end_response_token_ids
== batch["labels"][i][idx : idx + len(self.end_response_token_ids)].tolist()
):
end_response_token_ids_idxs.append(idx)
if len(end_response_token_ids_idxs) == 0:
warnings.warn(
f"Could not find end response key `{self.response_template}` in the following instance: "
f"{self.tokenizer.decode(batch['input_ids'][i])}. This instance will be ignored in loss "
"calculation. Note, if this happens often, consider increasing the `max_seq_length`.",
UserWarning,
)
batch["labels"][i, :] = self.ignore_index
assistant_end_idxs = []
for assistant_start_idx in response_token_ids_end_ids:
for assistant_end_idx in end_response_token_ids_idxs:
if assistant_start_idx < assistant_end_idx:
assistant_end_idxs.append(assistant_end_idx)
break
assert len(response_token_ids_end_ids) == len(assistant_end_idxs), "Error, need count assistant replics == count after assistant end suffixes"
mask = torch.ones_like(batch['labels'][i, :]) * -1
mask = torch.where(batch['labels'][i, :] == self.ignore_index, 1, mask)
for start_id, end_id in zip(response_token_ids_end_ids, assistant_end_idxs):
mask[start_id : end_id + 1] = 1
labels = mask * batch['labels'][i, :]
batch['labels'][i, :] = torch.where(labels < 0, self.ignore_index, labels)
batch["labels"][i] = torch.where(batch["labels"][i] == 999999, 0, batch["labels"][i])
if self.padding_free:
# remove padding, `attention_mask` and add `position_ids`
attn_mask = batch.pop("attention_mask")
batch["input_ids"] = batch["input_ids"][attn_mask.bool()].unsqueeze(0)
batch["position_ids"] = attn_mask.cumsum(1)[attn_mask.bool()].unsqueeze(0) - 1
batch["labels"] = batch["labels"][attn_mask.bool()].unsqueeze(0)
batch["labels"][batch["position_ids"] == 0] = self.ignore_index
# Calculate cumulative sequence lengths for queries and keys to prevent graph breaks during further computations.
flattened_position_ids = batch["position_ids"].flatten()
indices_q = torch.arange(
flattened_position_ids.size(0), device=flattened_position_ids.device, dtype=torch.int32
)
batch["cu_seq_lens_q"] = torch.cat(
(
indices_q[flattened_position_ids == 0],
torch.tensor(
flattened_position_ids.size(), device=flattened_position_ids.device, dtype=torch.int32
),
)
)
batch["cu_seq_lens_k"] = batch["cu_seq_lens_q"]
# Determine maximum sequence lengths to prevent graph breaks during further computations.
batch["max_length_k"] = flattened_position_ids.max().item() + 1
batch["max_length_q"] = batch["max_length_k"]
return batch
```
## During training
To be as sure as possible that this error does not come from the training process itself, I additionally save the validation examples to a separate file and log the metrics.
Metrics from wandb:

I inspected the raw text saved during validation, and everything looked fine.
## After training
After the training process, I tried to load the model to check autoregressive inference:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_CACHE_DIR = "/home/raid/hf_cache"
DATA_CACHE_DIR = "/home/raid/datasets"
MODEL_PATH = "/home/raid/models/extended_qwen"
lora_path = "/home/raid/models/tool-plannings/qwen-tp-(1e-05)LR-(2)BATCH_SIZE-(4)GA_SIZE-(6)TRAIN_EPOCHS-(48)LORA_R-(96)LORA_ALPHA/adapter"
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
torch_dtype=torch.bfloat16,
use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
)
from peft import PeftModelForCausalLM
model = PeftModelForCausalLM.from_pretrained(
model,
lora_path # This contains adapter_model.safetensors, adapter_config.json, etc.
)
model
```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): Qwen2ForCausalLM(
(model): Qwen2Model(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(151671, 3584)
(modules_to_save): ModuleDict(
(default): Embedding(151671, 3584)
)
)
(layers): ModuleList(
(0-27): 28 x Qwen2DecoderLayer(
(self_attn): Qwen2Attention(
(q_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=3584, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(k_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=512, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=512, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(v_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=512, bias=True)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=512, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(o_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=3584, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
)
(mlp): Qwen2MLP(
(gate_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=18944, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=18944, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(up_proj): lora.Linear(
(base_layer): Linear(in_features=3584, out_features=18944, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=3584, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=18944, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(down_proj): lora.Linear(
(base_layer): Linear(in_features=18944, out_features=3584, bias=False)
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=18944, out_features=48, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=48, out_features=3584, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(lora_magnitude_vector): ModuleDict()
)
(act_fn): SiLU()
)
(input_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
(post_attention_layernorm): Qwen2RMSNorm((3584,), eps=1e-06)
)
)
(norm): Qwen2RMSNorm((3584,), eps=1e-06)
(rotary_emb): Qwen2RotaryEmbedding()
)
(lm_head): ModulesToSaveWrapper(
(original_module): Linear(in_features=3584, out_features=151671, bias=False)
(modules_to_save): ModuleDict(
(default): Linear(in_features=3584, out_features=151671, bias=False)
)
)
)
)
)
```
And during inference I got output like this:
```python
outputs = model.generate(
**inputs_tokens,
max_new_tokens=20,
)[0]
print(tokenizer.decode(outputs, skip_special_tokens=False))
```
```
...ngle stepA journey of a thousand miles'.<|im_end|>
<|im_start|>assistant # new tokens start here
write write write write write write write write write write write write write write write write write write write...
```
## Problem
I thought there was a mistake in saving the adapter, so instead of saving it, I tried to merge the model and the adapter immediately after training finished, in a script like this:
```python
merged_model = trainer.model.merge_and_unload(safe_merge=True)
merged_model.save_pretrained(f"/home/raid/models/{REVISION_NAME}")
```
and I got the following error:
```
MODEL DTYPE: torch.bfloat16
trainable params: 1,107,362,816 || all params: 8,720,162,304 || trainable%: 12.6989
{'train_runtime': 79.4632, 'train_samples_per_second': 1.258, 'train_steps_per_second': 0.038, 'train_loss': 108.3709716796875, 'epoch': 0.92}
100%|██████████████████████████████████████████████████████████████| 3/3 [01:19<00:00, 26.51s/it]
[rank2]: Traceback (most recent call last):
[rank2]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank2]: main()
[rank2]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank2]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank2]: return self._unload_and_optionally_merge(
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank2]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank2]: delta_weight = self.get_delta_weight(active_adapter)
[rank2]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank2]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank2]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank1]: Traceback (most recent call last):
[rank1]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank1]: main()
[rank1]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank1]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank1]: return self._unload_and_optionally_merge(
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank1]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank1]: delta_weight = self.get_delta_weight(active_adapter)
[rank1]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank1]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank1]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank0]: main()
[rank0]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank0]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank0]: return self._unload_and_optionally_merge(
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank0]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank0]: delta_weight = self.get_delta_weight(active_adapter)
[rank0]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank0]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank0]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
[rank3]: Traceback (most recent call last):
[rank3]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 268, in <module>
[rank3]: main()
[rank3]: File "/home/raid/dtishencko/git/function-calling/notebooks/train/train/train.py", line 264, in main
[rank3]: merged_model = trainer.model.merge_and_unload(safe_merge=True)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 892, in merge_and_unload
[rank3]: return self._unload_and_optionally_merge(
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/model.py", line 514, in _unload_and_optionally_merge
[rank3]: target.merge(safe_merge=safe_merge, adapter_names=adapter_names)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 477, in merge
[rank3]: delta_weight = self.get_delta_weight(active_adapter)
[rank3]: File "/home/raid/dtishencko/miniconda3/miniconda3/envs/DS/lib/python3.10/site-packages/peft/tuners/lora/layer.py", line 585, in get_delta_weight
[rank3]: output_tensor = transpose(weight_B @ weight_A, self.fan_in_fan_out) * self.scaling[adapter]
[rank3]: RuntimeError: inconsistent tensor size, expected tensor [1024] and src [7168] to have the same number of elements, but got 1024 and 7168 elements respectively
```
Besides, I tried loading the adapter manually with a safetensors script, something like this:
```python
from safetensors import safe_open
lora_state_dict = {}
with safe_open(lora_path, framework="pt", device="cpu") as f:
for key in f.keys():
new_key = key.replace("lora_A.", "lora_A.default.").replace("lora_B.", "lora_B.default.")
new_key = new_key.replace("embed_tokens.weight", "embed_tokens.original_module.weight")
new_key = new_key.replace("lm_head.weight", "lm_head.modules_to_save.default.weight")
lora_state_dict[new_key] = f.get_tensor(key)
m, u = model.load_state_dict(lora_state_dict, strict=False)
```
I was able to load the adapter into my model, but I was still getting catastrophic hallucinations like:
```
...<|im_start|>assistant
# generated spaces
```
I assume that the error lies in the adapter merge, possibly related to floating-point precision (bf16 vs. fp16) or something similar.
P.S. BTW, I also tried training the model with fp16 and had the same problem.
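For reference, a minimal debugging sketch (assuming `trainer.model` is the `PeftModelForCausalLM` printed above, with the adapter named `default`) that prints the LoRA A/B and base-layer shapes per module; it is only meant to spot which layer produces the [1024] vs. [7168] mismatch before calling `merge_and_unload`, not to fix it:
```python
# Debugging sketch, not a fix. Assumes `trainer.model` is the PeftModel shown above.
for name, module in trainer.model.named_modules():
    if hasattr(module, "lora_A") and "default" in module.lora_A:
        lora_a = module.lora_A["default"].weight   # expected shape: [r, in_features]
        lora_b = module.lora_B["default"].weight   # expected shape: [out_features, r]
        base = module.base_layer.weight            # expected shape: [out_features, in_features]
        print(name, tuple(lora_a.shape), tuple(lora_b.shape), tuple(base.shape))
```
If the shapes printed here come out flattened or sharded (the traceback shows several ranks, so distributed training is involved), that would suggest the merge is being attempted on partitioned weights rather than on the full tensors.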
### Expected behavior
Expected generation behavior after merging the adapter into my model.
|
{
"login": "DmitryDiTy",
"id": 90377536,
"node_id": "MDQ6VXNlcjkwMzc3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90377536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DmitryDiTy",
"html_url": "https://github.com/DmitryDiTy",
"followers_url": "https://api.github.com/users/DmitryDiTy/followers",
"following_url": "https://api.github.com/users/DmitryDiTy/following{/other_user}",
"gists_url": "https://api.github.com/users/DmitryDiTy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DmitryDiTy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DmitryDiTy/subscriptions",
"organizations_url": "https://api.github.com/users/DmitryDiTy/orgs",
"repos_url": "https://api.github.com/users/DmitryDiTy/repos",
"events_url": "https://api.github.com/users/DmitryDiTy/events{/privacy}",
"received_events_url": "https://api.github.com/users/DmitryDiTy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2368/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2367
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2367/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2367/comments
|
https://api.github.com/repos/huggingface/peft/issues/2367/events
|
https://github.com/huggingface/peft/issues/2367
| 2,838,045,820
|
I_kwDOIf9iDM6pKSR8
| 2,367
|
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.3 and are newly initialized: ['score.weight']
|
{
"login": "amritansh6",
"id": 46628209,
"node_id": "MDQ6VXNlcjQ2NjI4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/46628209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amritansh6",
"html_url": "https://github.com/amritansh6",
"followers_url": "https://api.github.com/users/amritansh6/followers",
"following_url": "https://api.github.com/users/amritansh6/following{/other_user}",
"gists_url": "https://api.github.com/users/amritansh6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amritansh6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amritansh6/subscriptions",
"organizations_url": "https://api.github.com/users/amritansh6/orgs",
"repos_url": "https://api.github.com/users/amritansh6/repos",
"events_url": "https://api.github.com/users/amritansh6/events{/privacy}",
"received_events_url": "https://api.github.com/users/amritansh6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-02-07T12:29:22
| 2025-02-10T11:01:57
| 2025-02-10T11:01:55
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
I have been trying to fine-tune Mistral 7B v0.3 for a downstream task using LoRA, and I get the following warning while running inference.
```python
base_model = AutoModelForSequenceClassification.from_pretrained(
model_id, use_auth_token="hf_***",
num_labels=2,
problem_type="single_label_classification"
)
base_model.config.pad_token_id = tokenizer.pad_token_id
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="SEQ_CLS",
modules_to_save=["score"]
)
model_with_lora = get_peft_model(base_model, lora_config)
model_with_lora.print_trainable_parameters()
training_args = TrainingArguments(
output_dir="./results_4",
evaluation_strategy="epoch",
save_strategy="steps",
save_steps=0.1,
logging_dir="./logs",
learning_rate=5e-5,
per_device_train_batch_size=2,
num_train_epochs=2,
weight_decay=0.01,
report_to="wandb",
save_total_limit=2,
logging_steps=10,
)
trainer = Trainer(
model=model_with_lora,
args=training_args,
train_dataset=hf_dataset,
eval_dataset=hf_eval_dataset,
tokenizer=tokenizer,
compute_metrics=None,
)
```
This is my training script. While loading the model for inference I get the following warning:
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.3 and are newly initialized: ['score.weight']
Can someone check this?
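For reference, a minimal inference-loading sketch (the adapter path below is a placeholder, not taken from the report above): the warning is emitted when the base checkpoint is loaded, because it has no trained classification head, and `PeftModel.from_pretrained` is what restores the `score` weights saved via `modules_to_save`:
```python
from transformers import AutoModelForSequenceClassification
from peft import PeftModel

# Loading the base checkpoint emits the "newly initialized: ['score.weight']" warning,
# since the base model has no trained classification head yet.
base_model = AutoModelForSequenceClassification.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.3",
    num_labels=2,
    problem_type="single_label_classification",
)
# Placeholder path: wherever the trained adapter / checkpoint was saved.
model = PeftModel.from_pretrained(base_model, "./results_4/checkpoint-1000")
model.eval()
```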
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
base_model = AutoModelForSequenceClassification.from_pretrained(
model_id, use_auth_token="hf_***",
num_labels=2,
problem_type="single_label_classification"
)
base_model.config.pad_token_id = tokenizer.pad_token_id
lora_config = LoraConfig(
r=8,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
bias="none",
task_type="SEQ_CLS",
modules_to_save=["score"]
)
model_with_lora = get_peft_model(base_model, lora_config)
model_with_lora.print_trainable_parameters()
training_args = TrainingArguments(
output_dir="./results_4",
evaluation_strategy="epoch",
save_strategy="steps",
save_steps=0.1,
logging_dir="./logs",
learning_rate=5e-5,
per_device_train_batch_size=2,
num_train_epochs=2,
weight_decay=0.01,
report_to="wandb",
save_total_limit=2,
logging_steps=10,
)
trainer = Trainer(
model=model_with_lora,
args=training_args,
train_dataset=hf_dataset,
eval_dataset=hf_eval_dataset,
tokenizer=tokenizer,
compute_metrics=None,
)
```
### Expected behavior
Ideally, this warning should not appear.
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2367/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2364
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2364/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2364/comments
|
https://api.github.com/repos/huggingface/peft/issues/2364/events
|
https://github.com/huggingface/peft/issues/2364
| 2,835,746,171
|
I_kwDOIf9iDM6pBg17
| 2,364
|
docs: broken links to boft
|
{
"login": "makelinux",
"id": 2335185,
"node_id": "MDQ6VXNlcjIzMzUxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makelinux",
"html_url": "https://github.com/makelinux",
"followers_url": "https://api.github.com/users/makelinux/followers",
"following_url": "https://api.github.com/users/makelinux/following{/other_user}",
"gists_url": "https://api.github.com/users/makelinux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makelinux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makelinux/subscriptions",
"organizations_url": "https://api.github.com/users/makelinux/orgs",
"repos_url": "https://api.github.com/users/makelinux/repos",
"events_url": "https://api.github.com/users/makelinux/events{/privacy}",
"received_events_url": "https://api.github.com/users/makelinux/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-06T14:48:16
| 2025-02-07T10:14:44
| 2025-02-07T10:14:44
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
on page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft
Snippet:
Take a look at the following step-by-step guides on how to finetune a model with BOFT:
[Dreambooth finetuning with BOFT](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_dreambooth)
[Controllable generation finetuning with BOFT (ControlNet)](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_controlnet)
### Expected behavior
perhaps the links should lead to
https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md
https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2364/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2364/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2362
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2362/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2362/comments
|
https://api.github.com/repos/huggingface/peft/issues/2362/events
|
https://github.com/huggingface/peft/issues/2362
| 2,833,885,059
|
I_kwDOIf9iDM6o6aeD
| 2,362
|
Import error
|
{
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ikamensh/followers",
"following_url": "https://api.github.com/users/ikamensh/following{/other_user}",
"gists_url": "https://api.github.com/users/ikamensh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikamensh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikamensh/subscriptions",
"organizations_url": "https://api.github.com/users/ikamensh/orgs",
"repos_url": "https://api.github.com/users/ikamensh/repos",
"events_url": "https://api.github.com/users/ikamensh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikamensh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2025-02-05T20:19:35
| 2025-02-05T20:38:50
| 2025-02-05T20:38:23
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Successfully installed accelerate-1.3.0 aiohappyeyeballs-2.4.4 aiohttp-3.11.11 aiosignal-1.3.2 bitsandbytes-0.45.1 datasets-3.2.0 dill-0.3.8 frozenlist-1.5.0 huggingface_hub-0.28.1 multidict-6.1.0 multiprocess-0.70.16 pandas-2.2.3 peft-0.14.0 propcache-0.2.1 pyarrow-19.0.0 pytz-2025.1 regex-2024.11.6 safetensors-0.5.2 tokenizers-0.13.3 tqdm-4.67.1 transformers-4.30.2 tzdata-2025.1 xxhash-3.5.0 yarl-1.18.3
root@77c297c83b18:/workspace# python qlora.py
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1086, in _get_module
return importlib.import_module("." + module_name, self.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[...]
File "/usr/local/lib/python3.11/dist-packages/transformers/trainer.py", line 212, in <module>
from peft import PeftModel
File "/usr/local/lib/python3.11/dist-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/usr/local/lib/python3.11/dist-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/usr/local/lib/python3.11/dist-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/usr/local/lib/python3.11/dist-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/usr/local/lib/python3.11/dist-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'Cache' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/qlora.py", line 17, in <module>
from transformers import (
File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1076, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/transformers/utils/import_utils.py", line 1088, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):
cannot import name 'Cache' from 'transformers' (/usr/local/lib/python3.11/dist-packages/transformers/__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
`pip install peft==0.14.0 transformers==4.30.2` on Linux + Python 3.11
run the following:
```python
from transformers import (
LlamaForCausalLM,
LlamaTokenizer,
Trainer,
TrainingArguments,
DataCollatorForLanguageModeling,
)
```
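A minimal check of the incompatibility, based on the traceback above (peft 0.14.0's `peft_model.py` imports cache classes that transformers 4.30.2 does not expose):
```python
import transformers

print(transformers.__version__)  # 4.30.2 in this environment
# peft 0.14.0 performs the equivalent of this import at import time and fails on 4.30.2:
from transformers import Cache, DynamicCache, EncoderDecoderCache  # ImportError here
```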
### Expected behavior
Imports work (or the crash happens outside of peft).
|
{
"login": "ikamensh",
"id": 23004004,
"node_id": "MDQ6VXNlcjIzMDA0MDA0",
"avatar_url": "https://avatars.githubusercontent.com/u/23004004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ikamensh",
"html_url": "https://github.com/ikamensh",
"followers_url": "https://api.github.com/users/ikamensh/followers",
"following_url": "https://api.github.com/users/ikamensh/following{/other_user}",
"gists_url": "https://api.github.com/users/ikamensh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ikamensh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ikamensh/subscriptions",
"organizations_url": "https://api.github.com/users/ikamensh/orgs",
"repos_url": "https://api.github.com/users/ikamensh/repos",
"events_url": "https://api.github.com/users/ikamensh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ikamensh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2362/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2359
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2359/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2359/comments
|
https://api.github.com/repos/huggingface/peft/issues/2359/events
|
https://github.com/huggingface/peft/issues/2359
| 2,829,346,186
|
I_kwDOIf9iDM6opGWK
| 2,359
|
Inconsistent documentation
|
{
"login": "makelinux",
"id": 2335185,
"node_id": "MDQ6VXNlcjIzMzUxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2335185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/makelinux",
"html_url": "https://github.com/makelinux",
"followers_url": "https://api.github.com/users/makelinux/followers",
"following_url": "https://api.github.com/users/makelinux/following{/other_user}",
"gists_url": "https://api.github.com/users/makelinux/gists{/gist_id}",
"starred_url": "https://api.github.com/users/makelinux/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/makelinux/subscriptions",
"organizations_url": "https://api.github.com/users/makelinux/orgs",
"repos_url": "https://api.github.com/users/makelinux/repos",
"events_url": "https://api.github.com/users/makelinux/events{/privacy}",
"received_events_url": "https://api.github.com/users/makelinux/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2025-02-04T07:25:29
| 2025-03-06T15:03:57
| null |
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Content of https://huggingface.co/docs/peft/index is not synchronised with ToC.
"How-to guides" is already "PEFT method guides".
"PEFT method guides" are under directory `task_guides`.

### Expected behavior
Consistent documentation.
Clear unambiguous names.
Links match titles and the content.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2359/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2355
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2355/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2355/comments
|
https://api.github.com/repos/huggingface/peft/issues/2355/events
|
https://github.com/huggingface/peft/issues/2355
| 2,823,704,539
|
I_kwDOIf9iDM6oTk_b
| 2,355
|
dataclass config handling
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-31T14:48:29
| 2025-03-10T15:04:18
| 2025-03-10T15:04:18
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:12:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 4090
GPU 1: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900F
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5600.0000
CPU min MHz: 800.0000
BogoMIPS: 3993.60
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] torchtune==0.5.0
[conda] blas 1.0 mkl
[conda] cuda-cudart 12.1.105 0 nvidia
[conda] cuda-cupti 12.1.105 0 nvidia
[conda] cuda-libraries 12.1.0 0 nvidia
[conda] cuda-nvrtc 12.1.105 0 nvidia
[conda] cuda-nvtx 12.1.105 0 nvidia
[conda] cuda-opencl 12.3.52 0 nvidia
[conda] cuda-runtime 12.1.0 0 nvidia
[conda] easy-torch 1.3.2 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libcublas 12.1.0.26 0 nvidia
[conda] libcufft 11.0.2.4 0 nvidia
[conda] libcurand 10.3.4.52 0 nvidia
[conda] libcusolver 11.4.4.55 0 nvidia
[conda] libcusparse 12.0.2.55 0 nvidia
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] libnvjitlink 12.1.105 0 nvidia
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py311h5eee18b_1
[conda] mkl_fft 1.3.8 py311h5eee18b_0
[conda] mkl_random 1.2.4 py311hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 8.9.2.26 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.20.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-forecasting 1.2.0 pypi_0 pypi
[conda] pytorch-lightning 2.2.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch 2.3.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt23cu121 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt23cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt23cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt23cu121 pypi_0 pypi
[conda] torch-summary 1.4.5 pypi_0 pypi
[conda] torchaudio 2.3.0 pypi_0 pypi
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.3.0.post0 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchtune 0.5.0 pypi_0 pypi
[conda] torchvision 0.18.0 pypi_0 pypi
[conda] triton 2.3.0 pypi_0 pypi
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
See PR
### Expected behavior
See PR
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2355/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2354
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2354/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2354/comments
|
https://api.github.com/repos/huggingface/peft/issues/2354/events
|
https://github.com/huggingface/peft/issues/2354
| 2,823,156,387
|
I_kwDOIf9iDM6oRfKj
| 2,354
|
Commented PeftConfig
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-31T11:33:50
| 2025-03-10T15:04:20
| 2025-03-10T15:04:20
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
The line `# from .config import PeftConfig, PeftType, PromptLearningConfig, TaskType` in `./peft/utils/__init__.py` is commented out.
Why?
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
from peft.utils import PeftConfig
### Expected behavior
Access to PeftConfig.
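For reference, a minimal check on peft 0.14.0 (the class is still importable, just not via `peft.utils`):
```python
# Works on peft 0.14.0: PeftConfig is exposed at the package top level and in peft.config.
from peft import PeftConfig, PeftType, TaskType

# The path below fails because the re-export in peft/utils/__init__.py is commented out:
# from peft.utils import PeftConfig
```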
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2354/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2348
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2348/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2348/comments
|
https://api.github.com/repos/huggingface/peft/issues/2348/events
|
https://github.com/huggingface/peft/issues/2348
| 2,811,752,952
|
I_kwDOIf9iDM6nl_H4
| 2,348
|
Incorrect Magnitude Calculation for DoRA Linear Layers (Violates DoRA Paper Methodology)
|
{
"login": "arcteryox",
"id": 195980235,
"node_id": "U_kgDOC65ryw",
"avatar_url": "https://avatars.githubusercontent.com/u/195980235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcteryox",
"html_url": "https://github.com/arcteryox",
"followers_url": "https://api.github.com/users/arcteryox/followers",
"following_url": "https://api.github.com/users/arcteryox/following{/other_user}",
"gists_url": "https://api.github.com/users/arcteryox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arcteryox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arcteryox/subscriptions",
"organizations_url": "https://api.github.com/users/arcteryox/orgs",
"repos_url": "https://api.github.com/users/arcteryox/repos",
"events_url": "https://api.github.com/users/arcteryox/events{/privacy}",
"received_events_url": "https://api.github.com/users/arcteryox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2025-01-26T19:43:50
| 2025-01-30T18:56:52
| 2025-01-30T18:41:26
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### **Description**
The current `DoraLinearLayer` incorrectly computes weight magnitude norms **per input channel** instead of **per output channel**, violating the methodology outlined in the [DoRA paper (Section 3.1)](https://arxiv.org/abs/2402.09353). This leads to degraded performance for linear layers (e.g., in LLMs).
---
### **Issue Details**
#### **Affected Code**:
`peft/tuners/lora/dora.py` → `DoraLinearLayer.get_weight_norm`
```python
def get_weight_norm(self, weight, lora_weight, scaling):
weight = transpose(weight, self.fan_in_fan_out) # ❌ Transposes to [in_features, out_features]
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1) # Norm over input channels (dim=1)
return weight_norm
```
#### **Problem**:
- For a linear layer with weight shape `[out_features, in_features]`, transposing to `[in_features, out_features]` causes `dim=1` to represent **input channels**, not output channels.
- This contradicts the DoRA paper’s requirement to compute magnitude **per output channel** (rows of the weight matrix).
---
### **Steps to Reproduce**
1. Initialize a DoRA-linear layer:
```python
base_layer = nn.Linear(10, 5) # out_features=5, in_features=10
dora_layer = DoraLinearLayer(fan_in_fan_out=False)
```
2. Check weight norm dimensions:
```python
weight = base_layer.weight # Shape [5, 10]
lora_weight = torch.randn(5, 10) # Simulate LoRA delta
norm = dora_layer.get_weight_norm(weight, lora_weight, scaling=1.0)
print(norm.shape) # Outputs [10] (input channels) instead of [5] (output channels)
```
---
### **Expected vs Actual Behavior**
| Expected (Per Paper) | Actual (Current Code) |
|-----------------------|-----------------------|
| Norms computed over **output channels** (`out_features`). | Norms computed over **input channels** (`in_features`). |
---
### **Proposed Fix**
Remove the transpose and compute norms over `dim=1` directly:
```python
def get_weight_norm(self, weight, lora_weight, scaling):
# Remove transpose - work directly with [out_features, in_features]
weight = weight + scaling * lora_weight
weight_norm = torch.linalg.norm(weight, dim=1) # ✅ Norm over output channels (dim=1)
return weight_norm
```
#### **Impact of Fix**:
- Aligns with DoRA paper’s methodology for linear layers.
- Convolutional layers (e.g., `DoraConv2dLayer`) are unaffected and already correct.
---
### **Additional Context**
1. **Paper Reference**:
- Section 3.1 defines magnitude as the L2 norm of **rows** (output channels) for linear layers.
- Example: For weight matrix `W ∈ ℝ^{d×k}`, magnitude `m_j = ||W_j||_2` (row-wise norm).
2. **Why This Matters**:
- Magnitude scaling is critical for DoRA’s ability to decouple direction and magnitude updates.
- Incorrect scaling invalidates the method’s theoretical guarantees and reduces performance (e.g., on LLM fine-tuning tasks).
---
### **Verification**
After applying the fix:
```python
print(norm.shape) # Now outputs [5] (correct for out_features=5)
```
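A self-contained sanity check of the row-wise norm described above, using plain PyTorch (independent of PEFT internals):
```python
import torch

W = torch.randn(5, 10)           # [out_features, in_features], as in the example above
m = torch.linalg.norm(W, dim=1)  # one L2 norm per row, i.e. per output channel
print(m.shape)                   # torch.Size([5])
```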
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
### **Steps to Reproduce**
1. Initialize a DoRA-linear layer:
```python
base_layer = nn.Linear(10, 5) # out_features=5, in_features=10
dora_layer = DoraLinearLayer(fan_in_fan_out=False)
```
2. Check weight norm dimensions:
```python
weight = base_layer.weight # Shape [5, 10]
lora_weight = torch.randn(5, 10) # Simulate LoRA delta
norm = dora_layer.get_weight_norm(weight, lora_weight, scaling=1.0)
print(norm.shape) # Outputs [10] (input channels) instead of [5] (output channels)
```
### Expected behavior
### **Expected vs Actual Behavior**
| Expected (Per Paper) | Actual (Current Code) |
|-----------------------|-----------------------|
| Norms computed over **output channels** (`out_features`). | Norms computed over **input channels** (`in_features`). |
|
{
"login": "arcteryox",
"id": 195980235,
"node_id": "U_kgDOC65ryw",
"avatar_url": "https://avatars.githubusercontent.com/u/195980235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arcteryox",
"html_url": "https://github.com/arcteryox",
"followers_url": "https://api.github.com/users/arcteryox/followers",
"following_url": "https://api.github.com/users/arcteryox/following{/other_user}",
"gists_url": "https://api.github.com/users/arcteryox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arcteryox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arcteryox/subscriptions",
"organizations_url": "https://api.github.com/users/arcteryox/orgs",
"repos_url": "https://api.github.com/users/arcteryox/repos",
"events_url": "https://api.github.com/users/arcteryox/events{/privacy}",
"received_events_url": "https://api.github.com/users/arcteryox/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2348/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2344
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2344/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2344/comments
|
https://api.github.com/repos/huggingface/peft/issues/2344/events
|
https://github.com/huggingface/peft/issues/2344
| 2,807,348,808
|
I_kwDOIf9iDM6nVL5I
| 2,344
|
FSDP2 and peft
|
{
"login": "psinger",
"id": 1677826,
"node_id": "MDQ6VXNlcjE2Nzc4MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1677826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psinger",
"html_url": "https://github.com/psinger",
"followers_url": "https://api.github.com/users/psinger/followers",
"following_url": "https://api.github.com/users/psinger/following{/other_user}",
"gists_url": "https://api.github.com/users/psinger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psinger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psinger/subscriptions",
"organizations_url": "https://api.github.com/users/psinger/orgs",
"repos_url": "https://api.github.com/users/psinger/repos",
"events_url": "https://api.github.com/users/psinger/events{/privacy}",
"received_events_url": "https://api.github.com/users/psinger/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-01-23T16:20:47
| 2025-03-03T15:04:06
| 2025-03-03T15:04:06
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
Hey, sorry if this is the wrong place. Feel free to move it to discussion.
I am trying to get PEFT working with FSDP2 and am wondering if someone else has attempted that already.
The issue is that I'm always getting errors along the lines of:
`RuntimeError: aten.mm.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`
Happy for any pointers.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2344/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2342
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2342/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2342/comments
|
https://api.github.com/repos/huggingface/peft/issues/2342/events
|
https://github.com/huggingface/peft/issues/2342
| 2,806,843,497
|
I_kwDOIf9iDM6nTQhp
| 2,342
|
CI: Add gptqmodel to the CI
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 5192585063,
"node_id": "LA_kwDOIf9iDM8AAAABNYCPZw",
"url": "https://api.github.com/repos/huggingface/peft/labels/wip",
"name": "wip",
"color": "fbca04",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 4
| 2025-01-23T12:57:29
| 2025-02-28T10:35:25
| null |
MEMBER
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
This issue is to track the TODO from [this comment](https://github.com/huggingface/peft/pull/2247#pullrequestreview-2569656574). Once optimum 1.24.0 and transformers 4.49.0 are released, we should enable gptqmodel in the CI (and remove auto-gptq).
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2342/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2342/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2339
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2339/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2339/comments
|
https://api.github.com/repos/huggingface/peft/issues/2339/events
|
https://github.com/huggingface/peft/issues/2339
| 2,802,697,166
|
I_kwDOIf9iDM6nDcPO
| 2,339
|
Peft version upgrade from 0.4.0 to 0.14.0 results in "No module named \u0027peft.utils.config\u0027" error
|
{
"login": "incchar",
"id": 184541983,
"node_id": "U_kgDOCv_jHw",
"avatar_url": "https://avatars.githubusercontent.com/u/184541983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/incchar",
"html_url": "https://github.com/incchar",
"followers_url": "https://api.github.com/users/incchar/followers",
"following_url": "https://api.github.com/users/incchar/following{/other_user}",
"gists_url": "https://api.github.com/users/incchar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/incchar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/incchar/subscriptions",
"organizations_url": "https://api.github.com/users/incchar/orgs",
"repos_url": "https://api.github.com/users/incchar/repos",
"events_url": "https://api.github.com/users/incchar/events{/privacy}",
"received_events_url": "https://api.github.com/users/incchar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-21T20:00:07
| 2025-03-02T15:03:46
| 2025-03-02T15:03:46
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Hello,
I'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.
This new version does not support the 0.4.0 version of peft, so we have upgraded to 0.14.0 and upgraded to a compatible diffusers version. The sagemaker endpoint deploys correctly with these new versions, but once it's run, we receive the following error:
`No module named \u0027peft.utils.config\u0027`
I dug around and found that there's no usage of peft.utils.config in our inference code. The only reference I could find is here, in the peft code itself: https://github.com/huggingface/peft/blob/main/src/peft/config.py. However, in this code, it looks like utils.config does not exist at all.
Here's what I'm currently using:
diffusers==0.32.2
peft==0.14.0
Is the peft library somehow breaking itself by looking for a peft.utils.config that doesn't exist? Have I missed a step that would create the utils.config file? Or is there another hidden dependency using peft.utils.config?
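One minimal workaround sketch, under the assumption that the reference to `peft.utils.config` comes from an object that was pickled or saved with peft 0.4.0 (which still shipped that module): alias the old module path to the current `peft.config` module before loading.
```python
import sys

import peft.config

# Sketch, not a confirmed fix: let old references to `peft.utils.config`
# resolve against the module that replaced it in newer peft releases.
sys.modules["peft.utils.config"] = peft.config
```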
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Create a sagemaker endpoint using the new `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` huggingface DLC image.
Use a requirements.txt that looks like the following:
diffusers==0.32.2
peft==0.14.0
Observe that all requests to the sagemaker endpoint respond with 500 errors.
### Expected behavior
The Sagemaker endpoint should continue to process requests as it did before the version upgrade (using peft 0.4.0)
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2339/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2339/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2337
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2337/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2337/comments
|
https://api.github.com/repos/huggingface/peft/issues/2337/events
|
https://github.com/huggingface/peft/issues/2337
| 2,800,325,334
|
I_kwDOIf9iDM6m6ZLW
| 2,337
|
AdaLora kthvalue(): selected number k out of range for dimension 0
|
{
"login": "PKaralupov",
"id": 152442722,
"node_id": "U_kgDOCRYXYg",
"avatar_url": "https://avatars.githubusercontent.com/u/152442722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PKaralupov",
"html_url": "https://github.com/PKaralupov",
"followers_url": "https://api.github.com/users/PKaralupov/followers",
"following_url": "https://api.github.com/users/PKaralupov/following{/other_user}",
"gists_url": "https://api.github.com/users/PKaralupov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PKaralupov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PKaralupov/subscriptions",
"organizations_url": "https://api.github.com/users/PKaralupov/orgs",
"repos_url": "https://api.github.com/users/PKaralupov/repos",
"events_url": "https://api.github.com/users/PKaralupov/events{/privacy}",
"received_events_url": "https://api.github.com/users/PKaralupov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2025-01-20T21:56:43
| 2025-01-23T05:25:02
| 2025-01-23T05:25:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Using docker image pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime
transformers 4.48.0
accelerate 1.2.1
peft 0.14.0
torch 2.5.1+cu124
Python 3.11.10
### Who can help?
@sayakpaul, @benjaminbossan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
Using peft AdaLora for finetuning Whiper large v3
```
model = prepare_model_for_kbit_training(model)
target_modules=["q_proj", "v_proj", "k_proj"]
t_modules = []
for id, (name, param) in enumerate(model.named_modules()):
if 'model.decoder' in name and any([module in name for module in target_modules]):
t_modules.append(name)
target_modules=t_modules
config = AdaLoraConfig(
init_r= 96,
target_r=64,
beta1=0.85,
beta2=0.85,
tinit=6000,
tfinal=11000,
deltaT=100,
lora_alpha=128,
lora_dropout=0.1,
target_modules=target_modules,
orth_reg_weight=0.5,
total_step= 13500
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```
Using trainer callback for update_and_allocate
```
class OptimizerStepCllback(TrainerCallback):
def on_optimizer_step(self, args, state, control, **kwargs):
model.update_and_allocate(state.global_step)
```
```
training_args = Seq2SeqTrainingArguments(
output_dir=args.output_dir,
per_device_train_batch_size=args.train_batchsize,
gradient_accumulation_steps=1,
learning_rate=args.learning_rate,
warmup_steps=args.warmup,
gradient_checkpointing=gradient_checkpointing,
fp16 = not torch.cuda.is_bf16_supported(),
bf16 = torch.cuda.is_bf16_supported(),
evaluation_strategy="epoch",
save_strategy="epoch",
num_train_epochs=args.num_epochs,
per_device_eval_batch_size=args.eval_batchsize,
predict_with_generate=True,
generation_max_length=256,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="eval_librispeech_asr_wer",
greater_is_better=False,
optim="adamw_bnb_8bit",
remove_unused_columns=False,
dataloader_num_workers=args.num_proc
)
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=raw_dataset["train"],
eval_dataset=raw_dataset["eval"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor
)
trainer.add_callback(OptimizerStepCllback)
trainer.train(resume_from_checkpoint=resume_from_checkpoint)
```
Error after 2500 steps:
```
ERROR 2025-01-18T20:36:17.740476732Z [resource.labels.taskName: workerpool0-0] trainer.train(resume_from_checkpoint=resume_from_checkpoint)
ERROR 2025-01-18T20:36:17.740483350Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2171, in train
ERROR 2025-01-18T20:36:17.740489895Z [resource.labels.taskName: workerpool0-0] return inner_training_loop(
ERROR 2025-01-18T20:36:17.740496256Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740502909Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2586, in _inner_training_loop
ERROR 2025-01-18T20:36:17.740509254Z [resource.labels.taskName: workerpool0-0] self.control = self.callback_handler.on_optimizer_step(args, self.state, self.control)
ERROR 2025-01-18T20:36:17.740515900Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740522460Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer_callback.py", line 491, in on_optimizer_step
ERROR 2025-01-18T20:36:17.740529629Z [resource.labels.taskName: workerpool0-0] return self.call_event("on_optimizer_step", args, state, control)
ERROR 2025-01-18T20:36:17.740535418Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740541637Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/transformers/trainer_callback.py", line 519, in call_event
ERROR 2025-01-18T20:36:17.740547789Z [resource.labels.taskName: workerpool0-0] result = getattr(callback, event)(
ERROR 2025-01-18T20:36:17.740554197Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740560199Z [resource.labels.taskName: workerpool0-0] File "/workspace/task.py", line 752, in on_optimizer_step
ERROR 2025-01-18T20:36:17.740566453Z [resource.labels.taskName: workerpool0-0] model.update_and_allocate(state.global_step)
ERROR 2025-01-18T20:36:17.740572647Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/peft/tuners/adalora/model.py", line 343, in update_and_allocate
ERROR 2025-01-18T20:36:17.740578651Z [resource.labels.taskName: workerpool0-0] _, rank_pattern = self.rankallocator.update_and_allocate(self.model, global_step, force_mask=True)
ERROR 2025-01-18T20:36:17.740589951Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740596643Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/peft/tuners/adalora/layer.py", line 342, in update_and_allocate
ERROR 2025-01-18T20:36:17.740605933Z [resource.labels.taskName: workerpool0-0] rank_pattern = self.mask_to_budget(model, budget)
ERROR 2025-01-18T20:36:17.740612342Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740618182Z [resource.labels.taskName: workerpool0-0] File "/opt/conda/lib/python3.11/site-packages/peft/tuners/adalora/layer.py", line 321, in mask_to_budget
ERROR 2025-01-18T20:36:17.740627268Z [resource.labels.taskName: workerpool0-0] mask_threshold = torch.kthvalue(
ERROR 2025-01-18T20:36:17.740634138Z [resource.labels.taskName: workerpool0-0] ^^^^^^^^^^^^^^^
ERROR 2025-01-18T20:36:17.740640759Z [resource.labels.taskName: workerpool0-0] RuntimeError: kthvalue(): selected number k out of range for dimension 0
```
### Expected behavior
I believe something is wrong with my configuration, as this error was not raised with other peft config parameters.
However, I am not sure why it happened.
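For what it's worth, one hypothesis (an assumption, not a confirmed diagnosis): with tinit=6000, tfinal=11000 and total_step=13500, the rank-budget schedule ends at total_step - tfinal = 2500, i.e. before it starts at tinit, which lines up with the failure after roughly 2500 steps. A minimal sketch of a config with a valid pruning window:
```python
import math

from peft import AdaLoraConfig

# Sketch under the assumption above: keep tinit + tfinal well below total_step,
# and derive total_step from the real number of optimizer steps.
# num_train_examples, train_batch_size, num_epochs, target_modules are
# hypothetical names standing in for this run's actual values.
steps_per_epoch = math.ceil(num_train_examples / train_batch_size)
total_steps = steps_per_epoch * num_epochs

config = AdaLoraConfig(
    init_r=96,
    target_r=64,
    beta1=0.85,
    beta2=0.85,
    tinit=int(0.1 * total_steps),   # warmup before rank pruning starts
    tfinal=int(0.2 * total_steps),  # final phase held at target_r
    deltaT=100,
    lora_alpha=128,
    lora_dropout=0.1,
    target_modules=target_modules,
    orth_reg_weight=0.5,
    total_step=total_steps,
)
```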
|
{
"login": "PKaralupov",
"id": 152442722,
"node_id": "U_kgDOCRYXYg",
"avatar_url": "https://avatars.githubusercontent.com/u/152442722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PKaralupov",
"html_url": "https://github.com/PKaralupov",
"followers_url": "https://api.github.com/users/PKaralupov/followers",
"following_url": "https://api.github.com/users/PKaralupov/following{/other_user}",
"gists_url": "https://api.github.com/users/PKaralupov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PKaralupov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PKaralupov/subscriptions",
"organizations_url": "https://api.github.com/users/PKaralupov/orgs",
"repos_url": "https://api.github.com/users/PKaralupov/repos",
"events_url": "https://api.github.com/users/PKaralupov/events{/privacy}",
"received_events_url": "https://api.github.com/users/PKaralupov/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2337/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2337/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2336
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2336/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2336/comments
|
https://api.github.com/repos/huggingface/peft/issues/2336/events
|
https://github.com/huggingface/peft/issues/2336
| 2,799,925,050
|
I_kwDOIf9iDM6m43c6
| 2,336
|
After using peft, the performance indicators decreased.
|
{
"login": "KQDtianxiaK",
"id": 92998962,
"node_id": "U_kgDOBYsNMg",
"avatar_url": "https://avatars.githubusercontent.com/u/92998962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KQDtianxiaK",
"html_url": "https://github.com/KQDtianxiaK",
"followers_url": "https://api.github.com/users/KQDtianxiaK/followers",
"following_url": "https://api.github.com/users/KQDtianxiaK/following{/other_user}",
"gists_url": "https://api.github.com/users/KQDtianxiaK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KQDtianxiaK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KQDtianxiaK/subscriptions",
"organizations_url": "https://api.github.com/users/KQDtianxiaK/orgs",
"repos_url": "https://api.github.com/users/KQDtianxiaK/repos",
"events_url": "https://api.github.com/users/KQDtianxiaK/events{/privacy}",
"received_events_url": "https://api.github.com/users/KQDtianxiaK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2025-01-20T17:04:33
| 2025-03-09T15:04:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Sorry, I just finished the previous question and I still have to ask you a new question. I use the DNABert2 model, whose original structure is as follows:
```
BertForSequenceClassification(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(4096, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertUnpadAttention(
(self): BertUnpadSelfAttention(
(dropout): Dropout(p=0.0, inplace=False)
(Wqkv): Linear(in_features=768, out_features=2304, bias=True)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(mlp): BertGatedLinearUnitMLP(
(gated_layers): Linear(in_features=768, out_features=6144, bias=False)
(act): GELU(approximate='none')
(wo): Linear(in_features=3072, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(layernorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
......
(11): BertLayer(
(attention): BertUnpadAttention(
(self): BertUnpadSelfAttention(
(dropout): Dropout(p=0.0, inplace=False)
(Wqkv): Linear(in_features=768, out_features=2304, bias=True)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(mlp): BertGatedLinearUnitMLP(
(gated_layers): Linear(in_features=768, out_features=6144, bias=False)
(act): GELU(approximate='none')
(wo): Linear(in_features=3072, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
(layernorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(dropout): Dropout(p=0.1, inplace=False)
(classifier): Linear(in_features=768, out_features=157, bias=True)
)
```
On three different classification tasks, I used OFT, LNTuning and other methods to add fine-tuning modules to linear layers such as ['Wqkv'/'wo'/'gated_layers'], or to parts like ['LayerNorm'], for supervised training. During training, the logs printed at each logging_steps look normal: the loss keeps decreasing and the performance metrics keep rising. However, when the saved model weights are loaded to evaluate on the independent test set, the performance is very poor, basically the same as having no training at all. The following warning is reported when loading:
```
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at model/DNABERT2-117M and are newly initialized: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight', 'classifier.bias', 'classifier.weight']
```
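One assumption consistent with that warning: the classification head is being re-initialized at load time because its trained weights were never stored with the adapter. A minimal sketch (OFT shown here; the same `modules_to_save` argument exists on the other configs) that registers the head explicitly:
```python
from peft import OFTConfig, TaskType, get_peft_model

# Sketch, not a confirmed fix: keep the classification head as a saved module so
# it is written out with the adapter and restored on loading, instead of being
# newly initialized from the base checkpoint.
config = OFTConfig(
    task_type=TaskType.SEQ_CLS,
    target_modules=["Wqkv", "wo", "gated_layers"],
    modules_to_save=["classifier"],
)
peft_model = get_peft_model(model, config)  # `model` is the loaded DNABert2 classifier
```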
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
And here is the code I use to load the model from each path that holds the best model weights:
```
def load_best_model_for_test(checkpoint_dir, fold_number):
fold_dir = os.path.join(checkpoint_dir)
checkpoint_folders = [d for d in os.scandir(fold_dir) if d.is_dir() and d.name.startswith('checkpoint')]
best_model_dir = max(checkpoint_folders, key=lambda d: os.path.getmtime(d.path), default=None)
best_model_path = best_model_dir.path
model = AutoPeftModelForSequenceClassification.from_pretrained(best_model_path, trust_remote_code=True, num_labels=2)
return model
def evaluate_on_test_set(models, test_dataset):
test_results = []
for model in models:
trainer = Trainer(
model=model,
args=training_args,
eval_dataset=test_dataset,
data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
compute_metrics=eval_predict
)
metrics = trainer.evaluate()
test_results.append(metrics)
average_metrics = {key: np.mean([result[key] for result in test_results]) for key in test_results[0].keys()}
return average_metrics
```
However, when I did full-parameter supervised fine-tuning without using peft, the final results on the independent test set were all normal. I tried different tasks, different peft methods, and different fine-tuned modules, and used the latest version of peft, but still can't solve the problem.
### Expected behavior
Find out the cause and fix the problem
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2336/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2336/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2330
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2330/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2330/comments
|
https://api.github.com/repos/huggingface/peft/issues/2330/events
|
https://github.com/huggingface/peft/issues/2330
| 2,789,282,442
|
I_kwDOIf9iDM6mQRKK
| 2,330
|
MoELorA
|
{
"login": "moghadas76",
"id": 23231913,
"node_id": "MDQ6VXNlcjIzMjMxOTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/23231913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moghadas76",
"html_url": "https://github.com/moghadas76",
"followers_url": "https://api.github.com/users/moghadas76/followers",
"following_url": "https://api.github.com/users/moghadas76/following{/other_user}",
"gists_url": "https://api.github.com/users/moghadas76/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moghadas76/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moghadas76/subscriptions",
"organizations_url": "https://api.github.com/users/moghadas76/orgs",
"repos_url": "https://api.github.com/users/moghadas76/repos",
"events_url": "https://api.github.com/users/moghadas76/events{/privacy}",
"received_events_url": "https://api.github.com/users/moghadas76/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-15T09:29:58
| 2025-02-23T15:03:30
| 2025-02-23T15:03:30
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
The paper "MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models" introduced MoLoRA, a Mixture-of-Experts approach using LoRA adapters. I am using it to conduct some research for my MSc thesis, and have implemented it in peft. I was wondering if this method is interesting and whether it would be worth it to clean up my code and submit a PR.
### Motivation
The motivation is to include more PEFT methods that the community can benefit from.
### Your contribution
I can contribute a PR with the implementation of MoLoRA.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2330/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2330/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2329
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2329/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2329/comments
|
https://api.github.com/repos/huggingface/peft/issues/2329/events
|
https://github.com/huggingface/peft/issues/2329
| 2,788,385,643
|
I_kwDOIf9iDM6mM2Nr
| 2,329
|
Request to integrate Structure Sparsity-based PEFT (S2FT)
|
{
"login": "Hanyuezhuohua",
"id": 58478765,
"node_id": "MDQ6VXNlcjU4NDc4NzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58478765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hanyuezhuohua",
"html_url": "https://github.com/Hanyuezhuohua",
"followers_url": "https://api.github.com/users/Hanyuezhuohua/followers",
"following_url": "https://api.github.com/users/Hanyuezhuohua/following{/other_user}",
"gists_url": "https://api.github.com/users/Hanyuezhuohua/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hanyuezhuohua/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hanyuezhuohua/subscriptions",
"organizations_url": "https://api.github.com/users/Hanyuezhuohua/orgs",
"repos_url": "https://api.github.com/users/Hanyuezhuohua/repos",
"events_url": "https://api.github.com/users/Hanyuezhuohua/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hanyuezhuohua/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 3
| 2025-01-14T22:18:53
| 2025-02-14T15:29:31
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
This request proposes to integrate S2FT, a pure structure sparsity-based PEFT method that concurrently achieves state-of-the-art fine-tuning performance, training efficiency, and inference scalability. More information about our NeurIPS paper, of which I'm the first author, can be found here: https://infini-ai-lab.github.io/S2FT-Page/. Here is our code for the implementation: https://github.com/Infini-AI-Lab/S2FT.
### Motivation
As far as I know, S2FT is the first method to offer efficient and flexible sparsity-based PEFT for LLMs (previous work only adds sparsity to LoRA or uses layerwise freezing). Here, we'd like to mention several important features of S2FT:
- Model Versatility: The design of our structure sparsity is based on the coupled structure in LLMs, which commonly exists in LLMs, VLMs, CNNs, and GNNs. Therefore, our method should work for many different structures.
- Generalization Ability: When evaluated on more recent models such as LLaMA-3-8B, we observe that our method can outperform both LoRA and Full FT, because we only modify a small fraction of the original parameters. Therefore, we can maintain most of the advanced abilities acquired during pre-training.
<img width="806" alt="Image" src="https://github.com/user-attachments/assets/ce046f07-5f0a-4ef3-a17f-13b836cf9473" />
- Training Efficiency: Instead of focusing on the parameter efficiency, S2FT can provide practical acceleration for model training. In our experiments, we show that S2FT can surpass LoRA in both training memory and time by 10%, which is important for resource-limited settings.
<img width="794" alt="Image" src="https://github.com/user-attachments/assets/39122dce-f948-421e-936e-592b08463bc6" />
- Scalable Serving: Finally, S2FT also shows good serving ability in comparison with LoRA, where we consider adapter fusion, switch, and parallelism. For these settings, S2FT always outperforms LoRA in both efficiency and performance.
<img width="809" alt="Image" src="https://github.com/user-attachments/assets/29dcc747-8cdf-42d9-9474-7b8c7b77c052" />
- Controllability: The model parameters to be updated in S2FT can be selected with user-specified functions, which LoRA cannot achieve.
Based on this information, although S2FT has just been released, we think it is a new kind of PEFT method with very good potential, and integrating it should benefit future sparsity-based PEFT methods.
### Your contribution
I will try to write most of the code for this new PEFT method based on the current PEFT codebase.
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2329/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2329/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2326
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2326/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2326/comments
|
https://api.github.com/repos/huggingface/peft/issues/2326/events
|
https://github.com/huggingface/peft/issues/2326
| 2,784,601,999
|
I_kwDOIf9iDM6l-aeP
| 2,326
|
AttributeError: ModulesToSaveWrapper has no attribute `dense`
|
{
"login": "KQDtianxiaK",
"id": 92998962,
"node_id": "U_kgDOBYsNMg",
"avatar_url": "https://avatars.githubusercontent.com/u/92998962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KQDtianxiaK",
"html_url": "https://github.com/KQDtianxiaK",
"followers_url": "https://api.github.com/users/KQDtianxiaK/followers",
"following_url": "https://api.github.com/users/KQDtianxiaK/following{/other_user}",
"gists_url": "https://api.github.com/users/KQDtianxiaK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KQDtianxiaK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KQDtianxiaK/subscriptions",
"organizations_url": "https://api.github.com/users/KQDtianxiaK/orgs",
"repos_url": "https://api.github.com/users/KQDtianxiaK/repos",
"events_url": "https://api.github.com/users/KQDtianxiaK/events{/privacy}",
"received_events_url": "https://api.github.com/users/KQDtianxiaK/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2025-01-13T16:49:37
| 2025-01-20T16:29:05
| 2025-01-20T16:29:04
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
**Original model architecture:**
```
EsmForSequenceClassification(
(esm): EsmModel(
(embeddings): EsmEmbeddings(
(word_embeddings): Embedding(33, 640, padding_idx=1)
(dropout): Dropout(p=0.0, inplace=False)
(position_embeddings): Embedding(1026, 640, padding_idx=1)
)
(encoder): EsmEncoder(
(layer): ModuleList(
(0-29): 30 x EsmLayer(
(attention): EsmAttention(
(self): EsmSelfAttention(
...
**(output): EsmSelfOutput(
(dense): Linear(in_features=640, out_features=640, bias=True)**
(dropout): Dropout(p=0.0, inplace=False)
)
...
**(intermediate): EsmIntermediate(
(dense): Linear(in_features=640, out_features=2560, bias=True)
)**
**(output): EsmOutput(
(dense): Linear(in_features=2560, out_features=640, bias=True)**
...
**(classifier): EsmClassificationHead(
(dense): Linear(in_features=640, out_features=640, bias=True)**
...
```
**my code:**
```
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=7)
config = OFTConfig(task_type=TaskType.SEQ_CLS, target_modules=['dense'])
model_OFT = get_peft_model(model, config)
```
**Peft model architecture:**
```
PeftModelForSequenceClassification(
(base_model): OFTModel(
(model): EsmForSequenceClassification(
(esm): EsmModel(
(embeddings): EsmEmbeddings(
(word_embeddings): Embedding(33, 640, padding_idx=1)
(dropout): Dropout(p=0.0, inplace=False)
(position_embeddings): Embedding(1026, 640, padding_idx=1)
)
(encoder): EsmEncoder(
(layer): ModuleList(
(0-29): 30 x EsmLayer(
(attention): EsmAttention(
(self): EsmSelfAttention(
(query): Linear(in_features=640, out_features=640, bias=True)
(key): Linear(in_features=640, out_features=640, bias=True)
(value): Linear(in_features=640, out_features=640, bias=True)
...
**(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
**(intermediate): EsmIntermediate(
(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=2560, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x320x320])
)**
)
**(output): EsmOutput(
(dense): oft.Linear(
(base_layer): Linear(in_features=2560, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
**(classifier): ModulesToSaveWrapper(
(original_module): EsmClassificationHead(
(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
(modules_to_save): ModuleDict(
(default): EsmClassificationHead(
**(dense): oft.Linear(
(base_layer): Linear(in_features=640, out_features=640, bias=True)
(oft_r): ParameterDict( (default): Parameter containing: [torch.FloatTensor of size 8x80x80])
)**
...
```
**adapter_config.json:**
```
{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "model/esm2_35M",
"block_share": false,
"coft": false,
"eps": 6e-05,
"inference_mode": true,
"init_weights": true,
"layers_pattern": null,
"layers_to_transform": null,
"module_dropout": 0.0,
"modules_to_save": [
"classifier",
"score"
],
"peft_type": "OFT",
"r": 8,
"rank_pattern": {},
"revision": null,
"target_modules": [
"dense"
],
"task_type": "SEQ_CLS"
}
```
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
**After training, I load the model from the saved checkpoint, using the following codes:**
```
best_model_path = best_model_dir.path
model_peft = AutoPeftModelForSequenceClassification.from_pretrained(best_model_path, num_labels=7)
```
**Got this error:**
```
Traceback (most recent call last):
File "/root/autodl-tmp/PEFT-PLM/ESM2_scop_OFT.py", line 213, in <module>
best_model = load_best_model_for_test(training_args.output_dir, i+1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/autodl-tmp/PEFT-PLM/ESM2_scop_OFT.py", line 189, in load_best_model_for_test
model_peft = AutoPeftModelForSequenceClassification.from_pretrained(best_model_path, num_labels=7)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/auto.py", line 130, in from_pretrained
return cls._target_peft_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 541, in from_pretrained
model = MODEL_TYPE_TO_PEFT_MODEL_MAPPING[config.task_type](
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 1311, in __init__
super().__init__(model, peft_config, adapter_name, **kwargs)
File "/root/miniconda3/lib/python3.12/site-packages/peft/peft_model.py", line 155, in __init__
self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/tuners/lycoris_utils.py", line 196, in __init__
super().__init__(model, config, adapter_name)
File "/root/miniconda3/lib/python3.12/site-packages/peft/tuners/tuners_utils.py", line 175, in __init__
self.inject_adapter(self.model, adapter_name)
File "/root/miniconda3/lib/python3.12/site-packages/peft/tuners/tuners_utils.py", line 430, in inject_adapter
parent, target, target_name = _get_submodules(model, key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/peft/utils/other.py", line 313, in _get_submodules
target = model.get_submodule(key)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py", line 717, in get_submodule
raise AttributeError(
AttributeError: ModulesToSaveWrapper has no attribute `dense`
```
### Expected behavior
Find out the cause and solve the problem
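For reference, a minimal workaround sketch under one assumption: the failure comes from `dense` being matched both inside the classifier (which SEQ_CLS wraps in a ModulesToSaveWrapper) and in the encoder, so the reload path finds an OFT layer where it expects a plain module. Restricting `target_modules` with a regex keeps the classification head out of OFT:
```python
from peft import OFTConfig, TaskType, get_peft_model

# Sketch, not a verified fix: only match `dense` layers inside the ESM encoder,
# leaving the classifier's `dense` untouched by OFT.
config = OFTConfig(
    task_type=TaskType.SEQ_CLS,
    target_modules=r".*esm\.encoder.*\.dense",
)
model_OFT = get_peft_model(model, config)
```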
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2326/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2326/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2322
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2322/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2322/comments
|
https://api.github.com/repos/huggingface/peft/issues/2322/events
|
https://github.com/huggingface/peft/issues/2322
| 2,782,367,731
|
I_kwDOIf9iDM6l14_z
| 2,322
|
model merge and unload feature for AdaLora
|
{
"login": "DaehanKim",
"id": 20675681,
"node_id": "MDQ6VXNlcjIwNjc1Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20675681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaehanKim",
"html_url": "https://github.com/DaehanKim",
"followers_url": "https://api.github.com/users/DaehanKim/followers",
"following_url": "https://api.github.com/users/DaehanKim/following{/other_user}",
"gists_url": "https://api.github.com/users/DaehanKim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaehanKim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaehanKim/subscriptions",
"organizations_url": "https://api.github.com/users/DaehanKim/orgs",
"repos_url": "https://api.github.com/users/DaehanKim/repos",
"events_url": "https://api.github.com/users/DaehanKim/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaehanKim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2025-01-12T09:20:01
| 2025-01-14T12:47:35
| 2025-01-14T12:47:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Unlike the LoRA or IA3 adapter types, AdaLora does not provide a method to merge adapter weights into the original weights so that it can be used as a standalone model. I built that feature for a personal use case and want to make a PR to make it accessible to everyone.
### Motivation
This feature lets people easily merge AdaLora adapter weights into the original weights, which makes further finetuning possible (e.g. when one wants to continue training from a checkpoint that was already trained with AdaLora, resuming is not possible with unmerged weights).
### Your contribution
I'll submit a PR. I followed the example of the IA3 `merge_and_unload`.
Below is an overview of the change:
```
def _unload_and_optionally_merge(
self,
merge: bool = True,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
eps: float = 1e-5
) -> torch.nn.Module:
"""
This method unloads the AdaLoRA adapter modules and optionally merges them into the base model weights.
Args:
merge (`bool`, defaults to `True`):
If True, merges the adapter weights into base model weights.
If False, it will only unload the adapters without merging.
safe_merge (`bool`, defaults to `False`):
If True, performs the merge operation with extra safety checks.
adapter_names (`List[str]`, *optional*):
The list of adapter names to merge. If None, all active adapters will be merged.
eps (`float`, defaults to 1e-5):
Small constant for numerical stability when dividing by ranknum.
Returns:
model (`torch.nn.Module`):
The resulting PyTorch model.
"""
if getattr(self.model, "is_loaded_in_8bit", False):
raise ValueError("Cannot merge adalora layers when the model is loaded in 8-bit mode")
if getattr(self.model, "is_loaded_in_4bit", False):
raise ValueError("Cannot merge adalora layers when the model is loaded in 4-bit mode")
if adapter_names is not None:
raise ValueError("AdaLoRA does not support merging specific adapters. Got adapter_names={adapter_names}")
# Create a copy of the base model state dict to modify
original_state_dict = self.model.state_dict()
if merge:
for name, module in self.model.named_modules():
if hasattr(module, "base_layer") and hasattr(module, "lora_A"):
# Extract base layer weight name
layer_name = name.replace(".lora_A", "")
layer_name = layer_name.replace("base_model.model.", "")
base_weight_name = f"{layer_name}.weight"
# Get SVD parameters
lora_A = module.lora_A["default"] # [r x d_in]
lora_B = module.lora_B["default"] # [d_out x r]
lora_E = module.lora_E["default"] # [r x 1]
# Calculate active ranks
ranknum = (lora_E != 0).sum()
scaling = module.scaling["default"] if hasattr(module, "scaling") else 16
# Safety check if requested
if safe_merge and (torch.isnan(lora_A).any() or torch.isnan(lora_B).any() or torch.isnan(lora_E).any()):
raise ValueError(f"NaN detected in adapter weights for layer {name}")
# Scale A with E: A' = AE
scaled_A = lora_A * lora_E # [r x d_in]
# Compute update: ΔW = BA'
if ranknum > 0:
update = (lora_B @ scaled_A) * scaling / (ranknum + eps)
else:
update = torch.zeros_like(original_state_dict[base_weight_name])
# Update base weights
if base_weight_name in original_state_dict:
original_state_dict[base_weight_name] += update
# Load the merged state dict back into a clean version of the model
self.model.load_state_dict(original_state_dict)
return self.model
def merge_and_unload(
self,
safe_merge: bool = False,
adapter_names: Optional[list[str]] = None,
eps: float = 1e-5
) -> torch.nn.Module:
"""
Merge the active adapters into the base model and unload the adapters.
Args:
safe_merge (`bool`, defaults to `False`):
If True, performs the merge operation with extra safety checks.
adapter_names (`List[str]`, *optional*):
List of adapter names to merge. If None, merges all active adapters.
eps (`float`, defaults to 1e-5):
Small constant for numerical stability when dividing by ranknum.
Returns:
`torch.nn.Module`: The merged model.
"""
return self._unload_and_optionally_merge(
safe_merge=safe_merge,
adapter_names=adapter_names,
eps=eps
)
def unload(self) -> torch.nn.Module:
"""
Unload the adapters without merging them into the base model.
Returns:
`torch.nn.Module`: The unloaded model.
"""
return self._unload_and_optionally_merge(merge=False)
```
|
{
"login": "DaehanKim",
"id": 20675681,
"node_id": "MDQ6VXNlcjIwNjc1Njgx",
"avatar_url": "https://avatars.githubusercontent.com/u/20675681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DaehanKim",
"html_url": "https://github.com/DaehanKim",
"followers_url": "https://api.github.com/users/DaehanKim/followers",
"following_url": "https://api.github.com/users/DaehanKim/following{/other_user}",
"gists_url": "https://api.github.com/users/DaehanKim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DaehanKim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DaehanKim/subscriptions",
"organizations_url": "https://api.github.com/users/DaehanKim/orgs",
"repos_url": "https://api.github.com/users/DaehanKim/repos",
"events_url": "https://api.github.com/users/DaehanKim/events{/privacy}",
"received_events_url": "https://api.github.com/users/DaehanKim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2322/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2321
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2321/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2321/comments
|
https://api.github.com/repos/huggingface/peft/issues/2321/events
|
https://github.com/huggingface/peft/issues/2321
| 2,782,134,190
|
I_kwDOIf9iDM6l0_-u
| 2,321
|
[Warning] `Merge lora module to 4-bit linear may get different generations`
|
{
"login": "steveepreston",
"id": 175405060,
"node_id": "U_kgDOCnR4BA",
"avatar_url": "https://avatars.githubusercontent.com/u/175405060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steveepreston",
"html_url": "https://github.com/steveepreston",
"followers_url": "https://api.github.com/users/steveepreston/followers",
"following_url": "https://api.github.com/users/steveepreston/following{/other_user}",
"gists_url": "https://api.github.com/users/steveepreston/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steveepreston/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steveepreston/subscriptions",
"organizations_url": "https://api.github.com/users/steveepreston/orgs",
"repos_url": "https://api.github.com/users/steveepreston/repos",
"events_url": "https://api.github.com/users/steveepreston/events{/privacy}",
"received_events_url": "https://api.github.com/users/steveepreston/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 15
| 2025-01-11T20:27:54
| 2025-03-06T15:30:07
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft 0.14.0
transformers 4.48.0
bitsandbytes 0.45.0
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
code:
```python
base_model_id = "gemma-2-27b-it"
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_quant_storage=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id,
quantization_config=quantization_config,
attn_implementation="sdpa",
torch_dtype=torch.bfloat16,
use_cache=True,
)
peft_model = PeftModel.from_pretrained(base_model, adapter_path)
--> merged_model = peft_model.merge_and_unload()
```
Warning:
```
UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
```
### Expected behavior
merge_and_unload() should merge correctly and without the warning.
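For reference, a common workaround sketch (assuming exact merged weights are the goal): reload the base model without 4-bit quantization, attach the adapter there, and merge, so the dequantize/requantize rounding behind the warning never happens.
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Sketch: merge against full-precision weights instead of the 4-bit model.
base_model_fp = AutoModelForCausalLM.from_pretrained(
    base_model_id,              # same id as above
    torch_dtype=torch.bfloat16,
)
peft_model = PeftModel.from_pretrained(base_model_fp, adapter_path)
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("merged-model")  # hypothetical output path
```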
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2321/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2319
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2319/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2319/comments
|
https://api.github.com/repos/huggingface/peft/issues/2319/events
|
https://github.com/huggingface/peft/issues/2319
| 2,779,143,092
|
I_kwDOIf9iDM6lplu0
| 2,319
|
Import error, is it a version issue?
|
{
"login": "zhangyangniubi",
"id": 157886832,
"node_id": "U_kgDOCWkpcA",
"avatar_url": "https://avatars.githubusercontent.com/u/157886832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangyangniubi",
"html_url": "https://github.com/zhangyangniubi",
"followers_url": "https://api.github.com/users/zhangyangniubi/followers",
"following_url": "https://api.github.com/users/zhangyangniubi/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangyangniubi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangyangniubi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangyangniubi/subscriptions",
"organizations_url": "https://api.github.com/users/zhangyangniubi/orgs",
"repos_url": "https://api.github.com/users/zhangyangniubi/repos",
"events_url": "https://api.github.com/users/zhangyangniubi/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangyangniubi/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2025-01-10T02:34:52
| 2025-01-13T10:13:18
| 2025-01-13T10:13:18
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
When I execute the finetune.py file, an error occurs as follows: cannot import name 'prepare_model_for_int8_training'. Is it a version issue? My peft version is 0.14.0.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
cannot import name 'prepare_model_for_int8_training' from 'peft' (/path/python3.10/site-packages/peft/__init__.py)
### Expected behavior
Who can help me answer this question? Thanks.
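If it helps, a minimal sketch under the assumption that the script was written against an older peft: the int8 helper was removed, and the k-bit variant covers the 8-bit case.
```python
# Sketch: use the renamed helper available in peft 0.14.0.
from peft import prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
```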
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2319/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2319/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2318
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2318/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2318/comments
|
https://api.github.com/repos/huggingface/peft/issues/2318/events
|
https://github.com/huggingface/peft/issues/2318
| 2,779,069,108
|
I_kwDOIf9iDM6lpTq0
| 2,318
|
Issue merging a Lora model to a SANA transformer
|
{
"login": "frutiemax92",
"id": 142428698,
"node_id": "U_kgDOCH1KGg",
"avatar_url": "https://avatars.githubusercontent.com/u/142428698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frutiemax92",
"html_url": "https://github.com/frutiemax92",
"followers_url": "https://api.github.com/users/frutiemax92/followers",
"following_url": "https://api.github.com/users/frutiemax92/following{/other_user}",
"gists_url": "https://api.github.com/users/frutiemax92/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frutiemax92/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frutiemax92/subscriptions",
"organizations_url": "https://api.github.com/users/frutiemax92/orgs",
"repos_url": "https://api.github.com/users/frutiemax92/repos",
"events_url": "https://api.github.com/users/frutiemax92/events{/privacy}",
"received_events_url": "https://api.github.com/users/frutiemax92/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 13
| 2025-01-10T01:24:35
| 2025-03-06T18:39:20
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft=0.14.0
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```
from diffusers import SanaPipeline, SanaPAGPipeline, SanaTransformer2DModel
from peft import PeftModel
transformer = SanaTransformer2DModel.from_pretrained("frutiemax/twistedreality-sana-1600m-1024px")
print(transformer)
peft_model = PeftModel.from_pretrained(transformer, '0')
model = peft_model.merge_and_unload()
```
### Expected behavior
I've trained a Lora model with PEFT on a SANA checkpoint. I can train and run inference using the PEFT model. However, when I try to merge the Lora into the base checkpoint, I encounter a shape mismatch. I've attached the Lora model with rank 4.

[0.zip](https://github.com/user-attachments/files/18369238/0.zip)
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2318/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2317
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2317/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2317/comments
|
https://api.github.com/repos/huggingface/peft/issues/2317/events
|
https://github.com/huggingface/peft/issues/2317
| 2,777,004,984
|
I_kwDOIf9iDM6lhbu4
| 2,317
|
Issue with finetuning with Corda
|
{
"login": "sirluk",
"id": 58826757,
"node_id": "MDQ6VXNlcjU4ODI2NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/58826757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sirluk",
"html_url": "https://github.com/sirluk",
"followers_url": "https://api.github.com/users/sirluk/followers",
"following_url": "https://api.github.com/users/sirluk/following{/other_user}",
"gists_url": "https://api.github.com/users/sirluk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sirluk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sirluk/subscriptions",
"organizations_url": "https://api.github.com/users/sirluk/orgs",
"repos_url": "https://api.github.com/users/sirluk/repos",
"events_url": "https://api.github.com/users/sirluk/events{/privacy}",
"received_events_url": "https://api.github.com/users/sirluk/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 13
| 2025-01-09T07:12:18
| 2025-02-10T10:22:03
| 2025-02-10T10:22:01
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft master branch (commit 8d3039b6cb724522625bff26988418cac5759ffa)
### Who can help?
@BenjaminBossan @5eqn
Hi, I would like to try out CorDA for my fine-tuning use case, but looking at the loss curves something seems to be going wrong, so I just wanted to verify that I implemented CorDA correctly.
This is the relevant code snippet from my script. I have a tokenized dataset which I wrap with a dataloader with a batch size of 1 to pass to the `preprocess_corda` function. Once `preprocess_corda` is done computing, I can just instantiate the PEFT model as usual with the required config, correct?
Would greatly appreciate some feedback.
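For reference, here is a condensed sketch of the flow I am assuming is correct; the model and `target_modules` are set up as in the full script below, and `corda_data_loader` refers to the batch-size-1 calibration dataloader built there, so this is a sketch rather than a standalone script:
```python
# Condensed CorDA flow (sketch): collect covariances via preprocess_corda, then
# build the PEFT model with init_lora_weights="corda".
import torch
from functools import partial
from transformers import AutoModelForCausalLM
from peft import get_peft_model, LoraConfig
from peft.tuners.lora.config import CordaConfig
from peft.tuners.lora.corda import preprocess_corda

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf").cuda()
target_modules = [n for n, m in model.named_modules() if isinstance(m, torch.nn.Linear)]

def run_model(model, dataloader):
    # Forward passes over the calibration samples so CorDA can record activations.
    for batch in dataloader:
        with torch.no_grad():
            model(batch["input_ids"].to(model.device))

lora_config = LoraConfig(
    init_lora_weights="corda",
    target_modules=target_modules,
    r=16,
    lora_alpha=1,
    lora_dropout=0,
    corda_config=CordaConfig(corda_method="ipm"),
)
# corda_data_loader: batch-size-1 DataLoader over ~256 tokenized samples, as in the full script.
preprocess_corda(model, lora_config, run_model=partial(run_model, model, corda_data_loader))
model = get_peft_model(model, lora_config)  # afterwards, train as usual with Trainer
```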
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
# imports
import torch
from functools import partial
from datasets import load_dataset, interleave_datasets, DatasetDict
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
from peft import get_peft_model, LoraConfig
from peft.tuners.lora.corda import preprocess_corda
from peft.tuners.lora.config import CordaConfig
# functions
def _tokenize_fn(prompts, completions, tokenizer):
prompt_tokens = tokenizer(prompts, add_special_tokens=False)["input_ids"]
input_tokens = tokenizer([x+y for x, y in zip(prompts, completions)], add_special_tokens=False)["input_ids"]
input_tokens = [[tokenizer.bos_token_id]+x+[tokenizer.eos_token_id] for x in input_tokens]
prompt_length = [len(x)+1 for x in prompt_tokens] # +1 for the bos token
input_length = [len(x) for x in input_tokens]
return {"input_ids": input_tokens, "prompt_length": prompt_length, "input_length": input_length}
class _TokenizerPromptSource:
def __init__(self, tokenizer_path, space_after_prompt=True):
# import promptsource
from promptsource_custom.templates import DatasetTemplates
self.dataset_templates = DatasetTemplates
self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
self.space_after_prompt = space_after_prompt
def __call__(self, examples):
examples = [dict(zip(examples.keys(), e)) for e in zip(*examples.values())]
prompts, completions = zip(*[self.prompt.apply(e) for e in examples])
if self.space_after_prompt:
prompts = [p + " " for p in prompts]
return _tokenize_fn(prompts, completions, self.tokenizer)
class TokenizerWinogrande(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("winogrande", "winogrande_xl")["multiple_choice_simple"]
class TokenizerHellaswag(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("hellaswag")["multiple_choice_simple"]
class TokenizerArcChallenge(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("ai2_arc", "ARC-Challenge")["multiple_choice_simple"]
class TokenizerArcEasy(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("ai2_arc", "ARC-Easy")["multiple_choice_simple"]
class TokenizerPIQA(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("piqa")["multiple_choice_simple"]
class TokenizerSIQA(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("social_i_qa")["multiple_choice_simple"]
class TokenizerOpenBookQA(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("openbookqa", "main")["multiple_choice_simple"]
class TokenizerBoolQ(_TokenizerPromptSource):
def __init__(self, tokenizer_path):
super().__init__(tokenizer_path)
self.prompt = self.dataset_templates("super_glue", "boolq")["multiple_choice_simple"]
class DataCollator:
def __init__(self, eos_token_id, max_length = None):
self.eos_token_id = eos_token_id
self.max_length = max_length
def __call__(self, batch):
batch = {k: [item[k] for item in batch] for k in batch[0]}
input_lengths = torch.stack(batch["input_length"])
prompt_lengths = torch.stack(batch["prompt_length"])
input_ids = torch.nn.utils.rnn.pad_sequence(batch["input_ids"], batch_first=True, padding_value=self.eos_token_id)
col_indices = torch.arange(input_ids.size(1)).unsqueeze(0)
attention_mask = col_indices < input_lengths.unsqueeze(1)
label_mask = torch.logical_or(col_indices < prompt_lengths.unsqueeze(1), ~attention_mask)
labels = input_ids.masked_fill(label_mask, -100)
if self.max_length is not None:
input_ids = input_ids[:, :self.max_length]
attention_mask = attention_mask[:, :self.max_length]
labels = labels[:, :self.max_length]
return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
# constants
CORDA = False
SEED = 0
BATCH_SIZE = 4
NUM_EPOCHS = 1
LEARNING_RATE = 5e-4
GRADIENT_ACCUMULATION_STEPS = 8
MODEL_NAME = "meta-llama/Llama-2-7b-hf"
MODEL_MAX_LENGTH = 1024
QA_DATASETS = [
"Rowan/hellaswag",
"allenai/winogrande",
"allenai/ai2_arc_challenge",
"allenai/ai2_arc_easy",
"ybisk/piqa",
"allenai/social_i_qa",
"allenai/openbookqa",
"boolq"
]
LOAD_DATASET_KWARGS = {
"Rowan/hellaswag": {"path": "Rowan/hellaswag"},
"allenai/winogrande": {"path": "allenai/winogrande", "name": "winogrande_xl"},
"allenai/ai2_arc_challenge": {"path": "allenai/ai2_arc", "name": "ARC-Challenge"},
"allenai/ai2_arc_easy": {"path": "allenai/ai2_arc", "name": "ARC-Easy"},
"ybisk/piqa": {"path": "ybisk/piqa"},
"allenai/social_i_qa": {"path": "allenai/social_i_qa"},
"allenai/openbookqa": {"path": "allenai/openbookqa", "name": "main"},
"boolq": {"path": "aps/super_glue", "name": "boolq"}
}
TOKENIZE_MAP = {
"Rowan/hellaswag": TokenizerHellaswag,
"allenai/winogrande": TokenizerWinogrande,
"allenai/ai2_arc_challenge": TokenizerArcChallenge,
"allenai/ai2_arc_easy": TokenizerArcEasy,
"ybisk/piqa": TokenizerPIQA,
"allenai/social_i_qa": TokenizerSIQA,
"allenai/openbookqa": TokenizerOpenBookQA,
"boolq": TokenizerBoolQ
}
# load model
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.cuda()
# load dataset
datasets = []
for dataset_name in QA_DATASETS:
tokenizer_cls = TOKENIZE_MAP[dataset_name]
tokenizer_wrapper = tokenizer_cls(tokenizer_path=MODEL_NAME)
load_dataset_kwargs = LOAD_DATASET_KWARGS[dataset_name]
if load_dataset_kwargs["path"] is not None:
load_dataset_kwargs["path"] = load_dataset_kwargs["path"]
datasets.append(load_dataset(**load_dataset_kwargs, trust_remote_code=True))
datasets[-1] = datasets[-1].map(tokenizer_wrapper, batched=True, remove_columns=datasets[-1]["train"].column_names)
datasets[-1].set_format(type="torch")
datasets[-1] = datasets[-1].shuffle(seed=SEED)
all_splits = set([n for ds in datasets for n in ds.keys()])
datasets = DatasetDict({split: interleave_datasets([ds[split] for ds in datasets if split in ds]) for split in all_splits})
data_collator = DataCollator(tokenizer_wrapper.tokenizer.eos_token_id, MODEL_MAX_LENGTH)
# get peft config
target_modules = [n for n, m in model.named_modules() if isinstance(m, torch.nn.Linear)]
if CORDA:
corda_config = CordaConfig(corda_method="ipm")
lora_config = LoraConfig(
init_lora_weights="corda",
target_modules=target_modules,
lora_alpha=1,
lora_dropout=0,
r=16,
corda_config=corda_config
)
sampled_dataset = datasets["train"].select(list(range(256)))
corda_data_loader = torch.utils.data.DataLoader(
sampled_dataset,
batch_size=1,
collate_fn=data_collator,
shuffle=True
)
def run_model(model, corda_data_loader):
for batch in corda_data_loader:
input_ids = batch["input_ids"]
input_ids = input_ids.to(model.device)
with torch.no_grad():
model(input_ids)
run_model = partial(run_model, model=model, corda_data_loader=corda_data_loader)
preprocess_corda(model, lora_config, run_model=run_model)
else:
lora_config = LoraConfig(
init_lora_weights=True,
target_modules=target_modules,
lora_alpha=1,
lora_dropout=0,
r=16
)
model = get_peft_model(model, lora_config)
training_args = TrainingArguments(
output_dir="output",
num_train_epochs=NUM_EPOCHS,
per_device_train_batch_size=BATCH_SIZE,
per_device_eval_batch_size=BATCH_SIZE,
seed=SEED,
learning_rate=LEARNING_RATE,
remove_unused_columns=False,
gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
report_to=[]
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=datasets["train"],
eval_dataset=datasets["validation"] if "validation" in datasets else None,
data_collator=data_collator
)
trainer.train()
```
### Expected behavior
I tried to follow the corda example in the documentation and thought it should work like this
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2317/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2317/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2316
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2316/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2316/comments
|
https://api.github.com/repos/huggingface/peft/issues/2316/events
|
https://github.com/huggingface/peft/issues/2316
| 2,776,718,486
|
I_kwDOIf9iDM6lgVyW
| 2,316
|
peft with DinoV2 and tasktype feature extraction
|
{
"login": "createdaccountbecauseIwantgithubcopilot",
"id": 109659313,
"node_id": "U_kgDOBolEsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109659313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot",
"html_url": "https://github.com/createdaccountbecauseIwantgithubcopilot",
"followers_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/followers",
"following_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/following{/other_user}",
"gists_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/subscriptions",
"organizations_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/orgs",
"repos_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/repos",
"events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/events{/privacy}",
"received_events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2025-01-09T02:48:36
| 2025-01-09T14:11:54
| 2025-01-09T14:11:54
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
irrelevant.
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoImageProcessor, Dinov2WithRegistersModel
from peft import LoraConfig, get_peft_model, TaskType
def setup_peft_model(model_name="facebook/dinov2-with-registers-large",
lora_r=8,
lora_alpha=32,
lora_dropout=0.1):
base_model = Dinov2WithRegistersModel.from_pretrained(model_name)
image_processor = AutoImageProcessor.from_pretrained(model_name)
peft_config = LoraConfig(
task_type=TaskType.FEATURE_EXTRACTION,
inference_mode=False,
r=lora_r,
lora_alpha=lora_alpha,
lora_dropout=lora_dropout,
target_modules=["query", "key", "value"]
)
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()
return peft_model, image_processor
def process_image(model, processor, image_size=(518, 518)):
sample_input = torch.randn(1, 3, *image_size)
with torch.no_grad():
outputs = model(sample_input)
return outputs
def main():
model, processor = setup_peft_model()
outputs = process_image(model, processor)
print(f"Output shape: {outputs.last_hidden_state.shape}")
if __name__ == "__main__":
main()
```
Error: TypeError: Dinov2WithRegistersModel.forward() got an unexpected keyword argument 'input_ids'
### Expected behavior
I would expect it to work.
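A possible workaround sketch, which is my own assumption rather than a confirmed fix: leaving out `task_type` should yield the generic `PeftModel`, whose forward simply passes keyword arguments through to the base model, so the image tensor can be supplied as `pixel_values`:
```python
# Sketch: LoRA on DINOv2 without task_type, calling the wrapped model with pixel_values.
import torch
from transformers import Dinov2WithRegistersModel
from peft import LoraConfig, get_peft_model

base_model = Dinov2WithRegistersModel.from_pretrained("facebook/dinov2-with-registers-large")
peft_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "key", "value"],
    # no task_type: the generic PeftModel forwards **kwargs to the base model
)
peft_model = get_peft_model(base_model, peft_config)
peft_model.print_trainable_parameters()

sample_input = torch.randn(1, 3, 518, 518)
with torch.no_grad():
    outputs = peft_model(pixel_values=sample_input)
print(f"Output shape: {outputs.last_hidden_state.shape}")
```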
|
{
"login": "createdaccountbecauseIwantgithubcopilot",
"id": 109659313,
"node_id": "U_kgDOBolEsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109659313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot",
"html_url": "https://github.com/createdaccountbecauseIwantgithubcopilot",
"followers_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/followers",
"following_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/following{/other_user}",
"gists_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/subscriptions",
"organizations_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/orgs",
"repos_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/repos",
"events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/events{/privacy}",
"received_events_url": "https://api.github.com/users/createdaccountbecauseIwantgithubcopilot/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2316/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2315
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2315/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2315/comments
|
https://api.github.com/repos/huggingface/peft/issues/2315/events
|
https://github.com/huggingface/peft/issues/2315
| 2,776,494,295
|
I_kwDOIf9iDM6lffDX
| 2,315
|
Prefix Tuning dimension error with Qwen2 and missing vocab_size for PaliGemma2
|
{
"login": "Florian-Dreyer",
"id": 64322175,
"node_id": "MDQ6VXNlcjY0MzIyMTc1",
"avatar_url": "https://avatars.githubusercontent.com/u/64322175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Florian-Dreyer",
"html_url": "https://github.com/Florian-Dreyer",
"followers_url": "https://api.github.com/users/Florian-Dreyer/followers",
"following_url": "https://api.github.com/users/Florian-Dreyer/following{/other_user}",
"gists_url": "https://api.github.com/users/Florian-Dreyer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Florian-Dreyer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Florian-Dreyer/subscriptions",
"organizations_url": "https://api.github.com/users/Florian-Dreyer/orgs",
"repos_url": "https://api.github.com/users/Florian-Dreyer/repos",
"events_url": "https://api.github.com/users/Florian-Dreyer/events{/privacy}",
"received_events_url": "https://api.github.com/users/Florian-Dreyer/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 15
| 2025-01-08T22:52:17
| 2025-02-25T15:04:15
| 2025-02-25T15:04:15
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
PEFT: 0.14.0
Transformers: 4.48.0.dev0
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
For Qwen we get the following error:
IndexError: Caught IndexError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py", line 84, in _worker
output = module(*input, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/peft/peft_model.py", line 1755, in forward
return self.base_model(input_ids=input_ids, inputs_embeds=inputs_embeds, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/{user_name}/venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1682, in forward
position_ids, rope_deltas = self.get_rope_index(
File "/home/{user_name}/venv/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1486, in get_rope_index
input_ids = input_ids[attention_mask[i] == 1]
IndexError: The shape of the mask [172] at index 0 does not match the shape of the indexed tensor [122] at index 0
And for PaliGemma2 this one:
AttributeError Traceback (most recent call last)
Cell In[68], line 8
6 tokenizer = processor.tokenizer
7 # Apply PEFT model adaptation
----> 8 peft_model = get_peft_model(model, peft_config)
10 # Print trainable parameters
11 peft_model.print_trainable_parameters()
File ~/venv/lib/python3.10/site-packages/peft/mapping.py:222, in get_peft_model(model, peft_config, adapter_name, mixed, autocast_adapter_dtype, revision, low_cpu_mem_usage)
220 if peft_config.is_prompt_learning:
221 peft_config = _prepare_prompt_learning_config(peft_config, model_config)
--> 222 return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
223 model,
224 peft_config,
225 adapter_name=adapter_name,
226 autocast_adapter_dtype=autocast_adapter_dtype,
227 low_cpu_mem_usage=low_cpu_mem_usage,
228 )
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:1684, in PeftModelForCausalLM.__init__(self, model, peft_config, adapter_name, **kwargs)
1681 def __init__(
1682 self, model: torch.nn.Module, peft_config: PeftConfig, adapter_name: str = "default", **kwargs
1683 ) -> None:
-> 1684 super().__init__(model, peft_config, adapter_name, **kwargs)
1685 self.base_model_prepare_inputs_for_generation = self.base_model.prepare_inputs_for_generation
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:170, in PeftModel.__init__(self, model, peft_config, adapter_name, autocast_adapter_dtype, low_cpu_mem_usage)
168 self._peft_config = {adapter_name: peft_config}
169 self.base_model = model
--> 170 self.add_adapter(adapter_name, peft_config, low_cpu_mem_usage=low_cpu_mem_usage)
171 else:
172 self._peft_config = None
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:958, in PeftModel.add_adapter(self, adapter_name, peft_config, low_cpu_mem_usage)
955 dict_config = self.config
957 peft_config = _prepare_prompt_learning_config(peft_config, dict_config)
--> 958 self._setup_prompt_encoder(adapter_name)
959 elif peft_config.is_adaption_prompt:
960 self.base_model.add_adapter(adapter_name, peft_config)
File ~/venv/lib/python3.10/site-packages/peft/peft_model.py:642, in PeftModel._setup_prompt_encoder(self, adapter_name)
635 for named_param, value in list(transformer_backbone.named_parameters()):
636 # for ZeRO-3, the tensor is sharded across accelerators and deepspeed modifies it to a tensor with shape
637 # [0] the actual unsharded shape is stored in "ds_shape" attribute special handling is needed in case
638 # the model is initialized in deepspeed.zero.Init() context or HfDeepSpeedConfig has been called before
639 # For reference refer to issue: https://github.com/huggingface/peft/issues/996
640 deepspeed_distributed_tensor_shape = getattr(value, "ds_shape", None)
--> 642 if value.shape[0] == self.base_model.config.vocab_size or (
643 deepspeed_distributed_tensor_shape is not None
644 and deepspeed_distributed_tensor_shape[0] == self.base_model.config.vocab_size
645 ):
646 word_embeddings = transformer_backbone.get_submodule(named_param.replace(".weight", ""))
647 break
File ~/venv/lib/python3.10/site-packages/transformers/configuration_utils.py:211, in PretrainedConfig.__getattribute__(self, key)
209 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
210 key = super().__getattribute__("attribute_map")[key]
--> 211 return super().__getattribute__(key)
AttributeError: 'PaliGemmaConfig' object has no attribute 'vocab_size'
You can find the notebook to replicate the errors here:
https://github.com/Florian-Dreyer/PEFT_BUG/blob/main/prefix_tuning_peft.ipynb
Just execute the cells to get the errors.
### Expected behavior
We would expect the models to be able to process the input. We tried just calling model(**inputs) but ran into the same error with Qwen. Note: The dimension difference is exactly the prefix length.
So the question is, how can we get the models to run? Is PaliGemma even supported?
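For the PaliGemma2 error, a workaround sketch we have not verified (the assumption is that the vocabulary size is only present on the nested `text_config` and can be mirrored onto the top-level config before `get_peft_model`); `model` and `peft_config` are the objects from the notebook:
```python
# Unverified workaround sketch: expose vocab_size on the top-level config so that
# PEFT's prompt-encoder setup can locate the word embeddings.
if not hasattr(model.config, "vocab_size") and hasattr(model.config, "text_config"):
    model.config.vocab_size = model.config.text_config.vocab_size

peft_model = get_peft_model(model, peft_config)
```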
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2315/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2310
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2310/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2310/comments
|
https://api.github.com/repos/huggingface/peft/issues/2310/events
|
https://github.com/huggingface/peft/issues/2310
| 2,772,061,506
|
I_kwDOIf9iDM6lOk1C
| 2,310
|
Comparison of Different Fine-Tuning Techniques for Conversational AI
|
{
"login": "ImamaDev",
"id": 172792947,
"node_id": "U_kgDOCkyccw",
"avatar_url": "https://avatars.githubusercontent.com/u/172792947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ImamaDev",
"html_url": "https://github.com/ImamaDev",
"followers_url": "https://api.github.com/users/ImamaDev/followers",
"following_url": "https://api.github.com/users/ImamaDev/following{/other_user}",
"gists_url": "https://api.github.com/users/ImamaDev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ImamaDev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ImamaDev/subscriptions",
"organizations_url": "https://api.github.com/users/ImamaDev/orgs",
"repos_url": "https://api.github.com/users/ImamaDev/repos",
"events_url": "https://api.github.com/users/ImamaDev/events{/privacy}",
"received_events_url": "https://api.github.com/users/ImamaDev/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[
{
"id": 4838806434,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTog",
"url": "https://api.github.com/repos/huggingface/peft/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
},
{
"id": 4838806438,
"node_id": "LA_kwDOIf9iDM8AAAABIGpTpg",
"url": "https://api.github.com/repos/huggingface/peft/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 7219265350,
"node_id": "LA_kwDOIf9iDM8AAAABrk0_Rg",
"url": "https://api.github.com/repos/huggingface/peft/labels/contributions-welcome",
"name": "contributions-welcome",
"color": "F2AD28",
"default": false,
"description": ""
}
] |
open
| false
| null |
[] | null | 14
| 2025-01-07T07:07:50
| 2025-03-10T10:15:49
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
It would be incredibly helpful to have a clear comparison or support for various fine-tuning techniques specifically for conversational AI. This feature could include insights into their strengths, limitations, and ideal use cases, helping practitioners choose the right approach for their needs.
Here’s a list of techniques to consider:
LoRa
AdaLoRa
BONE
VeRa
XLora
LN Tuning
VbLora
HRA (Householder Reflection Adaptation)
IA3 (Infused Adapter by Inhibiting and Amplifying Inner Activations)
Llama Adapter
CPT (Context-aware Prompt Tuning), etc.
### Motivation
With the growing number of fine-tuning techniques for conversational AI, it can be challenging to identify the most suitable approach for specific use cases. A comprehensive comparison of these techniques—highlighting their strengths, limitations, and ideal scenarios—would save time, reduce trial-and-error, and empower users to make informed decisions. This feature would bridge the gap between research and practical application, enabling more effective model customization and deployment.
### Your contribution
I’d be happy to collaborate on this! While I might not have a complete solution right now, I’m willing to contribute by gathering resources, reviewing papers, or helping organize comparisons. If others are interested in teaming up, we could work together on a PR to make this feature happen. Let’s connect and brainstorm how we can tackle this effectively!
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2310/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2310/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2307
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2307/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2307/comments
|
https://api.github.com/repos/huggingface/peft/issues/2307/events
|
https://github.com/huggingface/peft/issues/2307
| 2,771,731,382
|
I_kwDOIf9iDM6lNUO2
| 2,307
|
The provided `peft_type` 'PROMPT_TUNING' is not compatible with the `PeftMixedModel`.
|
{
"login": "Radu1999",
"id": 37249331,
"node_id": "MDQ6VXNlcjM3MjQ5MzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/37249331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Radu1999",
"html_url": "https://github.com/Radu1999",
"followers_url": "https://api.github.com/users/Radu1999/followers",
"following_url": "https://api.github.com/users/Radu1999/following{/other_user}",
"gists_url": "https://api.github.com/users/Radu1999/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Radu1999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Radu1999/subscriptions",
"organizations_url": "https://api.github.com/users/Radu1999/orgs",
"repos_url": "https://api.github.com/users/Radu1999/repos",
"events_url": "https://api.github.com/users/Radu1999/events{/privacy}",
"received_events_url": "https://api.github.com/users/Radu1999/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2025-01-07T01:46:18
| 2025-02-14T15:04:03
| 2025-02-14T15:04:02
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
PROMPT_TUNING is a useful adapter, and it would be great if we could combine it with LoRA.
### Motivation
Lots of fine-tunes on consumer-grade hardware leverage LoRA. It would be great if we could mix prompt tuning with LoRA in a plug-and-play fashion.
### Your contribution
I would like to submit a PR if there is interest.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2307/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2304
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2304/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2304/comments
|
https://api.github.com/repos/huggingface/peft/issues/2304/events
|
https://github.com/huggingface/peft/issues/2304
| 2,770,151,254
|
I_kwDOIf9iDM6lHSdW
| 2,304
|
a question about input_ids and attention_mask after prefix-tuning
|
{
"login": "MaTengSYSU",
"id": 104305243,
"node_id": "U_kgDOBjeSWw",
"avatar_url": "https://avatars.githubusercontent.com/u/104305243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaTengSYSU",
"html_url": "https://github.com/MaTengSYSU",
"followers_url": "https://api.github.com/users/MaTengSYSU/followers",
"following_url": "https://api.github.com/users/MaTengSYSU/following{/other_user}",
"gists_url": "https://api.github.com/users/MaTengSYSU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaTengSYSU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaTengSYSU/subscriptions",
"organizations_url": "https://api.github.com/users/MaTengSYSU/orgs",
"repos_url": "https://api.github.com/users/MaTengSYSU/repos",
"events_url": "https://api.github.com/users/MaTengSYSU/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaTengSYSU/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2025-01-06T08:44:01
| 2025-02-13T15:04:14
| 2025-02-13T15:04:14
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
There is an error report:
tensor([[ 1, 319, 13563, 1546, 263, 12758, 5199, 322, 385, 23116,
21082, 20255, 29889, 450, 20255, 4076, 8444, 29892, 13173, 29892,
322, 1248, 568, 6089, 304, 278, 5199, 29915, 29879, 5155,
29889, 3148, 1001, 29901, 29871, -200, 29871, 13, 4002, 29581,
278, 1967, 29889, 319, 1799, 9047, 13566, 29901]],
device='cuda:0')
input_ids shape: torch.Size([1, 48])
attention_mask shape: torch.Size([1, 68])
tensor([[True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True]], device='cuda:0')
Traceback (most recent call last):
File "/mnt/sda1/mateng/PEFT-MLLM/test.py", line 44, in <module>
print(description)
File "/mnt/sda1/mateng/PEFT-MLLM/llava/eval/run_llava.py", line 113, in eval_model
output_ids = model.generate(
File "/mnt/sda1/mateng/PEFT-MLLM/peft/src/peft/peft_model.py", line 1130, in generate
outputs = self.base_model.generate(**kwargs)
File "/home/mateng/anaconda3/envs/peft-mllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/mateng/anaconda3/envs/peft-mllm/lib/python3.10/site-packages/transformers/generation/utils.py", line 1602, in generate
return self.greedy_search(
File "/home/mateng/anaconda3/envs/peft-mllm/lib/python3.10/site-packages/transformers/generation/utils.py", line 2450, in greedy_search
outputs = self(
File "/home/mateng/anaconda3/envs/peft-mllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/mateng/anaconda3/envs/peft-mllm/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "/mnt/sda1/mateng/PEFT-MLLM/llava/model/language_model/llava_llama.py", line 84, in forward
) = self.prepare_inputs_labels_for_multimodal(
File "/mnt/sda1/mateng/PEFT-MLLM/llava/model/llava_arch.py", line 151, in prepare_inputs_labels_for_multimodal
input_ids = [cur_input_ids[cur_attention_mask] for cur_input_ids, cur_attention_mask in zip(input_ids, attention_mask)]
File "/mnt/sda1/mateng/PEFT-MLLM/llava/model/llava_arch.py", line 151, in <listcomp>
input_ids = [cur_input_ids[cur_attention_mask] for cur_input_ids, cur_attention_mask in zip(input_ids, attention_mask)]
IndexError: The shape of the mask [68] at index 0 does not match the shape of the indexed tensor [48] at index 0
### Who can help?
@BenjaminBossan @sayakpaul @stev
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
This is my test.py:
```
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path
from peft import PeftModel
from PIL import Image
import torch
# Load the LLaVA model and processor
model_name = get_model_name_from_path("liuhaotian/llava-v1.5-7b") # replace with your LLaVA model name
print(f"Loading LLaVA model: {model_name}")
tokenizer, model, image_processor, context_len = load_pretrained_model(
model_path="liuhaotian/llava-v1.5-7b", # model path
model_base=None, # can be specified if there is a base model (e.g. Vicuna)
model_name=model_name,
load_8bit=False, # whether to load the 8-bit quantized model
load_4bit=False, # whether to load the 4-bit quantized model
device_map="auto" # automatically assign devices
)
peft_model_path = "/mnt/sda1/mateng/PEFT-MLLM/checkpoints/llava/sqa/llava-sqa-prefix" # replace with the path to your fine-tuned adapter
model = PeftModel.from_pretrained(model, peft_model_path)
from llava.mm_utils import (
tokenizer_image_token,
)
from llava.eval.run_llava import eval_model
args = type('Args', (), {
"model_path": "liuhaotian/llava-v1.5-7b",
"model_base": None,
"model_name": 'llava-v1.5-7b',
"query": "Describe the image.",
"conv_mode": None,
"image_file": "/mnt/sda1/mateng/PEFT-MLLM/images/main_fig.jpg",
"sep": ",",
"temperature": 0,
"top_p": None,
"num_beams": 1,
"max_new_tokens": 512
})()
description = eval_model(args, model, tokenizer, image_processor, context_len)
print(description)
```
### Expected behavior
I am a novice in this field. How can I solve the mismatch between input_ids and attention_mask caused by prefix tuning? I don't want to affect the performance of the model; I want to use the fine-tuned model as-is.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2304/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2302
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2302/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2302/comments
|
https://api.github.com/repos/huggingface/peft/issues/2302/events
|
https://github.com/huggingface/peft/issues/2302
| 2,764,539,820
|
I_kwDOIf9iDM6kx4es
| 2,302
|
Bug in `get_peft_model_state_dict` when using vblora
|
{
"login": "KaiyangLi1992",
"id": 50897450,
"node_id": "MDQ6VXNlcjUwODk3NDUw",
"avatar_url": "https://avatars.githubusercontent.com/u/50897450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaiyangLi1992",
"html_url": "https://github.com/KaiyangLi1992",
"followers_url": "https://api.github.com/users/KaiyangLi1992/followers",
"following_url": "https://api.github.com/users/KaiyangLi1992/following{/other_user}",
"gists_url": "https://api.github.com/users/KaiyangLi1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaiyangLi1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaiyangLi1992/subscriptions",
"organizations_url": "https://api.github.com/users/KaiyangLi1992/orgs",
"repos_url": "https://api.github.com/users/KaiyangLi1992/repos",
"events_url": "https://api.github.com/users/KaiyangLi1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaiyangLi1992/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-31T16:52:25
| 2025-02-09T15:03:35
| 2025-02-09T15:03:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
The issue occurs when the following line is executed:
```python
to_return["base_model.vblora_vector_bank." + adapter_name] = state_dict["base_model.vblora_vector_bank." + adapter_name]
```
- The `state_dict` does not contain a key named `"base_model.vblora_vector_bank.default"`.
- Replacing it with the following resolves the issue:
```python
for i in state_dict.keys():
if "vblora_vector_bank" in i:
to_return[i] = state_dict[i]
```
---
I’m not sure if this is due to how I am calling or configuring the function.
### Who can help?
@leo
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1e6ysneOZflu_TB5Pgj5zLTWyXbZW1Ezy#scrollTo=Da58JezBno0V
### Expected behavior
Remove this bug.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2302/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2301
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2301/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2301/comments
|
https://api.github.com/repos/huggingface/peft/issues/2301/events
|
https://github.com/huggingface/peft/issues/2301
| 2,763,880,938
|
I_kwDOIf9iDM6kvXnq
| 2,301
|
How to pass in an attention _ mask that is one dimension more than input _ ids
|
{
"login": "Chinesehou97",
"id": 152465729,
"node_id": "U_kgDOCRZxQQ",
"avatar_url": "https://avatars.githubusercontent.com/u/152465729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chinesehou97",
"html_url": "https://github.com/Chinesehou97",
"followers_url": "https://api.github.com/users/Chinesehou97/followers",
"following_url": "https://api.github.com/users/Chinesehou97/following{/other_user}",
"gists_url": "https://api.github.com/users/Chinesehou97/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chinesehou97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chinesehou97/subscriptions",
"organizations_url": "https://api.github.com/users/Chinesehou97/orgs",
"repos_url": "https://api.github.com/users/Chinesehou97/repos",
"events_url": "https://api.github.com/users/Chinesehou97/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chinesehou97/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-12-31T02:26:14
| 2025-02-07T15:03:57
| 2025-02-07T15:03:57
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Hello, how can I pass in an `attention_mask` that has one more dimension than `input_ids`? For example: `output = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)`, where the `input_ids` shape is [batch_size, N] and the `attention_mask` shape is [batch_size, N, N].
Under this condition, when the above line of code is run, the following error will be reported:
File "/root/anaconda3/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py", line 179, in _expand_mask bsz, src_len = mask.size()
ValueError: too many values to unpack (expected 2)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
input_ids = torch.cat([
(torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|mmu|>']).to(device),
(torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|soi|>']).to(device),
image_tokens,
(torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|eoi|>']).to(device),
(torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|sot|>']).to(device),
input_ids
], dim=1).long()
attention_mask = create_attention_mask_for_mmu(input_ids.to(device),
eoi_id=int(uni_prompting.sptids_dict['<|eoi|>']))
cont_toks_list = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)
```
### Expected behavior
Load the fine-tuned model and run inference with it.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2301/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2299
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2299/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2299/comments
|
https://api.github.com/repos/huggingface/peft/issues/2299/events
|
https://github.com/huggingface/peft/issues/2299
| 2,761,224,091
|
I_kwDOIf9iDM6klO-b
| 2,299
|
Additional Information to prepare_model_for_kbit_training
|
{
"login": "NilBiescas",
"id": 98542048,
"node_id": "U_kgDOBd-h4A",
"avatar_url": "https://avatars.githubusercontent.com/u/98542048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NilBiescas",
"html_url": "https://github.com/NilBiescas",
"followers_url": "https://api.github.com/users/NilBiescas/followers",
"following_url": "https://api.github.com/users/NilBiescas/following{/other_user}",
"gists_url": "https://api.github.com/users/NilBiescas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NilBiescas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NilBiescas/subscriptions",
"organizations_url": "https://api.github.com/users/NilBiescas/orgs",
"repos_url": "https://api.github.com/users/NilBiescas/repos",
"events_url": "https://api.github.com/users/NilBiescas/events{/privacy}",
"received_events_url": "https://api.github.com/users/NilBiescas/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-12-27T19:52:16
| 2025-01-06T16:16:14
| 2025-01-06T16:07:09
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Add a note to the docstring of `prepare_model_for_kbit_training` stating that it sets `requires_grad` to `False` on all of the base model's parameters.
### Motivation
As this function is used before training, it would be good to know that it actually freezes the entire base model.
### Your contribution
I could add a line commenting that the function freezes the base model.
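For illustration, a minimal sketch of what the docstring could point out (the model name and the 8-bit setup are just placeholders):
```python
# Sketch: after prepare_model_for_kbit_training, every base-model parameter is frozen.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
model = prepare_model_for_kbit_training(model)
print(all(not p.requires_grad for p in model.parameters()))  # True: the base model is frozen
```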
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2299/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2298
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2298/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2298/comments
|
https://api.github.com/repos/huggingface/peft/issues/2298/events
|
https://github.com/huggingface/peft/issues/2298
| 2,760,388,162
|
I_kwDOIf9iDM6kiC5C
| 2,298
|
Qdora support
|
{
"login": "imrankh46",
"id": 103720343,
"node_id": "U_kgDOBi6llw",
"avatar_url": "https://avatars.githubusercontent.com/u/103720343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imrankh46",
"html_url": "https://github.com/imrankh46",
"followers_url": "https://api.github.com/users/imrankh46/followers",
"following_url": "https://api.github.com/users/imrankh46/following{/other_user}",
"gists_url": "https://api.github.com/users/imrankh46/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imrankh46/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imrankh46/subscriptions",
"organizations_url": "https://api.github.com/users/imrankh46/orgs",
"repos_url": "https://api.github.com/users/imrankh46/repos",
"events_url": "https://api.github.com/users/imrankh46/events{/privacy}",
"received_events_url": "https://api.github.com/users/imrankh46/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-12-27T04:47:54
| 2025-01-03T12:26:58
| 2025-01-03T12:26:58
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Is it possible to use QDoRA with PEFT?
### Motivation
QDoRA is better than QLoRA and performs close to full fine-tuning.
### Your contribution
```
peft_config = LoraConfig(
r=8,
lora_alpha=32,
lora_dropout=0.1,
qdora=True # adding qdora
)
```
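If DoRA on a quantized base model already covers this, the following sketch is roughly what I have in mind (assuming a bitsandbytes 4-bit setup and the existing `use_dora` flag in `LoraConfig`; the model name is a placeholder):
```python
# Sketch: DoRA adapter on a 4-bit quantized base model (a "QDoRA"-style setup).
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)
model = prepare_model_for_kbit_training(model)

peft_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    use_dora=True,  # DoRA decomposition on top of the quantized base weights
)
model = get_peft_model(model, peft_config)
```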
|
{
"login": "githubnemo",
"id": 264196,
"node_id": "MDQ6VXNlcjI2NDE5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/264196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubnemo",
"html_url": "https://github.com/githubnemo",
"followers_url": "https://api.github.com/users/githubnemo/followers",
"following_url": "https://api.github.com/users/githubnemo/following{/other_user}",
"gists_url": "https://api.github.com/users/githubnemo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubnemo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubnemo/subscriptions",
"organizations_url": "https://api.github.com/users/githubnemo/orgs",
"repos_url": "https://api.github.com/users/githubnemo/repos",
"events_url": "https://api.github.com/users/githubnemo/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubnemo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2298/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2296
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2296/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2296/comments
|
https://api.github.com/repos/huggingface/peft/issues/2296/events
|
https://github.com/huggingface/peft/issues/2296
| 2,757,010,941
|
I_kwDOIf9iDM6kVKX9
| 2,296
|
Error of load_adapter of Target module is not supported when using Qwen2-VL
|
{
"login": "bigmouthbabyguo-530",
"id": 28996090,
"node_id": "MDQ6VXNlcjI4OTk2MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/28996090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bigmouthbabyguo-530",
"html_url": "https://github.com/bigmouthbabyguo-530",
"followers_url": "https://api.github.com/users/bigmouthbabyguo-530/followers",
"following_url": "https://api.github.com/users/bigmouthbabyguo-530/following{/other_user}",
"gists_url": "https://api.github.com/users/bigmouthbabyguo-530/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bigmouthbabyguo-530/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bigmouthbabyguo-530/subscriptions",
"organizations_url": "https://api.github.com/users/bigmouthbabyguo-530/orgs",
"repos_url": "https://api.github.com/users/bigmouthbabyguo-530/repos",
"events_url": "https://api.github.com/users/bigmouthbabyguo-530/events{/privacy}",
"received_events_url": "https://api.github.com/users/bigmouthbabyguo-530/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-12-24T01:46:09
| 2025-02-05T15:04:03
| 2025-02-05T15:04:03
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Env info:
* torch 2.4.0
* peft 0.11.1
* transformers 4.46.1
I fine-tuned LoRA adapters for Qwen2-VL in a 5-fold way. My aim is to load the 5 LoRA models according to the following procedure:
```
from peft import PeftConfig, PeftModel, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import Qwen2VLForConditionalGeneration, Qwen2VLConfig
import torch
path="/xxx/saves/qwen2_vl-7b/kgroup_fold_0"
config = PeftConfig.from_pretrained(path)
model = Qwen2VLForConditionalGeneration.from_pretrained(config.base_model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct")
lora_path="/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"
model = PeftModel.from_pretrained(model, lora_path.format(fold=0), adapter_name=f"fold_{0}")
for i in range(1,5):
print(i)
model.load_adapter(lora_path.format(fold=i), adapter_name=f"fold_{i}")
```
But it reports a "module is not supported" error:
> File [~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:322](http://127.0.0.1:10003/lab/tree/~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py#line=321), in LoraModel._create_new_module(lora_config, adapter_name, target, **kwargs)
317 break
319 if new_module is None:
320
321 # no module could be matched
--> 322 raise ValueError(
323 f"Target module {target} is not supported. Currently, only the following modules are supported: "
324 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`."
325 )
327 return new_module
>ValueError: Target module ModuleDict(
(fold_0): Dropout(p=0.05, inplace=False)
(fold_1): Dropout(p=0.05, inplace=False)
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
According to this [issue](https://github.com/huggingface/peft/issues/2286), I tried to ignore the dropout module manually.
However, I would like to use a combination of these LoRAs:
```python
model.add_weighted_adapter(
    adapters=['fold_0', 'fold_1'],
    weights=[0.5, 0.5],
    adapter_name="combined",
    combination_type="svd",
)
```
But it also failed, reporting:
> File [~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:659](http://127.0.0.1:10003/lab/tree/~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py#line=658), in LoraModel.add_weighted_adapter(self, adapters, weights, adapter_name, combination_type, svd_rank, svd_clamp, svd_full_matrices, svd_driver, density, majority_sign_method)
651 target_lora_B.data[:, : loras_B.shape[1]] = loras_B
652 elif combination_type in [
653 "svd",
654 "ties_svd",
(...)
657 "magnitude_prune_svd",
658 ]:
--> 659 target_lora_A.data, target_lora_B.data = self._svd_generalized_task_arithmetic_weighted_adapter(
660 combination_type,
661 adapters,
662 weights,
663 new_rank,
664 target,
665 target_lora_A,
666 target_lora_B,
667 density,
668 majority_sign_method,
669 svd_clamp,
670 full_matrices=svd_full_matrices,
671 driver=svd_driver,
672 )
673 elif combination_type in ["linear", "ties", "dare_linear", "dare_ties", "magnitude_prune"]:
674 target_lora_A.data, target_lora_B.data = self._generalized_task_arithmetic_weighted_adapter(
675 combination_type, adapters, weights, target, density, majority_sign_method
676 )
> File [~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py:703](http://127.0.0.1:10003/lab/tree/~/miniconda3/envs/mllm/lib/python3.10/site-packages/peft/tuners/lora/model.py#line=702), in LoraModel._svd_generalized_task_arithmetic_weighted_adapter(self, combination_type, adapters, weights, new_rank, target, target_lora_A, target_lora_B, density, majority_sign_method, clamp, full_matrices, driver)
701 # if no valid adapter, nothing to do
702 if len(valid_adapters) == 0:
--> 703 raise ValueError("No matching LoRAs found. Please raise an issue on Github.")
704 delta_weight = [target.get_delta_weight(adapter) for adapter in valid_adapters]
705 valid_weights = torch.tensor(valid_weights).to(delta_weight[0].device)
> ValueError: No matching LoRAs found. Please raise an issue on Github.
As my aim is to merge the LoRAs with equal contributions, I'm not sure whether I can use the from_pretrained method like this:
```
for i in range(5):
model = PeftModel.from_pretrained(model, lora_path.format(fold=i))
model = model.merge_and_unload()
```
and whether a weight (like 0.2) could be applied in the merge_and_unload step.
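One memory-heavy workaround (a sketch assuming all folds share the same base model; paths are placeholders): merge each adapter into a fresh copy of the base model and average the merged state dicts — with equal weights this is the same as averaging the LoRA deltas.
```python
import torch
from peft import PeftModel
from transformers import Qwen2VLForConditionalGeneration

base_path = "/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct"  # placeholder
lora_path = "/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"          # placeholder
num_folds = 5
avg_state_dict = None

for fold in range(num_folds):
    base = Qwen2VLForConditionalGeneration.from_pretrained(base_path, torch_dtype=torch.bfloat16)
    merged = PeftModel.from_pretrained(base, lora_path.format(fold=fold)).merge_and_unload()
    sd = merged.state_dict()
    if avg_state_dict is None:
        avg_state_dict = {k: v.float() / num_folds for k, v in sd.items()}
    else:
        for k, v in sd.items():
            avg_state_dict[k] += v.float() / num_folds
    del base, merged, sd  # free memory before loading the next fold

# Load the averaged weights into a final model instance
# (assumes non-trainable buffers are identical across folds).
final_model = Qwen2VLForConditionalGeneration.from_pretrained(base_path, torch_dtype=torch.bfloat16)
final_model.load_state_dict(avg_state_dict)
```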
### Who can help?
@Ben
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
path="/xxx/saves/qwen2_vl-7b/kgroup_fold_0"
config = PeftConfig.from_pretrained(path)
model = Qwen2VLForConditionalGeneration.from_pretrained(config.base_model_name_or_path,
trust_remote_code=True,
torch_dtype=torch.bfloat16,
device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("/mnt_nas/download-model-mllm/Qwen2-VL-7B-Instruct")
lora_path="/xxx/saves/qwen2_vl-7b/kgroup_fold_{fold}"
model = PeftModel.from_pretrained(model, lora_path.format(fold=0), adapter_name=f"fold_{0}")
for i in range(1,5):
print(i)
model.load_adapter(lora_path.format(fold=i), adapter_name=f"fold_{i}")`
### Expected behavior
Expect to load_adapter successfully
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2296/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2295
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2295/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2295/comments
|
https://api.github.com/repos/huggingface/peft/issues/2295/events
|
https://github.com/huggingface/peft/issues/2295
| 2,755,569,416
|
I_kwDOIf9iDM6kPqcI
| 2,295
|
PEFT model doesn't update params when having changed LoRA config
|
{
"login": "d-kleine",
"id": 53251018,
"node_id": "MDQ6VXNlcjUzMjUxMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/53251018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-kleine",
"html_url": "https://github.com/d-kleine",
"followers_url": "https://api.github.com/users/d-kleine/followers",
"following_url": "https://api.github.com/users/d-kleine/following{/other_user}",
"gists_url": "https://api.github.com/users/d-kleine/gists{/gist_id}",
"starred_url": "https://api.github.com/users/d-kleine/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/d-kleine/subscriptions",
"organizations_url": "https://api.github.com/users/d-kleine/orgs",
"repos_url": "https://api.github.com/users/d-kleine/repos",
"events_url": "https://api.github.com/users/d-kleine/events{/privacy}",
"received_events_url": "https://api.github.com/users/d-kleine/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-12-23T09:03:50
| 2025-01-09T14:05:43
| 2025-01-09T14:05:43
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
I have noticed that when updating the `target_modules` setting in the LoRA config, the PEFT model params remain unchanged. This might affect other PEFT settings too.
My assumption is that `get_peft_model()` does not re-instantiate/update its settings once the model has already been wrapped.
System: Windows 11
Python: 3.11
peft: 0.14.0
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
For reproduction in a Jupyter Notebook:
```py
from peft import LoraConfig, get_peft_model, TaskType
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch
label_list = ['B-LOC', 'B-MISC', 'B-ORG', 'B-PER', 'I-LOC', 'I-MISC', 'I-ORG', 'I-PER', 'O']
# Initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
model = AutoModelForTokenClassification.from_pretrained(
"meta-llama/Llama-3.2-1B",
pad_token_id=tokenizer.eos_token_id,
torch_dtype=torch.bfloat16,
device_map="auto",
num_labels=len(label_list)
)
for name, module in model.named_modules():
print(name)
```
```py
lora_config = LoraConfig(
task_type=TaskType.TOKEN_CLS,
r=16,
lora_alpha=32,
target_modules=["q_proj", "v_proj"],
lora_dropout=0.1
)
```
```py
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
This outputs
```
trainable params: 1,722,377 || all params: 1,237,555,218 || trainable%: 0.1392
```
But when changing the above code without restarting the kernel to:
```py
lora_config = LoraConfig(
task_type=TaskType.TOKEN_CLS,
r=16,
lora_alpha=32,
target_modules=["layers.0.self_attn.q_proj", "layers.0.self_attn.v_proj"], # changed to specific heads
lora_dropout=0.1
)
```
and retrieving the trainable params again:
```py
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
it outputs again
```
trainable params: 1,722,377 || all params: 1,237,555,218 || trainable%: 0.1392
```
but after the update it should be
```
trainable params: 124,937 || all params: 1,235,957,778 || trainable%: 0.0101
```
### Expected behavior
After `lora_config` has been updated, calling `get_peft_model()` again should apply the current config.
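For reference, a minimal sketch of a workaround, assuming `get_peft_model()` modifies the base model in place: reload the base model before applying the new config (this reuses `tokenizer` and `label_list` from the reproduction above).
```python
# Reload a fresh base model, then wrap it with the updated config.
model = AutoModelForTokenClassification.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    pad_token_id=tokenizer.eos_token_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    num_labels=len(label_list),
)
new_lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,
    r=16,
    lora_alpha=32,
    target_modules=["layers.0.self_attn.q_proj", "layers.0.self_attn.v_proj"],
    lora_dropout=0.1,
)
model = get_peft_model(model, new_lora_config)
model.print_trainable_parameters()  # should now report the smaller parameter count
```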
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2295/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2293
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2293/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2293/comments
|
https://api.github.com/repos/huggingface/peft/issues/2293/events
|
https://github.com/huggingface/peft/issues/2293
| 2,754,818,792
|
I_kwDOIf9iDM6kMzLo
| 2,293
|
Is it possible to add LoRA on specific head?
|
{
"login": "SpeeeedLee",
"id": 132431571,
"node_id": "U_kgDOB-S-0w",
"avatar_url": "https://avatars.githubusercontent.com/u/132431571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpeeeedLee",
"html_url": "https://github.com/SpeeeedLee",
"followers_url": "https://api.github.com/users/SpeeeedLee/followers",
"following_url": "https://api.github.com/users/SpeeeedLee/following{/other_user}",
"gists_url": "https://api.github.com/users/SpeeeedLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpeeeedLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpeeeedLee/subscriptions",
"organizations_url": "https://api.github.com/users/SpeeeedLee/orgs",
"repos_url": "https://api.github.com/users/SpeeeedLee/repos",
"events_url": "https://api.github.com/users/SpeeeedLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpeeeedLee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 11
| 2024-12-22T19:57:54
| 2025-03-08T15:03:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Could I add LoRA only to some selected heads on the model?
I read some documentation [here](https://huggingface.co/docs/peft/developer_guides/custom_models), but am still not sure about how to implement my goal.
### Motivation
The current LoRA config lets users decide which matrices to add LoRA to; more fine-grained control over which heads receive LoRA would be beneficial for developers.
### Your contribution
I would appreciate some tips on how to implement this.
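A minimal sketch of what seems possible today, assuming "heads" means specific decoder layers or projection matrices rather than individual attention heads inside one matrix (the latter does not appear to be directly sliceable with LoRA): `target_modules` combined with `layers_to_transform` restricts where adapters are injected.
```python
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # which projection matrices to adapt
    layers_to_transform=[0, 5],           # only decoder layers 0 and 5
    layers_pattern="layers",              # name of the layer list in most HF decoders
)
# base_model: any supported transformers model loaded beforehand
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```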
| null |
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2293/timeline
| null | null | false
|
https://api.github.com/repos/huggingface/peft/issues/2292
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2292/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2292/comments
|
https://api.github.com/repos/huggingface/peft/issues/2292/events
|
https://github.com/huggingface/peft/issues/2292
| 2,753,852,491
|
I_kwDOIf9iDM6kJHRL
| 2,292
|
Cannot import name 'EncoderDecoderCache' from 'transformers'
|
{
"login": "Huang-jia-xuan",
"id": 122351359,
"node_id": "U_kgDOB0ru_w",
"avatar_url": "https://avatars.githubusercontent.com/u/122351359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Huang-jia-xuan",
"html_url": "https://github.com/Huang-jia-xuan",
"followers_url": "https://api.github.com/users/Huang-jia-xuan/followers",
"following_url": "https://api.github.com/users/Huang-jia-xuan/following{/other_user}",
"gists_url": "https://api.github.com/users/Huang-jia-xuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Huang-jia-xuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Huang-jia-xuan/subscriptions",
"organizations_url": "https://api.github.com/users/Huang-jia-xuan/orgs",
"repos_url": "https://api.github.com/users/Huang-jia-xuan/repos",
"events_url": "https://api.github.com/users/Huang-jia-xuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/Huang-jia-xuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-21T09:00:04
| 2025-03-08T15:03:21
| 2025-03-08T15:03:21
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers==4.39.3; peft==0.14.0
Maybe this comes from a transformers update, so which version can I use?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from src import models
from src.utils import IImage, resize
import numpy as np
from src.methods import rasg, sd, sr
from PIL import Image
from peft import get_peft_model, LoraConfig, TaskType
inp_model = models.load_inpainting_model('ds8_inp', device='cpu', cache=True)
lora_config = LoraConfig(
    task_type=TaskType.IMAGE_GENERATION,
    inference_mode=True,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
)
new_model = get_peft_model(inp_model.unet, lora_config)
print(new_model.state_dict().keys())
```
### Expected behavior
```
/root/miniconda3/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
Traceback (most recent call last):
File "/root/autodl-tmp/workspace/HD-Painter/paratest.py", line 6, in <module>
from peft import get_peft_model, LoraConfig, TaskType
File "/root/miniconda3/lib/python3.10/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/root/miniconda3/lib/python3.10/site-packages/peft/auto.py", line 32, in <module>
from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING
File "/root/miniconda3/lib/python3.10/site-packages/peft/mapping.py", line 25, in <module>
from .mixed_model import PeftMixedModel
File "/root/miniconda3/lib/python3.10/site-packages/peft/mixed_model.py", line 29, in <module>
from .peft_model import PeftModel
File "/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py", line 37, in <module>
from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel
ImportError: cannot import name 'Cache' from 'transformers' (/root/miniconda3/lib/python3.10/site-packages/transformers/__init__.py)
```
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2292/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2291
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2291/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2291/comments
|
https://api.github.com/repos/huggingface/peft/issues/2291/events
|
https://github.com/huggingface/peft/issues/2291
| 2,752,889,094
|
I_kwDOIf9iDM6kFcEG
| 2,291
|
get_peft_model() adds unwanted arguments to CLIPModel
|
{
"login": "TimonKaeppel",
"id": 57712240,
"node_id": "MDQ6VXNlcjU3NzEyMjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/57712240?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TimonKaeppel",
"html_url": "https://github.com/TimonKaeppel",
"followers_url": "https://api.github.com/users/TimonKaeppel/followers",
"following_url": "https://api.github.com/users/TimonKaeppel/following{/other_user}",
"gists_url": "https://api.github.com/users/TimonKaeppel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TimonKaeppel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TimonKaeppel/subscriptions",
"organizations_url": "https://api.github.com/users/TimonKaeppel/orgs",
"repos_url": "https://api.github.com/users/TimonKaeppel/repos",
"events_url": "https://api.github.com/users/TimonKaeppel/events{/privacy}",
"received_events_url": "https://api.github.com/users/TimonKaeppel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-20T14:48:35
| 2024-12-29T02:33:25
| 2024-12-29T02:33:25
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Python 3.12.6
peft Version: 0.14.0
transformers Version: 4.47.1
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
from peft import get_peft_model, LoraConfig, TaskType
from transformers import CLIPModel
# Load the pre-trained CLIP model
model = CLIPModel.from_pretrained(model_name)
# Define the PEFT configuration
peft_config = LoraConfig(
task_type=TaskType.FEATURE_EXTRACTION, # CLIP is used for feature extraction
inference_mode=False, # Enable fine-tuning mode
r=16, # LoRA rank (adapter size)
lora_alpha=32, # Scaling factor for LoRA updates
lora_dropout=0.1, # Dropout probability in LoRA layers
bias="none", # Usually 'none', 'all', or 'lora_only'
target_modules=["q_proj", "v_proj"] # Typical attention projections for transformers # , "k_proj"
)
print(f"Forward call arguments: {model.forward.__code__.co_varnames}")
model = get_peft_model(model, peft_config)
print(f"Forward call arguments: {model.forward.__code__.co_varnames}")
model.print_trainable_parameters()
```
-----
This prints:
```
Forward call arguments: ('self', 'input_ids', 'pixel_values', 'attention_mask', 'position_ids', 'return_loss', 'output_attentions', 'output_hidden_states', 'interpolate_pos_encoding', 'return_dict', 'vision_outputs', 'text_outputs', 'image_embeds', 'text_embeds', 'logit_scale', 'logits_per_text', 'logits_per_image', 'loss', 'output')
Forward call arguments: ('self', 'input_ids', 'attention_mask', 'inputs_embeds', 'output_attentions', 'output_hidden_states', 'return_dict', 'task_ids', 'kwargs', 'peft_config', 'k', 'v', 'batch_size', 'prefix_attention_mask', 'prompts')
```
------
When training this wrapped peft model the trainer throws
```
TypeError: CLIPModel.forward() got an unexpected keyword argument 'inputs_embeds'
```
### Expected behavior
Peft should not add incompatible keyword arguments to base models
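One hedged workaround: leave `task_type` unset so PEFT wraps CLIP with the generic `PeftModel`, which simply forwards the base model's own arguments instead of imposing the feature-extraction signature. A sketch (the checkpoint name is just an example):
```python
from peft import LoraConfig, get_peft_model
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    target_modules=["q_proj", "v_proj"],
    # no task_type -> generic PeftModel wrapper, no injected kwargs
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```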
|
{
"login": "githubnemo",
"id": 264196,
"node_id": "MDQ6VXNlcjI2NDE5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/264196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/githubnemo",
"html_url": "https://github.com/githubnemo",
"followers_url": "https://api.github.com/users/githubnemo/followers",
"following_url": "https://api.github.com/users/githubnemo/following{/other_user}",
"gists_url": "https://api.github.com/users/githubnemo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/githubnemo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/githubnemo/subscriptions",
"organizations_url": "https://api.github.com/users/githubnemo/orgs",
"repos_url": "https://api.github.com/users/githubnemo/repos",
"events_url": "https://api.github.com/users/githubnemo/events{/privacy}",
"received_events_url": "https://api.github.com/users/githubnemo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2291/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2289
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2289/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2289/comments
|
https://api.github.com/repos/huggingface/peft/issues/2289/events
|
https://github.com/huggingface/peft/issues/2289
| 2,749,020,906
|
I_kwDOIf9iDM6j2rrq
| 2,289
|
Inconsistent Parameter Mismatches After Merging PEFT and Base Models
|
{
"login": "enhulu-ms",
"id": 182672401,
"node_id": "U_kgDOCuNcEQ",
"avatar_url": "https://avatars.githubusercontent.com/u/182672401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enhulu-ms",
"html_url": "https://github.com/enhulu-ms",
"followers_url": "https://api.github.com/users/enhulu-ms/followers",
"following_url": "https://api.github.com/users/enhulu-ms/following{/other_user}",
"gists_url": "https://api.github.com/users/enhulu-ms/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enhulu-ms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enhulu-ms/subscriptions",
"organizations_url": "https://api.github.com/users/enhulu-ms/orgs",
"repos_url": "https://api.github.com/users/enhulu-ms/repos",
"events_url": "https://api.github.com/users/enhulu-ms/events{/privacy}",
"received_events_url": "https://api.github.com/users/enhulu-ms/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 32
| 2024-12-19T00:54:08
| 2025-01-20T17:28:17
| 2025-01-20T17:28:16
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft 0.14.0, transformers 4.45.2, accelerate 1.0.1, Python 3.11.9, windows
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers_custom.modeling import CustomConfig
from transformers_custom.tokenization import CustomTokenizer
from transformers_custom.multitask_model import CustomForSequenceClassificationMultitask
from peft import PeftModel
import torch
def compare_model_params(model1, model2):
# Extract state dictionaries
sd1 = model1.state_dict()
sd2 = model2.state_dict()
# First, check if they have the same keys
keys1 = set(sd1.keys())
keys2 = set(sd2.keys())
# Find parameters that are not present in both
missing_in_model2 = keys1 - keys2
missing_in_model1 = keys2 - keys1
if missing_in_model2:
print("Parameters missing in model2:", missing_in_model2)
if missing_in_model1:
print("Parameters missing in model1:", missing_in_model1)
# Now compare parameters that exist in both
mismatch_names = []
for key in sorted(keys1.intersection(keys2)):
param1 = sd1[key]
param2 = sd2[key]
# Check for shape mismatch
if param1.shape != param2.shape:
mismatch_names.append(key)
continue
# Check for value mismatch
if not torch.allclose(param1, param2):
print("Mismatched values for parameter:", key, f"model1: {param1}", f"model2: {param2}")
mismatch_names.append(key)
# Print out results
if mismatch_names:
print("Mismatched parameters:", mismatch_names)
else:
print("All parameters match perfectly.")
base_model_path = r"C:\models\tms\download\base2"
peft_path = r"C:\models\tms\download\adapter2"
merged_model_path = r"C:\models\tms\download\adapter2_merged\peft_merged"
config = CustomConfig.from_pretrained(
base_model_path,
num_labels=8,
finetuning_task=None,
cache_dir=None,
revision="main",
)
base_model = CustomForSequenceClassificationMultitask.from_pretrained(
base_model_path,
config=config,
cache_dir=None,
revision="main",
)
peft_model = PeftModel.from_pretrained(base_model, peft_path)
peft_model_merged = peft_model.merge_and_unload()
peft_model_merged.eval()
merged_config = CustomConfig.from_pretrained(
merged_model_path,
num_labels=8,
finetuning_task=None,
cache_dir=None,
revision="main",
)
merged_model = CustomForSequenceClassificationMultitask.from_pretrained(
merged_model_path,
config=merged_config,
cache_dir=None,
revision="main",
)
merged_model.eval()
compare_model_params(peft_model_merged, merged_model)
```
### Expected behavior
I saved the base model and the merged model (using save_pretrained) after training and calling merge_and_unload(). I also saved the PEFT model (via trainer.save_model). After loading the PEFT parameters on top of the base model and calling merge_and_unload(), I compared the newly merged model with the previously saved merged model. Some parameters do not match, and the specific mismatches change with each comparison run. For example, sometimes the mismatched parameters are ['classifier2.class_dense.bias', 'classifier2.class_dense.weight', ...] and other times ['custom.encoder.layer.19.attention.self.query.weight'].
How can I resolve this issue? Ideally, there should be no mismatches, or at least the mismatches should be consistent across runs.
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2289/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2286
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2286/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2286/comments
|
https://api.github.com/repos/huggingface/peft/issues/2286/events
|
https://github.com/huggingface/peft/issues/2286
| 2,743,932,927
|
I_kwDOIf9iDM6jjRf_
| 2,286
|
ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
|
{
"login": "gyuilLim",
"id": 50009192,
"node_id": "MDQ6VXNlcjUwMDA5MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/50009192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyuilLim",
"html_url": "https://github.com/gyuilLim",
"followers_url": "https://api.github.com/users/gyuilLim/followers",
"following_url": "https://api.github.com/users/gyuilLim/following{/other_user}",
"gists_url": "https://api.github.com/users/gyuilLim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyuilLim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyuilLim/subscriptions",
"organizations_url": "https://api.github.com/users/gyuilLim/orgs",
"repos_url": "https://api.github.com/users/gyuilLim/repos",
"events_url": "https://api.github.com/users/gyuilLim/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyuilLim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-12-17T04:57:31
| 2024-12-19T07:13:34
| 2024-12-19T07:13:34
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Library version: PEFT==0.13.2, PyTorch==2.4.0, Transformers==4.46.3
Python version: 3.8.19
CUDA version: 12.6
I am trying to implement Low-Rank Adaptation (LoRA) in my model, but I encountered the following error when running the training script:
ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/home/vision/gyuil/lab/vga_finetuning/LLaVA/llava/train/train_mem.py", line 6, in <module>
[rank0]: train(attn_implementation="flash_attention_2")
[rank0]: File "/home/vision/gyuil/lab/vga_finetuning/LLaVA/llava/train/train.py", line 921, in train
[rank0]: model = get_peft_model(model, lora_config)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/mapping.py", line 194, in get_peft_model
[rank0]: return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/peft_model.py", line 1609, in __init__
[rank0]: super().__init__(model, peft_config, adapter_name, **kwargs)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/peft_model.py", line 171, in __init__
[rank0]: self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 141, in __init__
[rank0]: super().__init__(model, config, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 184, in __init__
[rank0]: self.inject_adapter(self.model, adapter_name, low_cpu_mem_usage=low_cpu_mem_usage)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/tuners/tuners_utils.py", line 496, in inject_adapter
[rank0]: self._create_and_replace(peft_config, adapter_name, target, target_name, parent, current_key=key)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 227, in _create_and_replace
[rank0]: new_module = self._create_new_module(lora_config, adapter_name, target, **kwargs)
[rank0]: File "/home/vision/anaconda3/envs/torch/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 353, in _create_new_module
[rank0]: raise ValueError(
[rank0]: ValueError: Target module Dropout(p=0.05, inplace=False) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
```
It seems that the LoRA implementation currently does not allow for Dropout layers to be included as target modules. Could you provide guidance on how to properly handle dropout with LoRA or whether it will be supported in future updates?
Thank you for your assistance!
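For context, a hypothetical helper under the assumption that the training script builds `target_modules` itself and the pattern accidentally matches Dropout modules: collecting only `nn.Linear` names avoids targeting unsupported layer types (the excluded keywords below are placeholders typical of LLaVA-style models).
```python
import torch.nn as nn

def find_linear_module_names(model, exclude_keywords=("vision_tower", "mm_projector", "lm_head")):
    """Return the suffix names of all nn.Linear modules, skipping excluded submodules."""
    names = set()
    for name, module in model.named_modules():
        if any(keyword in name for keyword in exclude_keywords):
            continue
        if isinstance(module, nn.Linear):
            names.add(name.split(".")[-1])  # e.g. "q_proj", "v_proj", ...
    return sorted(names)

# lora_config = LoraConfig(target_modules=find_linear_module_names(model), ...)
```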
|
{
"login": "gyuilLim",
"id": 50009192,
"node_id": "MDQ6VXNlcjUwMDA5MTky",
"avatar_url": "https://avatars.githubusercontent.com/u/50009192?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gyuilLim",
"html_url": "https://github.com/gyuilLim",
"followers_url": "https://api.github.com/users/gyuilLim/followers",
"following_url": "https://api.github.com/users/gyuilLim/following{/other_user}",
"gists_url": "https://api.github.com/users/gyuilLim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gyuilLim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gyuilLim/subscriptions",
"organizations_url": "https://api.github.com/users/gyuilLim/orgs",
"repos_url": "https://api.github.com/users/gyuilLim/repos",
"events_url": "https://api.github.com/users/gyuilLim/events{/privacy}",
"received_events_url": "https://api.github.com/users/gyuilLim/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2286/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2285
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2285/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2285/comments
|
https://api.github.com/repos/huggingface/peft/issues/2285/events
|
https://github.com/huggingface/peft/issues/2285
| 2,743,732,159
|
I_kwDOIf9iDM6jige_
| 2,285
|
TypeError: TorchaoLoraLinear.__init__() missing 1 required keyword-only argument: 'get_apply_tensor_subclass'
|
{
"login": "spezialspezial",
"id": 75758219,
"node_id": "MDQ6VXNlcjc1NzU4MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/75758219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spezialspezial",
"html_url": "https://github.com/spezialspezial",
"followers_url": "https://api.github.com/users/spezialspezial/followers",
"following_url": "https://api.github.com/users/spezialspezial/following{/other_user}",
"gists_url": "https://api.github.com/users/spezialspezial/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spezialspezial/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spezialspezial/subscriptions",
"organizations_url": "https://api.github.com/users/spezialspezial/orgs",
"repos_url": "https://api.github.com/users/spezialspezial/repos",
"events_url": "https://api.github.com/users/spezialspezial/events{/privacy}",
"received_events_url": "https://api.github.com/users/spezialspezial/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-12-17T01:32:07
| 2025-01-23T07:46:46
| 2024-12-18T10:02:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft-0.14.0
diffusers recent
accelerate 1.2.1
torchao 0.7.0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
File "site-packages/peft/tuners/lora/torchao.py", line 147, in dispatch_torchao
new_module = TorchaoLoraLinear(target, adapter_name, **kwargs)
TypeError: TorchaoLoraLinear.__init__() missing 1 required keyword-only argument: 'get_apply_tensor_subclass'
Either the init argument is wrongly mandatory or the call is missing an argument. Please check when you find a minute.
### Expected behavior
100% less TypeErrors
|
{
"login": "spezialspezial",
"id": 75758219,
"node_id": "MDQ6VXNlcjc1NzU4MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/75758219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/spezialspezial",
"html_url": "https://github.com/spezialspezial",
"followers_url": "https://api.github.com/users/spezialspezial/followers",
"following_url": "https://api.github.com/users/spezialspezial/following{/other_user}",
"gists_url": "https://api.github.com/users/spezialspezial/gists{/gist_id}",
"starred_url": "https://api.github.com/users/spezialspezial/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/spezialspezial/subscriptions",
"organizations_url": "https://api.github.com/users/spezialspezial/orgs",
"repos_url": "https://api.github.com/users/spezialspezial/repos",
"events_url": "https://api.github.com/users/spezialspezial/events{/privacy}",
"received_events_url": "https://api.github.com/users/spezialspezial/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2285/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2283
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2283/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2283/comments
|
https://api.github.com/repos/huggingface/peft/issues/2283/events
|
https://github.com/huggingface/peft/issues/2283
| 2,740,633,871
|
I_kwDOIf9iDM6jWsEP
| 2,283
|
TypeError when inference with different LoRA adapters in the same batch
|
{
"login": "yuxiang-guo",
"id": 54578991,
"node_id": "MDQ6VXNlcjU0NTc4OTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/54578991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuxiang-guo",
"html_url": "https://github.com/yuxiang-guo",
"followers_url": "https://api.github.com/users/yuxiang-guo/followers",
"following_url": "https://api.github.com/users/yuxiang-guo/following{/other_user}",
"gists_url": "https://api.github.com/users/yuxiang-guo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuxiang-guo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuxiang-guo/subscriptions",
"organizations_url": "https://api.github.com/users/yuxiang-guo/orgs",
"repos_url": "https://api.github.com/users/yuxiang-guo/repos",
"events_url": "https://api.github.com/users/yuxiang-guo/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuxiang-guo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 9
| 2024-12-15T13:07:34
| 2025-01-22T15:03:50
| 2025-01-22T15:03:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
transformers 4.41.0
peft 0.13.2
### Who can help?
@BenjaminBossan
I tried to apply [inference with different LoRA adapters in the same batch] to an encoder-decoder T5 model.
Specifically, I load the base model, the first LoRA, and the second LoRA adapters, and perform inference with these three models in the same batch. However, some errors occurred.
BTW, does [inference with different LoRA adapters in the same batch] support beam search when using generate()?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
Code:
```python
base_model = MT5ForConditionalGeneration.from_pretrained(base_model_path, cache_dir='cache')
peft_model = PeftModel.from_pretrained(base_model,<lora_path1> ,adapter_name="l1")
peft_model.load_adapter(<lora_path2>, adapter_name="l2")
adapter_names = ["__base__", "l1", "l2"]
output = peft_model.generate(
input_ids=inputs['input_ids'],
adapter_names=adapter_names,
max_length=20,
prefix_allowed_tokens_fn=self.restrict_decode_vocab,
early_stopping=True
)
```
The error message:
```
Traceback (most recent call last):
File "/home/user/user1/GR/trainer.py", line 1025, in prediction_step
doc_ids = model.generate(
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/peft/peft_model.py", line 1972, in generate
with self._enable_peft_forward_hooks(**kwargs):
File "/home/user/anaconda3/envs/test/lib/python3.8/contextlib.py", line 113, in __enter__
python-BaseException
return next(self.gen)
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/peft/peft_model.py", line 798, in _enable_peft_forward_hooks
with self.base_model._enable_peft_forward_hooks(*args, **kwargs):
File "/home/user/anaconda3/envs/test/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/user/anaconda3/envs/test/lib/python3.8/site-packages/peft/tuners/lora/model.py", line 441, in _enable_peft_forward_hooks
handle = module.register_forward_pre_hook(pre_forward, with_kwargs=True)
TypeError: register_forward_pre_hook() got an unexpected keyword argument 'with_kwargs'
```
### Expected behavior
I expect the existing function for [inference with different LoRA adapters in the same batch] to support T5 with LoRAs and work in my beam search experiments during generation.
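If it helps triage: `register_forward_pre_hook(..., with_kwargs=True)` only exists in PyTorch >= 2.0, so this TypeError may simply indicate an older torch install (a guess, not confirmed). A quick check:
```python
import torch

# Mixed-adapter batching registers forward pre-hooks with with_kwargs=True,
# which requires torch >= 2.0; upgrading should remove this particular TypeError.
print(torch.__version__)
```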
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2283/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2281
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2281/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2281/comments
|
https://api.github.com/repos/huggingface/peft/issues/2281/events
|
https://github.com/huggingface/peft/issues/2281
| 2,737,821,962
|
I_kwDOIf9iDM6jL9kK
| 2,281
|
Incompatibility of X-LoRA and MistralForSequenceClassification
|
{
"login": "cyx96",
"id": 54156215,
"node_id": "MDQ6VXNlcjU0MTU2MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/54156215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyx96",
"html_url": "https://github.com/cyx96",
"followers_url": "https://api.github.com/users/cyx96/followers",
"following_url": "https://api.github.com/users/cyx96/following{/other_user}",
"gists_url": "https://api.github.com/users/cyx96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyx96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyx96/subscriptions",
"organizations_url": "https://api.github.com/users/cyx96/orgs",
"repos_url": "https://api.github.com/users/cyx96/repos",
"events_url": "https://api.github.com/users/cyx96/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyx96/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
open
| false
| null |
[] | null | 5
| 2024-12-13T08:42:00
| 2025-02-14T15:30:21
| null |
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
peft version: 0.13.2
accelerate version: 1.1.1
transformers version: 4.46.3
Python version: 3.10.15
Platform: Linux-5.10.0-33-cloud-amd64-x86_64-with-glibc2.31
### Who can help?
@BenjaminBossan @EricLBuehler
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [X] My own task or dataset (give details below)
### Reproduction
The adapters are Mistral-7B-v0.1 fine-tuned on the XNLI dataset.
I used the following script to load an X-LoRA version of Mistral 7B with 3 pre-trained adapters:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoConfig
from peft import XLoraConfig, get_peft_model
# Load model configuration
model_config = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
# XLora Configuration
lora_config = XLoraConfig(
task_type="SEQ_CLS",
hidden_size=model_config.hidden_size,
xlora_depth=2,
adapters={
"0": "./mistral_xnli_ckpt/de",
"1": "./mistral_xnli_ckpt/en",
"2": "./mistral_xnli_ckpt/fr",
}
)
# Load and configure model
model = AutoModelForSequenceClassification.from_pretrained(
"mistralai/Mistral-7B-v0.1",
num_labels=3, # XNLI has 3 labels: entailment, neutral, contradiction
trust_remote_code=True,
torch_dtype=torch.bfloat16,
use_cache=False,
)
# Explicitly move the model to GPU
device = torch.device("cuda:0")
model = model.to(device)
# Apply XLora
model = get_peft_model(model, lora_config).to(device)
```
Executing above will result in errors:
```bash
Some weights of MistralForSequenceClassification were not initialized from the model checkpoint at mistralai/Mistral-7B-v0.1 and are newly initialized: ['score.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Traceback (most recent call last):
File "/home/chenyuxu/XLMoE/mistral_xlora_ft.py", line 51, in <module>
model = get_peft_model(model, lora_config).to(device)
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/mapping.py", line 193, in get_peft_model
return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/peft_model.py", line 1378, in __init__
super().__init__(model, peft_config, adapter_name, **kwargs)
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/peft_model.py", line 171, in __init__
self.base_model = cls(model, {adapter_name: peft_config}, adapter_name)
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/tuners/xlora/model.py", line 279, in __init__
_load_adapter_into_lora_model(
File "/opt/conda/envs/handbook/lib/python3.10/site-packages/peft/tuners/xlora/model.py", line 148, in _load_adapter_into_lora_model
raise ValueError(
ValueError: Got unexpected keys! Please raise an issue and tag @EricLBuehler.
unexpected_keys=['model.model.score.modules_to_save.0.weight']
```
### Expected behavior
Reading the above error message, it seems that `MistralForSequenceClassification` created and initialized some extra weights (the classification head) beyond the ones provided by `"mistralai/Mistral-7B-v0.1"`. Would registering the newly added weights with X-LoRA solve the issue? Any advice or feedback regarding this is greatly appreciated, thanks!
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2281/timeline
| null |
reopened
| false
|
https://api.github.com/repos/huggingface/peft/issues/2278
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2278/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2278/comments
|
https://api.github.com/repos/huggingface/peft/issues/2278/events
|
https://github.com/huggingface/peft/issues/2278
| 2,737,366,736
|
I_kwDOIf9iDM6jKObQ
| 2,278
|
Adding Dynamic Low-Rank Adaptation (DoRA ACL2024)
|
{
"login": "dohuyduc2002",
"id": 135585343,
"node_id": "U_kgDOCBTePw",
"avatar_url": "https://avatars.githubusercontent.com/u/135585343?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dohuyduc2002",
"html_url": "https://github.com/dohuyduc2002",
"followers_url": "https://api.github.com/users/dohuyduc2002/followers",
"following_url": "https://api.github.com/users/dohuyduc2002/following{/other_user}",
"gists_url": "https://api.github.com/users/dohuyduc2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dohuyduc2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dohuyduc2002/subscriptions",
"organizations_url": "https://api.github.com/users/dohuyduc2002/orgs",
"repos_url": "https://api.github.com/users/dohuyduc2002/repos",
"events_url": "https://api.github.com/users/dohuyduc2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/dohuyduc2002/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 2
| 2024-12-13T04:23:12
| 2025-01-20T15:04:01
| 2025-01-20T15:04:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Paper link: https://arxiv.org/pdf/2405.17357
Source code: https://github.com/MIkumikumi0116/DoRA/blob/main/Src/Finetune_And_Benchmark/Finetune_Utils.py
### Motivation
When I read this paper, these quotes about enhancing AdaLoRA intrigued me:
> Compared to existing methods of dynamic parameter allocation (e.g., AdaLoRA), DoRA can allocate parameter budgets more appropriately based on a richer set of information from projection matrices.
> Compared to previous methods ([Zhang et al., 2023](https://arxiv.org/pdf/2303.10512)), we use \( \|\Delta W_i\|_F \) instead of \( c_i \) to assess the importance of components, thereby incorporating information from \( A_i \) and \( B_i \) for a more comprehensive evaluation of component importance.
I have checked AdaLoRA and found that two pieces of the DoRA paper could be added to PEFT:
- Implement the DEM loss in the Trainer's compute_loss method to integrate this loss into DoRA
- The pruning method from DoRA to find the rank for LoRA layers
### Your contribution
I'm working on reimplementing this paper; further updates will be added to this issue.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2278/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2277
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2277/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2277/comments
|
https://api.github.com/repos/huggingface/peft/issues/2277/events
|
https://github.com/huggingface/peft/issues/2277
| 2,737,215,081
|
I_kwDOIf9iDM6jJpZp
| 2,277
|
Request to integrate Monarch-based PEFT (MoRe)
|
{
"login": "Edenzzzz",
"id": 87317405,
"node_id": "MDQ6VXNlcjg3MzE3NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/87317405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Edenzzzz",
"html_url": "https://github.com/Edenzzzz",
"followers_url": "https://api.github.com/users/Edenzzzz/followers",
"following_url": "https://api.github.com/users/Edenzzzz/following{/other_user}",
"gists_url": "https://api.github.com/users/Edenzzzz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Edenzzzz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Edenzzzz/subscriptions",
"organizations_url": "https://api.github.com/users/Edenzzzz/orgs",
"repos_url": "https://api.github.com/users/Edenzzzz/repos",
"events_url": "https://api.github.com/users/Edenzzzz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Edenzzzz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-12-13T02:03:03
| 2025-01-20T15:04:03
| 2025-01-20T15:04:03
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
This request proposes to integrate MoRe, a PEFT method that combines hardware-efficient, block-diagonal structured matrices (BMM) and low-rankness. The ICML paper "**MoRe Fine-Tuning with 10x Fewer Parameters**", of which I'm the first author, can be found here: https://arxiv.org/abs/2408.17383.
### Motivation
PEFT already integrates a similar method, BOFT. In our paper we analyzed in detail that BOFT is a degenerate (inefficient) case of MoRe. Our method theoretically subsumes BOFT, achieves much higher performance on a range of reasoning tasks with far fewer parameters, uses less than half of BOFT's memory, and fine-tunes faster than LoRA. Llama 7B adapted with our method achieves a higher score than Llama 13B adapted with LoRA on commonsense reasoning, using 10% of LoRA's parameters.

### Your contribution
We have implemented a [helper function](https://github.com/SprocketLab/sparse_matrix_fine_tuning/blob/8c57492be0f29393e91a67c56354956cfdb608cc/train_utils.py#L475) to easily adapt all modules specified in a config dictionary. Our [config file](https://github.com/SprocketLab/sparse_matrix_fine_tuning/blob/main/task_configs/llama/monarch_config.json) is also quite similar to yours. I'll be working towards making them consistent with your codebase and submitting a pull request.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2277/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2275
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2275/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2275/comments
|
https://api.github.com/repos/huggingface/peft/issues/2275/events
|
https://github.com/huggingface/peft/issues/2275
| 2,733,721,820
|
I_kwDOIf9iDM6i8Ujc
| 2,275
|
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'eva_config'
|
{
"login": "Mohankrish08",
"id": 81806134,
"node_id": "MDQ6VXNlcjgxODA2MTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/81806134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohankrish08",
"html_url": "https://github.com/Mohankrish08",
"followers_url": "https://api.github.com/users/Mohankrish08/followers",
"following_url": "https://api.github.com/users/Mohankrish08/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohankrish08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohankrish08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohankrish08/subscriptions",
"organizations_url": "https://api.github.com/users/Mohankrish08/orgs",
"repos_url": "https://api.github.com/users/Mohankrish08/repos",
"events_url": "https://api.github.com/users/Mohankrish08/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohankrish08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 10
| 2024-12-11T18:45:42
| 2025-02-26T10:58:54
| 2024-12-31T14:36:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Name: Peft
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [x] My own task or dataset (give details below)
### Reproduction
```
1 model_name = "Mohan-08/math-dataset-deepmind-FT"
2 tokenizer = AutoTokenizer.from_pretrained(model_name)
----> 3 model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", trust_remote_code=True)
```
### Expected behavior
Facing this error:
TypeError: LoraConfig.__init__() got an unexpected keyword argument 'eva_config'

|
{
"login": "Mohankrish08",
"id": 81806134,
"node_id": "MDQ6VXNlcjgxODA2MTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/81806134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mohankrish08",
"html_url": "https://github.com/Mohankrish08",
"followers_url": "https://api.github.com/users/Mohankrish08/followers",
"following_url": "https://api.github.com/users/Mohankrish08/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohankrish08/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mohankrish08/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohankrish08/subscriptions",
"organizations_url": "https://api.github.com/users/Mohankrish08/orgs",
"repos_url": "https://api.github.com/users/Mohankrish08/repos",
"events_url": "https://api.github.com/users/Mohankrish08/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mohankrish08/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2275/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2275/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2274
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2274/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2274/comments
|
https://api.github.com/repos/huggingface/peft/issues/2274/events
|
https://github.com/huggingface/peft/issues/2274
| 2,732,915,400
|
I_kwDOIf9iDM6i5PrI
| 2,274
|
Question about using LoRA for Mamba2
|
{
"login": "Doctor-James",
"id": 71418209,
"node_id": "MDQ6VXNlcjcxNDE4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/71418209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Doctor-James",
"html_url": "https://github.com/Doctor-James",
"followers_url": "https://api.github.com/users/Doctor-James/followers",
"following_url": "https://api.github.com/users/Doctor-James/following{/other_user}",
"gists_url": "https://api.github.com/users/Doctor-James/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Doctor-James/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doctor-James/subscriptions",
"organizations_url": "https://api.github.com/users/Doctor-James/orgs",
"repos_url": "https://api.github.com/users/Doctor-James/repos",
"events_url": "https://api.github.com/users/Doctor-James/events{/privacy}",
"received_events_url": "https://api.github.com/users/Doctor-James/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 1
| 2024-12-11T12:58:39
| 2024-12-17T06:24:56
| 2024-12-17T06:24:56
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
I am trying to fine-tune the Mamba2 model using LoRA, following the guidance provided by https://huggingface.co/docs/transformers/en/model_doc/mamba2
I set
```
lora_config = LoraConfig(
r=8,
target_modules=["embeddings", "in_proj", "out_proj"],
task_type="CAUSAL_LM",
bias="none"
)
```
However, after carefully examining the code, I found that Mamba2 uses its own CUDA algorithms, which directly pass **out_proj.weight** and **out_proj.bias**, bypassing **out_proj.forward**. Given this approach, can LoRA work properly?
```
if self.training and cache_params is None:
out, ssm_state = mamba_split_conv1d_scan_combined(
projected_states,
self.conv1d.weight.squeeze(1),
self.conv1d.bias,
self.dt_bias,
A,
D=self.D,
chunk_size=self.chunk_size,
seq_idx=None, # was seq_idx
activation=self.activation,
rmsnorm_weight=self.norm.weight,
rmsnorm_eps=self.norm.variance_epsilon,
outproj_weight=self.out_proj.weight,
outproj_bias=self.out_proj.bias,
headdim=self.head_dim,
ngroups=self.n_groups,
norm_before_gate=False,
return_final_states=True,
**dt_limit_kwargs,
)
```
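A quick empirical check, as a sketch (assuming `model` is the PEFT-wrapped Mamba2 model and one forward/backward pass has already been run): if the fused kernel really bypasses `out_proj.forward`, the LoRA parameters attached to `out_proj` should never receive gradients.
```python
# Inspect the LoRA parameters on out_proj after loss.backward().
for name, param in model.named_parameters():
    if "lora" in name and "out_proj" in name:
        print(name, "requires_grad:", param.requires_grad, "grad is None:", param.grad is None)
```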
|
{
"login": "Doctor-James",
"id": 71418209,
"node_id": "MDQ6VXNlcjcxNDE4MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/71418209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Doctor-James",
"html_url": "https://github.com/Doctor-James",
"followers_url": "https://api.github.com/users/Doctor-James/followers",
"following_url": "https://api.github.com/users/Doctor-James/following{/other_user}",
"gists_url": "https://api.github.com/users/Doctor-James/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Doctor-James/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Doctor-James/subscriptions",
"organizations_url": "https://api.github.com/users/Doctor-James/orgs",
"repos_url": "https://api.github.com/users/Doctor-James/repos",
"events_url": "https://api.github.com/users/Doctor-James/events{/privacy}",
"received_events_url": "https://api.github.com/users/Doctor-James/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2274/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2273
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2273/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2273/comments
|
https://api.github.com/repos/huggingface/peft/issues/2273/events
|
https://github.com/huggingface/peft/issues/2273
| 2,731,945,563
|
I_kwDOIf9iDM6i1i5b
| 2,273
|
Support for Custom Adapters
|
{
"login": "dgme-syz",
"id": 97904453,
"node_id": "U_kgDOBdXnRQ",
"avatar_url": "https://avatars.githubusercontent.com/u/97904453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgme-syz",
"html_url": "https://github.com/dgme-syz",
"followers_url": "https://api.github.com/users/dgme-syz/followers",
"following_url": "https://api.github.com/users/dgme-syz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgme-syz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgme-syz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgme-syz/subscriptions",
"organizations_url": "https://api.github.com/users/dgme-syz/orgs",
"repos_url": "https://api.github.com/users/dgme-syz/repos",
"events_url": "https://api.github.com/users/dgme-syz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgme-syz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 6
| 2024-12-11T06:28:08
| 2025-02-18T10:05:30
| 2025-01-14T09:51:01
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
In simple terms, I would like support that allows users to customize their own adapters. I noticed that users only need to add a folder under `src/peft/tuners` and place some adapter-related files there, usually `config.py`, `layer.py`, and `model.py`.
However, during implementation, I found that I also need to modify `src/peft/utils/save_and_load.py/get_peft_model_state_dict` to ensure that the custom adapter can be saved correctly, because that function is currently only adapted to the existing adapters, so the source code has to be changed for a custom adapter to work.
**PEFT** is the most convenient and efficient fine-tuning library, and it would be even better if this feature were supported. Perhaps you’ve already implemented this functionality, but I haven’t found it yet. If so, please point it out. Thank you very much.
### Motivation
I hope to use custom adapters to fine-tune large language models.
### Your contribution
Currently, I have no clear ideas.
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2273/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2270
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2270/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2270/comments
|
https://api.github.com/repos/huggingface/peft/issues/2270/events
|
https://github.com/huggingface/peft/issues/2270
| 2,729,191,175
|
I_kwDOIf9iDM6irCcH
| 2,270
|
Different Results When Predicting with Multiple LoRA Adapters in a Loop VS. Using only One LoRA
|
{
"login": "beyondguo",
"id": 37113676,
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/beyondguo",
"html_url": "https://github.com/beyondguo",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 12
| 2024-12-10T06:53:14
| 2025-01-18T15:03:27
| 2025-01-18T15:03:27
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Linux, Python 3.8
A two-H100 node.
Name: transformers
Version: 4.34.1
Name: peft
Version: 0.11.1
### Who can help?
@BenjaminBossan
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
### Description
I encountered a strange issue while using PEFT LoRA adapters with the Hugging Face Trainer. When predicting using different LoRA adapters in a loop, the predictions are different compared to when using the same LoRA adapter (e.g., `m4`) individually. The issue arises when I predict using multiple LoRA adapters sequentially, and then compare the results of the `m4` adapter between the two scenarios.
### Steps to Reproduce
1. I have a dictionary `lora_map` that maps LoRA adapter names to their respective paths.
2. The code below iterates over `lora_map` and predicts using each LoRA adapter:
```python
dfs = []
for lora_name in lora_map:
pred_df = test_df[useful_columns].copy()
# model.set_adapter(lora_name)
model = PeftModel.from_pretrained(base_model, lora_map[lora_name], adapter_name=lora_name)
print("predicting with lora", lora_name)
trainer = Trainer(model=model, args=args, data_collator=data_collator)
preds = trainer.predict(token_test_dataset).predictions # logits
pred_df[['neu','pos','neg']] = torch.softmax(torch.tensor(preds), dim=-1).numpy()
pred_df['lora'] = lora_name
dfs.append(pred_df)
final_pred_df = pd.concat(dfs)
```
the `lora_map` is like `lora_map={'m1':xxx,'m2':xxx,...}`
I found that the results in `final_pred_df[final_pred_df.lora == 'm4']` are different from those obtained when loading `m4` only. But the results for `m1` are the same, probably because it's the first entry in the lora_map.
What could be the problem? What happens when I load the second adapter using `PeftModel.from_pretrained`?
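For comparison, a sketch of an alternative pattern (reusing the names from the snippet above): load every adapter once into a single `PeftModel` and only switch the active adapter inside the loop, instead of calling `PeftModel.from_pretrained` on `base_model` again on every iteration.
```python
from peft import PeftModel

first = next(iter(lora_map))
model = PeftModel.from_pretrained(base_model, lora_map[first], adapter_name=first)
for name, path in lora_map.items():
    if name != first:
        model.load_adapter(path, adapter_name=name)

for lora_name in lora_map:
    model.set_adapter(lora_name)  # activate only this adapter
    trainer = Trainer(model=model, args=args, data_collator=data_collator)
    preds = trainer.predict(token_test_dataset).predictions
    # ... build pred_df exactly as before ...
```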
---
I'm sorry, I can't share my LoRA weights (they were trained with **PiSSA**) since it's a private model.
### Expected behavior
Same results.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2270/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2266
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2266/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2266/comments
|
https://api.github.com/repos/huggingface/peft/issues/2266/events
|
https://github.com/huggingface/peft/issues/2266
| 2,726,733,078
|
I_kwDOIf9iDM6ihqUW
| 2,266
|
Can't PromptTuning in Multi-GPU with DeepSpeed and Qwen2.5-14B-Instruct
|
{
"login": "dongshou",
"id": 31725976,
"node_id": "MDQ6VXNlcjMxNzI1OTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/31725976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dongshou",
"html_url": "https://github.com/dongshou",
"followers_url": "https://api.github.com/users/dongshou/followers",
"following_url": "https://api.github.com/users/dongshou/following{/other_user}",
"gists_url": "https://api.github.com/users/dongshou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dongshou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dongshou/subscriptions",
"organizations_url": "https://api.github.com/users/dongshou/orgs",
"repos_url": "https://api.github.com/users/dongshou/repos",
"events_url": "https://api.github.com/users/dongshou/events{/privacy}",
"received_events_url": "https://api.github.com/users/dongshou/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-09T11:13:06
| 2025-01-17T15:03:49
| 2025-01-17T15:03:49
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### System Info
Name: peft
Version: 0.12.0
Name: transformers
Version: 4.47.0
Name: accelerate
Version: 0.34.2
Python 3.11.9
cuda
Build cuda_11.8.r11.8/compiler.31833905_0
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
# 1 prompt tuning
```python
model_name_or_path = "/workspace/labels/Qwen2-fintune/qwen/Qwen2.5-14B-Instruct"
tokenizer_name_or_path = "/workspace/labels/Qwen2-fintune/qwen/Qwen2.5-14B-Instruct"
peft_config = PromptTuningConfig(
task_type=TaskType.CAUSAL_LM,
prompt_tuning_init=PromptTuningInit.TEXT,
num_virtual_tokens=16,
prompt_tuning_init_text=" prompt text which text length more than 16",
tokenizer_name_or_path=tokenizer_name_or_path,
)
```
# 2. dataset
```python
# dataset has already been loaded from JSON
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
def preprocess_fn(
examples,
):
"""Preprocesses the data for supervised fine-tuning."""
# tokenize input
goal = "xxxxx"
texts = []
for query,response in zip(examples['query'],examples['response']):
msg = [
{"role": "user", "content": goal+query},
{"role":"assistant","content":response}
]
texts.append(tokenizer.apply_chat_template(
msg,
chat_template=qwen_chat_template,
tokenize=True,
add_generation_prompt=False,
padding="max_length",
max_length=max_length,
truncation=True,
))
input_ids = torch.tensor(texts, dtype=torch.int)
target_ids = input_ids.clone()
target_ids[target_ids == tokenizer.pad_token_id] = IGNORE_TOKEN_ID
# target_ids[target_ids <= tokenizer.assistant] = IGNORE_TOKEN_ID
attention_mask = input_ids.ne(tokenizer.pad_token_id)
return dict(
input_ids=input_ids, labels=target_ids, attention_mask=attention_mask
)
processed_datasets = dataset.map(
preprocess_fn,
batched=True,
num_proc=16,
remove_columns=dataset.column_names, #remove unprocessed column for training
load_from_cache_file=False,
desc="Running tokenizer on datasset"
)
```
# 3. model
```python
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
config= transformers.AutoConfig.from_pretrained(model_name_or_path),
attn_implementation="flash_attention_2",
torch_dtype=torch.bfloat16,
device_map = 'balanced'
)
model = get_peft_model(model, peft_config)
print(model.print_trainable_parameters())
```
# 4. trainer
```python
from transformers import Trainer, TrainingArguments
trainer = Trainer(
model=model,
train_dataset=train_data,
eval_dataset=val_data,
data_collator=default_data_collator,
args=TrainingArguments(
output_dir=output_dir,
per_device_train_batch_size=batch_size,
num_train_epochs=num_epochs,
learning_rate=learning_rate,
lr_scheduler_type='cosine',
per_device_eval_batch_size=batch_size,
deepspeed='deepspeed/ds_z2_config.json',
load_best_model_at_end=False,
logging_strategy='steps',
logging_steps=10,
evaluation_strategy='steps',
eval_steps=1000,
save_strategy='steps',
save_steps=10,
)
)
trainer.train()
```
# 5. deepspeed config json
```python
{
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"zero_allow_untested_optimizer": true,
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true,
"round_robin_gradients": true
}
}
```
# 6 debug info
When the labels are moved to another CUDA device, the label values get changed!
The loss code below is from `transformers/loss/loss_utils.py`:
```python
def ForCausalLMLoss(
logits, labels, vocab_size: int, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs
):
# Upcast to float if we need to compute the loss to avoid potential precision issues
logits = logits.float()
# Shift so that tokens < n predict n
shift_logits = logits[..., :-1, :].contiguous()
shift_labels = labels[..., 1:].contiguous()
# Flatten the tokens
shift_logits = shift_logits.view(-1, vocab_size)
shift_labels = shift_labels.view(-1)
# Enable model parallelism
print("label before move",shift_labels.min(),shift_labels.max(),shift_labels.shape)
shift_labels = shift_labels.to(shift_logits.device)
print("label after move",shift_labels.min(),shift_labels.max(),shift_labels.shape)
loss = fixed_cross_entropy(shift_logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)
return loss
```
## 6.1log and error
```python
peft label dtail tensor(-100, device='cuda:0') tensor(151645, device='cuda:0') torch.Size([4, 2048])
peft label dtail 2 tensor(-100, device='cuda:0') tensor(151645, device='cuda:0') torch.Size([4, 2048])
### label before move tensor(-100, device='cuda:0') tensor(151645, device='cuda:0') torch.Size([8252])
###label after move tensor(0, device='cuda:3') tensor(0, device='cuda:3') torch.Size([8252])
0%| | 1/47500 [00:16<218:02:43, 16.53s/it]peft label dtail tensor(-100, device='cuda:0') tensor(151645, device='cuda:0') torch.Size([4, 2048])
peft label dtail 2 tensor(-100, device='cuda:0') tensor(151645, device='cuda:0') torch.Size([4, 2048])
### label before move tensor(-100, device='cuda:0') tensor(151645, device='cuda:0') torch.Size([8252])
### label after move tensor(-9223372034707292160, device='cuda:3') tensor(0, device='cuda:3') torch.Size([8252])
Traceback (most recent call last):
File "/workspace/llm-tuning/prompt_tuning_qa.py", line 173, in <module>
trainer.train()
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2164, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2522, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 3688, in training_step
self.accelerator.backward(loss, **kwargs)
File "/opt/conda/lib/python3.11/site-packages/accelerate/accelerator.py", line 2196, in backward
loss.backward(**kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_tensor.py", line 581, in backward
torch.autograd.backward(
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/__init__.py", line 347, in backward
_engine_run_backward(
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/torch/autograd/function.py", line 307, in apply
return user_fn(self, *args)
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/flash_attn/bert_padding.py", line 27, in backward
grad_input = torch.zeros(
^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
../aten/src/ATen/native/cuda/Loss.cu:250: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:250: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:250: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed.
../aten/src/ATen/native/cuda/Loss.cu:250: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed.
```
### Expected behavior
Prompt tuning is expected to work on multiple GPUs.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2266/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2264
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2264/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2264/comments
|
https://api.github.com/repos/huggingface/peft/issues/2264/events
|
https://github.com/huggingface/peft/issues/2264
| 2,723,078,376
|
I_kwDOIf9iDM6iTuDo
| 2,264
|
Guidance Needed on Two-Stage Fine-Tuning with LoRA (SFT and DPO) for Model Adaptation
|
{
"login": "none0663",
"id": 169760423,
"node_id": "U_kgDOCh5Wpw",
"avatar_url": "https://avatars.githubusercontent.com/u/169760423?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/none0663",
"html_url": "https://github.com/none0663",
"followers_url": "https://api.github.com/users/none0663/followers",
"following_url": "https://api.github.com/users/none0663/following{/other_user}",
"gists_url": "https://api.github.com/users/none0663/gists{/gist_id}",
"starred_url": "https://api.github.com/users/none0663/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/none0663/subscriptions",
"organizations_url": "https://api.github.com/users/none0663/orgs",
"repos_url": "https://api.github.com/users/none0663/repos",
"events_url": "https://api.github.com/users/none0663/events{/privacy}",
"received_events_url": "https://api.github.com/users/none0663/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 5
| 2024-12-06T13:35:20
| 2025-01-06T10:50:09
| 2025-01-06T10:50:09
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
# I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed.
## First Stage
1. Load Base Model: I start by loading the base model, qwen1.5 32B.
2. Apply LoRA Fine-Tuning: I then apply LoRA fine-tuning to this base model and obtain a new model state.
3. Save Adapter Model: This fine-tuned model state is saved as adapter_model.safetensors, named qwen1.5_lora_sft.
## Second Stage
1. Load the Model from the First Stage: I load both qwen1.5 32B and qwen1.5_lora_sft. It's crucial that qwen1.5_lora_sft integrates correctly with the base model qwen1.5 32B.
2. Continue Fine-Tuning: On this model, which already includes the LoRA adapter, I continue to apply LoRA and DPO for further fine-tuning.
3. Save the New Adapter Model: After fine-tuning, I need to save the new adapter state, which includes adjustments from both the original LoRA and the new DPO.
## My questions are:
1. How can I load the model from the base model (qwen1.5 32B) together with the LoRA module qwen1.5_lora_sft?
2. How can I continue fine-tuning from the first-stage model, and save the LoRA model after DPO training so that I end up with the base model (qwen1.5 32B) plus a single qwen1.5_lora_sft_dpo module (adapter_model_sft_dpo.safetensors)?
## What I had now
1. base model, qwen1.5 32B model path
2. qwen1.5_lora_sft module path: adapter_model.safetensors
## What I Need
1. qwen1.5_lora_sft _dpo module: adapter_model_sft_dpo.safetensors
## In other words
train a base_model to get LoRA_weights_1
base_model_1 = merge(base_model and LoRA_weights_1)
train base_model_1 to get LoRA_weights_2
base_model_2 = merge(base_model_1 and LoRA_weights_2)
How can base_model_2 be split into base_model and LoRA_weights_1_2?
Thanks!
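For reference, a rough sketch of one way the second stage could be set up, assuming the SFT adapter is merged into the base weights before DPO (paths and `target_modules` are placeholders):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel, LoraConfig, get_peft_model

# Load the base model plus the stage-1 SFT adapter, then fold the adapter in.
base = AutoModelForCausalLM.from_pretrained("path/to/qwen1.5-32B")
sft = PeftModel.from_pretrained(base, "path/to/qwen1.5_lora_sft")
merged = sft.merge_and_unload()

# Attach a fresh LoRA adapter that the DPO stage will train.
dpo_config = LoraConfig(r=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(merged, dpo_config)

# ... run DPO training on `model`, then save only the new adapter:
model.save_pretrained("path/to/qwen1.5_lora_sft_dpo")
```
Note that an adapter saved this way is relative to the merged model, so at inference time the SFT merge (or the SFT adapter) still has to be applied before loading it.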
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2264/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2262
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2262/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2262/comments
|
https://api.github.com/repos/huggingface/peft/issues/2262/events
|
https://github.com/huggingface/peft/issues/2262
| 2,720,228,617
|
I_kwDOIf9iDM6iI2UJ
| 2,262
|
Could you provide example code for AdaLoRA finetuning decoder-only model?
|
{
"login": "SpeeeedLee",
"id": 132431571,
"node_id": "U_kgDOB-S-0w",
"avatar_url": "https://avatars.githubusercontent.com/u/132431571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SpeeeedLee",
"html_url": "https://github.com/SpeeeedLee",
"followers_url": "https://api.github.com/users/SpeeeedLee/followers",
"following_url": "https://api.github.com/users/SpeeeedLee/following{/other_user}",
"gists_url": "https://api.github.com/users/SpeeeedLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SpeeeedLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SpeeeedLee/subscriptions",
"organizations_url": "https://api.github.com/users/SpeeeedLee/orgs",
"repos_url": "https://api.github.com/users/SpeeeedLee/repos",
"events_url": "https://api.github.com/users/SpeeeedLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/SpeeeedLee/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-12-05T12:03:31
| 2025-01-18T15:03:29
| 2025-01-18T15:03:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
The current [example of AdaLoRA](https://github.com/huggingface/peft/blob/b2922565c4c4445706a87cf7b988c828b451fe61/examples/conditional_generation/peft_adalora_seq2seq.py) is on **facebook/bart-base**. Since AdaLoRA requires hand-crafted calculations on the loss, would it be possible to provide some hints on how this can be done for a decoder-only LM (e.g., Llama-Instruct)?
Specifically, I would like to mask out the loss calculation on the instruction part or system prompt, focusing only on the assistant response (see the sketch below).
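A minimal sketch of the masking part (assuming `messages` is the chat message list, the prompt is a strict prefix of the full templated sequence, and all names are illustrative):
```python
# Build labels that ignore the system/instruction tokens so only the assistant
# response contributes to the loss (-100 is the ignore index of the CE loss).
prompt_ids = tokenizer.apply_chat_template(messages[:-1], add_generation_prompt=True, tokenize=True)
full_ids = tokenizer.apply_chat_template(messages, tokenize=True)

labels = list(full_ids)
labels[: len(prompt_ids)] = [-100] * len(prompt_ids)
```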
### Motivation
AdaLoRA requires hand-crafted calculations on the loss, which becomes complex when you want to mask out some system/instruction tokens.
### Your contribution
N.A.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2262/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2261
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2261/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2261/comments
|
https://api.github.com/repos/huggingface/peft/issues/2261/events
|
https://github.com/huggingface/peft/issues/2261
| 2,720,065,982
|
I_kwDOIf9iDM6iIOm-
| 2,261
|
Make module imports / re-export conforming with typing specs for proper type checker support
|
{
"login": "bluenote10",
"id": 3620703,
"node_id": "MDQ6VXNlcjM2MjA3MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3620703?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bluenote10",
"html_url": "https://github.com/bluenote10",
"followers_url": "https://api.github.com/users/bluenote10/followers",
"following_url": "https://api.github.com/users/bluenote10/following{/other_user}",
"gists_url": "https://api.github.com/users/bluenote10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bluenote10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bluenote10/subscriptions",
"organizations_url": "https://api.github.com/users/bluenote10/orgs",
"repos_url": "https://api.github.com/users/bluenote10/repos",
"events_url": "https://api.github.com/users/bluenote10/events{/privacy}",
"received_events_url": "https://api.github.com/users/bluenote10/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 4
| 2024-12-05T10:51:46
| 2024-12-13T14:51:00
| 2024-12-13T14:51:00
|
CONTRIBUTOR
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
Make peft's modules type checker friendly.
### Motivation
Currently, using peft with a type checker is somewhat painful, because its module re-exports are not conforming to the [typing specs for library interfaces](https://github.com/python/typing/blob/ad4cbc45f868f47031494694bdb7349119302f59/docs/spec/distributing.rst#library-interface-public-and-private-symbols). This means that very basic things do not type check properly. For instance the following is a type error in both pyright and mypy:
```py
from peft import LoraConfig
```
Mypy error is for instance:
```
example.py:1: error: Module "peft" does not explicitly export attribute "LoraConfig" [attr-defined]
```
This is obviously not great, because the code works fine at runtime. Suppressing all these type errors causes a lot of noise on the user side and, even worse, forfeits any benefit from the type annotations.
### Your contribution
As the specs say: _Imported symbols are considered private by default._
There are basically two ways to get proper library interfaces:
- Using an explicit `__all__` in the modules to mark the re-exports.
- Using the patterns shown in the [Import Conventions](https://github.com/python/typing/blob/ad4cbc45f868f47031494694bdb7349119302f59/docs/spec/distributing.rst#import-conventions) section.
Using an explicit `__all__` would also allow you to remove the linter suppressions like this one
https://github.com/huggingface/peft/blob/f86522e011af21697ef1477da4d232e74de83232/src/peft/__init__.py#L1-L3
because the linter would then understand that the symbol is being used for re-export purposes.
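For illustration, a sketch of the two conforming patterns in an `__init__.py` (module paths and symbols are only examples):
```python
# Option 1: declare the public interface explicitly.
from .tuners.lora import LoraConfig

__all__ = ["LoraConfig"]
```
```python
# Option 2: the "redundant alias" convention, which type checkers also treat
# as an intentional re-export.
from .tuners.lora import LoraConfig as LoraConfig
```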
|
{
"login": "BenjaminBossan",
"id": 6229650,
"node_id": "MDQ6VXNlcjYyMjk2NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6229650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminBossan",
"html_url": "https://github.com/BenjaminBossan",
"followers_url": "https://api.github.com/users/BenjaminBossan/followers",
"following_url": "https://api.github.com/users/BenjaminBossan/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminBossan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminBossan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminBossan/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminBossan/orgs",
"repos_url": "https://api.github.com/users/BenjaminBossan/repos",
"events_url": "https://api.github.com/users/BenjaminBossan/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminBossan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2261/timeline
| null |
completed
| false
|
https://api.github.com/repos/huggingface/peft/issues/2260
|
https://api.github.com/repos/huggingface/peft
|
https://api.github.com/repos/huggingface/peft/issues/2260/labels{/name}
|
https://api.github.com/repos/huggingface/peft/issues/2260/comments
|
https://api.github.com/repos/huggingface/peft/issues/2260/events
|
https://github.com/huggingface/peft/issues/2260
| 2,719,217,739
|
I_kwDOIf9iDM6iE_hL
| 2,260
|
Is it possible to support the transformer engine when using Lora in Megatron?
|
{
"login": "liulong11",
"id": 25365827,
"node_id": "MDQ6VXNlcjI1MzY1ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/25365827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liulong11",
"html_url": "https://github.com/liulong11",
"followers_url": "https://api.github.com/users/liulong11/followers",
"following_url": "https://api.github.com/users/liulong11/following{/other_user}",
"gists_url": "https://api.github.com/users/liulong11/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liulong11/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liulong11/subscriptions",
"organizations_url": "https://api.github.com/users/liulong11/orgs",
"repos_url": "https://api.github.com/users/liulong11/repos",
"events_url": "https://api.github.com/users/liulong11/events{/privacy}",
"received_events_url": "https://api.github.com/users/liulong11/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null | 3
| 2024-12-05T03:24:15
| 2025-01-12T15:03:29
| 2025-01-12T15:03:29
|
NONE
|
{
"total": 0,
"completed": 0,
"percent_completed": 0
}
| null | null | null |
### Feature request
I am currently using the Megatron framework and want to use LoRA for training. I saw that the Megatron format is supported in https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/tp_layer.py, where RowParallelLinear and ColumnParallelLinear are adapted. But if I use the Transformer Engine, the corresponding TELayerNormColumnParallelLinear and TERowParallelLinear will not be adapted.
### Motivation
This would better support using LoRA with the Megatron framework.
### Your contribution
I don't have a PR.
|
{
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
}
|
{
"url": "https://api.github.com/repos/huggingface/peft/issues/2260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/peft/issues/2260/timeline
| null |
completed
| false
|