Error in merging adapter with model in autotrain


I am trying to use autotrain from Hugging Face to fine-tune some models. Since I don't have large computing resources, I tried to fine-tune the model EleutherAI/pythia-14m on this dataset, but I received this message:

Failed to merge adapter weights: Error(s) in loading state_dict for PeftModelForCausalLM:
    size mismatch for base_model.model.gpt_neox.embed_in.weight: copying a param with shape torch.Size([50280, 128]) from checkpoint, the shape in current model is torch.Size([50277, 128]).
    size mismatch for base_model.model.embed_out.weight: copying a param with shape torch.Size([50280, 128]) from checkpoint, the shape in current model is torch.Size([50277, 128]).

This error occurred when I ran the following script, which is just a terminal command executed from Jupyter:

!autotrain llm \
    --train \
    --model "EleutherAI/pythia-14m" \
    --project-name "my-llm" \
    --data-path data/ \
    --text-column text \
    --batch-size "4" \
    --lr "2e-5" \
    --epochs "3" \
    --block-size "1024" \
    --warmup-ratio "0.03" \
    --lora-r "16" \
    --lora-alpha "32" \
    --lora-dropout "0.05" \
    --weight-decay "0." \
    --gradient-accumulation "4" \
    --logging-steps "10" \
    --use-peft \
    --merge-adapter

The same problem also showed up when I tried to run autotrain in a Hugging Face Space.
I am not experienced in ML, so I can't imagine what could cause this problem.
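If it helps, here is a minimal torch sketch of what the traceback seems to describe. The 50277/50280 shapes are copied from the error message; the idea that the tokenizer grew by 3 tokens during training, and that resizing the embedding before merging would fix it, is only my guess:

```python
import torch
import torch.nn as nn

# Toy reproduction of the error: the trained checkpoint carries a
# 50280-row embedding (as if the tokenizer grew by 3 tokens during
# training), while a freshly loaded pythia-14m has only 50277 rows,
# so load_state_dict refuses to copy the weight.
base_embed = nn.Embedding(50277, 128)   # shape in the current model
ckpt_embed = nn.Embedding(50280, 128)   # shape in the checkpoint

mismatch = False
try:
    base_embed.load_state_dict(ckpt_embed.state_dict())
except RuntimeError as err:
    mismatch = "size mismatch" in str(err)
print("reproduced size mismatch:", mismatch)

# Guessed fix: grow the base model's embedding to the checkpoint's
# vocab size before loading the adapter (transformers exposes this
# as model.resize_token_embeddings(len(tokenizer))).
grown = nn.Embedding(50280, 128)
grown.weight.data[:50277] = base_embed.weight.data  # keep the old rows
grown.load_state_dict(ckpt_embed.state_dict())      # now succeeds
```

If that guess is right, I would expect resizing the embeddings of the base model to the trained tokenizer's vocabulary size before the adapter merge to make the `--merge-adapter` step succeed, but I don't know how to tell autotrain to do that.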
