How to finetune only certain parameters via DeepSpeed + transformers.Trainer
I have added some LoRA layers by hand (without using PEFT) to a pre-trained multi-modal model in order to finetune it on new data. I want DeepSpeed to optimize ONLY the parameters of the LoRA layers rather than all of the model's parameters.
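A minimal sketch of one common approach, assuming the hand-added LoRA modules have "lora" in their parameter names and that `model`, `train_dataset`, and the `ds_config.json` path come from your own setup: freeze every parameter except the LoRA ones, and `Trainer` will then build its optimizer only over parameters with `requires_grad=True`, so DeepSpeed allocates optimizer states just for those.

```python
# Sketch only: freeze the base model so Trainer/DeepSpeed optimizes just the LoRA params.
# The "lora" name match, ds_config.json path, and dataset are assumptions, not from the post.
import torch
from transformers import Trainer, TrainingArguments

def mark_only_lora_trainable(model: torch.nn.Module, lora_keyword: str = "lora") -> None:
    """Set requires_grad=True only for parameters whose name contains `lora_keyword`."""
    for name, param in model.named_parameters():
        param.requires_grad = lora_keyword in name

mark_only_lora_trainable(model)  # `model` is your multi-modal model with LoRA layers added by hand

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    deepspeed="ds_config.json",  # hypothetical path to your DeepSpeed config
)

# Trainer filters on requires_grad when creating the optimizer, so only the
# LoRA parameters are updated and carry optimizer states under DeepSpeed.
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```

Note that under ZeRO-3 the frozen parameters are still partitioned across devices for the forward pass; only the optimizer states and gradients are limited to the LoRA parameters.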
How to train with DeepSpeed?
I decided to train my language model and chose the DeepSpeed tool. The Hugging Face documentation gives the following command: