- Training: Train the model from scratch on the whole dataset, covering nearly all cases
- Fine-tuning: Re-train the model on a smaller dataset for the cases you want it to focus on
- PEFT (Parameter-Efficient Fine-Tuning): Re-train the model on some data while updating only a subset of the parameters (weights, biases)
- LoRA (Low-Rank Adaptation): A PEFT method that freezes the pretrained weights and learns a small low-rank update added on top of them (see the sketch after this list)
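As a rough illustration, here is a minimal sketch of a LoRA-style linear layer in PyTorch. All names here (`LoRALinear`, `lora_A`, `lora_B`) are illustrative, not from any particular library: the pretrained weight `W` stays frozen, and only the low-rank factors `A` and `B` are trained, so the effective weight is `W + B @ A`.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained weight plus a trainable low-rank update (sketch)."""
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        # Pretrained weight: frozen. In practice you would copy the
        # original model's weight in here instead of random values.
        self.weight = nn.Parameter(torch.randn(out_features, in_features),
                                   requires_grad=False)
        # Low-rank factors: the only trainable parameters.
        # B starts at zero so the update is initially a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        # Effective weight is W + scaling * (B @ A),
        # computed as two cheap matmuls rather than forming B @ A.
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T * self.scaling
        return base + update
```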
Training and full fine-tuning can be done with the existing optimizers in machine learning libraries, e.g. Adam. PEFT and LoRA, however, need more than an off-the-shelf optimizer alone: an extra step is required to identify which subset of parameters should be updated.
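For example, in PyTorch that extra step can be as simple as freezing everything except the low-rank factors and handing only those to a standard optimizer. This is a sketch assuming the model's linear layers were replaced with the hypothetical `LoRALinear` above, and that the trainable parameters are recognizable by the `lora_` name prefix:

```python
# model: a network whose Linear layers were swapped for LoRALinear (hypothetical).
# Freeze everything, then re-enable gradients only for the LoRA factors.
for name, param in model.named_parameters():
    param.requires_grad = "lora_" in name

# The optimizer itself is plain Adam; the PEFT-specific part is
# selecting which parameters it is allowed to update.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```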