Combine Multiple LoRA Adapters for Llama 2
Add skills to your LLM without fine-tuning new adapters
Nov 30, 2023
Fully fine-tuning a pre-trained large language model (LLM) for each new task is very costly. Instead, we can freeze the parameters of the LLM and fine-tune only a few million trainable parameters added through a LoRA adapter.
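To make this concrete, here is a minimal sketch of how a LoRA adapter is typically attached to Llama 2 with Hugging Face's PEFT library. The model name and the hyperparameters (rank, alpha, target modules) are illustrative choices, not values prescribed by this article:

```python
# A minimal sketch of adding a LoRA adapter with Hugging Face PEFT.
# Model name and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA injects small trainable low-rank matrices into selected layers;
# the base model's weights stay frozen during fine-tuning.
lora_config = LoraConfig(
    r=16,                                 # rank of the update matrices (illustrative)
    lora_alpha=32,                        # scaling factor for the LoRA updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports only a few million trainable params
```

Calling `print_trainable_parameters()` confirms the point above: only the adapter's parameters (typically well under 1% of the model) are updated during training.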