Combine Multiple LoRA Adapters for Llama 2

Add skills to your LLM without fine-tuning new adapters

Benjamin Marie
Towards Data Science
12 min read · Nov 30, 2023

Image by the author — Made with an image from Pixabay

Fully fine-tuning a pre-trained large language model (LLM) for each new task is very costly. Instead, we can freeze the LLM's parameters and fine-tune only a few million trainable parameters added through a LoRA adapter.

Ph.D., research scientist in NLP/AI. Medium "Top Writer" in AI and Technology. Exclusive articles and all my AI notebooks on https://kaitchup.substack.com/