PEFT
LLM unlearning without model degradation is achieved through direct training on the replacement data and…
7 min read
Representation Finetuning – Beyond the PEFT Techniques for fine-tuning LLMs
6 min read
Abstract: Applying ~1-bit transformer technology to LoRA adapters allows us to reach comparable performance with…
15 min read
Deliberately Exploring Design Decisions for Parameter Efficient Finetuning (PEFT) with LoRA
41 min read
How to efficiently fine-tune your own open-source LLM using novel techniques – code provided
In this article I tune a base Llama 2 LLM to output SQL code. I use…
17 min read
Exploring Parameter Efficient Finetuning (PEFT): Intuitively Understanding Finetuning Using LoRA
17 min read
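The entries above all revolve around the same underlying recipe: freeze the base model and train only a small set of adapter weights, most often with LoRA. As a rough illustration of that common pattern (a minimal sketch, not the code from any of the listed articles), the snippet below attaches a LoRA adapter to a causal LM with Hugging Face's peft library; the model name, rank, and target modules are illustrative assumptions.

```python
# Minimal LoRA sketch with Hugging Face `peft`.
# Model name and hyperparameters are assumptions, chosen only for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model; any causal LM works
model = AutoModelForCausalLM.from_pretrained(base_model_name)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# LoRA injects small trainable low-rank matrices into selected projection layers
# while the original weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # typical targets for Llama-style attention
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # usually well under 1% of the total parameters
```

From here the wrapped model trains like any other transformers model (e.g. with Trainer), with gradients flowing only through the adapter parameters.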