Maxime Labonne in Towards Data Science

- Fine-tune Llama 3 with ORPO. A cheaper and faster unified fine-tuning technique (Apr 19)
- Create Mixtures of Experts with MergeKit. Combine multiple models into a single MoE (Mar 27)
- Merge Large Language Models with mergekit. Create your own models easily, no GPU required! (Jan 8)
- Fine-tune a Mistral-7b model with Direct Preference Optimization. Boost the performance of your supervised fine-tuned models (Jan 1)
- ExLlamaV2: The Fastest Library to Run LLMs. Quantize and run EXL2 models (Nov 20, 2023)
- Quantize Llama models with GGUF and llama.cpp. GGML vs. GPTQ vs. NF4 (Sep 4, 2023)
- A Beginner's Guide to LLM Fine-Tuning. How to fine-tune Llama and other LLMs with one tool (Aug 30, 2023)
- Graph Convolutional Networks: Introduction to GNNs. A step-by-step guide using PyTorch Geometric (Aug 14, 2023)
- 4-bit Quantization with GPTQ. Quantize your own LLMs using AutoGPTQ (Jul 31, 2023)
- Fine-Tune Your Own Llama 2 Model in a Colab Notebook. A practical introduction to LLM fine-tuning (Jul 25, 2023)