Efficient LLM Fine-tuning with LoRA and QLoRA

Built with PyTorch, Transformers, bitsandbytes, and PEFT

An implementation of parameter-efficient fine-tuning techniques for large language models, focusing on LoRA and QLoRA. This project demonstrates how to fine-tune LLMs with minimal GPU memory and compute by training small adapter matrices instead of the full model weights.

Features
