# Alpaca Lora 4bit

Made some adjustments to the code in peft and GPTQ-for-LLaMa so that LoRA finetuning works with a 4-bit quantized base model. The same adjustments can be made for 2, 3 and 8 bits.
Training is still numerically unstable.
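
The key idea behind the patch is a linear layer that keeps its weights packed as 4-bit integers but dequantizes them on the fly during the forward pass, so autograd can propagate gradients to LoRA adapters stacked on top while the packed base weights stay frozen. Below is a minimal sketch of that idea; the class name, buffer names, and packing layout are illustrative assumptions, not this repo's exact code (see autograd_4bit.py for the real implementation).

```python
import torch
import torch.nn as nn

class QuantLinear4bit(nn.Module):
    """Sketch of a 4-bit linear layer that still supports autograd.

    All names and the packing scheme here are assumptions for illustration.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        # eight 4-bit values packed into each int32 along the input dim
        self.register_buffer(
            "qweight", torch.zeros(in_features // 8, out_features, dtype=torch.int32)
        )
        self.register_buffer("scales", torch.ones(out_features))
        self.register_buffer("zeros", torch.zeros(out_features))

    def dequantize(self):
        shifts = torch.arange(0, 32, 4, device=self.qweight.device)
        # unpack each int32 into eight 4-bit integers: (in/8, 8, out)
        w = (self.qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & 0xF
        w = w.reshape(-1, self.qweight.shape[1]).float()  # (in, out)
        return (w - self.zeros) * self.scales

    def forward(self, x):
        # the weight matrix is rebuilt each step; the packed buffers receive
        # no gradient, so only LoRA parameters on top of this layer train
        return x @ self.dequantize()
```

A LoRA adapter wrapped around such a layer trains normally, because its low-rank A and B matrices are ordinary float parameters.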

## Requirements

- gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
- peft: https://github.com/huggingface/peft.git

## Install

1. Copy the files from this repo's GPTQ-for-LLaMa directory into your GPTQ-for-LLaMa checkout and re-compile the CUDA extension (example commands below).
2. Copy peft/tuners/lora.py from this repo into the installed peft package, replacing the original file.
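
For reference, assuming GPTQ-for-LLaMa is cloned as a sibling of this repo and peft is installed in the active Python environment (both paths are assumptions; adjust them to your layout):

```sh
# overwrite the upstream GPTQ-for-LLaMa files with the patched ones
cp GPTQ-for-LLaMa/*.py ../GPTQ-for-LLaMa/

# rebuild the CUDA kernels (GPTQ-for-LLaMa's standard build step)
(cd ../GPTQ-for-LLaMa && python setup_cuda.py install)

# replace lora.py inside the installed peft package
PEFT_DIR=$(python -c "import peft, os; print(os.path.dirname(peft.__file__))")
cp peft/tuners/lora.py "$PEFT_DIR/tuners/lora.py"
```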

## Finetuning

The finetune script from https://github.com/tloen/alpaca-lora can be used unchanged.
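
For orientation, the model setup before training looks roughly like the sketch below. load_llama_model_4bit_low_ram refers to the loader in this repo's autograd_4bit.py, but its exact signature, the paths, and the LoRA hyperparameters here are assumptions; LoraConfig and get_peft_model are standard peft APIs.

```python
from peft import LoraConfig, get_peft_model
from autograd_4bit import load_llama_model_4bit_low_ram

# load the 4-bit quantized LLaMA base model (paths are placeholders)
model, tokenizer = load_llama_model_4bit_low_ram(
    "decapoda-research/llama-7b-hf",  # HF config/tokenizer path (assumption)
    "llama-7b-4bit.pt",               # packed 4-bit checkpoint (assumption)
)

# attach trainable LoRA adapters; the quantized base stays frozen
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights should appear
```

From there, the training loop in the alpaca-lora script applies as-is.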