Alpaca Lora 4bit

Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustments can be made for 2, 3 and 8 bits.
~Still numerically unstable.~ Resolved.
Reconstructing the fp16 weight matrix from the 4-bit data and calling torch.matmul drastically increased the inference speed.
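
The speedup comes from unpacking the stored 4-bit weights back into an fp16 matrix and handing the multiply to the regular torch.matmul (cuBLAS) path instead of multiplying inside the quantized CUDA kernel. Below is a minimal sketch of the idea, not the repo's actual kernel code; the packing layout (8 nibbles per int32, per-group fp16 scales and zeros) and the function names are assumptions.

```python
import torch

def dequantize_4bit(qweight, scales, zeros, group_size=128):
    """qweight: [in_features // 8, out_features] int32, 8 nibbles per word (assumed layout).
    scales, zeros: [in_features // group_size, out_features] fp16."""
    shifts = torch.arange(0, 32, 4, device=qweight.device)        # 8 nibble positions
    q = (qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & 0xF     # [in/8, 8, out]
    q = q.reshape(-1, qweight.shape[1]).to(scales.dtype)          # [in, out]
    s = scales.repeat_interleave(group_size, dim=0)               # expand per-group params
    z = zeros.repeat_interleave(group_size, dim=0)
    return (q - z) * s                                            # reconstructed fp16 weights

def matmul_4bit(x, qweight, scales, zeros, group_size=128):
    # Reconstruct fp16 weights once, then let the regular cuBLAS matmul do the work.
    w = dequantize_4bit(qweight, scales, zeros, group_size)
    return torch.matmul(x, w)
```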

Requirements

gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
peft: https://github.com/huggingface/peft.git

Install

Copy the files from the GPTQ-for-LLaMa folder of this repo into your GPTQ-for-LLaMa checkout and re-compile the CUDA extension.
Copy peft/tuners/lora.py from this repo into your peft installation, replacing the original file (a scripted sketch of both steps follows below).
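
If it helps, the two steps above can be scripted. This is a hypothetical helper, not part of the repo; it assumes this repository, GPTQ-for-LLaMa and peft are checked out side by side, so adjust the paths to your setup.

```python
import shutil
import subprocess
from pathlib import Path

repo = Path("alpaca_lora_4bit")     # this repository (assumed checkout path)
gptq = Path("GPTQ-for-LLaMa")       # qwopqwop200/GPTQ-for-LLaMa checkout
peft = Path("peft")                 # huggingface/peft checkout

# Step 1: overwrite the GPTQ-for-LLaMa files and rebuild the CUDA extension.
for src in (repo / "GPTQ-for-LLaMa").iterdir():
    if src.is_file():
        shutil.copy(src, gptq / src.name)
subprocess.run(["python", "setup_cuda.py", "install"], cwd=gptq, check=True)

# Step 2: replace peft's lora.py with the patched version from this repo.
shutil.copy(repo / "peft" / "tuners" / "lora.py",
            peft / "src" / "peft" / "tuners" / "lora.py")
```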

Finetuning

The same finetune script from https://github.com/tloen/alpaca-lora can be used.
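
For reference, the LoRA wrapping that the finetune script applies to the loaded 4-bit base model looks roughly like the sketch below; the hyperparameters mirror the alpaca-lora defaults and are assumptions here, and the patched peft from this repo is what lets get_peft_model work on the quantized layers.

```python
from peft import LoraConfig, get_peft_model

def add_lora_adapters(model):
    """Wrap an already-loaded 4-bit base model with LoRA adapters.
    Hyperparameters mirror the tloen/alpaca-lora defaults (assumed)."""
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        bias="none",
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()   # only the LoRA matrices should be trainable
    return model
```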