Alpaca Lora 4bit

Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustment can be made for 2, 3 and 8 bits.
~Still numerically unstable.~ Resolved.
Reconstructing the fp16 matrix from the 4-bit data and calling torch.matmul greatly increased inference speed.
Added install scripts for Windows and Linux.
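
A rough sketch of the dequantize-then-matmul idea, using made-up names (`dequantize`, `forward_quantized`) and dummy data; the real implementation lives in this repo's autograd_4bit.py and the patched GPTQ-for-LLaMa code:

```python
import torch

def dequantize(qweight, scales, zeros):
    # qweight holds the unpacked quantized integers (one per weight element);
    # scales/zeros are the per-column quantization parameters.
    return (qweight.to(scales.dtype) - zeros) * scales

def forward_quantized(x, qweight, scales, zeros):
    # Reconstruct the dense weight matrix, then fall back to the highly
    # optimized dense matmul instead of a custom 4-bit kernel.
    w = dequantize(qweight, scales, zeros)
    return torch.matmul(x, w)

# Dummy data (float32 on CPU for illustration; the repo reconstructs fp16 on GPU)
x = torch.randn(1, 8)
qweight = torch.randint(0, 16, (8, 16))
scales = torch.rand(16)
zeros = torch.full((16,), 8.0)
print(forward_quantized(x, qweight, scales, zeros).shape)  # torch.Size([1, 16])
```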

Requirements

gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
peft: https://github.com/huggingface/peft.git

Install

~Copy the files under GPTQ-for-LLaMa in this repo into your GPTQ-for-LLaMa path and re-compile the CUDA extension.~
~Copy peft/tuners/lora.py into your peft path, replacing the original file.~

Linux:

./install.sh

Windows:

./install.bat

Finetune

~The same finetune script from https://github.com/tloen/alpaca-lora can be used.~

After installation, this script can be used:

python finetune.py
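
The core of the finetuning setup is wrapping the 4-bit base model with peft's LoRA adapters. A minimal sketch of that step (assuming `model` is the 4-bit LLaMA model already loaded through the patched GPTQ code; the hyperparameters shown are illustrative, not necessarily what finetune.py uses):

```python
from peft import LoraConfig, get_peft_model

# `model` is assumed to be the 4-bit LLaMA base model loaded via this repo's
# patched GPTQ code (the exact loader call is omitted here).
lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections, as in alpaca-lora
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the LoRA adapter weights train
model.print_trainable_parameters()          # the 4-bit base weights stay frozen
```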

Inference

After installation, this script can be used:

python inference.py
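
A minimal sketch of the generation step (assuming `model` and `tokenizer` are the 4-bit model and its tokenizer loaded as in inference.py; the prompt format and generation settings are illustrative):

```python
import torch

# Alpaca-style instruction prompt (illustrative)
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat is the capital of France?\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```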