# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustments can be made for 2, 3, and 8 bits.

~~Still numerically unstable.~~ Resolved.

Reconstructing the fp16 matrix from the 4-bit data and calling torch.matmul greatly increases the inference speed (a minimal sketch of this idea follows below).

Added install scripts for Windows and Linux.
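
The speed-up comes from dequantizing the stored 4-bit weights back to fp16 on the fly and then using a plain torch.matmul, instead of a custom low-bit matmul kernel. Below is a minimal sketch of that idea; the packing layout (8 nibbles per int32, per-group scales and zero points) and tensor names are assumptions for illustration, not the exact storage format used by GPTQ-for-LLaMa.

```python
import torch

def dequantize_4bit(qweight: torch.Tensor, scales: torch.Tensor,
                    zeros: torch.Tensor) -> torch.Tensor:
    """Illustrative layout: qweight is (in_features // 8, out_features) int32,
    holding 8 4-bit values per int32 along the input dimension;
    scales / zeros are (in_features // group_size, out_features) fp16."""
    in_packed, out_features = qweight.shape
    in_features = in_packed * 8
    # Unpack the 8 nibbles stored in every int32.
    shifts = torch.arange(0, 32, 4, device=qweight.device)
    unpacked = (qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & 0xF
    unpacked = unpacked.reshape(in_features, out_features).to(torch.float16)
    # Broadcast the per-group scale and zero point over each group of rows.
    group_size = in_features // scales.shape[0]
    scales = scales.repeat_interleave(group_size, dim=0)
    zeros = zeros.repeat_interleave(group_size, dim=0)
    return (unpacked - zeros) * scales            # reconstructed fp16 weight, (in, out)

def quant_linear_forward(x, qweight, scales, zeros):
    w = dequantize_4bit(qweight, scales, zeros)   # reconstruct fp16 matrix
    return torch.matmul(x, w)                     # plain fp16 matmul runs fast on GPU
```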
## Requirements
gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
peft: https://github.com/huggingface/peft.git
## Install
~~Copy the files from this repo's GPTQ-for-LLaMa folder into your GPTQ-for-LLaMa path and re-compile the CUDA extension.~~
~~Copy peft/tuners/lora.py into your peft path, replacing the original file.~~
Linux:
`./install.sh`

Windows:
`./install.bat`
## Finetune
~~The same finetune script from https://github.com/tloen/alpaca-lora can be used.~~
After installation, this script can be used:
`python finetune.py`
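
For orientation, here is a rough sketch of the LoRA setup that a finetuning script like finetune.py performs with peft. The model id and hyperparameters are placeholders, and a stock fp16 model is loaded through transformers as a stand-in for the GPTQ 4-bit checkpoint that finetune.py actually loads with its own loader; only the peft API usage is standard.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "decapoda-research/llama-7b-hf"        # placeholder model id
model = AutoModelForCausalLM.from_pretrained(base)   # finetune.py loads the 4-bit checkpoint instead
tokenizer = AutoTokenizer.from_pretrained(base)

config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)     # freezes the base weights, adds LoRA layers
model.print_trainable_parameters()        # only the adapter weights are trainable
```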
## Inference
After installation, this script can be used:
`python inference.py`
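
Similarly, a rough sketch of the generation step, assuming a base model plus a trained LoRA adapter directory; the model id, adapter path, prompt, and sampling parameters are placeholders rather than what inference.py actually uses.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "decapoda-research/llama-7b-hf"                  # placeholder model id
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, "./lora-out")  # placeholder adapter directory
tokenizer = AutoTokenizer.from_pretrained(base)

prompt = "### Instruction:\nList three fruits.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64,
                            do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```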