# alpaca_lora_4bit

Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA fine-tuning possible with a 4-bit base model. The same adjustment can be made for 2, 3, and 8 bits.
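
As a rough illustration of the intended workflow (not the exact patched API in this repo), the sketch below loads a GPTQ-quantized LLaMA checkpoint with GPTQ-for-LLaMa's `load_quant` helper and then wraps it with a standard peft LoRA adapter. The loader's import path and signature, as well as the model and checkpoint names, are assumptions and may differ from your setup.

```python
# Rough sketch only -- import paths, signatures and file names are assumptions
# and may differ from the patched code in this repository.
from transformers import LlamaTokenizer
from peft import LoraConfig, get_peft_model

# GPTQ-for-LLaMa provides a load_quant() helper for quantized checkpoints;
# its location and arguments vary between versions of that repo.
from llama_inference import load_quant  # hypothetical import path

# Load a 4-bit quantized LLaMA-7B checkpoint (names are placeholders).
model = load_quant("decapoda-research/llama-7b-hf", "llama-7b-4bit.pt", 4)
tokenizer = LlamaTokenizer.from_pretrained("decapoda-research/llama-7b-hf")

# Attach a LoRA adapter on top of the frozen 4-bit base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights remain trainable
```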

## Packages required

- GPTQ-for-LLaMa: https://github.com/qwopqwop200/GPTQ-for-LLaMa