Update README.md
This commit is contained in: parent ae04f88e57, commit fecce0e1a5
# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustment can be made for 2, 3 and 8 bits.
<br>
Still numerically unstable.
<br>
# Requirements
gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa<br>
peft: https://github.com/huggingface/peft.git<br>
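
The README does not spell out install steps for these requirements; a typical setup might look like the sketch below. The clone-and-editable-install layout is an assumption, not documented by this repo, and GPTQ-for-LLaMa may additionally need its CUDA kernels built per its own README.

```shell
# Hedged sketch: fetch both dependencies side by side (layout assumed).
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/huggingface/peft.git

# Install peft in editable mode so the local adjustments to its code take effect.
pip install -e ./peft
```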