Update README.md
parent 04f5575a23
commit 5c1411ff18
@@ -1,7 +1,7 @@
# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA fine-tuning possible with a 4-bit base model. The same adjustment can be made for 2, 3, and 8 bits.
<br>
Still numerically unstable.
~Still numerically unstable.~ Resolved.
<br>
# Requirements
gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa<br>
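
As a rough illustration of what the adjusted peft + GPTQ-for-LLaMa setup enables, here is a minimal sketch of attaching a LoRA adapter to a 4-bit quantized base model. The `load_quant` import path, checkpoint names, and LoRA hyperparameters below are assumptions for illustration, not taken from this commit.

```python
# Hedged sketch: attach a LoRA adapter (via peft) to a 4-bit GPTQ-quantized LLaMA base model.
# `load_quant` and its import path are assumed from GPTQ-for-LLaMa; model/checkpoint paths are placeholders.
from peft import LoraConfig, get_peft_model
from llama import load_quant  # assumed helper from the GPTQ-for-LLaMa repo

# Load the quantized base model: (HF model name/path, quantized checkpoint, wbits, groupsize).
model = load_quant("./llama-7b-hf", "./llama-7b-4bit.pt", 4, 128)

# Standard LoRA configuration targeting the attention projections, as in Alpaca-LoRA.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the LoRA weights should be trainable
```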