Update README.md

parent bbaf1b1bf5
commit ae04f88e57

# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa, making LoRA finetuning possible with a 4-bit base model. The same adjustments can be made for 2, 3 and 8 bits.
<br>
# Requirements
gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa<br>
peft: https://github.com/huggingface/peft.git<br>
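
Both can be fetched from GitHub, e.g. (a minimal sketch, assuming `git` is available and the default branches are fine):

```sh
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/huggingface/peft.git
```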
<br>
# Install
Copy the files from the GPTQ-for-LLaMa folder into your GPTQ-for-LLaMa path and re-compile the CUDA extension.<br>
Copy peft/tuners/lora.py into your peft path, replacing the original file.<br>
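
A minimal sketch of those two steps, assuming this repo and a GPTQ-for-LLaMa checkout sit side by side, peft is installed in the current Python environment, and GPTQ-for-LLaMa still builds its kernel via `setup_cuda.py` (all paths here are illustrative):

```sh
# overwrite the upstream sources with the patched files from this repo
cp GPTQ-for-LLaMa/*.py ../GPTQ-for-LLaMa/

# re-compile the CUDA extension inside the GPTQ-for-LLaMa path
(cd ../GPTQ-for-LLaMa && python setup_cuda.py install)

# replace the installed peft lora.py with the patched copy
PEFT_DIR=$(python -c 'import os, peft; print(os.path.dirname(peft.__file__))')
cp peft/tuners/lora.py "$PEFT_DIR/tuners/lora.py"
```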
<br>
# Finetuning
The same finetune script from https://github.com/tloen/alpaca-lora can be used.<br>
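
For reference, a typical invocation looks like the following; the flag names come from tloen/alpaca-lora's `finetune.py`, while the model path, data file and output directory are placeholders for your own setup:

```sh
python finetune.py \
    --base_model './llama-7b-4bit' \
    --data_path 'alpaca_data.json' \
    --output_dir './lora-alpaca-4bit'
```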