Update README.md

John Smith 2023-03-18 13:36:06 +08:00 committed by GitHub
parent bbaf1b1bf5
commit ae04f88e57
1 changed file with 8 additions and 8 deletions

@@ -1,14 +1,14 @@
# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustment can be made for 2, 3 and 8 bits.
<br>
# Requirements
-gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
-peft: https://github.com/huggingface/peft.git
+gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa<br>
+peft: https://github.com/huggingface/peft.git<br>
<br>
# Install
-copy the files from the GPTQ-for-LLaMa folder here into your GPTQ-for-LLaMa path and re-compile the CUDA extension
-copy peft/tuners/lora.py into your peft path, replacing the original file
+copy the files from the GPTQ-for-LLaMa folder here into your GPTQ-for-LLaMa path and re-compile the CUDA extension<br>
+copy peft/tuners/lora.py into your peft path, replacing the original file<br>
<br>
# Finetuning
-The same finetune script from https://github.com/tloen/alpaca-lora can be used.
+The same finetune script from https://github.com/tloen/alpaca-lora can be used.<br>
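
As a rough illustration of the finetuning setup described above, here is a minimal, hypothetical sketch. The loader `load_llama_4bit` and the checkpoint/config paths are placeholders for whatever the patched GPTQ-for-LLaMa code actually exposes, not this repo's real API; only the `LoraConfig`/`get_peft_model` calls are standard peft usage, with settings mirroring the tloen/alpaca-lora defaults.

```python
# Minimal sketch of 4-bit base model + LoRA finetuning setup (assumptions noted below).
# NOTE: load_llama_4bit and the paths are hypothetical placeholders for the loader
# provided by the modified GPTQ-for-LLaMa files; the peft calls are the real peft API.
from peft import LoraConfig, get_peft_model

# Hypothetical: load the GPTQ-quantized 4-bit LLaMA base model and its tokenizer.
model, tokenizer = load_llama_4bit(
    checkpoint="llama-7b-4bit.pt",
    config_dir="llama-7b/",
)

# Attach LoRA adapters; these settings mirror the tloen/alpaca-lora defaults.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable

# From here, the finetune script from https://github.com/tloen/alpaca-lora
# can drive training on the instruction dataset as usual.
```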