Update README.md

John Smith 2023-03-22 14:55:24 +08:00 committed by GitHub
parent 02bd0338f1
commit cab067fef9
1 changed file with 36 additions and 6 deletions

@@ -7,14 +7,44 @@ Reconstructing the fp16 matrix from 4-bit data and calling torch.matmul largely increased the inference speed.
<br>
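The speed-up above comes from unpacking the 4-bit weights back into a full fp16 matrix and letting a single `torch.matmul` do the multiplication, instead of working on the quantized values directly. Below is a minimal PyTorch sketch of that idea; the packing layout and the `qweight`/`scales`/`zeros` names are illustrative assumptions, not the exact format used by the GPTQ-for-LLaMa kernels.
```
import torch

def dequant_matmul(x, qweight, scales, zeros):
    """Illustrative only: rebuild an fp16 weight matrix from packed 4-bit data,
    then let torch.matmul handle the dense multiplication.

    x       : (batch, in_features) fp16 activations
    qweight : (in_features // 8, out_features) int32, eight 4-bit values per entry
    scales  : (out_features,) fp16 per-column scale
    zeros   : (out_features,) fp16 per-column zero point
    """
    # Unpack eight 4-bit values from every int32 entry.
    shifts = torch.arange(0, 32, 4, dtype=torch.int32, device=qweight.device)
    unpacked = (qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & 0xF  # (in/8, 8, out)
    w_int = unpacked.reshape(-1, qweight.shape[1])                    # (in, out)

    # Reconstruct the fp16 weights and fall back to a single dense matmul.
    w_fp16 = (w_int.to(torch.float16) - zeros) * scales
    return torch.matmul(x, w_fp16)
```
<br>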
Added install scripts for Windows and Linux.
<br>
# Requirements
GPTQ-for-LLaMa: https://github.com/qwopqwop200/GPTQ-for-LLaMa<br>
peft: https://github.com/huggingface/peft.git<br>
<br>
# Install
~Copy the files from GPTQ-for-LLaMa into your GPTQ-for-LLaMa path and re-compile the CUDA extension.~<br>
~Copy peft/tuners/lora.py into your peft path, replacing the existing file.~<br>
<br>
Linux:
```
./install.sh
```
<br>
Windows:
```
./install.bat
```
<br>
# Finetune
~The same finetune script from https://github.com/tloen/alpaca-lora can be used.~<br>
After installation, this script can be used:
```
python finetune.py
```
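For orientation, the finetuning step follows the standard peft LoRA pattern: wrap the (4-bit) base model with low-rank adapters and train only those. The sketch below shows that pattern with the stock `transformers`/`peft` API using a stand-in model; the model name, target modules, and hyperparameters are assumptions, not the exact settings used by `finetune.py`.
```
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Stand-in base model so the sketch runs on its own; in this repo the base
# model would instead come from the patched 4-bit GPTQ-for-LLaMa loader.
model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=8,                        # LoRA rank (assumed value)
    lora_alpha=16,
    target_modules=["c_attn"],  # for LLaMA this would typically be q_proj/v_proj
    fan_in_fan_out=True,        # needed for GPT-2's Conv1D layers
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```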
# Inference
After installation, this script can be used:
```
python inference.py
```
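Conceptually, inference loads the saved LoRA adapter on top of the base model and generates as usual. A generic `transformers`/`peft` sketch of that flow is below; the model name and adapter path are placeholders, not files shipped by this repo.
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder identifiers: in this repo the base model is the 4-bit GPTQ
# checkpoint and the adapter directory is whatever finetune.py wrote out.
base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

inputs = tokenizer("Below is an instruction that describes a task.", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```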