Update README.md
parent 0f25304184
commit 6130b9bd0f
@@ -7,6 +7,7 @@ Based on https://github.com/johnsmith0031/alpaca_lora_4bit
 Can run real-time LLM chat using alpaca on an 8GB NVIDIA/CUDA GPU (i.e. 3070 Ti mobile)
 
 ## Requirements
 
+- Linux
 - Docker
 - NVIDIA GPU with driver version that supports CUDA 11.7+ (e.g. 525)
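Not part of the commit itself, but a minimal sketch of how one might sanity-check the requirements added above on the host. It assumes the NVIDIA Container Toolkit is already installed so Docker can pass the GPU through, and uses `nvidia/cuda:11.7.1-base-ubuntu22.04` only as an example image tag.

```bash
# Check the installed driver version and GPU memory
# (CUDA 11.7+ needs roughly driver 515 or newer, e.g. 525).
nvidia-smi --query-gpu=driver_version,memory.total --format=csv

# Check that Docker can see the GPU inside a container;
# the image tag here is an example, any CUDA 11.7+ base image works.
docker run --rm --gpus all nvidia/cuda:11.7.1-base-ubuntu22.04 nvidia-smi
```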