diff --git a/README.md b/README.md
index 13f1bde..1b969a1 100644
--- a/README.md
+++ b/README.md
@@ -7,6 +7,7 @@ Based on https://github.com/johnsmith0031/alpaca_lora_4bit
 
 Can run real-time LLM chat using alpaca on a 8GB NVIDIA/CUDA GPU (ie 3070 Ti mobile)
 
 ## Requirements
+- Linux
 - Docker
 - NVIDIA GPU with driver version that supports CUDA 11.7+ (e.g. 525)
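
The driver requirement in the hunk above can be verified with `nvidia-smi`. A minimal sketch, assuming the threshold of 525 from the README's own example and `nvidia-smi`'s usual `major.minor.patch` driver-version format (the variable names are illustrative, not part of the project):

```shell
# Check whether the installed NVIDIA driver is new enough for CUDA 11.7+.
# Assumption: driver major version >= 525 (the README's example) is sufficient.
min_driver=525
driver_version=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null | head -n1)
major=${driver_version%%.*}   # keep only the major version, e.g. "525.60.13" -> "525"
if [ "${major:-0}" -ge "$min_driver" ]; then
  echo "driver OK: $driver_version"
else
  echo "driver too old or nvidia-smi not found: ${driver_version:-none}"
fi
```

On a machine without the NVIDIA tools installed this prints the "not found" branch rather than failing, which makes it safe to run before pulling the Docker image.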