diff --git a/README.md b/README.md
index f3e5242..13f1bde 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@ Can run real-time LLM chat using alpaca on a 8GB NVIDIA/CUDA GPU (ie 3070 Ti mob
 ## Requirements
 
 - Docker
-- NVIDIA GPU
+- NVIDIA GPU with a driver version that supports CUDA 11.7+ (e.g. 525)
 
 ## Installation
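
A quick way to verify the requirement added above is to compare the installed driver's major version against the CUDA 11.7 minimum. The sketch below assumes 515 as that minimum (per NVIDIA's CUDA 11.7 release notes; verify against NVIDIA's compatibility table before relying on it):

```shell
#!/bin/sh
# Succeeds if the driver version's major component is >= 515,
# the assumed minimum driver for CUDA 11.7 on Linux.
supports_cuda117() {
  [ "${1%%.*}" -ge 515 ]
}

# On a machine with the NVIDIA driver installed, query the real version:
#   supports_cuda117 "$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)"
supports_cuda117 "525.60.13" && echo "driver OK for CUDA 11.7+"
```

The version check only inspects the major component, which is how NVIDIA's compatibility tables are organized; `nvidia-smi --query-gpu=driver_version` returns the full dotted version string.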