Can run real-time LLM chat using Alpaca on an 8GB NVIDIA/CUDA GPU (e.g. a 3070 Ti mobile).
## Requirements
- Docker
- NVIDIA GPU with a driver that supports CUDA 11.7+ (e.g. driver version 525)
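You can verify the driver requirement before installing. A minimal sketch using `nvidia-smi` (shipped with the NVIDIA driver); the exact minimum driver for CUDA 11.7 is an assumption here, but 525 as suggested above is safely recent:

```shell
# Print the installed NVIDIA driver version, or a hint if the driver is missing.
# Driver 525+ satisfies the CUDA 11.7+ requirement stated above.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=driver_version --format=csv,noheader
else
  echo "nvidia-smi not found: install the NVIDIA driver first"
fi
```

`nvidia-smi` without arguments also shows the highest CUDA version the driver supports in its header line.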
## Installation