diff --git a/README.md b/README.md
index fb0326d..f22e387 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,9 @@
Made some adjustments to the code in peft and gptq for llama, making LoRA fine-tuning possible with a 4-bit base model. The same adjustments can be made for 2, 3 and 8 bits.
* Install Manual by s4rduk4r: https://github.com/s4rduk4r/alpaca_lora_4bit_readme/blob/main/README.md (**NOTE:** don't use the install script, use the requirements.txt instead.)
+
+* Also remember to create a virtual environment (venv) if you do not want your existing packages to be overwritten.
+
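+  A minimal sequence for this might look as follows (a sketch assuming Python 3 on a POSIX shell; the environment name `venv` is only an example):
+
+  ```sh
+  python -m venv venv               # create the virtual environment
+  source venv/bin/activate          # activate it for this shell session
+  pip install -r requirements.txt   # install dependencies into the venv only
+  ```
+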
# Update Logs
* Resolved a numerical instability issue