diff --git a/README.md b/README.md
index a34d5d6..7f2e90d 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa, making it possible to finetune with LoRA on top of a 4-bit base model. The same adjustment can be made for 2, 3 and 8 bits.
-Still numerically unstable.
+~~Still numerically unstable.~~ Resolved.
# Requirements
gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa