diff --git a/README.md b/README.md
index 7f2e90d..bf8eac8 100644
--- a/README.md
+++ b/README.md
@@ -3,6 +3,8 @@ Made some adjust for the code in peft and gptq for llama, and make it possible f
~~Still numerically unstable.~~ Resolved.
+Reconstructing the fp16 matrix from the 4-bit data and calling torch.matmul drastically increased the inference speed.
+
# Requirements
gptq-for-llama: https://github.com/qwopqwop200/GPTQ-for-LLaMa
peft: https://github.com/huggingface/peft.git