diff --git a/README.md b/README.md
index de63f58..36ac88a 100644
--- a/README.md
+++ b/README.md
@@ -1,30 +1,17 @@
# Alpaca Lora 4bit
Made some adjustments to the code in peft and GPTQ-for-LLaMa to make LoRA finetuning possible with a 4-bit base model. The same adjustment can be made for 2, 3, and 8 bits.
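
For orientation, a minimal sketch of what the finetuning path does: load the 4-bit quantized base model, then wrap it with peft's LoRA adapters so only the small low-rank matrices are trained. The loader name and the file paths below are assumptions for illustration; finetune.py is the actual entry point.

```python
# Minimal sketch, not the exact finetune.py code. The loader name and
# signature are assumptions based on this repo's autograd_4bit module;
# the paths are hypothetical placeholders.
from autograd_4bit import load_llama_model_4bit_low_ram
from peft import LoraConfig, get_peft_model

model, tokenizer = load_llama_model_4bit_low_ram(
    "llama-7b-4bit/",    # hypothetical config dir
    "llama-7b-4bit.pt",  # hypothetical 4-bit checkpoint
)

lora_config = LoraConfig(
    r=8,                                  # rank of the LoRA update matrices
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # adapters on attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```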
-
-* Install Manual by s4rduk4r: https://github.com/s4rduk4r/alpaca_lora_4bit_readme/blob/main/README.md (**NOTE:** don't use the install script, use the requirements.txt instead.)
-
+* Install manual by s4rduk4r: https://github.com/s4rduk4r/alpaca_lora_4bit_readme/blob/main/README.md (**NOTE:** don't use the install script; use requirements.txt instead.)
* Also remember to create a venv if you do not want your existing packages to be overwritten.
-
# Update Logs
* Resolved a numerical instability issue.
-
-
* Reconstructing the fp16 matrix from the 4-bit data and calling torch.matmul greatly increased inference speed (see the sketch after this list).
-
-
* Added install scripts for Windows and Linux.
-
-
* Added gradient checkpointing. A 30B model can now be finetuned in 4-bit on a single GPU with 24 GB of VRAM with gradient checkpointing enabled (finetune.py updated). It reduces training speed, so this option is not needed if you have enough VRAM; the mechanism is sketched after this list.
-
-
* Added install manual by s4rduk4r.
-
-
* Added pip install support by sterlind, in preparation for merging changes upstream.
-
+* Added V2 model support (with groupsize; both inference and finetune).
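
On the "reconstruct fp16, then torch.matmul" change above: instead of running a custom 4-bit matmul kernel, the weights are unpacked to fp16 once per forward call so the dense GEMM can do the work. Below is a simplified sketch of the idea; the packing layout and the per-column scales/zeros are assumptions, not the exact GPTQ-for-LLaMa tensor format (which also handles groupsize).

```python
import torch

def dequant_matmul(x, qweight, scales, zeros, dtype=torch.float32):
    # qweight packs 8 4-bit values per int32 along the input dimension:
    # shape (in_features // 8, out_features). On GPU this would run in
    # torch.float16; float32 keeps the demo runnable on CPU.
    shifts = torch.arange(0, 32, 4, device=qweight.device)     # nibble offsets 0,4,...,28
    w = (qweight.unsqueeze(1) >> shifts.view(1, -1, 1)) & 0xF  # -> (in//8, 8, out)
    w = w.reshape(-1, qweight.shape[1]).to(dtype)              # -> (in, out)
    w = (w - zeros) * scales                                   # dequantize per output column
    return torch.matmul(x, w)                                  # dense GEMM does the heavy lifting

in_f, out_f = 16, 8
qweight = torch.randint(0, 2**31 - 1, (in_f // 8, out_f), dtype=torch.int32)
scales = torch.rand(out_f)
zeros = torch.full((out_f,), 7.5)
x = torch.randn(2, in_f)
print(dequant_matmul(x, qweight, scales, zeros).shape)  # torch.Size([2, 8])
```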
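
And on gradient checkpointing: activations are discarded during the forward pass and recomputed during backward, trading training speed for VRAM. A self-contained illustration of the mechanism using torch.utils.checkpoint, with a toy block rather than the repo's LLaMA layers:

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.ff = torch.nn.Sequential(
            torch.nn.Linear(dim, 4 * dim),
            torch.nn.GELU(),
            torch.nn.Linear(4 * dim, dim),
        )

    def forward(self, x):
        return x + self.ff(x)

blocks = torch.nn.ModuleList(Block() for _ in range(4))
x = torch.randn(8, 256, requires_grad=True)
for blk in blocks:
    # checkpoint() frees each block's intermediate activations and
    # recomputes them in backward: less VRAM, more compute time.
    x = checkpoint(blk, x, use_reentrant=False)
x.sum().backward()
```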
# Requirements
GPTQ-for-LLaMa: https://github.com/qwopqwop200/GPTQ-for-LLaMa