Mistral-7B-Instruct-v0.2-8bit-abliterated
This model was abliterated by computing a refusal vector on an 8-bit bitsandbytes quant of the model, and then applying that vector to the full-weight model. Abliteration was performed locally on a CUDA GPU; VRAM consumption appeared to stay under 12 GB.
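The linked repository below implements the actual pipeline; purely as an illustrative, hypothetical sketch, a refusal direction is commonly estimated as the difference of mean hidden-state activations between refusal-inducing and benign prompts, with the model loaded as an 8-bit bitsandbytes quant to keep VRAM low. The prompt lists and helper names here are assumptions, not the repository's code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Hypothetical sketch; see the linked repository for the real script.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tok = AutoTokenizer.from_pretrained(model_id)
quant_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit quant keeps VRAM low
    device_map="auto",
)

@torch.no_grad()
def mean_last_hidden(prompts, layer=-1):
    # Mean hidden state of the final prompt token at the chosen layer.
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(quant_model.device)
        out = quant_model(**ids, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1].float())
    return torch.stack(states).mean(dim=0)

# harmful_prompts / harmless_prompts are placeholder prompt lists.
refusal_dir = mean_last_hidden(harmful_prompts) - mean_last_hidden(harmless_prompts)
```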
No additional fine-tuning was performed on these weights; repair (e.g., subsequent fine-tuning) is required for proper use.
The code used can be found on GitHub at https://github.com/jim-plus/llm-abliteration.
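For the second step, applying the vector to the full-weight model, a minimal hypothetical sketch projects the refusal direction out of selected weight matrices. The name `full_model`, the choice of projection matrices, and the per-layer module names (which follow the transformers Mistral implementation) are assumptions, not the repository's exact code:

```python
import torch

def orthogonalize(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # W <- (I - v v^T) W: remove the component of each output along the
    # refusal direction, so the layer can no longer write along v.
    v = (refusal_dir / refusal_dir.norm()).to(weight)
    return weight - torch.outer(v, v @ weight)

# full_model is an assumed full-precision MistralForCausalLM instance;
# attention output and MLP down projections are edited as an example.
for layer in full_model.model.layers:
    layer.self_attn.o_proj.weight.data = orthogonalize(
        layer.self_attn.o_proj.weight.data, refusal_dir
    )
    layer.mlp.down_proj.weight.data = orthogonalize(
        layer.mlp.down_proj.weight.data, refusal_dir
    )
```

Estimating the direction on the 8-bit quant while editing only the full-precision weights is consistent with the reported sub-12 GB VRAM footprint.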