# Mistral-7B-Instruct-v0.2-8bit-abliterated

This model was abliterated by computing a refusal vector on an 8-bit bitsandbytes quant of the model, and then applying that vector to the full-weight model. Abliteration was performed locally on a CUDA GPU; VRAM consumption appeared to stay under 12 GB.
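
The sketch below illustrates the general two-step approach described above: estimate a refusal direction from a memory-light 8-bit quant, then project it out of the full-weight model. It is a minimal sketch assuming the common "difference of means" technique; the prompt lists, layer index, and target matrices are illustrative placeholders and not necessarily what the linked repository does.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"
LAYER = 14  # illustrative choice; mid-depth layers often carry the refusal signal

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Step 1: compute the refusal vector on an 8-bit quant to keep VRAM low.
quant_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

def mean_hidden(prompts):
    """Mean residual-stream activation at LAYER over the last token."""
    acts = []
    for p in prompts:
        inputs = tokenizer(p, return_tensors="pt").to(quant_model.device)
        with torch.no_grad():
            out = quant_model(**inputs, output_hidden_states=True)
        acts.append(out.hidden_states[LAYER][0, -1].float())
    return torch.stack(acts).mean(dim=0)

harmful_prompts = ["..."]   # placeholder: prompts the model refuses
harmless_prompts = ["..."]  # placeholder: matched prompts it answers
refusal = mean_hidden(harmful_prompts) - mean_hidden(harmless_prompts)
refusal = refusal / refusal.norm()

# Step 2: apply the vector to the full-weight (bf16) model by projecting
# it out of each layer's down-projection weights (orthogonalization).
del quant_model
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
r = refusal.to(torch.bfloat16)
for layer in model.model.layers:
    W = layer.mlp.down_proj.weight.data  # shape (hidden, intermediate)
    W -= torch.outer(r, r @ W)           # remove the refusal component: (I - r r^T) W
model.save_pretrained("Mistral-7B-Instruct-v0.2-abliterated")
```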

No additional fine-tuning was performed on these weights. Repair (e.g., light fine-tuning to heal post-abliteration degradation) is required for proper use.

The code used is available on GitHub at https://github.com/jim-plus/llm-abliteration.
