gemma-3-12b-it-orthogonal-rotation-bounded-ablation-v2-12B

ORBA (Orthogonal Rotational Bounded Ablation) has been applied to several layers of this model, targeting both the mlp.down_proj.weight and self_attn.o_proj.weight streams, alongside a few supporting techniques. Preserving norms at the neuron level also ensured numerical conservation of the Frobenius norm for each stream subjected to intervention.
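For intuition: the squared Frobenius norm of a weight matrix is the sum of its squared row norms, so any edit that restores each row (neuron) to its original norm conserves the Frobenius norm exactly. Below is a minimal sketch, assuming "neuron level" means per output row; the random perturbation is only a stand-in for the actual intervention.

```python
import torch

def preserve_row_norms(w_orig: torch.Tensor, w_edited: torch.Tensor) -> torch.Tensor:
    # Rescale each edited row (neuron) back to its original L2 norm.
    orig_norms = w_orig.norm(dim=1, keepdim=True)
    edited_norms = w_edited.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return w_edited * (orig_norms / edited_norms)

w = torch.randn(8, 16)
w_edited = w + 0.1 * torch.randn_like(w)   # stand-in for the actual edit
w_fixed = preserve_row_norms(w, w_edited)

# Per-row norms match, so the Frobenius norm is conserved as well.
assert torch.allclose(w.norm(dim=1), w_fixed.norm(dim=1))
assert torch.allclose(w.norm(), w_fixed.norm())
```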

Some refusal behaviors have been geometrically ablated, refusal being a classic, well-studied high-contrast case. Safety knowledge and awareness appear to be intact. We posit that a refusal persona was ablated. The vision stack remains part of the model but was not subjected to intervention.

It turns out that Winsorization (magnitude clipping) at the 0.995 quantile was, somewhat fortuitously, enough to avoid token-level glitching under the GeGLU activation function. Clipping at around the 0.9997 quantile (possibly 0.9996) was sufficient to avoid overflow, but still resulted in the aforementioned glitches during generation. More broadly, ensuring numerical stability is paramount under GeGLU.
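A minimal sketch of this kind of quantile-based magnitude clipping applied to a weight tensor. Only the 0.995 quantile comes from the text above; the helper name and the use of kthvalue (which sidesteps torch.quantile's element-count limit on large tensors) are assumptions.

```python
import torch

def winsorize_(weight: torch.Tensor, q: float = 0.995) -> torch.Tensor:
    """Clip weight magnitudes in place at the q-quantile of |weight|."""
    flat = weight.abs().float().flatten()
    # kthvalue is 1-indexed; pick the element sitting at the q-quantile.
    k = max(1, int(round(q * (flat.numel() - 1))) + 1)
    threshold = flat.kthvalue(k).values.item()
    return weight.clamp_(min=-threshold, max=threshold)

# Example: clip a modified projection weight before writing it back.
w = torch.randn(4096, 4096, dtype=torch.bfloat16)
winsorize_(w, q=0.995)
```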

In this model, we avoided the standard subtractive techniques for both directional contrast and ablation. The addition of numerical stabilization during calculations makes this version (v2) less glitch-prone than the prior version (v1). Further details of the intervention are forthcoming.
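To make the contrast concrete, here is a hedged sketch of the standard subtractive ablation avoided here, alongside one possible bounded, norm-preserving rotation. The rotational variant merely illustrates the idea suggested by the ORBA name; it is not the actual procedure, and refusal_dir is a hypothetical unit direction.

```python
import torch

def subtractive_ablation(w: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    # Standard approach (avoided in this model): subtract each row's
    # projection onto the unit refusal direction. Note this shrinks row norms.
    coeffs = (w @ refusal_dir).unsqueeze(1)
    return w - coeffs * refusal_dir

def bounded_rotation_ablation(w: torch.Tensor, refusal_dir: torch.Tensor,
                              max_angle: float = 0.05) -> torch.Tensor:
    # Illustrative alternative: rotate each row toward orthogonality with
    # refusal_dir by at most max_angle radians, in the plane spanned by the
    # row and the direction, keeping every row norm (and hence the Frobenius
    # norm) unchanged.
    norms = w.norm(dim=1, keepdim=True).clamp_min(1e-12)
    coeffs = (w @ refusal_dir).unsqueeze(1)                 # projection onto refusal_dir
    perp = w - coeffs * refusal_dir                         # orthogonal component
    perp_unit = perp / perp.norm(dim=1, keepdim=True).clamp_min(1e-12)
    theta = torch.acos((coeffs / norms).clamp(-1.0, 1.0))   # angle to refusal_dir
    step = (torch.pi / 2 - theta).clamp(-max_angle, max_angle)
    new_theta = theta + step                                # bounded move toward 90 degrees
    return norms * (new_theta.cos() * refusal_dir + new_theta.sin() * perp_unit)

r = torch.randn(3840); r = r / r.norm()                     # hypothetical refusal direction
w = torch.randn(256, 3840)
w_rot = bounded_rotation_ablation(w, r)
assert torch.allclose(w.norm(dim=1), w_rot.norm(dim=1), atol=1e-4)
```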
