---
base_model: google/gemma-2-9b-it
library_name: peft
---

# LoRA Adapter for SAE Introspection

This is a LoRA (Low-Rank Adaptation) adapter trained for SAE (Sparse Autoencoder) introspection tasks.

## Base Model

- **Base Model**: `google/gemma-2-9b-it`
- **Adapter Type**: LoRA
- **Task**: SAE Feature Introspection

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "thejaminator/gemma-hook-layer-0-step-3000")
```

## Training Details

This adapter was trained using a lightweight SAE introspection training script, which teaches the model to understand and explain SAE features through activation steering.
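For context, "activation steering" here means injecting a feature direction into the model's hidden activations so the model can be asked to describe the injected concept. The adapter name suggests a hook near layer 0. Below is a conceptual sketch using a PyTorch forward hook; the steering vector, layer index, and scale are illustrative assumptions, not the actual training code:

```python
import torch

# Stand-in for an SAE decoder direction; in practice this would come from a trained SAE.
steering_vector = torch.randn(base_model.config.hidden_size)
scale = 8.0  # illustrative steering strength

def steer(module, inputs, output):
    # Add the feature direction to every token's residual-stream activation.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + scale * steering_vector.to(hidden.device, hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

# Hook the first decoder layer of the base transformer; actual placement may differ.
handle = base_model.model.layers[0].register_forward_hook(steer)
# ... run generation with the steered activations, then clean up:
handle.remove()
```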
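Once the adapter is loaded as shown in the Usage section, the model can be queried like any Gemma-2 chat model. A minimal generation sketch follows; the prompt and decoding settings are illustrative, not values from the training setup:

```python
# Hypothetical introspection prompt -- adjust to match the format used in training.
messages = [{"role": "user", "content": "What concept does the injected feature represent?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```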