Update README.md

README.md CHANGED

@@ -9,202 +9,111 @@ tags:
**Removed (previous model-card template):**

```markdown
- transformers
- trl
- unsloth
---

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]

### Framework versions

- PEFT 0.16.0
```
**Added (new model card):**

```yaml
tags:
- transformers
- trl
- unsloth
- Reddit
- toxic
license: apache-2.0
language:
- en
metrics:
- accuracy
---
```

## Description

This repository contains LoRA adapter weights trained on top of a Gemma-3-12B base model to determine whether a Reddit comment violates a specified subreddit rule. The model expects a structured prompt containing (1) the subreddit, (2) a single rule, (3) two violating examples, (4) two non-violating examples, and (5) the comment to evaluate. It was trained with supervised fine-tuning (SFT) to output a single-token answer: either "Yes" or "No".
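To make the expected input concrete, the five components can be assembled with a small helper. This is an illustrative sketch only: the function and argument names are my own, and the authoritative template is the one shown in the usage example on this card.

```python
def build_prompt(subreddit, rule, violating, non_violating, body):
    """Assemble the structured moderation prompt (illustrative helper).

    `violating` and `non_violating` are each lists of two example comments.
    """
    return (
        f"Subreddit: r/{subreddit}\n\n"
        f"Rule: {rule}\n\n"
        "VIOLATING Examples (these break the rule):\n"
        f"Example 1: {violating[0]}\nExample 2: {violating[1]}\n\n"
        "NON-VIOLATING Examples (these follow the rule):\n"
        f"Example 1: {non_violating[0]}\nExample 2: {non_violating[1]}\n\n"
        f"Comment to evaluate:\n{body}\n\n"
        "Does this comment violate the rule? Answer only Yes or No.\nAnswer:"
    )

prompt = build_prompt(
    "example_sub",
    "No personal attacks",
    ["You're an idiot.", "Nobody wants you here."],
    ["I disagree with your point.", "Here's a source."],
    "That take is wrong.",
)
print(prompt.splitlines()[0])  # Subreddit: r/example_sub
```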

## Intended uses and limitations

**Intended uses**

- Assist human moderators and researchers by triaging comments with a focused rule-based prompt.
- Rapidly surface potential rule violations for human review.

**Out of scope / not recommended**

- Automated removal, banning, or other punitive actions without human oversight.
- Use on content domains very different from Reddit comments without re-evaluation.

## Fine-tuning procedure

- Frameworks: `unsloth` (`FastLanguageModel`), `transformers`, `peft` (LoRA), `trl` (`SFTTrainer`), `datasets`.
- Base model: `unsloth/gemma-3-12b-it-unsloth-bnb-4bit`, loaded in 4-bit with bfloat16 where supported.
- LoRA / PEFT config (as used in the training script):
  - rank (r): 16
  - alpha: 32
  - target modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`
  - lora_dropout: 0
  - bias: `"none"`
- Training hyperparameters (from the training script):
  - max_seq_length: 2048
  - per_device_train_batch_size: 1
  - gradient_accumulation_steps: 4 (effective batch size 4)
  - num_train_epochs: 2
  - learning_rate: 2e-4
  - optimizer: `paged_adamw_8bit`
  - weight_decay: 0.1
  - lr_scheduler_type: `cosine`
  - seed(s): 3407 (and 52, referenced in the script context)
- Training approach: `SFTTrainer` with a chat-style prompt template and `train_on_responses_only`, so the loss is computed only on the target answer token.
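The effect of `train_on_responses_only` can be pictured as label masking: every prompt token receives the ignore index, so only the answer token contributes to the loss. A minimal sketch with toy token IDs (the helper name and IDs are hypothetical; the real implementation operates on the tokenized chat template):

```python
# Sketch of response-only training as label masking (toy example).
IGNORE_INDEX = -100  # positions with this label are excluded from the loss

def mask_prompt_labels(input_ids, response_start):
    """Copy input_ids as labels, ignoring everything before the response."""
    return [IGNORE_INDEX] * response_start + input_ids[response_start:]

# Toy sequence: five prompt tokens followed by the single answer token.
input_ids = [11, 12, 13, 14, 15, 99]
labels = mask_prompt_labels(input_ids, response_start=5)
print(labels)  # [-100, -100, -100, -100, -100, 99]
```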
## How to use (example)

Below is a minimal example that loads the base model and applies the LoRA adapters for inference using `transformers` and `peft`. Adjust the device and quantization options for your environment.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 1) Load the tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-12b-it-unsloth-bnb-4bit", use_fast=False)

# 2) Load the base model (example with 4-bit quantization)
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
    device_map="auto",
    quantization_config=bnb_config,
)

# 3) Load the LoRA adapters (this repo's adapters)
model = PeftModel.from_pretrained(base_model, "jatinmehra/Gemma-3-12B-JigSaw-Agile-Community-Rules-Classification-reddit-mod")

# 4) Prepare a prompt (follow the same template as training)
SYS_PROMPT = "You are an expert content moderator. Carefully analyze whether comments violate specific subreddit rules by comparing them to the provided examples. Focus on the spirit and intent of the rule, not just exact keyword matches."
user_prompt = """Subreddit: r/{subreddit}

Rule: {rule}

VIOLATING Examples (these break the rule):
Example 1: {positive_example_1}
Example 2: {positive_example_2}

NON-VIOLATING Examples (these follow the rule):
Example 1: {negative_example_1}
Example 2: {negative_example_2}

Comment to evaluate:
{body}

Does this comment violate the rule? Answer only Yes or No.
Answer:""".format(
    subreddit="example_sub",
    rule="No personal attacks",
    positive_example_1="You're an idiot for saying that.",
    positive_example_2="Go kill yourself.",
    negative_example_1="I disagree with your point.",
    negative_example_2="This is inaccurate; here's a source.",
    body="That person is so dumb for supporting that view.",
)

messages = [
    {"role": "system", "content": SYS_PROMPT},
    {"role": "user", "content": user_prompt},
]

# 5) Apply the chat template, generate the one-token answer, and decode
#    only the newly generated token(s)
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=1)
print(tokenizer.decode(outputs[0, inputs.shape[-1]:], skip_special_tokens=True))
```
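Because the model is trained to emit a bare "Yes" or "No", it is worth normalizing the decoded text before acting on it. A minimal sketch (the helper name is my own):

```python
def parse_verdict(decoded: str):
    """Map the model's decoded answer to True (violation), False, or None."""
    answer = decoded.strip().lower()
    if answer.startswith("yes"):
        return True
    if answer.startswith("no"):
        return False
    return None  # unexpected output; fall back to human review

print(parse_verdict(" Yes"))   # True
print(parse_verdict("No."))    # False
print(parse_verdict("maybe"))  # None
```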