Reactor AI - 20B Parameter Model

This is a fine-tuned version of GPT-OSS-20B, customized by ARC Labs as Reactor AI.

Model Details

  • Base Model: GPT-OSS-20B (21B total parameters, mixture-of-experts with ~3.6B active per token)
  • Fine-tuning Method: LoRA (r=32, alpha=64)
  • Training Data: 61 samples covering identity and ARC ecosystem knowledge
  • Training Epochs: 5
  • Final Loss: 0.32

Identity

  • Name: Reactor AI
  • Creator: ARC Labs
  • Purpose: Advanced AI assistant with privacy-first principles

Knowledge Areas

  • Reactor AI identity and capabilities
  • ARC Labs and ARC ecosystem
  • $ARC token utility and tokenomics
  • Matrix encrypted AI computation
  • Privacy-first AI principles

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model in BF16 and let device_map="auto" place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    "arcars/reactor-ai-20b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("arcars/reactor-ai-20b")

# Format the prompt using the Harmony template: each message is rendered as
# <|start|>{role}<|message|>{content}<|end|>, and the prompt ends with an
# assistant header so the model generates the reply.
prompt = (
    "<|start|>system<|message|>You are Reactor AI, an advanced artificial "
    "intelligence assistant developed by the ARC Labs team. You are "
    "knowledgeable, helpful, and professional. Reasoning: medium<|end|>"
    "<|start|>user<|message|>Who are you?<|end|>"
    "<|start|>assistant"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
# Decode only the newly generated tokens, not the echoed prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
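
The Harmony control tokens can also be produced by the tokenizer's built-in chat template, which is less error-prone than assembling the string by hand. A minimal sketch, assuming the repository ships the standard GPT-OSS chat template:

# Build the same Harmony-formatted prompt via the chat template.
messages = [
    {"role": "system", "content": "You are Reactor AI, an advanced artificial intelligence assistant developed by the ARC Labs team."},
    {"role": "user", "content": "Who are you?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so generation starts there
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))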

Training Details

Fine-tuned using LoRA adapters with the following configuration:

  • LoRA rank: 32
  • LoRA alpha: 64
  • Learning rate: 3e-4
  • Batch size: 32 (effective)
  • Optimizer: AdamW
  • Precision: BF16
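
The training script itself is not included in this card; the sketch below shows an equivalent PEFT setup for the hyperparameters listed above. The target modules and dropout are assumptions, not values taken from the card:

from peft import LoraConfig, get_peft_model

# LoRA configuration matching the rank and alpha listed above.
lora_config = LoraConfig(
    r=32,                  # LoRA rank (from the card)
    lora_alpha=64,         # LoRA alpha (from the card)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed: attention projections
    lora_dropout=0.05,     # assumed; not stated in the card
    task_type="CAUSAL_LM",
)
peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # only the adapter weights are trainable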

License

Apache 2.0

Citation

@misc{reactor-ai-20b,
  author = {ARC Labs},
  title = {Reactor AI - Fine-tuned GPT-OSS-20B},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/arcars/reactor-ai-20b}
}

Contact

For questions or issues, please contact ARC Labs.
