# Michael Scott – LLaMA-3.2-3B (Merged)
This model is trained to respond like Michael Gary Scott, Regional Manager of the Scranton branch of Dunder Mifflin from The Office (US).
Michael Scott is not logical.
He is not subtle.
He is not self-aware.
And neither is this model.
Expect:
- Absolute confidence paired with terrible judgment
- Awkward humor, cringe remarks, and inappropriate jokes
- Emotional overcommitment and misplaced sincerity
- Bad analogies delivered with total conviction
- A deep belief that he is inspirational, hilarious, and universally loved
The model never breaks character, never explains itself, and never speaks like a helpful or professional assistant.
If a situation is serious, the response order is always:
- Humor
- Confidence
- Logic (last, if at all)
The goal is not correctness.
The goal is the most Michael Scott answer possible.
## Example

**User:** We need to address declining sales this quarter.

**Michael Scott:** Okay first of all, sales are not declining. They are just… taking a confidence nap. And second, when numbers are scared, you don’t yell at them — you motivate them. That’s business. That’s leadership. Boom.
## What this model is designed for
- Character chat and role-play
- Creative writing and dialogue generation
- Entertainment and fan projects
- Studying persona-driven fine-tuning
This model is intentionally overconfident, awkward, and inappropriate.
Those behaviors are features, not bugs.
## What this model is NOT
- A general-purpose assistant
- A factual or reasoning-focused model
- Suitable for professional, educational, or safety-critical use
## Usage
This repository contains a fully merged Hugging Face checkpoint (not a LoRA adapter). Load it with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "ayushtiwari134/llama-3.2-3b-michael-scott-merged",
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # place weights on GPU if one is available
)
tokenizer = AutoTokenizer.from_pretrained(
    "ayushtiwari134/llama-3.2-3b-michael-scott-merged"
)
```
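Since the base model is a Llama 3.2 instruct checkpoint, prompts should be built with `tokenizer.apply_chat_template` rather than raw strings. As a rough illustration of what that template renders (a sketch based on the standard Llama 3 chat format; the tokenizer itself is the source of truth, and the system prompt below is only an example):

```python
# Sketch of the Llama 3.x instruct chat layout that
# tokenizer.apply_chat_template produces. In real use, call the
# tokenizer; this manual version only illustrates the structure.

def render_llama3_prompt(messages):
    """Render a list of {role, content} dicts in the Llama 3 chat layout."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # A trailing assistant header cues the model to respond in character.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system",
     "content": "You are Michael Scott, Regional Manager of Dunder Mifflin Scranton."},
    {"role": "user",
     "content": "We need to address declining sales this quarter."},
]
print(render_llama3_prompt(messages))
```

In practice, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces the canonical version of this string, and the tokenized form goes straight into `model.generate`.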
## Ollama Usage
This repository also provides a GGUF build for local inference with Ollama.
Run with Ollama:

```shell
# Cloning pulls the GGUF weights (requires git-lfs)
git clone https://huggingface.co/ayushtiwari134/llama-3.2-3b-michael-scott-merged
cd llama-3.2-3b-michael-scott-merged

# Build and start the local model from the repo's Modelfile
ollama create michael-scott -f Modelfile
ollama run michael-scott
```
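For orientation, a minimal Ollama Modelfile for a build like this generally looks as follows. This is a hypothetical sketch, not the repository's actual Modelfile: the GGUF filename, system prompt, and parameter value are placeholders.

```
# Hypothetical Modelfile sketch; see the repo's real Modelfile.
FROM ./model.gguf
PARAMETER temperature 0.9
SYSTEM "You are Michael Scott, Regional Manager of Dunder Mifflin Scranton. Never break character."
```

`ollama create` reads these directives to wrap the GGUF weights with a fixed system prompt and sampling defaults, so `ollama run` starts in character without extra setup.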
## Base model

meta-llama/Llama-3.2-3B-Instruct