# Fine-tuned GPT-OSS Model

This GPT-OSS model was fine-tuned with Unsloth for accelerated training (up to 2x faster), together with Hugging Face's TRL library for reinforcement learning.

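The exact training script is not included in this repository. The sketch below is a hypothetical illustration of a comparable setup, assuming Unsloth's `FastLanguageModel` for loading the base model with LoRA adapters and TRL's `GRPOTrainer` for the reinforcement-learning step; the dataset (`trl-lib/tldr`), the toy reward function, and all hyperparameters are placeholders rather than the recipe actually used for this model.

```python
# Illustrative sketch only: dataset, reward function, and hyperparameters are
# placeholders, not the actual settings used to train Sai1290/gpt-oss-finetune.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
from unsloth import FastLanguageModel

# Load the base model with Unsloth (faster training, optional 4-bit loading)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="openai/gpt-oss-20b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder prompt dataset with a "prompt" column
train_dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer shorter completions (stand-in for a real reward signal)
def reward_len(completions, **kwargs):
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model=model,
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="gpt-oss-grpo"),
    train_dataset=train_dataset,
)
trainer.train()
```

After training, the adapters or merged weights can be saved with `model.save_pretrained(...)` or uploaded with `model.push_to_hub(...)`.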
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Sai1290/gpt-oss-finetune"

# Load the tokenizer and model; device_map="auto" places weights on available devices
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Example generation
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
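Because the base model is a chat model, prompts are usually formatted with the tokenizer's chat template rather than passed as raw text. Continuing from the snippet above, a minimal sketch (assuming this fine-tune keeps the base model's chat template):

```python
# Build a chat-formatted prompt with the tokenizer's chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```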
## Safetensors

- Model size: 22B params
- Tensor types: BF16 · U8

## Model tree for Sai1290/gpt-oss-finetune

- Base model: openai/gpt-oss-20b (quantized)