---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
license: apache-2.0
language:
- en
---

# Finetuned GPT_OSS Model

- **Developed by:** Sai1290
- **License:** Apache-2.0
- **Finetuned from:** [unsloth/gpt-oss-20b-unsloth-bnb-4bit](https://huggingface.co/unsloth/gpt-oss-20b-unsloth-bnb-4bit)

This GPT_OSS model was finetuned up to 2x faster using [Unsloth](https://github.com/unslothai/unsloth) together with Hugging Face's [TRL](https://github.com/huggingface/trl) (Transformer Reinforcement Learning) library.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Sai1290/gpt-oss-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example generation
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```