Uploaded finetuned model


  • Developed by: Entity-27th
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-4-26B-A4B-it
  • Hardware: AMD Instinct MI300X x 1

This Gemma 4 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GemPT-26B-A4B is a fine-tuned, distilled variant of Gemma-4-26B-A4B-IT, customized to enhance the original model's reasoning by injecting step-by-step chain-of-thought (CoT) traces extracted from GPT-5.4-High. It was trained with LoRA on a single MI300X accelerator.
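As a rough illustration of the LoRA technique mentioned above (plain NumPy with hypothetical dimensions, not Unsloth's actual kernels): a LoRA adapter keeps the pretrained weight W frozen and adds a trainable low-rank update B·A, scaled by alpha/r.

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_in, d_out, r, alpha = 64, 64, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01      # trainable down-projection
B = np.zeros((d_out, r))                       # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = x W^T + (alpha / r) * x A^T B^T"""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.standard_normal((2, d_in))
# Because B starts at zero, the adapter is initially a no-op and the model
# behaves exactly like the frozen base; only A and B are updated in training.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only A and B (r·(d_in+d_out) parameters per layer) need gradients, which is what makes a 26B-parameter fine-tune feasible on a single accelerator.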

  • Format: Safetensors
  • Model size: 27B params
  • Tensor type: BF16
