Finetuning Overview:
Model Used: gpt2
Dataset: cognitivecomputations/dolphin-coder
Dataset Insights:
Dolphin-Coder is a high-quality collection of 100,000+ coding questions and responses. It is well suited for supervised fine-tuning (SFT) and for teaching language models to improve on coding tasks.
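The dataset is available on the Hugging Face Hub. Below is a minimal sketch of inspecting it with the `datasets` library; the `train` split name is an assumption here:

```python
# Sketch: load and inspect the Dolphin-Coder dataset.
# Assumes the standard Hugging Face `datasets` API; split name is an assumption.
from datasets import load_dataset

dataset = load_dataset("cognitivecomputations/dolphin-coder", split="train")
print(dataset[0])  # one question/response record
```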
Finetuning Details:
Using MonsterAPI's no-code LLM finetuner, this finetuning:
- Was highly cost-effective.
- Completed in 58 minutes 48 seconds for 1 epoch on an A6000 48GB GPU.
- Cost $1.96 for the entire epoch.
Hyperparameters & Additional Details:
- Epochs: 1
- Total Finetuning Cost: $1.96
- Model Path: gpt2
- Learning Rate: 0.0002
- Data Split: 100% train
- Gradient Accumulation Steps: 128
- LoRA r: 32
- LoRA alpha: 64
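For reference, the sketch below expresses these hyperparameters with the open-source `peft` and `transformers` libraries. It is an illustrative reconstruction, not MonsterAPI's actual training code; the LoRA target modules and per-device batch size are assumptions.

```python
# Illustrative reconstruction of the finetuning configuration above.
# NOT MonsterAPI's internal code; target_modules and batch size are assumptions.
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(
    r=32,                        # LoRA r
    lora_alpha=64,               # LoRA alpha
    target_modules=["c_attn"],   # assumption: GPT-2's fused QKV projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="gpt2-dolphin-coder",   # hypothetical output path
    num_train_epochs=1,                # Epochs: 1
    learning_rate=2e-4,                # Learning Rate: 0.0002
    gradient_accumulation_steps=128,   # Gradient Accumulation Steps: 128
    per_device_train_batch_size=1,     # assumption
)
```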
 
License: apache-2.0
Model tree for monsterapi/gpt2_137m_DolphinCoder:
- Base model: openai-community/gpt2
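A minimal inference sketch with `transformers`, assuming the repository hosts merged model weights (if it hosts only LoRA adapter weights, `peft`'s `AutoPeftModelForCausalLM` would be needed instead); the prompt is an arbitrary example:

```python
# Minimal inference sketch; assumes merged weights, prompt is arbitrary.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "monsterapi/gpt2_137m_DolphinCoder"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Write a Python function to reverse a string.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```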