---
base_model:
  - NickyNicky/Llama-1B-base-GRPO-miniThinky_v1
  - enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled
  - unsloth/Llama-3.2-1B-Instruct
  - bunnycore/LLama-3.2-1B-General-lora_model
  - keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill
  - marcuscedricridia/Mixmix-LlaMAX3.2-1B
  - Leon1309/Llama-3.2-1B-SFT-LoRA-All
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Model Stock merge method, with unsloth/Llama-3.2-1B-Instruct as the base.
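Model Stock chooses the interpolation ratio between the averaged fine-tuned weights and the base model from the geometry of the task vectors (the deltas from the base). A minimal NumPy sketch of the per-layer rule, assuming the closed-form ratio from the Model Stock paper; this is an illustration, not mergekit's actual implementation, and `model_stock_layer` is a hypothetical helper name:

```python
import numpy as np


def model_stock_layer(base: np.ndarray, tuned: list[np.ndarray]) -> np.ndarray:
    """Merge one layer's weights with the Model Stock interpolation ratio.

    The ratio t is derived from the average pairwise cosine similarity
    between the fine-tuned models' task vectors (delta from the base).
    """
    deltas = [w - base for w in tuned]
    n = len(deltas)

    # Average cosine similarity over all pairs of task vectors.
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_sum += float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            pairs += 1
    cos = cos_sum / pairs

    # Closed-form ratio: t = N*cos / (1 + (N-1)*cos).
    t = (n * cos) / (1 + (n - 1) * cos)

    # Interpolate between the plain average of the fine-tunes and the base.
    w_avg = np.mean(tuned, axis=0)
    return t * w_avg + (1 - t) * base
```

Intuitively, when the fine-tuned deltas agree (high cosine similarity) the merge trusts their average; when they point in unrelated directions it stays close to the base weights.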
### Models Merged
The following models were included in the merge:
- NickyNicky/Llama-1B-base-GRPO-miniThinky_v1
- enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled
- unsloth/Llama-3.2-1B-Instruct + bunnycore/LLama-3.2-1B-General-lora_model
- keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill
- marcuscedricridia/Mixmix-LlaMAX3.2-1B
- unsloth/Llama-3.2-1B-Instruct + Leon1309/Llama-3.2-1B-SFT-LoRA-All
### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: model_stock
base_model: unsloth/Llama-3.2-1B-Instruct
models:
  - model: NickyNicky/Llama-1B-base-GRPO-miniThinky_v1
  - model: enesarda22/Med-Llama-3.2-1B-DeepSeek67B-Distilled
  - model: keeeeenw/Llama-3.2-1B-Instruct-Open-R1-Distill
  - model: unsloth/Llama-3.2-1B-Instruct+Leon1309/Llama-3.2-1B-SFT-LoRA-All
  - model: unsloth/Llama-3.2-1B-Instruct+bunnycore/LLama-3.2-1B-General-lora_model
  - model: marcuscedricridia/Mixmix-LlaMAX3.2-1B
dtype: bfloat16
tokenizer_source: base
int8_mask: true
normalize: true
name: Mixmix-LlaMAX3.2-1B-Merge
```