Version: [WhiteSnake](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-WhiteSnake) - [Orochi](https://huggingface.co/DoppelReflEx/MN-12B-Mimicore-Orochi) - [GreenSnake](#)
# What is it?
This is the previous version of WhiteSnake, and its OpenLLM Leaderboard scores are not much different. It is not quite as good at achieving 'human-like' responses, but it is still good enough.
This merged model is a gift for Lunar New Year, haha. Enjoy it.
Good for RP, ERP, and storytelling.
# Chat Format? ChatML, of course!
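For reference, a standard ChatML prompt looks like the sketch below; the placeholders in braces are illustrative and should be replaced with your actual system prompt and messages:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```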
### Models Merged

The following models were included in the merge:

* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b)
* [inflatebot/MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: inflatebot/MN-12B-Mag-Mell-R1
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
merge_method: slerp
base_model: inflatebot/MN-12B-Mag-Mell-R1
parameters:
  t: [0.1, 0.2, 0.4, 0.6, 0.6, 0.4, 0.2, 0.1]
dtype: bfloat16
tokenizer_source: base
```
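If you want to reproduce the merge yourself, a minimal sketch assuming [mergekit](https://github.com/arcee-ai/mergekit) is installed follows; the config filename and output directory are arbitrary placeholders:

```sh
pip install mergekit
# save the YAML configuration above as greensnake.yml, then run:
mergekit-yaml greensnake.yml ./merged-model
```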