Yi-1.5-34B-32K finetuned on adamo1139/uninstruct-v1-experimental-chatml. This is an attempt to remove the synthetic SFT contamination present in the original Yi-1.5-34B-32K.
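
Below is a minimal sketch of loading this checkpoint for plain text completion with the Hugging Face transformers API. The repo id is a placeholder (replace it with the actual repository name), and the generation settings are only illustrative, not recommendations tied to this model.

```python
# Minimal text-completion sketch, assuming standard transformers usage.
# "your-namespace/your-repo" is a placeholder repo id, not the real one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/your-repo"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in F16
    device_map="auto",
)

# Treat it as a base model: plain continuation rather than chat-style prompting.
prompt = "The quick brown fox"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```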

Next up, this model will be tuned with ORPO on rawrr_v2-2_stage1. After that, HESOYAM and AEZAKMI finetunes based on those fixed base models will follow.

Model size: 34B params
Tensor type: F16
Format: Safetensors