This GGUF file is a direct conversion of xiabs/DreamOmni2-7.6B.

This is a quantized model; all licensing terms and usage restrictions of the original model continue to apply.

Usage

I created some custom nodes to run this model in ComfyUI; you can download them here: DreamOmni2-GGUF. Place the GGUF model files in ComfyUI/models/unet and the LoRA files in ComfyUI/models/loras, then refer to the GitHub README for detailed installation instructions.

Model Details

Format: GGUF
Model size: 8B params
Architecture: qwen2vl

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
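As a rough rule of thumb, a quantization's file size scales with its bits per weight: parameters × bits ÷ 8 bytes. A sketch of that arithmetic (the helper name is mine; real GGUF files come out somewhat larger, since some tensors are kept at higher precision and metadata adds overhead):

```python
def approx_gguf_size_gb(n_params_billions: float, bits_per_weight: int) -> float:
    """Back-of-envelope GGUF file size in GB: params * bits / 8."""
    return n_params_billions * bits_per_weight / 8

# For this 8B-parameter model:
print(approx_gguf_size_gb(8, 4))   # 4-bit -> about 4 GB
print(approx_gguf_size_gb(8, 16))  # 16-bit -> about 16 GB
```

Pick the largest quantization that fits in your VRAM alongside ComfyUI's other loaded models.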


Model tree for rafacost/DreamOmni2-7.6B-GGUF

Base model: xiabs/DreamOmni2
This model is one of 3 quantized versions of the base model.