Phantom Wan GGUFs
This is a direct GGUF conversion of bytedance-research/Phantom.
The model files can be used in ComfyUI with the ComfyUI-GGUF custom node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | Phantom_Wan_14B | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | Safetensors / GGUF |
| VAE | wan_2.1_vae | `ComfyUI/models/vae` | Safetensors |
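Below is a minimal sketch of fetching the files with the `huggingface_hub` Python library and placing them in the folders from the table. The repository IDs, filenames, and the ComfyUI install path are placeholders, not the actual repo layout; pick the quantization file you want from this repo and substitute the real values.

```python
from pathlib import Path
from huggingface_hub import hf_hub_download

# Root of your ComfyUI install (placeholder path).
COMFYUI = Path("~/ComfyUI").expanduser()

# (repo_id, filename, target folder): repo IDs and filenames below are
# illustrative placeholders; replace them with the actual quant you want.
files = [
    ("<this-repo>", "Phantom_Wan_14B-Q4_K_M.gguf", COMFYUI / "models/unet"),
    ("<umt5-repo>", "umt5-xxl-encoder-Q8_0.gguf", COMFYUI / "models/text_encoders"),
    ("<vae-repo>", "wan_2.1_vae.safetensors", COMFYUI / "models/vae"),
]

for repo_id, filename, target in files:
    target.mkdir(parents=True, exist_ok=True)
    # Downloads (or reuses the cache) and places a copy under local_dir.
    hf_hub_download(repo_id=repo_id, filename=filename, local_dir=str(target))
```

After the files are in place, restart ComfyUI (or refresh the node list) so the ComfyUI-GGUF loaders pick up the new models.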
⚠️ **Important:**
This model only supports the 14B version of the CausVid LoRA; the 1.3B version is not compatible.
As this is a quantized model, not a finetune, all the same restrictions and original license terms still apply.
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit.
Base model: bytedance-research/Phantom