QuantStack/Qwen-Image-Layered-GGUF
Image-Text-to-Image · GGUF · English, Chinese · License: apache-2.0
README.md exists but its content is empty.
Downloads last month: 6,977
Model size: 20B params (GGUF)
Architecture: qwen_image
Quantized variants

| Bits  | Quant  | Size    |
|-------|--------|---------|
| 2-bit | Q2_K   | 7.15 GB |
| 3-bit | Q3_K_S | 9.04 GB |
| 3-bit | Q3_K_M | 9.76 GB |
| 4-bit | Q4_K_S | 12.2 GB |
| 4-bit | Q4_0   | 11.9 GB |
| 4-bit | Q4_1   | 12.9 GB |
| 4-bit | Q4_K_M | 13.1 GB |
| 5-bit | Q5_K_S | 14.1 GB |
| 5-bit | Q5_0   | 14.4 GB |
| 5-bit | Q5_1   | 15.4 GB |
| 5-bit | Q5_K_M | 14.9 GB |
| 6-bit | Q6_K   | 16.8 GB |
| 8-bit | Q8_0   | 21.8 GB |
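Since the model card is empty, here is a minimal sketch (not taken from the repository) of fetching one of these quants with the standard `huggingface_hub` client. The exact `.gguf` filenames are not shown on this page, so the snippet enumerates them first rather than assuming a name; the quant you pick is a size/quality trade-off against your memory budget.

```python
# A minimal sketch, assuming only the public Hugging Face Hub API
# (huggingface_hub's list_repo_files / hf_hub_download); the exact .gguf
# filenames in this repo are not listed on the page, so list them first.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "QuantStack/Qwen-Image-Layered-GGUF"

# Enumerate the quantized files actually present in the repo.
gguf_files = sorted(f for f in list_repo_files(repo_id) if f.endswith(".gguf"))
print("\n".join(gguf_files))

# Pick the quant that fits your memory budget (e.g. Q4_K_M is ~13.1 GB per the
# table above) and download it; gguf_files[0] is just a placeholder choice.
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print("Saved to:", local_path)
```

How the downloaded file is then loaded for inference is backend-specific and not documented in this repository.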
Inference Providers
This model isn't deployed by any Inference Provider.
Model tree for QuantStack/Qwen-Image-Layered-GGUF
Base model: Qwen/Qwen-Image
Finetuned: Qwen/Qwen-Image-Layered
Quantized (5 models): this model