tsqn committed
Commit 3ef2a93 · Parent(s): 6935cb9

update README

Files changed (1):
  1. README.md +44 -0
README.md CHANGED
@@ -103,6 +103,50 @@ image.save("example.png")
 
 ```
 
+#### Example 2
+
+```py
+import torch
+from diffusers import ZImagePipeline, ZImageTransformer2DModel, AutoencoderKL, FlowMatchEulerDiscreteScheduler
+from transformers import Qwen3Model, Qwen2Tokenizer
+
+
+MODEL_PATH = "tsqn/Z-Image-Turbo_fp32-fp16-bf16_full_and_ema-only"
+
+vae = AutoencoderKL.from_pretrained(MODEL_PATH, subfolder="vae", torch_dtype=torch.bfloat16)
+text_encoder = Qwen3Model.from_pretrained(MODEL_PATH, subfolder="text_encoder", torch_dtype=torch.bfloat16)
+tokenizer = Qwen2Tokenizer.from_pretrained(MODEL_PATH, subfolder="tokenizer")
+transformer = ZImageTransformer2DModel.from_pretrained(MODEL_PATH, subfolder="transformer", torch_dtype=torch.float32)
+
+pipe = ZImagePipeline.from_pretrained(
+    MODEL_PATH,
+    vae=vae,
+    text_encoder=text_encoder,
+    tokenizer=tokenizer,
+    transformer=transformer,
+    torch_dtype=torch.float32,
+    low_cpu_mem_usage=False,
+)
+pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config)
+pipe.enable_model_cpu_offload()
+
+prompt = "Young Chinese woman in red Hanfu, intricate embroidery. Impeccable makeup, red floral forehead pattern. Elaborate high bun, golden phoenix headdress, red flowers, beads. Holds round folding fan with lady, trees, bird. Neon lightning-bolt lamp (⚡️), bright yellow glow, above extended left palm. Soft-lit outdoor night background, silhouetted tiered pagoda (西安大雁塔), blurred colorful distant lights."
+
+with torch.inference_mode():
+    image = pipe(
+        prompt=prompt,
+        height=1024,
+        width=1024,
+        num_inference_steps=9,
+        guidance_scale=0.0,
+        generator=torch.Generator("cuda").manual_seed(42),
+    ).images[0]
+
+image.save("example.png")
+torch.cuda.empty_cache()
+```
+
+
 ## 🎯 Recommendations
 
 - **RTX 3060 and similar**: Use **BF16** or **FP16** for optimal performance
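
The recommendation above maps directly onto the Example 2 code: on BF16-friendly cards there is no need for per-component dtypes, and the whole pipeline can be loaded in one precision. Below is a minimal sketch of that variant, using only the calls already shown in the diff (`ZImagePipeline.from_pretrained`, the scheduler swap, and CPU offload); the prompt string is a placeholder of mine, not part of the commit, and it assumes `torch_dtype=torch.bfloat16` casting all components on load is sufficient for this checkpoint.

```py
import torch
from diffusers import ZImagePipeline, FlowMatchEulerDiscreteScheduler

MODEL_PATH = "tsqn/Z-Image-Turbo_fp32-fp16-bf16_full_and_ema-only"

# Load every component in BF16 in one call; the per-component overrides
# from Example 2 are only needed for mixed-precision setups.
pipe = ZImagePipeline.from_pretrained(MODEL_PATH, torch_dtype=torch.bfloat16)
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()  # move idle components to CPU to reduce VRAM use

with torch.inference_mode():
    image = pipe(
        prompt="A red paper lantern on a rainy street at night",  # placeholder prompt
        height=1024,
        width=1024,
        num_inference_steps=9,
        guidance_scale=0.0,
        generator=torch.Generator("cuda").manual_seed(42),
    ).images[0]

image.save("example_bf16.png")
```

Compared with Example 2, which keeps the transformer in FP32 (presumably to preserve precision) while holding the VAE and text encoder in BF16, a uniform BF16 load roughly halves the transformer's memory footprint at some cost in numerical precision.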