Any-to-Any
Diffusers
PyTorch
Sierkinhane committed (verified)
Commit c2233be · 1 Parent(s): 163dcef

Update README.md

Files changed (1)
  1. README.md +8 -5
README.md CHANGED
@@ -10,7 +10,7 @@
 
  <sup>1</sup> [Show Lab](https://sites.google.com/view/showlab/home?authuser=0), National University of Singapore&nbsp; <sup>2</sup> Bytedance&nbsp;
 
- [![ArXiv](https://img.shields.io/badge/Report-PDF-<COLOR>.svg)](https://github.com/showlab/Show-o/blob/main/show-o2/Show_o2.pdf) [![WeChat badge](https://img.shields.io/badge/微信-加入-green?logo=wechat&amp)](https://github.com/showlab/Show-o/blob/main/docs/wechat_qa_3.jpg)
+ [![ArXiv](https://img.shields.io/badge/Report-PDF-<COLOR>.svg)](https://github.com/showlab/Show-o/blob/main/show-o2/Show_o2.pdf) [![WeChat badge](https://img.shields.io/badge/微信-加入-green?logo=wechat&amp)](https://github.com/showlab/Show-o/blob/main/docs/wechat_qa_3.jpg)
 </div>
 
 ## What is the new about Show-o2?
@@ -20,7 +20,7 @@ We perform the unified learning of multimodal understanding and generation on th
 ## Pre-trained Model Weigths
 The Show-o2 checkpoints can be found on Hugging Face:
 * [showlab/show-o2-1.5B](https://huggingface.co/showlab/show-o2-1.5B)
- * [showlab/show-o2-1.5B-HQ](https://huggingface.co/showlab/show-o2-1.5B)
+ * [showlab/show-o2-1.5B-HQ](https://huggingface.co/showlab/show-o2-1.5B-HQ)
 * [showlab/show-o2-7B](https://huggingface.co/showlab/show-o2-7B)
 
 ## Getting Started
@@ -46,9 +46,14 @@ python3 inference_mmu.py config=configs/showo2_7b_demo_432x432.yaml \
 mmu_image_path=./docs/mmu/pexels-taryn-elliott-4144459.jpg question='How many avocados (including the halved) are in this image? Tell me how to make an avocado milkshake in detail.'
 ```
 
-
 Demo for **Text-to-Image Generation** and you can find the results on wandb.
 ```
+ python3 inference_t2i.py config=configs/showo2_1.5b_demo_1024x1024.yaml \
+ batch_size=4 guidance_scale=7.5 num_inference_steps=50;
+
+ python3 inference_t2i.py config=configs/showo2_1.5b_demo_512x512.yaml \
+ batch_size=4 guidance_scale=7.5 num_inference_steps=50;
+
 python3 inference_t2i.py config=configs/showo2_1.5b_demo_432x432.yaml \
 batch_size=4 guidance_scale=7.5 num_inference_steps=50;
 
@@ -56,8 +61,6 @@ python3 inference_t2i.py config=configs/showo2_7b_demo_432x432.yaml \
 batch_size=4 guidance_scale=7.5 num_inference_steps=50;
 ```
 
-
-
 ### Citation
 To cite the paper and model, please use the below:
 ```
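The substantive fix in this diff is the `showlab/show-o2-1.5B-HQ` entry, whose link previously pointed at the base `show-o2-1.5B` repo. Below is a minimal sketch, assuming `huggingface_hub` is installed (`pip install huggingface_hub`), for resolving and downloading the corrected checkpoint; the `local_dir` value is illustrative and not part of the README.

```
# A minimal sketch (not part of the Show-o2 repo): resolve and download
# the checkpoint whose link this commit fixes.
from huggingface_hub import model_info, snapshot_download

repo_id = "showlab/show-o2-1.5B-HQ"  # the corrected link target

# model_info() raises if the repo ID does not exist. (The old link was
# subtler: it named the HQ model but pointed at showlab/show-o2-1.5B,
# a valid repo, so check the ID you pass, not just that it resolves.)
info = model_info(repo_id)
print(f"{repo_id} last modified: {info.last_modified}")

# Download all files in the repo for local use; local_dir is illustrative.
path = snapshot_download(repo_id=repo_id, local_dir="./show-o2-1.5B-HQ")
print(f"downloaded to {path}")
```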
 
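The remaining additions are 1024x1024 and 512x512 text-to-image demo configs for the 1.5B model, alongside the existing 432x432 one. A short driver can sweep all three; this is a sketch that assumes the Show-o2 repo root as the working directory and takes the config names verbatim from the diff.

```
# A sketch (not from the repo) that runs the 1.5B text-to-image demo at
# every resolution listed in the updated README. The trailing `;` in the
# README commands is shell syntax and is dropped from the argument lists.
import subprocess

for res in ("1024x1024", "512x512", "432x432"):
    subprocess.run(
        [
            "python3", "inference_t2i.py",
            f"config=configs/showo2_1.5b_demo_{res}.yaml",
            "batch_size=4",
            "guidance_scale=7.5",
            "num_inference_steps=50",
        ],
        check=True,  # abort the sweep if any resolution fails
    )
```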