BryanW committed
Commit 5a9c8d4 · verified · 1 Parent(s): 00955cb

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ pinned: false
  **MeissonFlow Research** is a non-commercial research group dedicated to advancing generative modeling techniques for structured visual and multimodal content creation.
  We aim to design models and algorithms that help creators produce high-quality content with greater efficiency and control.

- Our journey began with [**MaskGIT**](https://arxiv.org/abs/2202.04200), a pioneering work by [**Huiwen Chang**](https://scholar.google.com/citations?hl=en&user=eZQNcvcAAAAJ), which introduced a bidirectional transformer decoder for image synthesis—outperforming traditional raster-scan autoregressive (AR) generation.
+ Our journey began with [**MaskGIT**](https://arxiv.org/abs/2202.04200), a pioneering work by [**Huiwen Chang**](https://scholar.google.com/citations?hl=en&user=eZQNcvcAAAAJ), which introduced a bidirectional transformer decoder for image synthesis and outperformed traditional raster-scan autoregressive (AR) generation.
  This paradigm was later extended to text-to-image synthesis in [**MUSE**](https://arxiv.org/abs/2301.00704).

  Building upon these foundations, we scaled masked generative modeling with the latest architectural designs and sampling strategies, culminating in [**Monetico** and **Meissonic**](https://github.com/viiika/Meissonic) built from scratch, which are on par with leading diffusion models such as SDXL while maintaining greater efficiency.