
References
[1] David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Inverting layers of a large generator. In ICLR Workshop, page 4, 2019. 3
[2] David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles, Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Seeing what a GAN cannot generate. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4502–4511, 2019. 3
[3] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. MasaCtrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 22560–22570, 2023. 2, 3, 7
[4] Antonia Creswell and Anil Anthony Bharath. Inverting the generator of a generative adversarial network. IEEE Transactions on Neural Networks and Learning Systems, 30(7):1967–1974, 2018. 3
[5] Trung Dao, Thuan Hoang Nguyen, Thanh Le, Duc Vu, Khoi Nguyen, Cuong Pham, and Anh Tran. SwiftBrush v2: Make your one-step diffusion model better than its teacher. In European Conference on Computer Vision, pages 176–192. Springer, 2025. 1, 2, 3
[6] Gilad Deutch, Rinon Gal, Daniel Garibi, Or Patashnik, and Daniel Cohen-Or. TurboEdit: Text-based image editing using few-step diffusion models. In SIGGRAPH Asia 2024 Conference Papers, New York, NY, USA, 2024. Association for Computing Machinery. 2, 3, 7, 8
[7] Keyan Ding, Kede Ma, Shiqi Wang, and Eero P. Simoncelli. Image quality assessment: Unifying structure and texture similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(5):2567–2581, 2022. 5
[8] Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar Averbuch-Elor, and Daniel Cohen-Or. ReNoise: Real image inversion through iterative noising. In Computer Vision – ECCV 2024, pages 395–413, Cham, 2025. Springer Nature Switzerland. 2, 3, 7
[9] Aaron Gokaslan, A. Feder Cooper, Jasmine Collins, Landan Seguin, Austin Jacobson, Mihir Patel, Jonathan Frankle, Cory Stephenson, and Volodymyr Kuleshov. CommonCanvas: An open diffusion model trained with Creative-Commons images. arXiv preprint arXiv:2310.16825, 2023. 6
[10] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-Prompt image editing with cross-attention control. In The Eleventh International Conference on Learning Representations, 2023. 3, 7
[11] Xuan Ju, Ailing Zeng, Yuxuan Bian, Shaoteng Liu, and Qiang Xu. PnP Inversion: Boosting diffusion-based editing with 3 lines of code. In International Conference on Learning Representations (ICLR), 2024. 1, 3, 6, 7, 8, 13
[12] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, 2015. 6
[13] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, and Jian Yang. StyleDiffusion: Prompt-embedding inversion for text-based editing. arXiv preprint arXiv:2303.15649, 2023. 1
[14] Zachary C. Lipton and Subarna Tripathi. Precise recovery of latent vectors from generative adversarial networks. arXiv preprint arXiv:1702.04782, 2017. 3
[15] Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, and Qiang Liu. InstaFlow: One step is enough for high-quality diffusion-based text-to-image generation. In International Conference on Learning Representations, 2024. 2, 3, 12
[16] Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang Zhao. Latent consistency models: Synthesizing high-resolution images with few-step inference. arXiv preprint arXiv:2310.04378, 2023. 2
[17] Fangchang Ma, Ulas Ayaz, and Sertac Karaman. Invertibility of convolutional generative networks from partial measurements. Advances in Neural Information Processing Systems, 31, 2018. 3
[18] Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14297–14306, 2023. 2
[19] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6038–6047, 2023. 1, 2, 3, 5, 7, 8
[20] Thuan Hoang Nguyen and Anh Tran. SwiftBrush: One-step text-to-image diffusion model with variational score distillation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. 1, 2, 3, 12
[21] Trong-Tung Nguyen, Duc-Anh Nguyen, Anh Tran, and Cuong Pham. FlexEdit: Flexible and controllable diffusion-based object-centric image editing. arXiv preprint arXiv:2403.18605, 2024. 2, 3
[22] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, New York, NY, USA, 2023. Association for Computing Machinery. 3, 7
[23] Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and José M. Álvarez. Invertible Conditional GANs for image editing. In NIPS Workshop on Adversarial Training, 2016. 3
[24] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In The Twelfth International Conference on Learning Representations, 2024. 1, 2
[25] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. In The Eleventh International Conference on Learning Representations, 2023. 3