arXiv:2412.04301v4 [cs.CV] 2 Jun 2025
SwiftEdit: Lightning Fast Text-Guided Image Editing via One-Step Diffusion
Trong-Tung Nguyen, Quang Nguyen, Khoi Nguyen, Anh Tran, Cuong Pham
Qualcomm AI Research+
{tunnguy,quanghon,khoinguy,anhtra,pcuong}@qti.qualcomm.com
Figure 1. SwiftEdit empowers instant, localized image editing using only text prompts, freeing users from the need to define masks. In just 0.23 seconds on a single A100 GPU, it unlocks a world of creative possibilities demonstrated across diverse editing scenarios.
Abstract
Recent advances in text-guided image editing enable users
to perform image edits through simple text inputs, leverag-
ing the extensive priors of multi-step diffusion-based text-
to-image models. However, these methods often fall short
of the speed demands required for real-world and on-device
applications due to the costly multi-step inversion and sam-
pling process involved. In response to this, we introduce
SwiftEdit, a simple yet highly efficient editing tool that
achieves instant text-guided image editing (in 0.23 seconds). The
advancement of SwiftEdit lies in its two novel contributions:
a one-step inversion framework that enables one-step im-
age reconstruction via inversion and a mask-guided editing
technique with our proposed attention rescaling mechanism
to perform localized image editing. Extensive experiments
are provided to demonstrate the effectiveness and efficiency
of SwiftEdit. In particular, SwiftEdit enables instant text-guided image editing that is at least 50× faster than previous multi-step methods while maintaining competitive editing performance. Our project page is at https://swift-edit.github.io/.
+Qualcomm Vietnam Company Limited
Also affiliated with Posts & Telecom. Inst. of Tech., Vietnam
Contact email: [email protected]
1. Introduction
Recent text-to-image diffusion models [5,24,26,27] have
achieved remarkable results in generating high-quality im-
ages semantically aligned with given text prompts. To gen-
erate realistic images, most of them rely on multi-step sam-
pling techniques, which reverse the diffusion process from random noise to a realistic image. To overcome this
time-consuming sampling process, some works focus on re-
ducing the number of sampling steps to a few (4-8 steps)
[29] or even one step [5,20,39,40] via distillation tech-
niques while not compromising results. These approaches
not only accelerate image generation but also enable faster
inference for downstream tasks, such as image editing.
For text-guided image editing, recent approaches [11,13,
19] use an inversion process to determine the initial noise
for a source image, allowing for (1) source image recon-
struction and (2) content modification aligned with guided
text while preserving other details. Starting from this in-
verted noise, additional techniques, such as attention ma-
Figure 2. Comparing our one-step SwiftEdit with few-step and
multi-step diffusion editing methods in terms of background
preservation (PSNR), editing semantics (CLIP score), and run-
time. Our method delivers lightning-fast text-guided editing while
achieving competitive results.
nipulation and hijacking [3,21,35], are applied at each de-
noising step to inject edits gradually while preserving key
background elements. This typical approach, however, is
resource-intensive, requiring two lengthy multi-step pro-
cesses: inversion and editing. To address this, recent works
[6,8,33] use few-step diffusion models, like SD-Turbo
[30], to reduce the sampling steps required for inversion
and editing, incorporating additional guidance for disen-
tangled editing via text prompts. However, these methods
still struggle to achieve sufficiently fast text-guided image
editing for on-device applications while maintaining perfor-
mance competitive with multi-step approaches.
In this work, we take a different approach by building on a one-step text-to-image model for image editing. We introduce SwiftEdit, the first one-step text-guided image editing tool, which achieves at least 50× faster execution than previous multi-step methods while maintaining competitive editing quality. Notably, both the inversion and editing in SwiftEdit are accomplished in a single step.
Inverting one-step diffusion models is challenging, as ex-
isting techniques like DDIM Inversion [31] and Null-text
Inversion [19] are unsuitable for our one-step real-time edit-
ing goal. To address this, we design a novel one-step in-
version framework inspired by encoder-based GAN Inver-
sion methods [36,37,41]. Unlike GAN inversion, which
requires domain-specific networks and retraining, our in-
version framework generalizes to any input images. For
this, we leverage SwiftBrushv2 [5], a recent one-step text-
to-image model known for speed, diversity, and quality, us-
ing it as both the one-step image generator and backbone
for our one-step inversion network. We then train it with
weights initialized from SwiftBrushv2 to handle any source
inputs through a two-stage training strategy, combining su-
pervision from both synthetic and real data.
Following the one-step inversion, we introduce an effi-
cient mask-based editing technique. Our method can either
accept an input editing mask or infer it directly from the
trained inversion network and guidance prompts. The mask
is then used in our novel attention-rescaling technique to
blend and control the edit strength while preserving back-
ground elements, enabling high-quality editing results.
To the best of our knowledge, our work is the first to
explore diffusion-based one-step inversion using a one-step
text-to-image generation model to instantly perform text-
guided image editing (in 0.23 seconds). While being significantly faster than other multi-step and few-step editing methods, our approach achieves competitive editing results, as shown in Fig. 2. In summary, our main contributions include:
• We propose a novel one-step inversion framework trained with a two-stage strategy. Once trained, our framework can invert any input image into an editable latent in a single step without further retraining or fine-tuning.
• We show that our well-trained inversion framework can produce an editing mask guided by source and target text prompts within a single batchified forward pass.
• We propose a novel attention-rescaling technique for mask-based editing, offering flexible control over editing strength while preserving key background information.
2. Related Work
2.1. Text-to-image Diffusion Models
Diffusion-based text-to-image models [24,26,27] typically
rely on computationally expensive iterative denoising to
generate realistic images from Gaussian noise. Recent ad-
vances [16,18,28,32] alleviate this by distilling the knowl-
edge from multi-step teacher models into a few-step stu-
dent network. Notable works [5,15,16,20,32,39,40]
show that this knowledge can be distilled even into a one-
step student model. Specifically, Instaflow [15] uses recti-
fied flow to train a one-step network, while DMD [40] ap-
plies distribution-matching objectives for knowledge trans-
fer. DMDv2 [39] removes costly regression losses, en-
abling efficient few-step sampling. SwiftBrush [20] uti-
lizes an image-free distillation method with text-to-3D gen-
eration objectives, and SwiftBrushv2 [5] integrates post-
training model merging and clamped CLIP loss, surpassing
its teacher model to achieve state-of-the-art one-step text-
to-image performance. These one-step models provide rich
prior information about text-image alignment and are ex-
tremely fast, making them ideal for our one-step text-based
image editing approach.
2.2. Text-based Image Editing
Several approaches leverage the strong prior of image-
text relationships in text-to-image models to perform text-
guided multi-step image editing via an inverse-to-edit ap-
proach. First, they invert the source image into “infor-
mative” noise. Methods like DDIM Inversion [31] use
linear approximations of noise prediction, while Null-
text Inversion [19] enhances reconstruction quality through
costly per-step optimization. Direct Inversion [11] bypasses
these issues by disentangling source and target generation
branches. Second, editing methods such as [3,10,21,22,
35] manipulate attention maps to embed edits while pre-
serving background content. However, their multi-step dif-
fusion process remains too slow for practical applications.
To address this issue, several works [6,8,33] enable
few-step image editing using fast generation models [29].
ICD [33] achieves accurate inversion in 3-4 steps with a
consistency distillation framework, followed by text-guided
editing. ReNoise [8] refines the sampling process with an
iterative renoising technique at each step. TurboEdit [6]
uses a shifted noise schedule to align inverted noise with
the expected schedule in fast models like SDXL Turbo [29].
Though these methods reduce inference time, they fall short
of instant text-based image editing needed for fast applica-
tions. Our one-step inversion and one-step localized editing
approach dramatically boosts time efficiency while surpass-
ing few-step methods in editing performance.
2.3. GAN Inversion
GAN inversion [2,4,14,17,23,36,41] maps a source im-
age into the latent space of a pre-trained GAN, allowing the
generator to recreate the image, which is valuable for tasks
like image editing. Effective editing requires a latent space
that can both reconstruct the image and support realistic ed-
its through variations in the latent code. Approaches fall
into three groups: encoder-based [23,41,42], optimization-
based [4,14,17], and hybrid [1,2,41]. Encoder-based
methods learn a mapping from the image to the latent code
for fast reconstruction. Optimization-based methods refine
a code by iteratively optimizing it, while hybrid methods
combine both, using an encoder’s output as initialization for
further optimization. Inspired by encoder-based speed, we
develop a one-step inversion network, but instead of GAN,
we leverage a one-step text-to-image diffusion model. This
allows us to achieve text-based image editing across diverse
domains rather than being restricted to a specific domain, as in GAN-based methods.
3. Preliminaries
Multi-step diffusion model. A text-to-image diffusion model ε_φ attempts to generate an image x̂ given the target prompt embedding c_y (extracted from the CLIP text encoder for a given text prompt y) through T iterative denoising steps, starting from Gaussian noise z_T = ε ∼ N(0, I):
$$z_{t-1} = \frac{z_t - \sigma_t\,\epsilon_\phi(z_t, t, c_y)}{\alpha_t} + \delta_t\,\epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, I), \tag{1}$$
where t is the timestep and σ_t, α_t, δ_t are three coefficients. The final latent z = z_0 is then input to a VAE decoder D to produce the image x̂ = D(z).
One-step diffusion model. The traditional diffusion model's sampling process requires multiple steps, making it time-consuming. To address this, one-step text-to-image diffusion models such as InstaFlow [15], DMD [40], DMD2 [39], SwiftBrush [20], and SwiftBrushv2 [5] have been developed, reducing sampling to a single step. Specifically, a one-step text-to-image diffusion model G aims to transform a noise input ε ∼ N(0, I), given a text prompt embedding c_y, directly into an image latent ẑ without iterative denoising steps, i.e., ẑ = G(ε, c_y). SwiftBrushv2 (SBv2) stands out in one-step image generation by quickly producing high-quality, diverse outputs, forming the basis of our approach. Building on its predecessor, SBv2 integrates key improvements: it uses SD-Turbo initialization for enhanced output quality, a clamped CLIP loss to strengthen visual-text alignment, and model fusion with post-enhancement techniques, all contributing to superior performance and visual fidelity.
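For contrast, a one-step generator collapses the loop above into a single forward pass. The snippet below is a hypothetical interface sketch; G and decoder stand in for SBv2's generator and VAE decoder and are not its actual API.

```python
import torch

def one_step_sample(G, decoder, c_y, shape):
    eps = torch.randn(shape)   # eps ~ N(0, I)
    z_hat = G(eps, c_y)        # single forward pass: z_hat = G(eps, c_y)
    return decoder(z_hat)      # x_hat = D(z_hat)
```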
Score Distillation Sampling (SDS) [25] is a popular objective that utilizes the strong prior learned by 2D diffusion models to optimize a target data point z by computing its gradient as follows:
$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}} \triangleq \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\epsilon_\phi(z_t, t, c_y) - \epsilon\big)\,\frac{\partial z}{\partial \theta} \right], \tag{2}$$
where z = g(θ) is rendered by a differentiable image generator g parameterized by θ, z_t denotes a perturbed version of z with a random amount of noise ε, and w(t) is a scaling function corresponding to the timestep t. The SDS loss provides an update direction that moves z toward a high-density region of the data manifold using the score function of the diffusion model ε_φ(z_t, t, c_y). Notably, this gradient omits the Jacobian term of the diffusion backbone, removing the expensive computation of backpropagating through the entire diffusion U-Net.
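The following sketch illustrates how the SDS direction of Eq. (2) is typically computed in practice, under the assumption of a frozen teacher denoiser eps_phi and caller-provided schedules alpha, sigma and weighting w; the teacher Jacobian is skipped by wrapping the teacher call in no_grad, as described above.

```python
import torch

def sds_grad(eps_phi, z, c_y, alpha, sigma, w, T):
    """Return the SDS update direction w.r.t. z (Eq. (2), without dz/dtheta)."""
    t = torch.randint(1, T + 1, (1,)).item()   # random timestep
    eps = torch.randn_like(z)                  # eps ~ N(0, I)
    z_t = alpha[t] * z + sigma[t] * eps        # perturbed version of z
    with torch.no_grad():                      # skip the U-Net Jacobian
        residual = eps_phi(z_t, t, c_y) - eps
    return w(t) * residual
```

The returned tensor is the gradient with respect to z, so in a training loop one would call `z.backward(gradient=sds_grad(...))` and let autograd supply ∂z/∂θ.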
Image-Prompt via Decoupled Cross-Attention. IP-Adapter [38] introduces an image-prompt condition x that can be seamlessly integrated into a pre-trained text-to-image generation model. It achieves this through a decoupled cross-attention mechanism, which separates the conditioning effects of text and image features. This is done by adding an extra cross-attention layer to each cross-attention layer in the original U-Net. Given image features c_x (extracted from x by a CLIP image encoder), text features c_y (from the text prompt y using a CLIP text encoder), and query features Z_l from the previous U-Net layer l-1, the output h_l of the decoupled cross-attention is computed as:
$$h_l = \mathrm{Attn}(Q_l, K_y, V_y) + s_x\,\mathrm{Attn}(Q_l, K_x, V_x), \tag{3}$$
where Attn(·) denotes the attention operation. The scaling factor s_x is used to control the influence of c_x on the generated output.
Figure 3. Proposed two-stage training for our one-step inversion framework. In stage 1, we warm up our inversion network on synthetic data generated by SwiftBrushv2. In stage 2, we shift our focus to real images and continue training the inversion network, enabling instant inversion of any input image without additional fine-tuning or retraining.
Here, Q_l = W_Q Z_l is the query matrix projected by the weight matrix W_Q. The key and value matrices for the text features c_y are K_y = W_K^y c_y and V_y = W_V^y c_y, respectively, while the projected key and value matrices for the image features c_x are K_x = W_K^x c_x and V_x = W_V^x c_x. Notably, only the two weight matrices W_K^x and W_V^x are trainable, while the remaining weights remain frozen to preserve the original behavior of the pretrained diffusion model.
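A minimal sketch of this decoupled cross-attention is given below. It is an illustration of Eq. (3), not IP-Adapter's released implementation: layer dimensions, the use of single-head attention, and the module layout are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledCrossAttention(nn.Module):
    """Eq. (3): separate key/value projections for text and image conditions."""
    def __init__(self, dim, s_x=1.0):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)       # W_Q (frozen in practice)
        self.to_k_text = nn.Linear(dim, dim, bias=False)  # W_K^y (frozen)
        self.to_v_text = nn.Linear(dim, dim, bias=False)  # W_V^y (frozen)
        self.to_k_img = nn.Linear(dim, dim, bias=False)   # W_K^x (trainable)
        self.to_v_img = nn.Linear(dim, dim, bias=False)   # W_V^x (trainable)
        self.s_x = s_x                                    # image-condition scale

    def forward(self, z_l, c_y, c_x):
        q = self.to_q(z_l)                                # query tokens from the U-Net
        h_text = F.scaled_dot_product_attention(q, self.to_k_text(c_y), self.to_v_text(c_y))
        h_img = F.scaled_dot_product_attention(q, self.to_k_img(c_x), self.to_v_img(c_x))
        return h_text + self.s_x * h_img                  # Eq. (3)
```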
4. Proposed Method
Our goal is to enable instant image editing with the one-
step text-to-image model, SBv2. In Sec. 4.1, we develop
a one-step inversion network that predicts inverted noise
to reconstruct a source image when passed through SBv2.
We introduce a two-stage training strategy for this inver-
sion network, enabling single-step reconstruction of any
input images without further retraining. An overview is
shown in Fig. 3. During inference, as described in Sec. 4.2,
we use a self-guided editing mask to locate the edited regions.
Our attention-rescaling technique then utilizes the mask to
achieve disentangled editing and control the editing strength
while preserving the background.
4.1. Inversion Network and Two-stage Training
Given an input image that may be synthetic (generated by a model like SBv2) or real, our first objective is to invert and reconstruct it using the SBv2 model. To achieve this, we develop a one-step inversion network F_θ that transforms the image latent z into an inverted noise ε̂ = F_θ(z, c_y), which is then fed back into SBv2 to compute the reconstructed latent ẑ = G(ε̂, c_y) = G(F_θ(z, c_y), c_y). For synthetic images, training F_θ is straightforward: with pairs (ε, z), where ε is the noise used to generate z, we can directly regress ε̂ to ε, aligning the inverted noise with SBv2's input noise distribution. However, for real images, the domain gap poses a challenge: the original noise ε is unavailable, preventing us from computing the regression objective and potentially causing ε̂ to deviate from the desired distribution. In the following, we discuss our inversion network and a two-stage training strategy designed to overcome these challenges.
Our Inversion Network F_θ follows the architecture of the one-step diffusion model G and is initialized with G's weights. However, we found this approach suboptimal: the inverted noise ε̂ predicted by F_θ attempts to perfectly reconstruct the input image, leading to overfitting on specific patterns from the input. This tailoring makes the noise overly dependent on input features, which limits editing flexibility.
To overcome this, we introduce an auxiliary, image-conditioned branch similar to IP-Adapter [38] within the one-step generator G, named G_IP. This branch integrates image features encoded from the input image x along with the text prompt y, aiding reconstruction and reducing the need for F_θ to embed extensive visual details from the input image. This approach effectively alleviates the burden on ε̂, enhancing both reconstruction and editing capabilities. We compute the inverted noise ε̂ and the reconstructed image latent ẑ as follows:
$$\hat{\epsilon} = F_\theta(z, c_y), \qquad \hat{z} = G_{\mathrm{IP}}(\hat{\epsilon}, c_y, c_x). \tag{4}$$
Stage 1: Training with synthetic images. As mentioned above, this stage aims to pretrain the inversion network F_θ with synthetic training data sampled from the text-to-image diffusion network G, i.e., SBv2. In Fig. 3, we visualize the flow of stage-1 training in orange. Pairs of training samples (ε, z) are created as follows:
$$\epsilon \sim \mathcal{N}(0, I), \qquad z = G(\epsilon, c_y). \tag{5}$$
We combine the reconstruction loss L^stage1_rec and the regression
Figure 4. Comparison of inverted noise predicted by our inversion
network when trained without and with stage 2 regularization loss.
loss L^stage1_regr to train the inversion network F_θ and part of the IP-Adapter branch (including the linear mapping and cross-attention layers for image conditions). The regression loss L^stage1_regr encourages F_θ(·) to produce an inverted noise ε̂ that closely follows SBv2's input noise distribution by regressing ε̂ to ε. This ensures that the inverted noise remains close to the multivariate normal distribution, which is crucial for effective editability, as shown in prior work [19]. On the other hand, the reconstruction loss L^stage1_rec enforces alignment between the reconstructed latent ẑ and the original source latent z, preserving input image details. In summary, the training objectives are as follows:
$$\mathcal{L}^{\mathrm{stage1}}_{\mathrm{rec}} = \|z - \hat{z}\|_2^2, \qquad \mathcal{L}^{\mathrm{stage1}}_{\mathrm{regr}} = \|\epsilon - \hat{\epsilon}\|_2^2, \tag{6}$$
$$\mathcal{L}^{\mathrm{stage1}} = \mathcal{L}^{\mathrm{stage1}}_{\mathrm{rec}} + \lambda_{\mathrm{stage1}} \cdot \mathcal{L}^{\mathrm{stage1}}_{\mathrm{regr}}, \tag{7}$$
where we set λ_stage1 = 1 during training. After this stage, our inversion framework can reconstruct source input images generated by the SBv2 model. However, it fails to work with real images due to the domain gap, which motivates us to continue training in stage 2.
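A possible stage-1 training step, combining Eqs. (5)-(7) with λ_stage1 = 1, is sketched below. The helper names (decode, clip_image_encoder) and the exact optimizer wiring are assumptions; the optimizer is assumed to cover F_θ and the trainable IP-Adapter projections inside G_IP, as described above.

```python
import torch
import torch.nn.functional as F

def stage1_step(F_theta, G, G_IP, decode, clip_image_encoder, c_y, shape, opt, lam=1.0):
    """One stage-1 update on a synthetic sample from the frozen generator G."""
    eps = torch.randn(shape)                     # Eq. (5): eps ~ N(0, I)
    with torch.no_grad():
        z = G(eps, c_y)                          # synthetic latent from SBv2
        c_x = clip_image_encoder(decode(z))      # image-prompt features of the decoded sample
    eps_hat = F_theta(z, c_y)                    # inverted noise
    z_hat = G_IP(eps_hat, c_y, c_x)              # reconstructed latent, Eq. (4)
    loss = F.mse_loss(z_hat, z) + lam * F.mse_loss(eps_hat, eps)  # Eqs. (6)-(7)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```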
Stage 2: Training with real images. We replace the reconstruction loss from stage 1 with a perceptual loss based on the Deep Image Structure and Texture Similarity (DISTS) metric [7]. This perceptual loss, L^stage2_perceptual = DISTS(x, x̂), compares x̂ = D(ẑ) (where ẑ = G_IP(ε̂, c_y, c_x)) with the real input image x. DISTS is trained on real images and captures perceptual details of structure and texture, making it a more robust visual similarity measure than the pixel-wise reconstruction loss used in stage 1.
Since the original noise ε used to reconstruct z in SBv2 is unavailable at this stage, we cannot directly apply the regression objective from stage 1. Training stage 2 solely with L^stage2_perceptual can cause the inverted noise ε̂ to drift from the ideal noise distribution N(0, I), as the perceptual loss encourages ε̂ to capture source-image patterns, aiding reconstruction but constraining future editing flexibility (see Fig. 4, column 2). To address this, we introduce a new regularization term L^stage2_regu, inspired by Score Distillation Sampling (SDS) as defined in Eq. (2). The SDS gradient steers the optimized latent toward dense regions of the data manifold. Given that the real image latent z = E(x) already lies in a high-density region, we shift the optimization focus to the noise term ε, treating our inverted noise as an added noise on z. We then compute the loss gradient as follows:
$$\hat{\epsilon} = F_\theta(z, c_y), \qquad z_t = \alpha_t z + \sigma_t \hat{\epsilon},$$
$$\nabla_\theta \mathcal{L}^{\mathrm{stage2}}_{\mathrm{regu}} \triangleq \mathbb{E}_{t,\hat{\epsilon}}\!\left[ w(t)\,\big(\hat{\epsilon} - \epsilon_\phi(z_t, t, c_y)\big)\,\frac{\partial \hat{\epsilon}}{\partial \theta} \right]. \tag{8}$$
Our regularization gradient has the opposite sign of Eq. (2), since it optimizes ε̂ instead of z (derivation details in the Appendix). After initialization from stage 1, ε̂ resembles Gaussian noise N(0, I), making the noisy latent z_t compatible with the multi-step teacher's training data. This allows the teacher to accurately predict ε_φ(z_t, t, c_y), so that ε_φ(z_t, t, c_y) - ε̂ ≈ 0 and ε̂ stays essentially unchanged. Over time, the reconstruction loss nudges F_θ to generate an inverted noise ε̂ tailored for reconstruction, diverging from N(0, I) and creating an unfamiliar z_t. The resulting gradient prevents excessive drift from the original distribution, reinforcing the stability gained in stage 1, as shown in the third column of Fig. 4. Similar to stage 1, we combine the perceptual loss L^stage2_perceptual and the regularization loss L^stage2_regu, where we set λ_stage2 = 1. During training, we train only the inversion network, keeping the IP-Adapter branch and decoupled cross-attention layers frozen to retain the image prior features learned in stage 1. The flow of stage-2 training is visualized in teal in Fig. 3.
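The sketch below shows one way the Eq. (8) gradient could be applied in practice (the perceptual loss is handled separately and omitted here). The frozen teacher eps_phi and the schedules alpha, sigma, w are assumed inputs; the teacher output is detached so that only ∂ε̂/∂θ carries gradient, matching the derivation in the Appendix.

```python
import torch

def stage2_regu_backward(F_theta, eps_phi, z, c_y, alpha, sigma, w, T):
    """Accumulate the Eq. (8) gradient into F_theta's parameters."""
    t = torch.randint(1, T + 1, (1,)).item()
    eps_hat = F_theta(z, c_y)                         # inverted noise (tracks grad)
    z_t = alpha[t] * z + sigma[t] * eps_hat           # treat eps_hat as the added noise
    with torch.no_grad():
        teacher_eps = eps_phi(z_t, t, c_y)            # frozen teacher, no Jacobian
    grad = (w(t) * (eps_hat - teacher_eps)).detach()  # Eq. (8): gradient w.r.t. eps_hat
    eps_hat.backward(gradient=grad)                   # chain rule adds d(eps_hat)/d(theta)
```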
4.2. Attention Rescaling for Mask-aware Editing
(ARaM)
During inference, given a source image x_source, a source prompt y_source, and an editing prompt y_edit, our goal is to produce an edited image x_edit that follows the editing prompt without modifying irrelevant background elements. After two-stage training, we obtain a well-trained inversion network F_θ that transforms the source image latent z_source = E(x_source) into the inverted noise ε̂. Intuitively, we can use the one-step image generator G_IP(·) to regenerate the image, but with the edit prompt embedding c^edit_y as the guiding prompt instead. The edited image latent is computed as z_edit = G_IP(ε̂, c^edit_y, c_x). As discussed in Sec. 4.1, the source image condition c_x is crucial for reconstruction, with its influence modulated by s_x as shown in Eq. (3). To illustrate this, we vary s_x while generating the edited image x_edit = D(z_edit) in the orange block of Fig. 5b. As shown, higher values of s_x enforce fidelity to the source image, limiting editing flexibility
Figure 5. Illustration of Attention Rescaling for Mask-aware Editing (ARaM). We apply attention rescaling with our self-guided editing mask to achieve local image editing and enable editing strength control. (a) Self-guided editing mask extraction: given a source prompt ("An orange cat sitting on top of a fence.") and an edit prompt ("A black cat sitting on top of a fence."), our inversion network predicts two different noise maps whose normalized difference highlights the editing regions M. (b) Effect of the global scale vs. our edit-aware scale: comparison of edited results when varying the global image-condition scale s_x against our ARaM. (c) Effect of the editing strength scale: edited results when varying the mask-guided text-alignment scale s_y.
due to tight control by c_x. Conversely, lower s_x allows more flexible edits but reduces reconstruction quality. Based on this observation, we introduce Attention Rescaling for Mask-aware editing (ARaM) in G_IP, guided by the editing mask M. The key idea is to amplify the influence of c_x in non-edited regions for better preservation while reducing its effect within edited regions, providing greater editing flexibility. To implement this, we reformulate the computation of Eq. (3) within G_IP by removing the global scale s_x and introducing region-specific scales as follows:
$$h_l = s_y \cdot M \cdot \mathrm{Attn}(Q_l, K_y, V_y) + s_{\mathrm{edit}} \cdot M \cdot \mathrm{Attn}(Q_l, K_x, V_x) + s_{\mathrm{non\text{-}edit}} \cdot (1 - M) \cdot \mathrm{Attn}(Q_l, K_x, V_x). \tag{9}$$
This disentangled cross-attention differs from Eq. (3) in its three scaling factors, s_y, s_edit, and s_non-edit, which are applied to different image regions. The two scaling factors s_edit and s_non-edit separately control the influence of the image condition c_x on the editing and non-editing regions. As shown in the violet block of Fig. 5b, this yields an edited image that both follows the edit prompt semantics and achieves good background preservation, compared to using a single global s_x. In addition, we introduce s_y to weaken or strengthen the edit-prompt alignment within the editing region M, which can be used to control the editing strength, as shown in Fig. 5c.
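A minimal sketch of the ARaM attention computation in Eq. (9) is given below, with the paper's default scales s_edit = 0, s_non-edit = 1, s_y = 2 used as defaults. The key/value projections and the resizing of the mask M to each attention layer's token resolution are left to the caller and are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def aram_cross_attention(q, k_y, v_y, k_x, v_x, M,
                         s_y=2.0, s_edit=0.0, s_non_edit=1.0):
    """Eq. (9): region-specific rescaling of the decoupled cross-attention.

    q: (B, L, D) query tokens; M: (B, L, 1) editing mask aligned to the tokens.
    """
    h_text = F.scaled_dot_product_attention(q, k_y, v_y)  # text (edit-prompt) condition
    h_img = F.scaled_dot_product_attention(q, k_x, v_x)   # image condition
    return (s_y * M * h_text                               # edit prompt inside the mask
            + s_edit * M * h_img                           # weak image prior inside the mask
            + s_non_edit * (1.0 - M) * h_img)              # strong image prior outside the mask
```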
The editing mask M discussed above can either be provided by the user or generated automatically by our inversion network F_θ. To extract a self-guided editing mask, we observe that a well-trained F_θ can discern spatial semantic differences in the inverted noise maps when conditioned on varying text prompts. As shown in Fig. 5a, we feed the source image latent z_source to F_θ with two different text prompts: the source prompt c^source_y and the edit prompt c^edit_y. The difference of the noise maps, ε̂_source - ε̂_edit, is then computed and normalized, yielding the editing mask M, which effectively highlights the editing areas.
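A sketch of this self-guided mask extraction is shown below: the source latent is duplicated, the two prompts are run through F_θ in one batched pass, and the normalized difference of the predicted noise maps is taken. The channel-averaging and the final threshold are assumptions added here to obtain a binary mask; the paper only specifies computing and normalizing the difference.

```python
import torch

def self_guided_mask(F_theta, z_source, c_src, c_edit, threshold=0.5):
    """z_source: (1, C, H, W) latent; c_src, c_edit: (1, L, D) prompt embeddings."""
    c_batch = torch.cat([c_src, c_edit], dim=0)                  # batchified prompts
    z_batch = z_source.expand(2, *z_source.shape[1:])            # duplicate source latent
    eps_src, eps_edit = F_theta(z_batch, c_batch).chunk(2)       # two noise maps
    diff = (eps_src - eps_edit).abs().mean(dim=1, keepdim=True)  # per-location magnitude
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)  # normalize to [0, 1]
    return (diff > threshold).float()                            # editing mask M
```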
5. Experiments
5.1. Experimental Setup
Dataset and evaluation metrics. We evaluate our editing
performance on PieBench [11], a popular benchmark con-
taining 700 samples across 10 diverse editing types. Each
sample includes a source prompt, edit prompt, instruction
prompt, source image, and a manually annotated editing
mask. Using PieBench’s metrics, we assess both back-
ground preservation and editing semantics, aiming for a
balance between them for high-quality edits. Background
preservation is evaluated with PSNR and MSE scores on
unedited regions of the source and edited images. Editing
alignment is assessed using CLIP-Whole and CLIP-Edited
scores, measuring prompt alignment with the full image and
edited region, respectively.
Implementation details. Our inversion network is based
on the architecture of SBv2, initialized with SBv2 weights
for stage 1 training. In stage 2, we continue training from
stage 1’s pretrained weights. For image encoding, we adopt
the IP-Adapter [38] design, using a pretrained CLIP image
encoder followed by a small projection network that maps
the image embeddings to a sequence of features with length
N= 4, matching the text feature dimensions of the diffu-
sion model. Both stages use the Adam optimizer [12] with
weight decay of 1e-4, a learning rate of 1e-5, and an exponential moving average (EMA) update at every iteration. In stage
1, we train with a batch size of 4 for 100k iterations on syn-
thetic samples generated by SBv2, paired with 40k captions
from the JourneyDB dataset [34]. For stage 2, we train with
a batch size of 1 and train over 180k iterations using 5k
real images and their prompt descriptions from the Com-
monCanvas dataset [9]. All experiments are conducted on a
single NVIDIA A100 40GB GPU.
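For concreteness, a sketch of the optimizer and EMA setup described above is given below; the EMA decay value is an assumption, as the paper does not state it.

```python
import copy
import torch

def make_optimizer_and_ema(inversion_net, lr=1e-5, weight_decay=1e-4):
    """Adam with the stated learning rate and weight decay, plus an EMA copy."""
    opt = torch.optim.Adam(inversion_net.parameters(), lr=lr, weight_decay=weight_decay)
    ema_net = copy.deepcopy(inversion_net).eval()
    return opt, ema_net

@torch.no_grad()
def ema_update(ema_net, net, decay=0.999):  # decay value is an assumption
    for p_ema, p in zip(ema_net.parameters(), net.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)
```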
Type | Method | PSNR↑ | MSE×10^4↓ | CLIP-Whole↑ | CLIP-Edited↑ | Runtime (s)↓
Multi-step (50 steps) | DDIM + P2P | 17.87 | 219.88 | 25.01 | 22.44 | 25.98
Multi-step (50 steps) | NT-Inv + P2P | 27.03 | 35.86 | 24.75 | 21.86 | 134.06
Multi-step (50 steps) | DDIM + MasaCtrl | 22.17 | 86.97 | 23.96 | 21.16 | 23.21
Multi-step (50 steps) | Direct Inversion + MasaCtrl | 22.64 | 81.09 | 24.38 | 21.35 | 29.68
Multi-step (50 steps) | DDIM + P2P-Zero | 20.44 | 144.12 | 22.80 | 20.54 | 35.57
Multi-step (50 steps) | Direct Inversion + P2P-Zero | 21.53 | 127.32 | 23.31 | 21.05 | 35.34
Multi-step (50 steps) | DDIM + PnP | 22.28 | 83.64 | 25.41 | 22.55 | 12.62
Multi-step (50 steps) | Direct Inversion + PnP | 22.46 | 80.45 | 25.41 | 22.62 | 12.79
Few-steps (4 steps) | ReNoise (SDXL Turbo) | 20.28 | 54.08 | 24.29 | 21.07 | 5.11
Few-steps (4 steps) | TurboEdit | 22.43 | 9.48 | 25.49 | 21.82 | 1.32
Few-steps (4 steps) | ICD (SD 1.5) | 26.93 | 3.32 | 22.42 | 19.07 | 1.62
One-step | SwiftEdit (Ours) | 23.33 | 6.60 | 25.16 | 21.25 | 0.23
One-step | SwiftEdit (Ours with GT masks) | 23.31 | 6.18 | 25.56 | 21.91 | 0.23
Table 1. Quantitative comparison of SwiftEdit against other editing methods, using metrics from PieBench [11]. PSNR and MSE measure background preservation; CLIP-Whole and CLIP-Edited measure editing semantics.
Figure 6. Comparative edited results. The first column shows the source image; the remaining columns show, from left to right, NT + P2P, DDIM + P2P, Pix2Pix-Zero, MasaCtrl, Plug-and-Play, ReNoise, TurboEdit, ICD (SD 1.5), and SwiftEdit (Ours), grouped by runtime (> 130s, > 12s, > 1.3s, and < 0.3s). Source and edit prompts are noted under each row.
Comparison Methods. We perform an extensive compari-
son of SwiftEdit with representative multi-step and recently
introduced few-step image editing methods. For multi-step
methods, we choose Prompt-to-Prompt (P2P) [10], MasaC-
trl [3], Pix2Pix-Zero (P2P-Zero) [22], and Plug-and-Play
[35], combined with corresponding inversion methods such
as DDIM [31], Null-text Inversion (NT-Inv) [19], and Direct
Inversion [11]. For few-step methods, we select ReNoise [8], TurboEdit [6], and ICD [33].
5.2. Comparison with Prior Methods
Quantitative Results. In Tab. 1, we present the quan-
titative results comparing SwiftEdit to various multi-step
and few-step image editing methods. Overall, SwiftEdit
Figure 7. User Study.
Method | PSNR↑ | LPIPS×10^3↓ | MSE×10^4↓ | SSIM×10^2↑
w/o stage 1 | 22.26 | 111.57 | 7.03 | 72.39
w/o stage 2 | 17.95 | 305.23 | 17.46 | 55.97
w/o IP-Adapter | 18.57 | 165.78 | 16.11 | 63.87
Full Setting (Ours) | 24.35 | 89.69 | 4.59 | 76.34
Table 2. Impact of inversion framework design on real image reconstruction.
Setting | CLIP-Whole↑ | CLIP-Edited↑
Setting 1 | 22.91 | 19.07
Setting 2 | 22.98 | 19.01
Setting 3 | 24.19 | 20.55
Setting 4 (Full) | 25.16 | 21.25
Table 3. Effect of the L^stage1_regr and L^stage2_regu losses on editing semantics scores; Setting 4 (Full) uses both losses.
demonstrates superior time efficiency due to our one-step
inversion and editing process, while maintaining competi-
tive editing performance. Compared to multi-step methods,
SwiftEdit shows strong results in background preservation
scores, surpassing most approaches. Although it achieves a
slightly lower PSNR score than NT-Inv + P2P, it has a better
MSE score and is approximately 500 times faster. In terms
of CLIP Semantics, we also achieve competitive results in
CLIP-Whole (second best) and CLIP-Edited. Compared
with few-step methods, SwiftEdit performs as the second-
best in background preservation (with ICD being the best)
and second-best in CLIP Semantics (with TurboEdit lead-
ing), while maintaining a speed advantage, being at least 5
times faster than these methods. Since SwiftEdit allows for
user-defined editing masks, we also report results using the
ground-truth editing masks from PieBench [11]. As shown
in the last row of Tab. 1, results with the ground-truth masks
show slight improvements, indicating that our self-guided
editing masks are nearly as accurate as the ground truth.
Qualitative Results. In Fig. 6, we present visual compar-
isons of editing results generated by SwiftEdit and other
methods. As illustrated, SwiftEdit successfully adheres
to the given edit prompt while preserving essential back-
ground details. This balance demonstrates SwiftEdit’s
strength over other multi-step methods, as it produces high-
quality edits while being significantly faster. When com-
pared to few-step methods, SwiftEdit demonstrates a clear
advantage in edit quality. Although ICD [33] scores high
on background preservation (as shown in Tab. 1), it often
fails to produce edits that align with the prompt. TurboEdit
[6], while achieving a higher CLIP score than SwiftEdit,
generates lower-quality results that compromise key back-
ground elements, as seen in the first, second, and fifth rows
of Fig. 6. This highlights SwiftEdit’s high-quality edits with
prompt alignment and background preservation.
User Study. We conducted a user study with 140 partic-
ipants to evaluate preferences for different editing results.
Using 20 random edit prompts from PieBench [11], partic-
ipants compared images edited by three methods: Null-text
Inversion [19], TurboEdit [6], and our SwiftEdit. Partic-
ipants selected the most appropriate edits based on back-
ground preservation and editing semantics. As shown in
Fig. 7, SwiftEdit was the preferred choice, with 47.8% fa-
voring it for editing semantics and 40% for background
preservation, while also surpassing other methods in speed.
6. Ablation Study
Analysis of Inversion Framework Design. We conduct
ablation studies to evaluate the impact of our inversion
framework and two-stage training on image reconstruction.
Our two-stage strategy is essential for the one-step inversion
framework’s effectiveness. In Tab. 2, we show that omitting either stage degrades reconstruction quality. The IP-Adapter
with decoupled cross-attention is critical; removing it leads
to poor reconstruction, as seen in row 3.
Effect of loss on Editing Quality. As noted by [19], an
editable noise should follow a normal distribution to ensure
flexibility. We conduct ablation studies to assess the im-
pact of our loss functions on noise editability. As shown
in Tab. 3, omitting any loss component reduces editability,
measured by CLIP Semantics, while using both yields the
highest scores. This emphasizes the importance of each loss
in maintaining noise distributions that enhance editability.
7. Conclusion and Discussion
Conclusion. In this work, we introduce SwiftEdit, a
lightning-fast text-guided image editing tool capable of in-
stant edits in 0.23 seconds. Extensive experiments demon-
strate SwiftEdit’s ability to deliver high-quality results
while significantly surpassing previous methods in speed,
enabled by its one-step inversion and editing process. We
hope SwiftEdit will facilitate interactive image editing.
Discussion. While SwiftEdit achieves instant-level image
editing, challenges remain. Its performance still relies on the quality of the SBv2 generator; thus, biases in the training data can transfer to our inversion network. For future
work, we want to improve the method by transitioning from
instant-level to real-time editing capabilities. This enhance-
ment would address current limitations and have a signifi-
cant impact across various fields.
References
[1] David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles,
Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Invert-
ing layers of a large generator. In ICLR workshop, page 4,
2019. 3
[2] David Bau, Jun-Yan Zhu, Jonas Wulff, William Peebles,
Hendrik Strobelt, Bolei Zhou, and Antonio Torralba. Seeing
what a gan cannot generate. In Proceedings of the IEEE/CVF
international conference on computer vision, pages 4502–
4511, 2019. 3
[3] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xi-
aohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mu-
tual self-attention control for consistent image synthesis and
editing. In Proceedings of the IEEE/CVF International Con-
ference on Computer Vision (ICCV), pages 22560–22570,
2023. 2,3,7
[4] Antonia Creswell and Anil Anthony Bharath. Inverting the
generator of a generative adversarial network. IEEE transac-
tions on neural networks and learning systems, 30(7):1967–
1974, 2018. 3
[5] Trung Dao, Thuan Hoang Nguyen, Thanh Le, Duc Vu, Khoi
Nguyen, Cuong Pham, and Anh Tran. Swiftbrush v2: Make
your one-step diffusion model better than its teacher. In
European Conference on Computer Vision, pages 176–192.
Springer, 2025. 1,2,3
[6] Gilad Deutch, Rinon Gal, Daniel Garibi, Or Patashnik, and
Daniel Cohen-Or. Turboedit: Text-based image editing using
few-step diffusion models. In SIGGRAPH Asia 2024 Con-
ference Papers, New York, NY, USA, 2024. Association for
Computing Machinery. 2,3,7,8
[7] Keyan Ding, Kede Ma, Shiqi Wang, and Eero P. Simoncelli.
Image quality assessment: Unifying structure and texture
similarity. IEEE Transactions on Pattern Analysis and Ma-
chine Intelligence, 44(5):2567–2581, 2022. 5
[8] Daniel Garibi, Or Patashnik, Andrey Voynov, Hadar
Averbuch-Elor, and Daniel Cohen-Or. Renoise: Real im-
age inversion through iterative noising. In Computer Vision
ECCV 2024, pages 395–413, Cham, 2025. Springer Nature
Switzerland. 2,3,7
[9] Aaron Gokaslan, A Feder Cooper, Jasmine Collins, Lan-
dan Seguin, Austin Jacobson, Mihir Patel, Jonathan Fran-
kle, Cory Stephenson, and Volodymyr Kuleshov. Com-
moncanvas: An open diffusion model trained with creative-
commons images. arXiv preprint arXiv:2310.16825, 2023.
6
[10] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman,
Yael Pritch, and Daniel Cohen-or. Prompt-to-prompt image
editing with cross-attention control. In The Eleventh Inter-
national Conference on Learning Representations, 2023. 3,
7
[11] Xuan Ju, Ailing Zeng, Yuxuan Bian, Shaoteng Liu, and
Qiang Xu. Pnp inversion: Boosting diffusion-based editing
with 3 lines of code. International Conference on Learning
Representations (ICLR), 2024. 1,3,6,7,8,13
[12] Diederik P. Kingma and Jimmy Ba. Adam: A method for
stochastic optimization. In 3rd International Conference on
Learning Representations, ICLR 2015, San Diego, CA, USA,
May 7-9, 2015, Conference Track Proceedings, 2015. 6
[13] Senmao Li, Joost van de Weijer, Taihang Hu, Fahad Shahbaz
Khan, Qibin Hou, Yaxing Wang, and Jian Yang. Styledif-
fusion: Prompt-embedding inversion for text-based editing.
arXiv preprint arXiv:2303.15649, 2023. 1
[14] Zachary C Lipton and Subarna Tripathi. Precise recovery of
latent vectors from generative adversarial networks. arXiv
preprint arXiv:1702.04782, 2017. 3
[15] Xingchao Liu, Xiwen Zhang, Jianzhu Ma, Jian Peng, and
Qiang Liu. Instaflow: One step is enough for high-quality
diffusion-based text-to-image generation. In International
Conference on Learning Representations, 2024. 2,3,12
[16] Simian Luo, Yiqin Tan, Longbo Huang, Jian Li, and Hang
Zhao. Latent consistency models: Synthesizing high-
resolution images with few-step inference. arXiv preprint
arXiv:2310.04378, 2023. 2
[17] Fangchang Ma, Ulas Ayaz, and Sertac Karaman. Invertibility
of convolutional generative networks from partial measure-
ments. Advances in Neural Information Processing Systems,
31, 2018. 3
[18] Chenlin Meng, Robin Rombach, Ruiqi Gao, Diederik
Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans.
On distillation of guided diffusion models. In Proceedings
of the IEEE/CVF Conference on Computer Vision and Pat-
tern Recognition, pages 14297–14306, 2023. 2
[19] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and
Daniel Cohen-Or. Null-text inversion for editing real im-
ages using guided diffusion models. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pages 6038–6047, 2023. 1,2,3,5,7,
8
[20] Thuan Hoang Nguyen and Anh Tran. Swiftbrush: One-step
text-to-image diffusion model with variational score distilla-
tion. In Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition (CVPR), 2024. 1,2,3,
12
[21] Trong-Tung Nguyen, Duc-Anh Nguyen, Anh Tran, and
Cuong Pham. Flexedit: Flexible and controllable
diffusion-based object-centric image editing. arXiv preprint
arXiv:2403.18605, 2024. 2,3
[22] Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yi-
jun Li, Jingwan Lu, and Jun-Yan Zhu. Zero-shot image-to-
image translation. New York, NY, USA, 2023. Association
for Computing Machinery. 3,7
[23] Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and
Jose M. Álvarez. Invertible Conditional GANs for image
editing. In NIPS Workshop on Adversarial Training, 2016. 3
[24] Dustin Podell, Zion English, Kyle Lacey, Andreas
Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and
Robin Rombach. SDXL: Improving latent diffusion models
for high-resolution image synthesis. In The Twelfth Inter-
national Conference on Learning Representations, 2024. 1,
2
[25] Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Milden-
hall. Dreamfusion: Text-to-3d using 2d diffusion. In The
Eleventh International Conference on Learning Representa-
tions, 2023. 3
[26] Robin Rombach, Andreas Blattmann, Dominik Lorenz,
Patrick Esser, and Björn Ommer. High-resolution image
synthesis with latent diffusion models. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pages 10684–10695, 2022. 1,2
[27] Chitwan Saharia, William Chan, Saurabh Saxena, Lala
Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour,
Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Sali-
mans, Jonathan Ho, David J Fleet, and Mohammad Norouzi.
Photorealistic text-to-image diffusion models with deep lan-
guage understanding. In Advances in Neural Information
Processing Systems, pages 36479–36494. Curran Associates,
Inc., 2022. 1,2
[28] Tim Salimans and Jonathan Ho. Progressive distillation for
fast sampling of diffusion models. In International Confer-
ence on Learning Representations, 2022. 2
[29] Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas
Blattmann, Patrick Esser, and Robin Rombach. Fast high-
resolution image synthesis with latent adversarial diffusion
distillation. In SIGGRAPH Asia 2024 Conference Papers,
New York, NY, USA, 2024. Association for Computing Ma-
chinery. 1,3
[30] Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin
Rombach. Adversarial diffusion distillation. In European
Conference on Computer Vision, pages 87–103. Springer,
2025. 2
[31] Jiaming Song, Chenlin Meng, and Stefano Ermon.
Denoising diffusion implicit models. arXiv preprint
arXiv:2010.02502, 2020. 2,7
[32] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya
Sutskever. Consistency models. In Proceedings of the
40th International Conference on Machine Learning, pages
32211–32252. PMLR, 2023. 2
[33] Nikita Starodubcev, Mikhail Khoroshikh, Artem Babenko,
and Dmitry Baranchuk. Invertible consistency distillation for
text-guided image editing in around 7 steps. arXiv preprint
arXiv:2406.14539, 2024. 2,3,7,8
[34] Keqiang Sun, Junting Pan, Yuying Ge, Hao Li, Haodong
Duan, Xiaoshi Wu, Renrui Zhang, Aojun Zhou, Zipeng Qin,
Yi Wang, et al. Journeydb: A benchmark for generative im-
age understanding. Advances in Neural Information Process-
ing Systems, 36, 2024. 6
[35] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali
Dekel. Plug-and-play diffusion features for text-driven
image-to-image translation. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition
(CVPR), pages 1921–1930, 2023. 2,3,7
[36] Tengfei Wang, Yong Zhang, Yanbo Fan, Jue Wang, and
Qifeng Chen. High-fidelity gan inversion for image attribute
editing. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR), 2022. 2,
3
[37] Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei
Zhou, and Ming-Hsuan Yang. Gan inversion: A survey.
IEEE Transactions on Pattern Analysis and Machine Intel-
ligence, 45(3):3121–3138, 2023. 2
[38] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-
adapter: Text compatible image prompt adapter for text-to-
image diffusion models. 2023. 3,4,6
Tianwei Yin, Michaël Gharbi, Taesung Park, Richard Zhang,
Eli Shechtman, Fredo Durand, and William T Freeman. Im-
proved distribution matching distillation for fast image syn-
thesis. In NeurIPS, 2024. 1,2,3,12
Tianwei Yin, Michaël Gharbi, Richard Zhang, Eli Shechtman, Frédo Durand, William T Freeman, and Taesung Park.
One-step diffusion with distribution matching distillation. In
CVPR, 2024. 1,2,3
[41] Jiapeng Zhu, Yujun Shen, Deli Zhao, and Bolei Zhou. In-
domain gan inversion for real image editing. In Proceedings
of European Conference on Computer Vision (ECCV), 2020.
2,3
Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and
Alexei A Efros. Generative visual manipulation on the natu-
ral image manifold. In Computer Vision–ECCV 2016: 14th
European Conference, Amsterdam, The Netherlands, Octo-
ber 11-14, 2016, Proceedings, Part V 14, pages 597–613.
Springer, 2016. 3
SwiftEdit: Lightning Fast Text-Guided Image Editing via One-Step Diffusion
Supplementary Material
In this supplementary material, we first provide a detailed derivation of the regularization loss used in Stage 2 (Sec. 8). Next, we present several additional ablation studies (Sec. 9). We then include more quantitative and qualitative results in Sec. 10 and Sec. 11, and finally discuss societal impacts in Sec. 12.
8. Derivation of the Regularization Loss in
Stage 2
We provide a detailed derivation of the gradient of our proposed regularization loss, as defined in Eq. (8) of the main paper. The regularization loss is formulated as follows:
$$\mathcal{L}^{\mathrm{stage2}}_{\mathrm{regu}} = \mathbb{E}_{t,\hat{\epsilon}}\!\left[ w(t)\,\big\|\epsilon_\phi(z_t, t, c_y) - \hat{\epsilon}\big\|_2^2 \right], \tag{10}$$
where ε_φ(·) is a teacher denoising U-Net; here we use SD 2.1 in our implementation.
The gradient of the loss with respect to our inversion network's parameters θ is computed as:
$$\nabla_\theta \mathcal{L}^{\mathrm{stage2}}_{\mathrm{regu}} \triangleq \mathbb{E}_{t,\hat{\epsilon}}\!\left[ w(t)\,\big(\epsilon_\phi(z_t, t, c_y) - \hat{\epsilon}\big)\left(\frac{\partial \epsilon_\phi(z_t, t, c_y)}{\partial \theta} - \frac{\partial \hat{\epsilon}}{\partial \theta}\right) \right], \tag{11}$$
where we absorb all constants into w(t). Expanding the term ∂ε_φ(z_t, t, c_y)/∂θ, we have:
$$\frac{\partial \epsilon_\phi(z_t, t, c_y)}{\partial \theta} = \frac{\partial \epsilon_\phi(z_t, t, c_y)}{\partial z_t}\,\frac{\partial z_t}{\partial z}\,\frac{\partial z}{\partial \theta}. \tag{12}$$
Since z (extracted from real images) and θ are independent, ∂z/∂θ = 0; thus, we can turn Eq. (11) into:
$$\nabla_\theta \mathcal{L}^{\mathrm{stage2}}_{\mathrm{regu}} \triangleq \mathbb{E}_{t,\hat{\epsilon}}\!\left[ w(t)\,\big(\epsilon_\phi(z_t, t, c_y) - \hat{\epsilon}\big)\left(-\frac{\partial \hat{\epsilon}}{\partial \theta}\right) \right] \tag{13}$$
$$= \mathbb{E}_{t,\hat{\epsilon}}\!\left[ w(t)\,\big(\hat{\epsilon} - \epsilon_\phi(z_t, t, c_y)\big)\,\frac{\partial \hat{\epsilon}}{\partial \theta} \right], \tag{14}$$
which has the opposite sign of the SDS gradient with respect to z discussed in the main paper.
9. Additional Ablation Studies
Compatibility of multi-step inversion with one-step text-
to-image model. To showcase the strength of our one-step
inversion framework, we test existing inversion techniques
on one-step generators. Specifically, we evaluate multi-step
methods like DDIM Inversion (DDIMInv) and Direct Inversion on SBv2. As shown in the first and second rows of
 
Figure 8. Edit images with flexible prompting. SwiftEdit achieves satisfactory reconstructed and edited results with flexible source and edit prompt input (denoted under each image). Columns show, left to right, the source image, the reconstructed image, and the edited image.
Tab. 5, these methods yield lower performance and slower
inference time, while SwiftEdit excels with superior results
and high efficiency.
Combined with other one-step text-to-image models. As
discussed in the main paper, our inversion framework is
not limited to SBv2 and can be seamlessly integrated with
Model | PSNR↑ | CLIP-Whole↑ | CLIP-Edited↑
Ours + InstaFlow | 24.88 | 24.03 | 20.47
Ours + DMD2 | 26.08 | 23.35 | 19.84
Ours + SBv1 | 25.09 | 23.64 | 19.96
Ours + SBv2 (SwiftEdit) | 23.33 | 25.16 | 21.25
Table 4. Ablation studies on combining our technique with other one-step text-to-image generation models. InstaFlow and DMD2 are based on SD 1.5, while SBv1 and SBv2 are based on SD 2.1.
Figure 9. Qualitative results when combining our inversion framework with other one-step text-to-image generation models. Columns show, left to right, the source image, Ours + InstaFlow, Ours + DMD2 (SD 1.5), Ours + SBv1, and Ours + SBv2 (SwiftEdit).
other one-step text-to-image generators. To demonstrate
this, we conducted experiments replacing SBv2 with alter-
native models, including DMD2 [39], InstaFlow [15], and
SBv1 [20]. For these experiments, the architecture and pre-
trained weights of each generator G were used to initialize our inversion network in Stage 1. Both DMD2 and InstaFlow were implemented using the SD 1.5 backbone. All training experiments for both stages were
conducted on the same dataset, similar to the experiments
presented in Tab. 1 of the main paper.
Figure 9 presents edited results obtained by integrating
our inversion framework with different one-step image gen-
erators. As shown, these one-step models integrate well
with our framework, enabling effective edits. Addition-
ally, quantitative results are provided in Tab. 4. The re-
sults indicate that our inversion framework combined with
SBv2 (SwiftEdit) achieves the best editing performance
in terms of CLIP-Whole and CLIP-Edited scores, while
DMD2 demonstrates superior background preservation.
Two-stage training rationale. We provide an additional ablation study in which we train our network in a single stage using a mixed dataset of synthetic and real images. In particular, we construct a mixed training dataset comprising 10,000 synthetic image samples (generated by SBv2 using COCOA
(a) Varying the s_edit scale at different levels of s_non-edit with default s_y = 2. (b) Varying the s_y scale at different levels of s_non-edit with default s_edit = 0.
Figure 10. Effects on background preservation and editing semantics while varying s_edit and s_y at different levels of s_non-edit.
prompts) and 10,000 real samples from the COCOA dataset. The goal of this experiment is to understand the behavior and advantage of two-stage training compared to single-stage training on a mixed dataset. As shown in the third row of Tab. 5, the mixed single-stage training resulted in lower performance across all metrics compared to our two-stage strategy, highlighting its effectiveness.
Varying scales. To better understand the effect of the scales used in Eq. (9) of the main paper, we present two comprehensive plots evaluating the performance of SwiftEdit on 100 random test samples from the PieBench benchmark. In particular, the plots depict results for varying s_edit ∈ {0, 0.2, 0.4, 0.6, 0.8, 1} (see Fig. 10a) or s_y ∈ {0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4} (see Fig. 10b) at different levels of s_non-edit ∈ {0.2, 0.4, 0.6, 0.8, 1}. As shown in Fig. 10a, at all levels of s_non-edit, lower s_edit generally improves editing semantics (CLIP-Edited scores) but slightly compromises background preservation (PSNR). Conversely, higher s_y can enhance prompt-image alignment (CLIP-Edited scores, Fig. 10b), but excessive values (s_y > 2) may harm the prompt-alignment results. In all of our experiments, we use the default scale settings s_edit = 0, s_non-edit = 1, and s_y = 2.
10. More Quantitative Results
In Tab. 6, we provide full scores on PieBench of compar-
ison results in Tab. 1, with additional scores related to
background preservation such as Structure Distance (SDis),
Method | SDis↓ | PSNR↑ | LPIPS↓ | MSE↓ | SSIM↑ | CLIP-W↑ | CLIP-E↑ | Time (s)↓
DirectInv + SBv2 | 0.050 | 15.5 | 0.25 | 0.003 | 0.65 | 24.3 | 20.3 | 9.25
DDIMInv + SBv2 | 0.060 | 14.4 | 0.29 | 0.004 | 0.63 | 22.7 | 19.7 | 3.85
SwiftEdit (Mixed Training) | 0.005 | 22.5 | 0.09 | 0.0008 | 0.79 | 23.5 | 19.3 | 0.23
SwiftEdit (Ours) | 0.001 | 23.3 | 0.08 | 0.0006 | 0.81 | 25.2 | 21.3 | 0.23
Table 5. Comparison of SwiftEdit with other settings on PieBench.
Type | Method | SDis×10^3↓ | PSNR↑ | LPIPS×10^3↓ | MSE×10^4↓ | SSIM×10^2↑ | CLIP-W↑ | CLIP-E↑ | Time (s)↓
Multi-step (50 steps) | DDIM + P2P | 69.43 | 17.87 | 208.80 | 219.88 | 71.14 | 25.01 | 22.44 | 25.98
Multi-step (50 steps) | NT-Inv + P2P | 13.44 | 27.03 | 60.67 | 35.86 | 84.11 | 24.75 | 21.86 | 134.06
Multi-step (50 steps) | DDIM + MasaCtrl | 28.38 | 22.17 | 106.62 | 86.97 | 79.67 | 23.96 | 21.16 | 23.21
Multi-step (50 steps) | Direct Inversion + MasaCtrl | 24.70 | 22.64 | 87.94 | 81.09 | 81.33 | 24.38 | 21.35 | 29.68
Multi-step (50 steps) | DDIM + P2P-Zero | 61.68 | 20.44 | 172.22 | 144.12 | 74.67 | 22.80 | 20.54 | 35.57
Multi-step (50 steps) | Direct Inversion + P2P-Zero | 49.22 | 21.53 | 138.98 | 127.32 | 77.05 | 23.31 | 21.05 | 35.34
Multi-step (50 steps) | DDIM + PnP | 28.22 | 22.28 | 113.46 | 83.64 | 79.05 | 25.41 | 22.55 | 12.62
Multi-step (50 steps) | Direct Inversion + PnP | 24.29 | 22.46 | 106.06 | 80.45 | 79.68 | 25.41 | 22.62 | 12.79
 | InstructPix2Pix | 57.91 | 20.82 | 158.63 | 227.78 | 76.26 | 23.61 | 21.64 | 3.85
 | InstructDiffusion | 75.44 | 20.28 | 155.66 | 349.66 | 75.53 | 23.26 | 21.34 | 7.68
Few-steps (4 steps) | ReNoise (SDXL Turbo) | 78.44 | 20.28 | 189.77 | 54.08 | 70.90 | 24.30 | 21.07 | 5.10
Few-steps (4 steps) | TurboEdit | 16.10 | 22.43 | 108.59 | 9.48 | 79.68 | 25.50 | 21.82 | 1.31
Few-steps (4 steps) | ICD (SD 1.5) | 10.21 | 26.93 | 63.61 | 3.33 | 83.95 | 22.42 | 19.07 | 1.38
One-step | SwiftEdit (Ours) | 13.21 | 23.33 | 91.04 | 6.58 | 81.05 | 21.16 | 21.25 | 0.23
One-step | SwiftEdit (Ours with GT masks) | 13.25 | 23.31 | 93.88 | 6.19 | 81.36 | 25.56 | 21.91 | 0.23
Table 6. Quantitative comparison of SwiftEdit against other editing methods, using metrics from PieBench [11].
Figure 11. Visualization of our extracted masks along with edited results, using the guided text described under each image row. Each example shows, left to right, the source image, the SwiftEdit mask, and the edited result.
LPIPS, and SSIM. We additionally compare with other
training-based image editing methods such as Instruct-
Pix2Pix (InstructP2P), and InstructDiffusion (InstructDiff).
Unlike these methods, which require multi-step sampling
and paired training data, SwiftEdit trains on source images
alone for one-step editing. As shown, SwiftEdit outper-
forms both in quality and speed, thanks to its efficient one-
step inversion and editing framework.
11. More Qualitative Results
Self-guided Editing Mask. In Fig. 11, we show more
editing examples along with self-guided editing masks ex-
tracted directly from our inversion network.
Flexible Prompting. As shown in Fig. 8, SwiftEdit con-
sistently reconstructs images with high fidelity, even with
minimal source prompt input. It operates effectively with
just a single keyword (last three rows) or no prompt at all
(first two rows). Notably, SwiftEdit performs complex edits
with ease, as demonstrated in the last row of Fig. 8, by sim-
ply combining keywords in the edit prompt. These results
highlight its capabilities as a lightning-fast and user-friendly
editing tool.
Facial Identity and Expression Editing. In Fig. 12, given
a simple source prompt “man” and a portrait image, SwiftE-
dit can achieve face identity and facial expression editing via a simple edit prompt, by combining an expression word (denoted on each row) with an identity word (denoted on each column).
Additional Results on PieBench. In Figs. 13 to 15, we pro-
vide extensive editing results compared with other methods
Figure 12. Face identity and expression editing via simple prompts. Given a portrait input image with the source prompt "man", SwiftEdit can perform a variety of facial identity edits ("ronaldo", "tom cruise", "chris evans", "beckham"; denoted on each column) along with expression edits ("smiling", "angry"; denoted on each row), guided by simple text, within just 0.23 seconds.
on the PieBench benchmark.
12. Societal Impacts
As an AI-powered visual generation tool, SwiftEdit delivers
lightning-fast, high-quality, and customizable editing capa-
bilities through simple prompt inputs, significantly enhanc-
ing the efficiency of various visual creation tasks. How-
ever, societal challenges may arise as such tools could be
exploited for unethical purposes, including generating sen-
sitive or harmful content to spread disinformation. Addressing these concerns is essential, and several ongoing works aim to detect and localize AI-manipulated images to mitigate potential misuse.





Figure 13. Comparative results on the PieBench benchmark. Columns follow the same layout as Fig. 6.





Figure 14. Comparative results on the PieBench benchmark. Columns follow the same layout as Fig. 6.





Figure 15. Comparative results on the PieBench benchmark. Columns follow the same layout as Fig. 6.