Fine-tuning a LoRA generally requires a more powerful GPU than generating content does.
If you just want to maintain consistency in your generated videos, you can likely get there with improvements to the ComfyUI workflow alone. (I'm not sure exactly which video model you're using, but I'm assuming it's probably from the Wan family...)
What you are seeing is normal for current open video workflows: the model is good at maintaining the scene idea across clips, but much worse at maintaining identity unless you explicitly carry identity forward from shot to shot. Current Wan/ComfyUI docs and recent Wan tooling all point in the same direction: when you want continuity, rely on I2V, first/last-frame control, and other forms of temporal guidance, not just repeated prompting. The official ComfyUI Wan 2.2 workflow is designed to fit the 5B model on 8 GB VRAM with native offloading, and the official Wan ecosystem now includes dedicated tools for temporal control like Prompt Relay rather than assuming text prompts alone will keep long clips coherent. (ComfyUI)
What is probably happening in your movie
Your workflow is likely generating each clip as a fresh event:
- Clip 1: "invent two people walking"
- Clip 2: "invent two people walking"
- Clip 3: "invent two people sitting"
Even if the prompt is similar, the model is not treating those as "the same actors from the previous shot." So the broad structure survives, but the faces drift. That is why the action can remain simple and believable while the final people no longer look like the original pair. This is exactly the kind of problem newer temporal-control methods are trying to address: the model struggles to attach the right semantic content to the right time segment unless you constrain it. (GitHub)
My direct answer to your main question
You do not need to jump straight to training your own Wan character LoRA.
For your exact case, the better next move is to improve the workflow first:
- lock your characters with anchor frames
- use I2V to carry them into the next shot
- use first/last-frame for the standing-to-sitting transition
- repair only the key frames that drift
That is the highest-value path on 8 GB VRAM. Full Wan LoRA training is still materially heavier: Musubi-tuner currently recommends about 12 GB+ for image training and 24 GB+ for video training, with lower resolutions and memory-saving options only partially softening that requirement. (GitHub)
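The anchor-plus-handoff idea can be sketched as ordinary control flow. Everything below is a hypothetical sketch: `generate_i2v`, `generate_flf`, `repair_frame`, and `needs_repair` are stand-in names for ComfyUI graph runs, implemented as trivial stubs so the structure itself is runnable.

```python
# Sketch of the continuity-by-handoff loop. generate_i2v, generate_flf,
# and repair_frame are HYPOTHETICAL stand-ins for ComfyUI graph
# executions; here they are trivial stubs so the control flow can run.

def generate_i2v(start_frame, motion):
    """Stub: pretend to animate from a trusted start frame."""
    return {"frames": [start_frame, f"{start_frame}+{motion}"],
            "last": f"{start_frame}+{motion}"}

def generate_flf(first_frame, last_frame):
    """Stub: pretend to interpolate motion between two known endpoints."""
    return {"frames": [first_frame, last_frame], "last": last_frame}

def repair_frame(frame):
    """Stub: pretend to inpaint a drifted face back to the anchor identity."""
    return f"repaired({frame})"

def needs_repair(frame):
    # In practice: a human eyeball check, or a similarity score vs the anchor.
    return "drift" in frame

# Anchors made and approved before any motion is rendered.
anchor_walk, anchor_stand, anchor_seated = "A_walk", "B_stand", "C_seated"

shot1 = generate_i2v(anchor_walk, motion="gentle-walk")
handoff = repair_frame(shot1["last"]) if needs_repair(shot1["last"]) else shot1["last"]

shot2 = generate_i2v(handoff, motion="approach-bench")
standing = shot2["last"]

shot3 = generate_flf(first_frame=standing, last_frame=anchor_seated)  # the sit-down
shot4 = generate_i2v(shot3["last"], motion="subtle-idle")

film = [shot1, shot2, shot3, shot4]
```

The point of the sketch is the data flow: every shot's input is a frame you already trust, and repair happens only at the boundaries.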
Are there any known LoRAs worth trying?
Yes, but with a caveat.
The most visible public Wan 2.2 LoRAs right now are mostly utility LoRAs, not "keep my two exact characters stable forever" LoRAs.
Two useful families are:
- Wan2.2-Lightning, which reduces generation to about 4 steps and is aimed at much faster iteration. That is valuable because faster iteration lets you test more anchors, more handoffs, and more seed choices in the same time. (Hugging Face)
- Wan2.2-Distill-Loras, which expose a high-noise and low-noise behavior split. Their model card describes high noise as more creative/diverse and low noise as more faithful/stable. For continuity work, the low-noise direction is usually the one to test first. (Hugging Face)
So yes, there are public LoRAs worth trying, but I would frame them as:
- speed / iteration LoRAs
- stability / controllability LoRAs
not as "drop-in identity LoRAs" for your specific two people. That distinction matters. The well-known public Wan LoRAs are mostly aimed at inference behavior, not at preserving your private pair of characters. That is an inference from what is publicly available now. (Hugging Face)
Is inpainting the last frame of each clip the right approach?
Yes, but as a bridge-and-repair technique, not as the whole pipeline.
For your case, I think this is one of the best ideas.
The ideal use is:
- Generate a clip.
- Pick the last good frame.
- If the face drifted, repair that frame.
- Feed that repaired frame into the next clip as the starting image.
That is much more effective than generating each clip from text alone and hoping the same people come back. If you need something more than single-frame repair, Wan 2.2 video inpainting tools now exist. LanPaint supports Wan 2.2 video inpainting with temporal consistency, but its own docs warn that longer sequences become slower and less stable, and it recommends keeping video inpainting to around 40 frames or fewer for better stability. (GitHub)
So my view is:
- single-frame repair: excellent idea
- short-span video inpainting: useful for stubborn shots
- full-movie inpainting workflow: too heavy and unnecessary for your case
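As a rough illustration of "repair only when it drifted," a simple numeric check can flag frames worth inspecting. This is a sketch under loose assumptions: mean absolute pixel difference over a fixed crop is a crude proxy for identity drift, and both the crop box and the 0.15 threshold are arbitrary placeholders to tune.

```python
import numpy as np

# Crude drift check: compare the face region of a clip's last frame against
# the anchor frame. Mean absolute pixel difference is a rough proxy only;
# the crop box and the 0.15 threshold are ARBITRARY placeholders.

def face_drift(anchor: np.ndarray, last_frame: np.ndarray,
               box=(slice(0, 32), slice(0, 32))) -> float:
    """Return mean absolute difference over the face crop, in [0, 1]."""
    a = anchor[box].astype(np.float32) / 255.0
    b = last_frame[box].astype(np.float32) / 255.0
    return float(np.abs(a - b).mean())

def should_repair(anchor, last_frame, threshold=0.15):
    return face_drift(anchor, last_frame) > threshold

rng = np.random.default_rng(0)
anchor = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)

identical = anchor.copy()
drifted = np.clip(anchor.astype(np.int16) + 80, 0, 255).astype(np.uint8)

print(should_repair(anchor, identical))  # False: no drift at all
print(should_repair(anchor, drifted))    # True: heavy shift in the crop
```

A real pipeline would use a face-similarity model rather than raw pixels, but the decision shape is the same: measure at the boundary, repair only above threshold.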
Why I think your scene is actually solvable
Your scene is simple in a good way:
- two characters
- side-by-side walking
- then sitting on a bench
That means you do not need a model to remember a hundred changing details. You mainly need it to preserve:
- the same two faces
- the same clothes
- the same relative body shapes
- the same scale relationship
- the same spatial relationship between the two
That is exactly the kind of problem that anchor-frame workflows can handle well. If you were trying to do a crowded party scene, a fight, or a lot of close-up dialogue cutting, I would be much less optimistic. For two people walking and sitting, continuity-by-handoff is realistic. The official Wan FLF workflow is built for transitions where you know the start and end states and want the model to create the motion between them. (ComfyUI)
My strongest suggestion for your case
Build the film around a character pair bible first.
Before rendering more video, create a tiny library of stills:
- both characters walking side by side
- both characters standing near the bench
- both characters seated on the bench
- optionally one tighter face reference for each person
Do not think of them as "nice images." Think of them as production anchors.
Then use those anchors like this:
Walking shots
Generate from a strong walking still with I2V. Keep motion modest.
Transition shot
Use first/last-frame:
- first frame = trusted standing / arriving pose
- last frame = trusted seated pose
Bench shots
Start from the seated anchor and allow only small pose or camera changes.
That is the workflow I would choose before touching training. (ComfyUI)
My advice on multiple LoRAs
You can load multiple LoRAs, but I would be cautious.
The practical rule is that every loaded LoRA influences the model, whether or not you reference it in the prompt. In some wrappers there are explicit limits and caveats around merging or scheduling multiple LoRAs; for example, the WanVideoWrapper code warns that some LoRA setups cannot be merged under certain low-memory modes and that scheduling can be disabled. (GitHub)
For your project, I would keep it simple:
- base model only, or
- base model + one utility LoRA, or
- base model + one carefully tested identity/style LoRA
I would not stack several creative LoRAs and hope they settle into a stable cast. In practice, that often makes debugging much harder and can worsen identity drift rather than improve it. That last point is partly experience-based reasoning, but it follows from the fact that multi-LoRA behavior is not a magical "smart mix"; it is just multiple influences being applied at once. (GitHub)
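The "multiple influences at once" point is literal. In the standard LoRA formulation, each adapter contributes a low-rank weight delta scaled by alpha/rank, and stacking adapters simply sums the deltas. A toy numpy sketch (random matrices, not real model weights):

```python
import numpy as np

# Standard LoRA math on one linear layer:
#   W' = W + sum_i (alpha_i / rank_i) * B_i @ A_i
# Stacking LoRAs just sums their deltas; there is no arbitration between
# them. Matrices here are tiny random toys, not real model weights.

rng = np.random.default_rng(42)
d_out, d_in = 8, 8
W = rng.normal(size=(d_out, d_in))

def lora_delta(rank, alpha):
    B = rng.normal(size=(d_out, rank))
    A = rng.normal(size=(rank, d_in))
    return (alpha / rank) * (B @ A)

speed_lora = lora_delta(rank=4, alpha=4)  # e.g. a Lightning-style utility LoRA
style_lora = lora_delta(rank=4, alpha=4)  # e.g. a stylistic LoRA

W_one = W + speed_lora
W_two = W + speed_lora + style_lora       # both influences applied at once

x = rng.normal(size=d_in)
# Every loaded LoRA moves the output, whether or not any "trigger word"
# appears in the prompt - the weights themselves have changed.
print(np.linalg.norm(W @ x - W_one @ x))     # nonzero: one LoRA shifts output
print(np.linalg.norm(W_one @ x - W_two @ x)) # nonzero: second LoRA shifts it further
```

This is also why two LoRAs that each look fine alone can destabilize each other: their deltas add on the same weights with no coordination.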
Why do some LoRAs have trigger words and many do not?
Because "trigger word" is a training convention, not a law of nature.
A LoRA may have a trigger word when it was trained with a specific token deliberately inserted into captions. But many LoRAs are trained with more natural captions, or their metadata is incomplete, or the UI simply does not surface the tag information well. Recent AI Toolkit issue reports explicitly show cases where a LoRA was trained with a trigger_word in config, yet the resulting .safetensors did not expose the expected metadata cleanly in downstream tools. (GitHub)
There is another important wrinkle: a LoRA can still visibly affect output even without the trigger word in the prompt. There is a long-standing sd-scripts issue about exactly this behavior. So "trigger word" does not mean "the LoRA is off until this word appears." Loading the LoRA already changes the model; the trigger word is more like a steering handle for a specific learned concept. (GitHub)
A useful way to think about it is:
- loading the LoRA = changes the model weights used for generation
- trigger word = tells the text encoder which learned concept to emphasize
That is why some LoRAs feel active even with no visible trigger, while others feel weak unless you use the right token. The training style and saved metadata both matter. (GitHub)
What I would do on your 8 GB machine
I would use the official Wan 2.2 5B path and keep the process very disciplined:
1. Build anchors before rendering motion
Make the two people look right in stills first. Do not move on until they do.
2. Use I2V as your continuity backbone
Every new shot should start from a frame you already trust.
3. Use FLF for the sit-down moment
This is exactly the kind of boundary where first/last-frame control is stronger than free prompting.
4. Use standard VAE decode for finals
There is a documented ComfyUI issue where VAE Decode (Tiled) causes blinking and discoloration with Wan output, while standard VAE Decode is fine. If you can fit the final decode, avoid tiled decode for final renders. (GitHub)
5. Use Lightning only if you want faster previews
Lightning is useful because it makes iteration faster, not because it magically solves character identity. (Hugging Face)
That is the cheapest serious route.
What I would avoid right now
I would avoid these three things for now:
1. Full Wan video LoRA training
The hardware guidance is still above your current VRAM comfort zone. (GitHub)
2. Heavy multi-LoRA stacks
Too many moving parts, too hard to debug, too easy to destabilize the pair.
3. Long uncontrolled clips
Short connected shots are easier to keep coherent than one long free-running shot.
If you later decide to train something
If you eventually decide that you really do need a custom identity asset, then the better stepping stone is usually:
- image-side LoRA first
- video-side LoRA later
Current Wan training options do exist. Musubi-tuner supports Wan 2.1/2.2 LoRA training, DiffSynth-Studio advertises Wan support with memory-management and LoRA training, and AI Toolkit-based UIs have added Wan video training options. But the hardware and workflow complexity still make these "later" moves rather than "first fix" moves for an 8 GB setup. (GitHub)
One interesting detail from Musubi-tuner discussions is that Wan 2.2 is so much larger than SDXL that a given LoRA rank is correspondingly more capable, and experienced users report often training Wan around rank 16 / alpha 16 for simple character LoRAs. That is useful background for later, because it suggests "bigger rank" is not automatically better on Wan. (GitHub)
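A quick back-of-envelope shows why modest ranks carry far on a large model. The layer width below is illustrative, not Wan 2.2's real dimensions:

```python
# Why modest ranks go far on a big model: LoRA parameter count per linear
# layer is rank * (d_in + d_out), and the update is scaled by alpha / rank.
# The layer width below is ILLUSTRATIVE, not Wan 2.2's actual dimensions.

def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

d = 5120  # hypothetical transformer width
full = d * d                 # full fine-tune of one square layer
r16 = lora_params(d, d, 16)  # rank-16 LoRA on the same layer

print(full)        # 26_214_400 weights for one full layer
print(r16)         # 163_840 weights
print(full / r16)  # 160x smaller

# alpha = rank (e.g. 16/16) keeps the effective scale alpha/rank at 1.0,
# so "bigger rank" mainly buys capacity, not a stronger effect.
scale = 16 / 16
```

The wider the base model, the more expressive any fixed rank becomes relative to the concept being taught, which matches the rank-16 reports above.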
My concrete recommendation for your exact project
If this were my project, I would do this:
- Make one excellent still of the pair walking.
- Make one excellent still of the pair seated on the bench.
- Use the walking still to generate a short walking shot with I2V.
- Take the best end frame.
- Repair that end frame if the faces drifted.
- Use the repaired frame as the start of the next shot.
- Use FLF from "standing near bench" to "seated on bench."
- Render final clips with standard VAE decode if possible.
- Only then test Lightning for faster preview cycles. (ComfyUI)
That is the path I think gives you the best odds of finishing the film with the same two recognizable people without needing immediate custom training.
Bottom line
For your case:
- public Wan LoRAs are worth trying mainly for speed and stability, not as a complete identity fix. (Hugging Face)
- inpainting the last frame is a good idea, especially as a bridge into the next clip. (GitHub)
- multiple LoRAs can be loaded, but I would keep that to a minimum. (GitHub)
- trigger words exist because of how a LoRA was trained and how metadata was saved; they are not always present, and LoRAs can still affect output without them. (GitHub)
- the best fix for you is not "more prompting." It is anchor frames + I2V + FLF + selective repair. (ComfyUI)
Use Wan 2.2 5B as the base, because the official ComfyUI guide is specifically aimed at getting that workflow onto 8 GB VRAM with native offloading. For this project, keep the toolset narrow: I2V for continuity, FLF for the sit-down transition, standard VAE Decode for finals if you can fit it, and only optional short-span repair if a key frame drifts. That is the cleanest 8 GB path. (docs.comfy.org)
The overall production idea
Treat the whole piece as four linked shots, not one long generation:
- Walk
- Approach bench
- Sit
- Seated
Each shot hands one trusted frame to the next shot.
That is the backbone.
Before generating any video
0. Build the three anchors first
Make these stills before you render motion:
- Anchor A – walking pair
- Anchor B – standing / arriving near bench
- Anchor C – seated on bench
Do not move on until all three are good.
These anchors should lock:
- the same two faces
- the same outfits
- the same hair
- the same relative height and body shape
- the same bench / environment style
For your film, these anchors matter more than another LoRA.
Shot 1 – Walk
Goal
Get a short, believable walking shot where the pair still looks correct.
Workflow
Use I2V from Anchor A.
Why I2V here
You already know what the pair should look like. The job is not "invent two people." The job is "animate these two people a little." The Wan-native workflow already supports this logic directly. (docs.comfy.org)
What to ask for
Keep it modest:
- side-by-side walking
- small body motion
- small camera movement
- no dramatic pose change
- no major turn away from camera
This shot should be short. Do not try to make the entire walking section in one go. On 8 GB, short controlled clips are much easier to keep coherent than long uncontrolled ones.
What to save
At the end of Shot 1, save:
- the best final frame
- optionally one mid-shot frame if the end frame is weak
That final frame becomes the handoff into Shot 2.
Shot 2 – Approach bench
Goal
Move from walking to "near the bench" without recasting the pair.
Workflow
Use I2V again, but now the input image is the repaired last frame from Shot 1.
What to do first
Inspect the last frame from Shot 1.
If the faces are slightly off:
- repair that frame first
- then use the repaired version as input
This is where your "inpaint the last frame" idea is genuinely useful.
Why this works
You are turning the clip boundary into a controlled handoff. Instead of asking the model to recreate the pair from text, you are telling it, in effect:
"Start here. These are the people."
That is exactly the kind of explicit continuity help current Wan workflows benefit from. (docs.comfy.org)
What to ask for
Again, keep it conservative:
- slowing down
- moving toward bench
- slight posture shift
- maybe slight turn of heads
- no full sit yet
Do not combine âapproach benchâ and âsit downâ in this shot. Split them.
What to save
Save the best frame where the pair is clearly standing or just about to sit.
That frame becomes the first endpoint for Shot 3.
Shot 3 – Sit
Goal
Create the actual standing-to-seated transition.
Workflow
This is the one shot where I would switch away from pure I2V and use FLF.
Use:
- first frame = best standing-near-bench frame from Shot 2
- last frame = Anchor C or a repaired seated still
Why FLF here
This is the one place in your film where you know both ends very clearly:
- start: standing near bench
- end: seated on bench
That is exactly what the official Wan FLF workflow is designed for. (docs.comfy.org)
What to ask for
Keep the motion plain:
- both lower into the bench
- small coordinated posture change
- no complex hand gestures
- no big camera move
This shot should also be short.
If it fails
If the FLF output reaches the seated pose but the faces slip:
- repair the seated endpoint still
- rerun the transition using the corrected endpoint
That is much cheaper than trying to fix a whole long clip afterward.
Shot 4 – Seated
Goal
Hold the pair on the bench with minimal drift.
Workflow
Go back to I2V, using either:
- the best final frame from Shot 3, or
- Anchor C if Shot 3 ends weakly
What to ask for
Only subtle movement:
- breathing
- head turn
- slight glance
- small hand movement
- slow camera drift
Bench shots are where you can make the film feel longer without asking the model to do too much. This is the safest place to get duration.
Editorial trick
If you have a really good seated clip, use editing to stretch its screen time:
- hold a good frame briefly
- crossfade into the next seated clip
- use a gentle camera push in post if needed
That is often smarter than generating more motion.
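The crossfade itself is simple math if you ever want to do it programmatically rather than in an editor: a per-frame alpha blend across the overlap. Toy arrays stand in for decoded video frames:

```python
import numpy as np

# A crossfade in post is a per-frame alpha blend between the tail of one
# seated clip and the head of the next. Frames here are small random toy
# arrays standing in for decoded video frames.

def crossfade(clip_a_tail, clip_b_head):
    """Blend two equal-length frame lists; t ramps 0 -> 1 across the overlap."""
    n = len(clip_a_tail)
    out = []
    for i, (a, b) in enumerate(zip(clip_a_tail, clip_b_head)):
        t = i / (n - 1) if n > 1 else 1.0
        out.append(((1 - t) * a + t * b).astype(a.dtype))
    return out

rng = np.random.default_rng(1)
tail = [rng.random((4, 4, 3)).astype(np.float32) for _ in range(8)]
head = [rng.random((4, 4, 3)).astype(np.float32) for _ in range(8)]

faded = crossfade(tail, head)
# First blended frame equals the outgoing clip, last equals the incoming one.
```

Any editor's crossfade does exactly this under the hood, which is why it is such a cheap way to buy screen time without generating new motion.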
Repair policy
When to repair
Repair only these:
- the last frame of Shot 1
- the last frame of Shot 2
- the seated endpoint for Shot 3
- any hero close-up or especially visible face
When not to repair
Do not start frame-by-frame fixing everything unless the clip is genuinely important.
For your project, selective repair is the efficient path.
If more than one or two frames are broken
Then a short-span video inpainting pass can make sense. LanPaint now supports Wan 2.2 video inpainting, but it is explicitly positioned for shorter spans, not big full-scene rewrites. (github.com)
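If a repair span runs longer than that guidance, it can be split into compliant chunks first. A small sketch; the 4-frame overlap for blending chunk seams is my own assumption, not something LanPaint prescribes:

```python
# LanPaint's guidance suggests keeping video-inpainting spans to roughly
# 40 frames. This helper splits a longer repair range into compliant
# chunks; the seam overlap is a HYPOTHETICAL extra for blending.

def split_repair_span(start: int, end: int, max_frames: int = 40, overlap: int = 4):
    """Return (chunk_start, chunk_end) pairs covering [start, end), each
    at most max_frames long, with `overlap` shared frames between chunks."""
    chunks = []
    pos = start
    while pos < end:
        chunk_end = min(pos + max_frames, end)
        chunks.append((pos, chunk_end))
        if chunk_end == end:
            break
        pos = chunk_end - overlap
    return chunks

# A 100-frame repair span becomes three chunks of at most 40 frames each.
spans = split_repair_span(0, 100)
print(spans)
```

For your film this should rarely be needed; it only matters if a whole stretch of a shot drifts rather than a single bridge frame.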
LoRA plan for this workflow
What I would do
Keep LoRA usage minimal:
- Option A: no LoRA
- Option B: one utility LoRA for faster previews
- Option C: one carefully tested style/identity LoRA
What I would not do
Do not stack several stylistic LoRAs while you are still solving continuity.
That makes it much harder to tell whether the model is drifting because of:
- the shot design,
- the clip handoff,
- or the combined LoRA influences.
If you want faster previews
Try Wan2.2-Lightning for preview passes only. It is a 4-step LoRA family designed to speed Wan 2.2 generation, which is useful for look-dev and shot testing. Once you like the handoff strategy, do cleaner final passes with your preferred final setup. (huggingface.co)
Decode plan on 8 GB
Previews
For rough previews, do whatever keeps the graph alive.
Finals
For final output, try to avoid VAE Decode (Tiled) if possible.
There is a documented ComfyUI issue where Wan output shows blinking/discoloration with tiled decode, while standard VAE Decode does not show the same problem. (github.com)
So the practical rule is:
- preview flexibly
- final decode carefully
If standard decode is too heavy at full size, render smaller for continuity approval, then do the final-quality pass only on the shots you are actually keeping.
The exact order I would follow
Pass 1 – still approval
Make and approve:
- Anchor A
- Anchor B
- Anchor C
Pass 2 – motion approval
Generate:
- Shot 1 walk
- Shot 2 approach
- Shot 3 sit
- Shot 4 seated
Keep them short.
Pass 3 – continuity approval
At every shot boundary:
- inspect last frame
- repair if needed
- feed forward
Pass 4 – final render approval
Render only the approved shots with your better decode path.
Pass 5 – edit
Assemble in your video editor.
Do not ask ComfyUI to do every bit of pacing that a normal edit can do more safely.
Suggested shot-by-shot checklist
Shot 1 – Walk
- Input = Anchor A
- Mode = I2V
- Duration = short
- Motion = gentle walk
- Output = save best final frame
Shot 2 – Approach bench
- Input = repaired last frame from Shot 1
- Mode = I2V
- Duration = short
- Motion = approach / slow / prepare to sit
- Output = save best standing-near-bench frame
Shot 3 – Sit
- First frame = best standing-near-bench frame
- Last frame = Anchor C or repaired seated still
- Mode = FLF
- Duration = short
- Motion = simple sit-down transition
- Output = save best seated final frame
Shot 4 – Seated
- Input = best seated frame
- Mode = I2V
- Duration = short to medium
- Motion = very subtle
- Output = use this shot to create calm screen time
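The checklist above can also be kept as data and sanity-checked, so every shot's input is guaranteed to trace back to an approved anchor or an earlier shot's saved frame. Field names here are my own convention, not any ComfyUI format:

```python
# The shot checklist as data. Field names are my own convention, not a
# ComfyUI format. The check enforces the handoff rule: every input must
# be an approved anchor or a (possibly repaired) frame saved earlier.

ANCHORS = {"anchor_A", "anchor_B", "anchor_C"}

SHOTS = [
    {"name": "walk",     "mode": "I2V", "input": "anchor_A",       "output": "walk_last"},
    {"name": "approach", "mode": "I2V", "input": "walk_last",      "output": "standing_frame"},
    {"name": "sit",      "mode": "FLF", "input": "standing_frame", "last": "anchor_C",
     "output": "seated_last"},
    {"name": "seated",   "mode": "I2V", "input": "seated_last",    "output": None},
]

def validate_handoff(shots, anchors):
    known = set(anchors)
    for shot in shots:
        if shot["input"] not in known:
            raise ValueError(f"{shot['name']}: input {shot['input']!r} has no trusted source")
        if shot["mode"] == "FLF" and shot.get("last") not in known:
            raise ValueError(f"{shot['name']}: FLF last frame is not a trusted anchor/frame")
        if shot["output"]:
            known.add(shot["output"])
    return True

print(validate_handoff(SHOTS, ANCHORS))  # True: every shot chains to a trusted frame
```

Writing the plan down this way makes it obvious when a shot would be generated "from text only" with no trusted source, which is exactly the failure mode to avoid.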
What not to do for this film
Do not:
- generate each shot from text only
- ask one shot to do too many state changes
- stack multiple unknown LoRAs
- rely on trigger words as your main continuity tool
- start Wan video LoRA training on 8 GB as your first fix
The official toolchain direction is already telling you the same thing indirectly: 5B on 8 GB for inference, I2V/FLF for structure, and much higher memory expectations for training. (docs.comfy.org)
My simplest version of the plan
If you want the cleanest possible summary:
- Walk = I2V from walking anchor
- Approach bench = I2V from repaired last frame of walk
- Sit = FLF from standing-near-bench to seated anchor
- Seated = I2V from seated anchor
- Repair only the bridge frames
- Use standard VAE Decode for finals if possible
- Use Lightning only for fast previews
That is the shot-by-shot workflow I would use on an 8 GB machine for your exact scene.