Mega v2 GPU issues

#97
by cerebrohs - opened

I'm trying Mega v2 and copied everything from the screenshot (except I kept Video Combine and didn't use Preview Image as the screenshot indicated), but so far I've been unsuccessful because I keep running low on VRAM.

Can my GPU (3060 Ti) use Mega v2? If so, what do I need to do to not run out of VRAM?

TIA :)

Owner

8GB of VRAM is really limited for video generation. You should still be able to make videos, but at lower resolutions and lower frame counts. You might be able to use the "context window" ComfyUI node to do longer videos in chunks that your VRAM can handle.
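For anyone unfamiliar with the idea, context windowing just splits a long frame sequence into overlapping chunks that each fit in VRAM, then blends the overlaps back together. A minimal sketch of how the chunk boundaries work (the window and overlap values here are illustrative assumptions, not the node's actual defaults or implementation):

```python
# Illustrative sketch of context windowing: split a long frame range into
# overlapping chunks so each chunk fits in VRAM. Window/overlap sizes are
# example assumptions, not the ComfyUI node's actual defaults.

def context_windows(total_frames, window=16, overlap=4):
    """Return (start, end) frame index pairs for overlapping chunks."""
    if window <= overlap:
        raise ValueError("window must be larger than overlap")
    step = window - overlap
    windows = []
    start = 0
    while start < total_frames:
        end = min(start + window, total_frames)
        windows.append((start, end))
        if end == total_frames:
            break
        start += step
    return windows

# e.g. a 40-frame video with 16-frame windows overlapping by 4:
print(context_windows(40))
```

Each chunk is generated separately, so peak VRAM scales with the window size rather than the full frame count; the overlap region is what gives the sampler a chance to keep the chunks temporally consistent.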

I'll try and see how to use the context window node. Thanks for the tip!

Noob question: is Mega v2 more taxing than v10? I never had VRAM issues with v10, but Mega v2 just drains my VRAM.

Yeah, for Mega v2 I2V I'm getting an error message that I don't have enough GPU VRAM - running on an RTX 4090 card.

Phr00t - unfortunately I couldn't get the context windows node to work with the checkpoint. It kept throwing tensor mismatch errors - I think it's because of the VACE elements of the Mega checkpoint (there seem to be quite a few reports of general issues running the context window node with VACE models)... Even when the I2V nodes are bypassed, the issue remains because of the built-in VACE components.

Though, I tried using the Wan context window node with a different model, and it feels like another case of developers releasing these things without any damned explanation of how to use them. It's like someone writing a codebook and then expecting everyone to decipher their unique code. It's infuriating.

I have no idea how best to set it up. It took absolutely ages, and when it generated the video, every context window felt like a separate generation - as if I had created several generations of a prompt and stitched them together - with no continuity or speed benefit at all.

Perhaps I'm just using it wrong, or perhaps, like most stuff in Comfy, it's a glittering gift full of promise that turns out to be massively more underwhelming - and more useless for lower-VRAM users - than advertised...

Like Torch Compile and Block Swapping: great concepts for us VRAM peons with less than 16GB, but when they rocket VRAM usage to 98%, cause Comfy to silently hang on VRAM overloads, and break otherwise working workflows, they're really nothing more than a nice idea poorly executed...
