
WAN 2.2 Image to video (I2V) 14B with SVI 2.0 (KJ Wrapper) for 12 GB VRAM workflow

Updated: Apr 4, 2026

Tags: tool, rocm, low vram, i2v, 12 gb, wan2.2


Type: Workflows
Stats: 253
Published: Mar 9, 2026
Base Model: Wan Video 2.2 I2V-A14B
Hash (AutoV2): AB271393CD

Image used to create the example video: https://civitai.com/images/123232086

WAN 2.2 14B I2V Workflow with SVI 2.0 (Long Video Generation)

I couldn't find a workflow based on ComfyUI-WanVideoWrapper nodes that uses the SVI 2.0 LoRA, so I decided to create my own.

This workflow includes the following optimizations:

  1. Block swap

  2. TeaCache

  3. TorchCompile

  4. Cached prompts, text embeddings, and image embeddings

  5. RifleX RoPE frame interpolation in the sampler

  6. SageAttention

  7. Smart image resizing to ensure the first frame resolution is divisible by 32 (better resolution for WAN 2.2)

  8. ApplyNAG (enables the negative prompt at cfg=1.0)

Feel free to clone the 'Extra segment' subgraph to increase the total video length.
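The "smart resizing" in item 7 can be sketched in a few lines of Python. This is an illustrative stand-in for the resize nodes in the workflow; the function name and rounding strategy below are my own, not taken from the workflow itself:

```python
# Sketch of item 7: snap the first-frame resolution to multiples of 32,
# which WAN 2.2 handles better than arbitrary sizes.
# (Illustrative only; the workflow does this with resize nodes, not code.)

def snap_to_32(width: int, height: int) -> tuple[int, int]:
    """Round each side to the nearest multiple of 32 (minimum 32)."""
    snap = lambda x: max(32, round(x / 32) * 32)
    return snap(width), snap(height)

print(snap_to_32(500, 620))  # -> (512, 608)
print(snap_to_32(480, 608))  # already aligned -> (480, 608)
```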

Tested on:

  • Kubuntu 25.10 (kernel 6.18)

  • ComfyUI 16.4

  • PyTorch 2.10+rocm7.1; Triton 3.6.0; SageAttention 1.0.6 (for ROCm)

  • AMD RX 6700 XT 12 GB

  • 32 GB RAM

P.S. (not included in this workflow): to double the output video's FPS, add a RIFE VFI node before the VHS Video Combine node, then double frame_rate in the VHS Video Combine node.
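The arithmetic behind that tip can be sketched as follows. This is only the bookkeeping around 2x interpolation; the actual frame synthesis happens inside the RIFE VFI node:

```python
# Hedged sketch: 2x frame interpolation inserts one new frame between
# each pair of existing frames, so n frames become 2n-1, and frame_rate
# must double to keep the clip's duration unchanged.

def doubled_fps_settings(frames: int, fps: int) -> tuple[int, int]:
    """Return (new_frame_count, new_frame_rate) after 2x interpolation."""
    return 2 * frames - 1, 2 * fps

print(doubled_fps_settings(49, 16))  # -> (97, 32)
```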

🚀 Prerequisites

  • At least 12 GB VRAM

  • SageAttention installed and enabled

  • ROCm 7.1 / CUDA 12+ compatible system

📥 Model links:

🎬 WAN:

You can try Q5 or even Q6 versions

🖼️ WAN VAE:

πŸ“ T5 Text encoder

You can use any T5 encoder, but avoid the 'scaled' versions.

πŸ‘οΈ Tae for WAN 2.1 (for previews, optional)

♾️ SVI 2.0 Pro LoRA

πŸ“ Folder structure

/path_to_ComfyUI/
β”œβ”€β”€ models/
β”‚   β”œβ”€β”€ unet/
β”‚   β”‚   β”œβ”€β”€ DasiwaWAN22I2V14BSynthseduction_q4High.gguf
β”‚   β”‚   └── DasiwaWAN22I2V14BSynthseduction_q4Low.gguf
β”‚   β”œβ”€β”€ text_encoders/
β”‚   β”‚   └── umt5-xxl-enc-fp8_e4m3fn.safetensors
β”‚   β”œβ”€β”€ vae/
β”‚   β”‚   └── wan_2.1_vae.safetensors
β”‚   β”œβ”€β”€ vae_approx/
β”‚   β”‚   └── taew2_1.safetensors
β”‚   └── loras/
β”‚       β”œβ”€β”€ SVI_v2_PRO_Wan2.2-I2V-A14B_HIGH_lora_rank_128_fp16.safetensors
β”‚       └── SVI_v2_PRO_Wan2.2-I2V-A14B_LOW_lora_rank_128_fp16.safetensors

📦 Used nodes

🚀 Performance

For a 480x608 image at 49 frames (16x3+1) on an AMD RX 6700 XT 12 GB (AOTRITON + SageAttention + TunableOps):

  • First run: ~30-35 min for the first segment (compilation + tuning)

  • Subsequent runs: ~5-6 min per segment
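The "49 (16x3+1)" frame count above follows from the output frame rate: at 16 fps, a 3-second segment is 16x3 frames plus the initial frame. A minimal sketch of that arithmetic (the 16 fps default is my reading of the example, not stated explicitly in the workflow):

```python
# Hedged sketch of the frame-count arithmetic behind "480x608@49 (16x3+1)":
# seconds of video at a given fps, plus the first (input) frame.

def segment_frames(seconds: int, fps: int = 16) -> int:
    """Frame count for one segment: fps * seconds + 1."""
    return fps * seconds + 1

print(segment_frames(3))  # -> 49, matching the example above
```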

⚠️ Known Issues & Fixes

  • OOM: increase the block swap parameter, or decrease the image resolution, FPS, or length in seconds

  • System freeze: disable fun_or_fl2v_model (if enabled), and upgrade the Linux kernel to 6.18 or newer

  • First run slow: this is normal; compilation takes time for every new parameter set (resolution, frame count, etc.), especially with TUNABLE_OPS enabled.