Video Frame Interpolation + Upscale Workflow

Updated: May 9, 2026

Type: Workflows

Stats: 25 reviews

Published: May 9, 2026

Base Model: Qwen 2

Hash: AutoV2 7544008E8B

AIKSK

This ComfyUI workflow handles video frame interpolation, video upscaling, motion smoothing, and final high-resolution enhancement. It takes an input video, increases its frame rate through AI interpolation, upscales the frames with FlashVSR v1.1, and exports a finished MP4 while preserving the original audio.

The workflow combines two important video post-production steps in one graph: frame interpolation and video super-resolution. Frame interpolation makes the motion smoother by generating new intermediate frames between existing frames. Video upscaling improves visual clarity by enlarging and enhancing the frames. Used together, these two steps are very useful for AI video creators who want smoother motion and sharper final output before publishing.

The workflow starts with VHS_LoadVideo. This node imports the source video, extracts the video frames, reads the audio track, and passes the frame sequence into the next processing stage. In the included setup, the video is loaded at a forced rate of 16 fps, with every frame selected and no frame skipping. This gives the interpolation stage a controlled input frame rate.
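
The loader settings described above can be sketched as plain values. The field names below are illustrative for readability, not necessarily the exact VHS_LoadVideo widget names:

```python
# Illustrative sketch of the VHS_LoadVideo settings described above.
# Field names are assumptions, not exact node widget names.
load_settings = {
    "force_rate": 16,        # normalize the input to 16 fps
    "skip_first_frames": 0,  # start from the first frame
    "select_every_nth": 1,   # keep every frame (no skipping)
}

def loaded_frame_count(duration_seconds: float, forced_fps: int = 16) -> int:
    """With no skipping, the interpolation stage receives duration * fps frames."""
    return int(duration_seconds * forced_fps)
```

For example, a 5-second clip forced to 16 fps hands 80 frames to the interpolation stage.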

After loading the video, the workflow uses LayerUtility: ImageScaleByAspectRatio V2 for frame preprocessing. The aspect ratio is set to original, the fit mode is fill, the scaling method is Lanczos, and the longest side is limited to about 960 pixels. This normalizes the video frames before interpolation and upscaling, and it also keeps VRAM usage manageable, because video processing becomes heavy when the input resolution is too large.
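
The "longest side to 960" math can be sketched as below. The actual node's rounding behavior may differ; rounding to even numbers is assumed here because H.264 with yuv420p requires even dimensions:

```python
def scale_to_longest_side(width: int, height: int, target: int = 960) -> tuple:
    """Scale a frame so its longest side becomes `target`, keeping aspect ratio.
    Rounds to the nearest even number for yuv420p compatibility (assumption;
    the node may round differently)."""
    factor = target / max(width, height)
    def even(v: float) -> int:
        return int(round(v / 2)) * 2
    return even(width * factor), even(height * factor)
```

A 1920x1080 source becomes 960x540, and a 720x1280 portrait clip becomes 540x960.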

The frame interpolation stage uses GIMM-VFI. The workflow loads gimmvfi_r_arb_lpips_fp32.safetensors through DownloadAndLoadGIMMVFIModel, then sends the preprocessed frames into GIMMVFI_interpolate. In the included setup, the interpolation factor is set to 2, so the workflow generates intermediate frames and effectively doubles the frame rate. Since the source video is forced to 16 fps, the final export is configured at 32 fps, producing smoother motion without changing the clip's duration.
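
The frame-rate arithmetic is simple multiplication. Note that VFI implementations differ in exact frame counts (some emit factor*N frames, others factor*(N-1)+1); the sketch below assumes the simple factor*N case:

```python
def interpolated_output(input_fps: int, frame_count: int, factor: int = 2) -> tuple:
    """Output fps and approximate output frame count for factor-based VFI.
    Assumes factor*N frames; some VFI nodes emit factor*(N-1)+1 instead."""
    return input_fps * factor, frame_count * factor
```

With the included settings, a 16 fps clip of 80 frames comes out at 32 fps with roughly 160 frames, so playback duration stays at 5 seconds.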

This interpolation step is useful for AI-generated videos, low-frame-rate clips, animation previews, digital human videos, product motion shots, image-to-video outputs, and older videos that need smoother movement. Many AI video generation tools produce short clips with limited frame rates or slightly choppy motion. GIMM-VFI helps improve motion continuity by creating additional frames between the original frames.

After interpolation, the workflow sends the enhanced frame sequence into FlashVSRNode. The FlashVSR model is set to FlashVSR-v1.1, the mode is set to full, and the scale value is set to 3. This means the workflow performs a strong 3x video super-resolution pass after frame interpolation. The FlashVSR stage is responsible for improving frame clarity, restoring details, sharpening edges, and producing a larger final video.
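
The output resolution follows directly from the preprocessing size and the scale value; a minimal sketch:

```python
def upscaled_size(width: int, height: int, scale: int = 3) -> tuple:
    """Output frame size after an integer super-resolution scale factor."""
    return width * scale, height * scale
```

A 960x540 interpolated frame becomes 2880x1620 after the 3x pass.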

FlashVSR is especially useful because it is designed for video super-resolution rather than simple still-image upscaling. A normal image upscaler may improve individual frames, but it can create inconsistent textures across frames, which may cause flicker in motion. A video super-resolution route is more suitable for moving footage because it is designed to process frame sequences more coherently.

The workflow enables tiled VAE and tiled DiT options inside FlashVSRNode. These tiled processing options are important for practical video enhancement because high-resolution video frames can consume significant GPU memory. Tiling helps reduce memory pressure and makes the workflow more usable for larger videos or cloud deployment.
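
Why tiling helps can be seen from simple arithmetic: instead of pushing one huge frame through the model, the frame is split into overlapping tiles that are processed one at a time, so peak memory scales with one tile rather than the whole frame. The tile size and overlap below are illustrative placeholders, not the node's actual defaults:

```python
import math

def tile_count(width: int, height: int, tile: int = 256, overlap: int = 16) -> int:
    """Number of overlapping tiles a frame splits into for tiled processing.
    `tile` and `overlap` are placeholder values, not the node's defaults."""
    stride = tile - overlap
    cols = math.ceil((width - overlap) / stride)
    rows = math.ceil((height - overlap) / stride)
    return cols * rows
```

With these placeholder settings, a 2880x1620 frame splits into 84 tiles, each far cheaper to process than the full frame.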

The final stage uses VHS_VideoCombine. This node takes the processed frames from FlashVSR and combines them back into a video file. The original audio from VHS_LoadVideo is also connected into the final combine node, so the output video keeps the source audio track. This is important for videos that already contain narration, dialogue, music, sound effects, or lip-sync audio.

The output format is H.264 MP4 with yuv420p pixel format, CRF 19, metadata saving enabled, and save output enabled. This makes the final video easy to upload to YouTube, Bilibili, RunningHub, Civitai, and social media platforms. The output frame rate is set to 32 fps, matching the doubled frame rate after interpolation.
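
For reference, a roughly equivalent standalone export could be built with standard ffmpeg flags. The frame pattern and file names below are placeholders, and this is a sketch of the same settings rather than what the node literally runs:

```python
def ffmpeg_export_cmd(frames: str, audio: str, out: str,
                      fps: int = 32, crf: int = 19) -> list:
    """Build a standalone ffmpeg command matching the export settings above:
    H.264, yuv420p, CRF 19, 32 fps, with the source audio muxed back in.
    File names and the frame pattern are placeholders."""
    return [
        "ffmpeg",
        "-framerate", str(fps), "-i", frames,  # interpolated/upscaled frames
        "-i", audio,                           # original audio track
        "-c:v", "libx264", "-crf", str(crf),
        "-pix_fmt", "yuv420p",
        "-c:a", "aac", "-shortest",
        out,
    ]
```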

This workflow is especially useful as a final post-processing pipeline. After generating a video from text-to-video, image-to-video, InfiniteTalk, digital human, lip-sync, product animation, or AI short film workflows, users can pass the result through this graph to make it smoother and sharper. It can act as the final “polish stage” before publishing.

Main features:

- Video frame interpolation workflow

- Video upscaling workflow

- GIMM-VFI interpolation support

- Uses gimmvfi_r_arb_lpips_fp32 model

- 2x frame interpolation setup

- 16 fps input, 32 fps output

- FlashVSR v1.1 video super-resolution

- 3x upscale configuration

- Aspect-ratio-preserving preprocessing

- Lanczos frame scaling

- Tiled VAE support

- Tiled DiT support

- Original audio preserved

- H.264 MP4 final export

- Suitable for AI video post-production

Recommended use cases:

AI video frame interpolation, video upscaling, smoother motion generation, low-frame-rate video repair, AI animation enhancement, digital human video polishing, InfiniteTalk output enhancement, image-to-video post-processing, text-to-video enhancement, product video upscaling, social media video preparation, YouTube video publishing, Bilibili video publishing, RunningHub workflow showcase, Civitai demo video preparation, and final AIGC video polishing.

Suggested workflow:

Start by loading your source video into VHS_LoadVideo. Use a video with clear motion and acceptable source quality. The workflow can improve smoothness and resolution, but it cannot fully repair a video that is extremely compressed, heavily artifacted, or visually broken.

Use the forced frame rate carefully. In this workflow, the input is forced to 16 fps and then interpolated by a factor of 2, resulting in a 32 fps output. This is a practical setup for smoothing motion without creating an overly large frame count. If you change the input frame rate or interpolation factor, make sure the final output frame rate is adjusted accordingly.

Keep select_every_nth set to 1 for final output. This ensures every input frame is processed. Skipping frames may save time, but it can reduce motion continuity and weaken the interpolation result. For production-quality output, processing every frame is usually better.

Use the aspect-ratio scaling stage to control processing size. The included setup keeps the original aspect ratio and scales the longest side to around 960 before interpolation and FlashVSR. This helps balance quality, speed, and VRAM usage. If your source video is already very large, reduce the preprocessing size first. If the source is small, this stage helps prepare it for cleaner enhancement.

Run GIMM-VFI interpolation before upscaling. This order is important. Interpolating before upscaling means the workflow creates smoother motion first, then enhances the larger and smoother frame sequence through FlashVSR. If you upscale first and then interpolate, processing may become heavier and artifacts may become more visible.

Use FlashVSR v1.1 as the final visual enhancement stage. The included scale value is 3, which creates a strong enlargement. This is useful when the source video is small or soft. If the result looks too sharp or too heavy, you can adjust preprocessing size, output compression, or test different source resolutions.

Keep tiled VAE and tiled DiT enabled if your GPU memory is limited. Video enhancement is much heavier than still-image processing. Tiling helps the workflow handle larger frames more safely. If processing fails due to VRAM, reduce the input scale-to-length value or process a shorter test clip first.

Preserve the original audio through VHS_VideoCombine. This workflow connects the original audio from the input video directly into the final output. This is useful for music videos, narration videos, talking character videos, and lip-sync clips. Make sure the output frame rate and video duration remain consistent so the audio stays aligned.
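
A quick sanity check for audio alignment is to compare durations before and after processing; if the frame count and the frame rate change by the same factor, the duration is unchanged and the untouched audio track stays in sync:

```python
def duration_matches(in_frames: int, in_fps: float,
                     out_frames: int, out_fps: float,
                     tolerance: float = 0.02) -> bool:
    """True if input and output video durations agree within `tolerance`
    seconds, which is what keeps the original audio aligned."""
    return abs(in_frames / in_fps - out_frames / out_fps) <= tolerance
```

For example, 80 frames at 16 fps and 160 frames at 32 fps are both 5 seconds, so the audio stays aligned; exporting those 160 frames at the wrong frame rate would fail this check.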

For long videos, test a short section first. Video interpolation and upscaling can take time. Before processing a long video, use a short clip to confirm the motion smoothness, sharpness level, audio sync, and export settings. Once the test result is correct, run the full video.

When evaluating the result, do not only look at resolution. Check motion continuity, frame consistency, flicker, edge quality, face stability, hand stability, text/logo clarity, and audio sync. A good result should look smoother and sharper without introducing unstable artifacts.

For AI-generated videos, this workflow works best as a final polish stage. Generate the video first, then use this workflow to improve frame rate and resolution. For digital human videos, check mouth movement carefully after interpolation. For product videos, inspect logos and edges. For anime clips, check line stability and avoid over-sharpened outlines.

This workflow is designed as a practical video post-processing pipeline for ComfyUI users. It combines GIMM-VFI frame interpolation, FlashVSR v1.1 upscaling, audio preservation, and MP4 export into one clean graph. It is especially useful for creators who want smoother and sharper AI-generated videos before publishing them on YouTube, Bilibili, RunningHub, Civitai, or social media.

🎥 YouTube Video Tutorial

Want to know what this workflow actually does and how to start fast?

This video explains what the tool is, how to launch the workflow instantly, and shares my core design logic — no local setup, no complicated environment.

Everything starts directly on RunningHub, so you can experience it in action first.

👉 YouTube Tutorial: https://youtu.be/yp0dcysS4XY

Before you begin, I recommend watching the video thoroughly — getting the full context helps you understand the tool faster and avoid common detours.

⚙️ RunningHub Workflow

Try the workflow online right now — no installation required.

👉 Workflow: https://www.runninghub.ai/post/2021549838894108674/?inviteCode=rh-v1111

If the results meet your expectations, you can later deploy it locally for customization.

🎁 Fan Benefits: Register to get 1000 points, plus 100 points for each daily login, and enjoy 4090-class performance with 48 GB of memory!

📺 Bilibili Updates (Mainland China & Asia-Pacific)

If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.

📺 Bilibili Video: https://www.bilibili.com/video/BV1REcbzdEdn/

☕ Support Me on Ko-fi

If you find my content helpful and want to support future creations, you can buy me a coffee ☕.

Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.

👉 Ko-fi: https://ko-fi.com/aiksk

💼 Business Contact

For collaboration or inquiries, please contact aiksk95 on WeChat.


I will keep updating model resources on Quark Drive (夸克网盘):

👉 https://pan.quark.cn/s/20c6f6f8d87b

These resources are mainly intended for local users, to support creation and learning.