
Flux2-Klein9B Image Upscale Workflow

Updated: May 10, 2026

Tag: character
Type: Workflows
File size: 4.81 KB
Published: May 10, 2026
Base Model: Qwen
Hash (AutoV2): 6AB6E021A0
Creator: AIKSK

This workflow is designed for Flux2-Klein9B image upscaling and detail refinement. Its main purpose is to take an existing image, enlarge it, preserve the original color and exposure as much as possible, and then use Flux2-Klein9B to rebuild high-resolution details through a controlled latent refinement process. It is not just a simple pixel upscale workflow. It combines traditional upscaling, reference-latent conditioning, diffusion-based refinement, color matching, and before-after comparison into one practical image enhancement pipeline.

The workflow uses flux-2-klein-9b.safetensors as the main generation model, qwen_3_8b.safetensors as the Flux2 text encoder, and flux2-vae.safetensors as the VAE. The source image is first loaded and normalized through image_scale_pixel_v2, keeping the image aligned to a model-friendly resolution. The prompt is intentionally simple: “High resolution image 1. Preserve exact color saturation and exposure from image 1.” This makes the workflow suitable for faithful upscaling instead of aggressive redesign.
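The exact behavior of image_scale_pixel_v2 is not shown on this page, but "model-friendly resolution" usually means snapping both dimensions down to a multiple the model and VAE expect. A minimal sketch of that idea, assuming a multiple of 16 and an illustrative 1536 px cap on the longest side (both values are assumptions, not settings taken from the workflow):

```python
def normalize_resolution(width, height, multiple=16, max_side=1536):
    # Scale down so the longest side fits within max_side (illustrative cap),
    # then snap both dimensions to the nearest lower multiple so the
    # latent grid divides evenly. Aspect ratio is approximately preserved.
    scale = min(1.0, max_side / max(width, height))
    w = max(multiple, int(width * scale) // multiple * multiple)
    h = max(multiple, int(height * scale) // multiple * multiple)
    return w, h
```

For example, a 2000x1000 source would come out as 1536x768, while a 1000x500 source is only snapped down to 992x496.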

A key part of the workflow is ReferenceLatent conditioning. The original image is encoded into latent space and used as both positive and negative reference context. This helps the model understand that the target is not a new creative image, but a higher-resolution version of the same image. The first KSampler pass uses an 8-step Flux2-Klein9B route with Euler sampling and beta scheduler, giving the model enough room to enhance structure and detail while still keeping the source image recognizable.

After the first refinement, the workflow includes an additional upscale section. It loads 4x_NMKD-Siax_200k.pth through UpscaleModelLoader and applies ImageUpscaleWithModel to enlarge the decoded image. Then ImageScaleBy reduces the upscaled result by a controlled factor, creating a cleaner high-resolution base for another latent refinement stage. This is useful because traditional upscalers can increase resolution quickly, but they may also introduce texture artifacts, edge noise, or artificial sharpness. The second Flux2-Klein9B pass helps clean and refine those details.
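The net enlargement from this section is the product of the two factors: the 4x model output, reduced again by the ImageScaleBy factor. A small helper makes the arithmetic explicit; the scale_by value of 0.5 is only an example here, the real factor is set inside the workflow:

```python
def net_upscale(width, height, model_factor=4, scale_by=0.5):
    # The traditional upscaler enlarges by model_factor (4x for
    # 4x_NMKD-Siax_200k), then ImageScaleBy shrinks the result by
    # scale_by -- e.g. 4 * 0.5 gives a clean net 2x enlargement.
    up_w, up_h = width * model_factor, height * model_factor
    return round(up_w * scale_by), round(up_h * scale_by)
```

With these example values, a 768x1024 intermediate image becomes a 1536x2048 base for the second refinement pass.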

The second KSampler stage uses a much lower denoise value, around 0.1, which is important for preservation. At this stage, the goal is not to change the composition, face, object structure, color, or lighting. The goal is to polish the already-upscaled image with subtle detail recovery and texture stabilization. After decoding, ColorMatchV2 is used to match the final output back toward the original image color profile, reducing unwanted color drift.
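ColorMatchV2's internals are not described on this page, but the simplest form of color matching it could apply is a per-channel mean/std (Reinhard-style) transfer from the reference image. A purely illustrative NumPy sketch of that idea:

```python
import numpy as np

def match_color(result, reference):
    # Per-channel mean/std transfer (Reinhard-style): shift the result's
    # color statistics back toward the reference image. ColorMatchV2
    # offers more sophisticated methods; this is only the simplest analog.
    out = result.astype(np.float64).copy()
    ref = reference.astype(np.float64)
    for c in range(out.shape[-1]):
        r_mean, r_std = out[..., c].mean(), out[..., c].std()
        t_mean, t_std = ref[..., c].mean(), ref[..., c].std()
        if r_std > 1e-6:
            out[..., c] = (out[..., c] - r_mean) * (t_std / r_std) + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

After this step, the output's per-channel brightness and contrast statistics sit close to the original image's, which is what "reducing unwanted color drift" amounts to in practice.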

The workflow also includes Image Comparer nodes, making it easy to compare the original image, the intermediate result, and the final upscaled output. This is especially useful for judging whether the enhancement is actually improving the image without over-sharpening or changing the original look.

This workflow is suitable for AI image finishing, Civitai showcase images, portraits, product images, social media covers, character art, concept art, and any image that needs cleaner high-resolution output. To see how the reference-latent structure, two-stage refinement, traditional upscale model, and color matching are connected, watch the full tutorial video linked below.

⚙️ Try the Workflow Online

👉 Workflow: https://www.runninghub.ai/post/2027387337340096514?inviteCode=rh-v1111

Open the link above to run the workflow directly online and view the generation results in real time.

If the results meet your expectations, you can also deploy it locally for further customization.

🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy 4090-level performance and 48 GB of powerful compute!

📺 Bilibili Updates (Mainland China & Asia-Pacific)

If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.

📺 Bilibili Video: https://www.bilibili.com/video/BV1DpADznEo2/

I will continue updating model resources on Quark Drive:

👉 https://pan.quark.cn/s/20c6f6f8d87b

These resources are mainly prepared for local users, making creation and learning more convenient.
