ComfyUI Omni Kontext Workflow | Identity-Aware Scene Composition

Updated: Apr 3, 2026

Tool: generate-images

Download: 1 variant available (Archive, Other, 5.12 KB)

Type: Workflows
Reviews: 41
Published: Apr 3, 2026
Base Model: Other
Hash (AutoV2): 7084F5A7E3

By RunComfy

Perfect scene fits. Unique style. Identity stays. Kontext keeps it real.

Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.

Open preloaded workflow on RunComfy


Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.

When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.

How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
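Step 2 above recommends a short test run before committing to a full render. One way to do that with an exported API-format workflow JSON is a small helper that fixes the seed and trims the step count; the sampler node id and input names below are assumptions for illustration, so match them to your own graph:

```python
import json

def prepare_test_run(workflow, sampler_id="3", seed=42, steps=8):
    """Return a copy of an API-format ComfyUI workflow tuned for a
    quick smoke test: fixed seed, fewer sampling steps.
    The sampler node id and input names are assumptions; check your
    graph and adjust."""
    wf = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
    inputs = wf[sampler_id]["inputs"]
    inputs["seed"] = seed    # reproducible comparisons across tweaks
    inputs["steps"] = steps  # short run before a full-quality render
    return wf

# Hypothetical minimal graph fragment, for illustration only
demo = {"3": {"class_type": "KSampler",
              "inputs": {"seed": 0, "steps": 30}}}
test_wf = prepare_test_run(demo)
```

Once the test render looks right, re-queue the untouched original workflow for the full run.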

Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.


Overview

Empower your scene composition with identity-preserving subject placement and creative control using context-enhanced prompts from the Omni Kontext method.

Key nodes in the ComfyUI Omni Kontext workflow

OminiKontextModelPatch (#194)

Applies the Omni Kontext model modifications to the Flux backbone so reference context is honored during sampling. Leave it enabled whenever you want subject identity and spatial cues to carry into the generation. Pair with a moderate LoRA strength when using character or product LoRAs so the patch and LoRA do not compete.
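The "moderate LoRA strength" advice above can be enforced programmatically when batch-editing exported workflow JSON. A minimal sketch, assuming the common `strength_model` input name (Nunchaku's LoRA loader may use a different key, and the 0.8 cap is a starting point, not a rule):

```python
def moderate_lora_strengths(workflow, cap=0.8):
    """Clamp LoRA loader strengths in an API-format workflow dict so
    they do not overpower the Omni Kontext patch. Mutates in place.
    'strength_model' and the 0.8 cap are assumptions to verify."""
    for node in workflow.values():
        if "Lora" in node.get("class_type", ""):
            s = node["inputs"].get("strength_model")
            if isinstance(s, (int, float)) and s > cap:
                node["inputs"]["strength_model"] = cap
    return workflow

# Hypothetical graph fragment for illustration
graph = {
    "194": {"class_type": "OminiKontextModelPatch", "inputs": {}},
    "210": {"class_type": "NunchakuFluxLoraLoader",
            "inputs": {"strength_model": 1.2}},
}
moderate_lora_strengths(graph)
```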

OminiKontextConditioning (#193, #215)

Merges your text conditioning with reference latents from subject and scene. If identity drifts, increase the emphasis on the subject reference; if the scene is being overruled, decrease it slightly. This node is the heart of Omni Kontext composition and generally needs only small nudges once your inputs are clean.
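Because the fix for identity drift is "small nudges," it helps to adjust the reference emphasis in controlled increments rather than by eye. A hedged sketch, assuming the node exposes a `strength` input (confirm the actual input name in your graph):

```python
def nudge_reference(workflow, node_id, delta=0.05):
    """Nudge the (assumed) 'strength' input on an
    OminiKontextConditioning node by a small step. Positive delta
    favors the subject reference; negative gives the scene more room."""
    inputs = workflow[node_id]["inputs"]
    inputs["strength"] = round(inputs.get("strength", 1.0) + delta, 3)
    return inputs["strength"]

# Hypothetical graph fragment for illustration
graph = {"193": {"class_type": "OminiKontextConditioning",
                 "inputs": {"strength": 1.0}}}
```

A ±0.05 step per iteration keeps changes small enough to attribute differences to the nudge rather than sampler noise.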

FluxGuidance (#35, #207)

Controls how strictly the model follows the composite conditioning. Higher values push closer to prompt and reference at the cost of spontaneity; lower values allow more variety. If you see overbaked textures or loss of harmony with the scene, try a small reduction here.
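A practical way to find the right guidance value is an A/B sweep: render the same seed at a few guidance settings and compare. `guidance` is the FluxGuidance node's input name in API-format JSON; the node id here is taken from the graph above but should be confirmed in your export:

```python
import copy

def guidance_sweep(workflow, node_id, values):
    """Yield one independent workflow copy per guidance value, so each
    variant can be queued separately for side-by-side comparison."""
    for g in values:
        wf = copy.deepcopy(workflow)
        wf[node_id]["inputs"]["guidance"] = g
        yield g, wf

# Hypothetical minimal fragment for illustration
base = {"35": {"class_type": "FluxGuidance",
               "inputs": {"guidance": 3.5}}}
variants = {g: wf for g, wf in guidance_sweep(base, "35", [2.5, 3.0, 3.5])}
```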

NunchakuFluxDiTLoader (#217)

Loads a quantized Flux DiT variant for speed and lower memory. Choose INT4 for quick looks and FP16 or BF16 for final quality. Combine with NunchakuFluxLoraLoader when you need LoRA support in the Nunchaku lane.
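The draft-versus-final precision choice above can be captured in a tiny helper when scripting batch runs. The filenames below are hypothetical placeholders; substitute the quantized Flux DiT files actually on your disk:

```python
def pick_dit_model(final_quality: bool) -> str:
    """Map intent to a Nunchaku precision choice: INT4 for quick
    looks, BF16 for final quality, mirroring the guidance above.
    Filenames are placeholders -- use your local model names."""
    return ("flux-dit-bf16.safetensors" if final_quality
            else "flux-dit-int4.safetensors")
```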


Notes

See the RunComfy page for the latest node requirements.