
All-in-One. Z-Image, Ernie, Klein, t2i, i2i, controlNet, inpaint, outpaint, LLM, WAN, Qwen image, SDXL, Chroma

Download: 1 variant available — Archive (Other), 37.04 KB

Type: Workflows

Stats: 280

Published: May 8, 2026

Base Model: Flux.2 Klein 9B

Hash (AutoV2): AAAE31A76A
Style Fusion Contest Participant
TikFesku

The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.

IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.

!!! Before installing custom nodes from the Manager, you need to install this one from its Git URL:

https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes

because the Manager installs it from the wrong source.

If the Manager fails to find the LLM and LoRA nodes, install them from these Git URLs:

https://github.com/willmiao/ComfyUI-Lora-Manager

https://github.com/KingManiya/ComfyUI-LLM-text-processor (Automatic setup currently supports Windows x64 CUDA 13 only. Other platforms require manual setup of llama.cpp release binaries.)

If you had an older version of the crt_nodes custom nodes, it is best to remove it, restart ComfyUI, and install it again.
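After a manual install it can help to confirm that the repositories above actually landed in ComfyUI/custom_nodes. A minimal sketch, assuming each node pack's folder name matches its repository name (the helper name is hypothetical, not part of the workflow):

```python
from pathlib import Path

# Folder names assumed to match the repository names above.
REQUIRED_NODES = [
    "ComfyUI_Comfyroll_CustomNodes",
    "ComfyUI-Lora-Manager",
    "ComfyUI-LLM-text-processor",
]

def missing_custom_nodes(comfy_root):
    """Return the required node packs not found in custom_nodes."""
    nodes_dir = Path(comfy_root) / "custom_nodes"
    return [name for name in REQUIRED_NODES
            if not (nodes_dir / name).is_dir()]
```

Run it with your ComfyUI root; an empty list means all three packs are in place.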

This workflow supports:

  • Z-Image, Ernie image, Wan, Flux, Flux Kontext, Flux 2, Flux 2 Klein, Anima, Qwen Image, Qwen Image Edit, SDXL/Pony, Chroma and Lumina-Image 2.0

  • txt2img, img2img, and inpainting

  • Face swap (not perfect at the moment)

  • SeedVR2 upscaler

  • LLM to describe images and enhance prompts

  • ControlNet, for Edit models only, in the img2img group (describe your image in the prompt and add an instruction such as "use the input image as a normal map reference")

  • Outpainting in Edit/Kontext mode

  • safetensors, GGUF, and SVDQ (Nunchaku) checkpoints

  • Powerful multi-LoRA loader (Lora Manager)

  • Face After Detailer

  • Multi- or single-image loading for Edit/Kontext
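Outpainting in Edit/Kontext mode works by extending the canvas around the source image so the model can fill in the new area. A minimal sketch of that geometry, assuming simple per-side pad amounts (these parameter names are illustrative, not the workflow's actual node settings):

```python
def outpaint_canvas(width, height, pad_left=0, pad_right=0,
                    pad_top=0, pad_bottom=0):
    """Return the padded canvas size and the offset where the
    original image is pasted before the model fills the padding."""
    new_w = width + pad_left + pad_right
    new_h = height + pad_top + pad_bottom
    return (new_w, new_h), (pad_left, pad_top)
```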

LLM Models:

VRAM 8 GB:

VRAM 12+ GB:

Put models and folder with system prompt here:

📂 ComfyUI/
├── 📂 models/
│   └── 📂 LLM/
│       ├── Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf
│       ├── mmproj-Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-f16.gguf
│       └── 📂 prompts/
│           └── LLM-system.txt
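A short helper can verify the layout above before launching ComfyUI — filenames are taken from the tree; the node's real lookup logic may differ:

```python
from pathlib import Path

def llm_files_present(comfy_root):
    """Check the files expected under ComfyUI/models/LLM/."""
    llm_dir = Path(comfy_root) / "models" / "LLM"
    expected = [
        llm_dir / "Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-Q4_K_M.gguf",
        llm_dir / "mmproj-Qwen3.6-35B-A3B-Uncensored-HauhauCS-Aggressive-f16.gguf",
        llm_dir / "prompts" / "LLM-system.txt",
    ]
    return {str(p.relative_to(comfy_root)): p.is_file() for p in expected}
```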

Starting from v8.0, it is possible to enhance T2I prompts on the fly using switches in the T2I group. You still need to manually copy and paste prompts from the LLM group to the I2I group, but you can directly describe Image 1 loaded in the I2I block.

Follow the short block notes for more information.

User prompts in both groups are now divided into two parts. The first part (Fixed) is never passed to the LLM block for processing and will always remain unchanged at the beginning of the prompt. The second part (User Prompt) can be passed to the LLM (if enabled) and improved.
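The split described above can be sketched as follows: the Fixed part is always prepended verbatim, and only the User Prompt part goes through the optional LLM step. Both the `enhance` callable and the comma join are assumptions standing in for the workflow's actual LLM node and concatenation:

```python
def build_prompt(fixed, user, enhance=None):
    """Prepend the fixed part unchanged; optionally run only the
    user part through an LLM enhancer first."""
    if enhance is not None:
        user = enhance(user)
    # Joining with ", " is an assumption about the workflow's wiring.
    return f"{fixed}, {user}" if fixed and user else fixed or user
```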