Updated: May 10, 2026
This ComfyUI workflow is designed for fast generation testing with Z-Image Base: it covers 8-step acceleration, 4-step acceleration, and a visual comparison between the standard and distilled routes. Its main purpose is to help creators test how Z-Image Base behaves under different sampling budgets, from a more traditional 20-step route down to faster 8-step and 4-step routes.
The workflow is built around Z-Image Base, using z_image_bf16.safetensors as the main diffusion model, qwen_3_4b.safetensors as the Qwen Image text encoder, and ae.safetensors as the VAE. It also includes Z-Image-Fun-Lora-Distill-4-Steps-2602-ComfyUI.safetensors as the acceleration LoRA route. This makes the workflow useful for comparing quality, speed, prompt adherence, detail density, and image stability under different generation settings.
The core idea is simple: use the same prompt, same negative prompt structure, same latent canvas, and similar model configuration, then generate outputs through different step-count routes. One route uses a standard Z-Image Base setup with a higher step count. Other routes use accelerated settings such as 8 steps and 4 steps, with the distill LoRA applied to make low-step generation more practical. This gives users a direct way to see whether faster generation is good enough for their use case.
The workflow uses a 1280 x 720 empty latent image as the base canvas. This landscape format is suitable for cinematic illustrations, social media banners, YouTube thumbnails, Bilibili covers, workflow preview images, and general visual testing. Users can change the canvas size if they need square, portrait, or vertical cover outputs, but 1280 x 720 is a practical starting point for fast comparison testing.
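As a quick sanity check on canvas sizes, the sketch below computes the latent grid a pixel canvas maps onto. It assumes the common VAE downscale factor of 8; Z-Image's actual factor may differ, so treat this as an illustration, not a statement about the model.

```python
# Sketch: verify a canvas size maps cleanly onto the latent grid.
# Assumes a VAE downscale factor of 8 (an assumption, not confirmed for Z-Image).

def latent_dims(width: int, height: int, factor: int = 8) -> tuple[int, int]:
    """Return the latent grid size for a pixel canvas, or raise if it doesn't divide evenly."""
    if width % factor or height % factor:
        raise ValueError(f"{width}x{height} is not divisible by {factor}")
    return width // factor, height // factor

print(latent_dims(1280, 720))  # (160, 90)
```

Under that assumption, 1280 x 720 divides evenly, which is one reason it is a convenient test canvas; a custom portrait or square size should pass the same check before use.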
The prompt route uses CLIPTextEncode with the Qwen 3 4B text encoder. The included example prompt describes a dramatic digital painting of a silhouetted warrior standing against a huge blazing orange sun, surrounded by red spider lilies and tall grass. This type of prompt is useful for testing the model because it includes strong lighting, high contrast, foreground detail, background atmosphere, silhouette structure, and cinematic composition. It is a good test case for checking whether low-step generation can preserve visual impact.
The negative prompt is designed to suppress common visual problems such as bad lighting, dark or gloomy results, overexposure, underexposure, low contrast, grayscale, monochrome, draft-like rendering, sketch effects, crayon texture, comic style, or cartoon-like results when realism is intended. This helps keep the comparison cleaner, because the workflow tests generation speed rather than allowing avoidable quality problems to dominate the result.
The standard route uses Z-Image Base with ModelSamplingAuraFlow and a higher step count. In the uploaded setup, the standard comparison route includes a 20-step KSampler configuration with CFG around 3, Euler sampler, simple scheduler, and full denoise. This route acts as the quality baseline. It is slower than the accelerated routes, but it gives users a reference point for what the model can produce with more sampling time.
The accelerated routes use the distill LoRA and lower CFG settings. The workflow includes 8-step and 4-step KSampler routes using Euler sampling, the simple scheduler, CFG around 1, and full denoise. These routes are designed for speed. They are useful when users need quick drafts, fast prompt testing, batch exploration, online RunningHub deployment, or rapid visual iteration.
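The three routes described above can be summarized as plain settings dictionaries. The key names below are illustrative shorthand, not actual ComfyUI node field names, and the exact values should be read off the workflow itself.

```python
# Sketch of the three KSampler routes as described in the workflow text.
# Key names are illustrative, not literal ComfyUI node inputs.
DISTILL_LORA = "Z-Image-Fun-Lora-Distill-4-Steps-2602-ComfyUI.safetensors"

ROUTES = {
    "standard": {"steps": 20, "cfg": 3.0, "sampler": "euler",
                 "scheduler": "simple", "denoise": 1.0, "lora": None},
    "fast_8":   {"steps": 8,  "cfg": 1.0, "sampler": "euler",
                 "scheduler": "simple", "denoise": 1.0, "lora": DISTILL_LORA},
    "fast_4":   {"steps": 4,  "cfg": 1.0, "sampler": "euler",
                 "scheduler": "simple", "denoise": 1.0, "lora": DISTILL_LORA},
}

for name, route in ROUTES.items():
    print(f"{name}: {route['steps']} steps, cfg {route['cfg']}")
```

Laying the routes out side by side makes the comparison explicit: only the step count, CFG, and LoRA change, while the sampler, scheduler, and denoise stay fixed, so any visual difference comes from the sampling budget.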
The 8-step route is a balance route. It is usually faster than the standard 20-step path while still leaving enough sampling room for image structure, lighting, and detail to form. This route can be useful when the user wants practical speed but does not want to push the model to the absolute minimum step count.
The 4-step route is the speed-focused route. It is useful for very fast preview generation, prompt exploration, or large-scale idea testing. The 4-step route may not always match the detail quality of a higher-step output, but it can be extremely useful when the goal is to quickly judge composition, color direction, subject design, or prompt feasibility.
This workflow is especially useful for creators who publish workflows online. Many users care not only about final image quality, but also about generation time. A workflow that takes too long may be difficult to use on cloud platforms or public workflow pages. By comparing standard, 8-step, and 4-step routes, creators can decide which version is best for public release.
The workflow also helps users understand the tradeoff between speed and quality. Higher steps can produce more stable detail, stronger refinement, and better texture in some cases. Lower steps can produce faster results, but may reduce fine detail, create softer edges, or make the result more dependent on prompt strength and seed choice. The best choice depends on the task.
For prompt testing, the 4-step or 8-step routes are usually more practical. Users can run many seeds quickly and find a good composition before moving to a slower route. For final output, the standard route or the 8-step route may be better if the user wants more texture and stability. For online demo workflows, the 4-step route can be useful because it gives users a fast first result.
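The route-selection advice above can be condensed into a small, purely hypothetical helper. The task names are invented labels for the use cases just discussed, not anything defined by the workflow.

```python
# Hypothetical helper encoding the route guidance above.
# Task names are invented labels, not part of the workflow.

def choose_route(task: str) -> str:
    draft_tasks = {"prompt_testing", "seed_sweep", "online_preview"}
    final_tasks = {"final_output", "showcase"}
    if task in draft_tasks:
        return "fast_4"      # speed first: many seeds, fast first result
    if task in final_tasks:
        return "standard"    # quality first: more sampling time
    return "fast_8"          # balanced default for everything else
```

For example, `choose_route("prompt_testing")` returns the 4-step route, while an unlisted task falls back to the balanced 8-step route.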
Main features:
- Z-Image Base fast generation workflow
- Standard 20-step route for quality comparison
- 8-step accelerated route for balanced speed and quality
- 4-step accelerated route for fast preview generation
- Uses z_image_bf16.safetensors
- Uses qwen_3_4b.safetensors text encoder
- Uses ae.safetensors VAE
- Uses Z-Image Fun Distill 4-step LoRA
- ModelSamplingAuraFlow support
- 1280 x 720 latent canvas
- Multiple KSampler comparison routes
- Euler sampler and simple scheduler setup
- Shared prompt and negative prompt comparison
- Suitable for speed testing, prompt iteration, and RunningHub deployment
Recommended use cases:
Z-Image Base speed testing, 8-step generation comparison, 4-step generation comparison, prompt exploration, fast image drafting, online workflow publishing, RunningHub demo workflow creation, Civitai showcase examples, YouTube thumbnail testing, Bilibili cover generation, cinematic illustration testing, anime or fantasy image generation, and model acceleration research.
Suggested workflow:
Start with the same prompt across all routes. This keeps the comparison fair. If each route uses a different prompt, it becomes difficult to judge whether the difference comes from the model setting or from the prompt itself.
Use the standard 20-step route first if you want a quality reference. This output can act as the baseline. Check composition, lighting, subject clarity, color balance, detail, and prompt adherence. Then compare the 8-step and 4-step outputs against it.
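Beyond eyeballing the outputs, the baseline comparison can be backed by a simple numeric check. The sketch below computes MSE and PSNR between two images represented as flat pixel lists; this is a generic comparison technique, not something built into the workflow.

```python
import math

def mse(a: list, b: list) -> float:
    """Mean squared error between two equally sized pixel lists (0-255)."""
    assert len(a) == len(b), "images must have the same size"
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a: list, b: list, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio; higher means closer to the baseline."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak * peak / m)
```

A fast-route output with a high PSNR against the 20-step baseline has kept most of the structure; a low PSNR flags seeds or prompts where the low-step route diverges and deserves a visual look.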
Use the 8-step route when you want a practical balance. This route is faster than the standard path but usually gives the model more room to form details than the 4-step route. It is a good candidate for general-purpose workflow publishing if the output quality remains stable.
Use the 4-step route for rapid testing. This is useful when exploring many prompts or seeds. If the 4-step output already looks good, the prompt is probably strong. If the 4-step output is weak but the standard route is good, the prompt may need more sampling time or clearer structure.
Keep CFG moderate. The accelerated routes use lower CFG values because distill-style low-step generation often works better with lighter guidance. If CFG is too high in a low-step route, the image may become harsh, unstable, or over-constrained. If CFG is too low, the prompt may not be followed strongly enough.
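The reason CFG near 1 suits distilled routes falls out of the standard classifier-free guidance formula: the guided prediction is the unconditional prediction plus CFG times the difference between conditional and unconditional. The sketch below shows this on scalar stand-ins for the denoiser outputs.

```python
# Classifier-free guidance mix on scalar stand-ins for denoiser outputs:
#   guided = uncond + cfg * (cond - uncond)
# At cfg = 1.0 the result is exactly the conditional prediction,
# which is why distilled low-step routes effectively run with guidance off.

def cfg_mix(uncond: float, cond: float, cfg: float) -> float:
    return uncond + cfg * (cond - uncond)

print(cfg_mix(1.0, 3.0, 1.0))  # 3.0 -- returns cond unchanged
print(cfg_mix(1.0, 3.0, 3.0))  # 7.0 -- conditional direction amplified 3x
```

This also explains the failure modes mentioned above: a high CFG in a low-step route amplifies the conditional direction aggressively with few steps left to smooth it out, while CFG at 0 ignores the prompt entirely.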
Use seed testing. Fast routes are valuable because they let users test many seeds quickly. If one seed gives a weak composition, try another before rewriting the whole prompt. When a good seed appears, keep it fixed and adjust the prompt gradually.
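A seed sweep on the fast route can be sketched as below. `generate_preview` is a placeholder for queuing the 4-step route with a given seed, not a real function in this workflow.

```python
import random

def pick_seeds(n: int, rng_seed: int = 0) -> list[int]:
    """Reproducibly draw n candidate seeds for a sweep."""
    rng = random.Random(rng_seed)
    return [rng.randrange(2**32) for _ in range(n)]

candidates = pick_seeds(8)

# Hypothetical sweep loop -- generate_preview stands in for the 4-step route:
# for seed in candidates:
#     image = generate_preview(prompt, seed=seed)
# best_seed = ...  # judged visually; keep it fixed while refining the prompt
```

Fixing the RNG seed makes the sweep itself reproducible, so the same candidate list can be rerun after a prompt edit to isolate what the edit changed.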
Adjust the prompt based on the route. For low-step generation, clear prompt structure matters. Put the main subject first, then describe composition, lighting, environment, and style. Avoid overly vague prompts if you want the 4-step route to perform well.
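The subject-first ordering can be made mechanical with a small prompt builder. The field names and example strings below are illustrative, loosely echoing the workflow's sample prompt.

```python
# Illustrative prompt builder following the ordering above:
# subject first, then composition, lighting, environment, style.

def build_prompt(subject: str, composition: str = "", lighting: str = "",
                 environment: str = "", style: str = "") -> str:
    parts = [subject, composition, lighting, environment, style]
    return ", ".join(p for p in parts if p)  # drop empty fields

p = build_prompt(
    "silhouetted warrior",
    composition="centered against a huge blazing orange sun",
    lighting="strong backlight, high contrast",
    environment="red spider lilies and tall grass",
    style="dramatic digital painting",
)
print(p)
```

Keeping the subject as the first clause is the part that matters most for the 4-step route, since with so few steps the earliest tokens tend to dominate the composition.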
Use the negative prompt to suppress obvious quality problems. The included negative prompt is suitable for general image quality control. For anime images, add bad anatomy, bad hands, extra fingers, deformed face, text, watermark, and logo if needed. For realistic images, add plastic skin, distorted face, unnatural lighting, and low-detail texture if needed.
Compare results visually. Do not judge only by speed. Look at subject accuracy, detail clarity, edge quality, lighting coherence, composition stability, background quality, and whether the final image follows the prompt. A faster result is only useful if it remains visually acceptable.
For online deployment, choose the route based on user experience. If users need instant previews, the 4-step route may be best. If users expect higher quality, the 8-step route may be a better default. If the workflow is meant for final output rather than preview, keep the standard route available.
This workflow is designed as a practical Z-Image Base acceleration comparison pipeline for ComfyUI users. It helps creators understand how Z-Image Base performs under standard, 8-step, and 4-step settings, and how the distill LoRA changes the speed-quality balance. It is especially useful for creators who need to build fast RunningHub workflows, prepare Civitai examples, test prompts quickly, or decide which Z-Image generation route is best for public release.
🎥 YouTube Video Tutorial
Want to know what this workflow actually does and how to start fast?
This video explains what the tool is, how to launch the workflow instantly, and shares my core design logic — no local setup, no complicated environment.
Everything starts directly on RunningHub, so you can experience it in action first.
👉 YouTube Tutorial: https://youtu.be/mYpdxdHGlQM
Before you begin, I recommend watching the video thoroughly — getting the full context helps you understand the tool faster and avoid common detours.
⚙️ RunningHub Workflow
Try the workflow online right now — no installation required.
👉 Workflow: https://www.runninghub.ai/post/2022636633476046849/?inviteCode=rh-v1111
If the results meet your expectations, you can later deploy it locally for customization.
🎁 Fan Benefits: Register to get 1,000 points, plus 100 points for each daily login, and enjoy RTX 4090 performance with 48 GB of VRAM!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1quZ7BpEPE/
☕ Support Me on Ko-fi
If you find my content helpful and want to support future creations, you can buy me a coffee ☕.
Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.
👉 Ko-fi: https://ko-fi.com/aiksk
💼 Business Contact
For collaboration or inquiries, please contact aiksk95 on WeChat.
📦 I also keep model resources updated on Quark Netdisk:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly intended for local users, for creation and learning.

