Updated: May 11, 2026
This workflow is designed for LTX-2.3 IC LoRA Union video control, focusing on reference-guided video generation with stronger structure, identity, and motion consistency. Its main purpose is to give creators a more controllable LTX-2.3 video pipeline in which the output is driven not only by text prompts but also by image conditioning, IC LoRA control, video latent structure, and optional auxiliary control signals.
The workflow uses the LTX-2.3 22B Dev transformer route as the main video generation backbone, with Gemma 3 12B and LTX-2.3 text projection components for text conditioning. It also loads the LTX-2.3 video VAE and audio VAE, allowing the graph to work with both visual and audio latent structures before combining them into a final video output. This makes the workflow more complete than a simple image-to-video setup.
The key part of this workflow is the IC LoRA Union control route. It applies an LTX-2.3 IC LoRA model to strengthen image-conditioned guidance, then uses LTXAddVideoICLoRAGuide to inject the reference image into the video latent process. This lets the generated video follow the input image more closely while still being animated through the LTX video model. For creators, this is useful when the subject design, character appearance, composition, or scene identity needs to stay more stable across the clip.
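The IC LoRA route rests on the standard low-rank adaptation idea: the control LoRA adds a small rank-r update on top of the base transformer weights. The sketch below shows that mechanism in isolation (the actual LTX-2.3 node internals are not documented here, so the merge formula and the zero-initialized up-projection are assumptions based on how LoRA is conventionally applied):

```python
import numpy as np

def apply_lora(W, A, B, alpha=1.0):
    """Merge a low-rank LoRA update into a base weight matrix.
    W: (out, in) base weight; A: (rank, in) down-projection;
    B: (out, rank) up-projection. Scaled by alpha / rank, as is
    conventional for LoRA merging."""
    rank = A.shape[0]
    return W + (alpha / rank) * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
A = rng.standard_normal((2, 8))   # rank-2 down-projection
B = np.zeros((8, 2))              # zero-init up-projection: no change until trained
W_merged = apply_lora(W, A, B)    # identical to W while B is zero
```

With `B` zero-initialized the merged weights equal the base weights, which is why loading a LoRA at strength 0 leaves generation unchanged; raising the strength scales how strongly the reference-image guidance reshapes the transformer.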
The workflow also includes LTXVImgToVideoConditionOnly, which helps condition the latent video generation from an input image. The source image is preprocessed through LTXVPreprocess, then passed into the video-conditioning path. This makes the workflow suitable for image-to-video generation, character animation, product motion previews, cinematic shot creation, stylized video control, and AI short-form content production.
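Before an image can condition the video latents, it typically has to be cropped to dimensions the latent grid can represent and rescaled to the value range the VAE expects. The function below is a hypothetical sketch of that preprocessing step (the real LTXVPreprocess node may resize, pad, or color-correct differently; the multiple-of-32 crop and [-1, 1] scaling are assumptions):

```python
import numpy as np

def preprocess_frame(img_u8, multiple=32):
    """Hypothetical conditioning-image preprocessing: crop height/width
    down to the nearest multiple (so the VAE latent grid divides evenly),
    then scale uint8 pixels from [0, 255] to [-1, 1]."""
    h, w = img_u8.shape[:2]
    h2, w2 = h - h % multiple, w - w % multiple
    img = img_u8[:h2, :w2].astype(np.float32)
    return img / 127.5 - 1.0

# 270 is not divisible by 32, so the height is cropped to 256
frame = np.full((270, 480, 3), 255, dtype=np.uint8)
x = preprocess_frame(frame)
```

The output tensor then feeds the image-to-video conditioning path, so any mismatch between the conditioning resolution and the target video resolution is resolved here rather than inside the sampler.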
Another important part of the workflow is the control-preprocessing section. It includes DepthCrafter and CannyEdgePreprocessor routes, which can help derive depth or edge structure from source frames or guide images. These control signals are useful when the creator wants more stable spatial relationships, stronger scene structure, or clearer object boundaries. In practical video work, this helps reduce uncontrolled deformation and improves the chance that the output follows the intended layout.
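To make the edge-control idea concrete, the sketch below builds a binary edge map from Sobel gradient magnitudes. This is a simplified stand-in for what CannyEdgePreprocessor produces, not its actual implementation (real Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding):

```python
import numpy as np

def edge_map(gray, threshold=0.25):
    """Simplified Canny-style control image: Sobel gradient magnitude,
    normalized and thresholded into a binary edge mask."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(gray.astype(np.float32), 1, mode="edge")
    h, w = gray.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()   # horizontal gradient
            gy[i, j] = (win * ky).sum()   # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= max(float(mag.max()), 1e-8)    # normalize to [0, 1]
    return (mag > threshold).astype(np.uint8)

# a vertical brightness step yields a two-pixel-wide vertical edge band
img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 1.0
edges = edge_map(img)
```

Feeding such a mask (or a depth map from the DepthCrafter route) as a control signal is what pins object boundaries and spatial layout in place across frames.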
The sampling section uses a custom advanced sampler structure with manual sigma values, CFG guidance, random noise, and a res_2s_ode sampler route. This gives the workflow a more specialized LTX-2.3 sampling setup rather than a generic KSampler configuration. After sampling, the workflow separates audio and video latents, crops guide information, decodes the video with tiled VAE decoding, decodes audio through the audio VAE, and finally combines the frames and audio into a video output.
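The sampling loop above can be sketched in miniature. The workflow hard-codes its own sigma list, so the Karras-style schedule below is only an assumption about its general shape, and the single-stage Euler step stands in for the two-stage res_2s_ode sampler to show the control flow:

```python
import numpy as np

def make_sigmas(n_steps, sigma_max=1.0, sigma_min=0.002, rho=7.0):
    """Karras-style noise schedule from sigma_max down to 0 (an
    assumption: the actual workflow supplies manual sigma values)."""
    ramp = np.linspace(0, 1, n_steps)
    inv = sigma_max ** (1 / rho) + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))
    return np.append(inv ** rho, 0.0)

def euler_sample(denoise, x, sigmas):
    """Minimal Euler ODE sampler: at each step, estimate the noise
    direction from the denoiser and move toward the next sigma.
    res_2s_ode refines this with a second evaluation per step."""
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, s)) / s   # estimated noise direction
        x = x + d * (s_next - s)      # Euler step toward next sigma
    return x

# toy denoiser whose clean prediction is all zeros: sampling converges to 0
denoise = lambda x, s: np.zeros_like(x)
x_final = euler_sample(denoise, np.ones(4), make_sigmas(8))
```

CFG guidance would slot into `denoise` as a weighted blend of conditional and unconditional predictions; the latent that comes out of this loop is then split into video and audio streams before the tiled VAE decode.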
In short, this is a full LTX-2.3 IC LoRA Union video-control workflow for creators who want stronger reference guidance, better structure control, and more stable image-to-video generation. It is suitable for character video generation, controlled animation, cinematic short clips, product demonstrations, AI influencer videos, stylized motion tests, and RunningHub / Civitai workflow showcases. If you want to see how the IC LoRA guide, image conditioning, depth / edge controls, and final video export are connected, watch the full tutorial from the YouTube link above.
⚙️ Try the Workflow Online
👉 Workflow: https://www.runninghub.ai/post/2032312720623669250?inviteCode=rh-v1111
Open the link above to run the workflow directly online and view the generation results in real time.
If the results meet your expectations, you can also deploy it locally for further customization.
🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points — enjoy 4090-level performance and 48 GB of powerful compute!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV186wEzRENT/
I will continue updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly prepared for local users, making creation and learning more convenient.

