Updated: May 12, 2026
This workflow is designed for LTX 2.3 + VBVR multi-image first-frame / last-frame video generation. Its main purpose is to use multiple reference images as key visual anchors and guide LTX 2.3 to generate a more coherent video sequence, with stronger motion direction, better visual continuity, and more controllable scene development than a basic single-image-to-video workflow.
The workflow is built around an LTX 2.3 image-to-video pipeline with VBVR I2V LoRA enhancement, Gemma-style LTX text encoding, the LTX video VAE, LTX audio VAE support, NAG enhancement, spatial latent upscaling, multi-stage sampling, image guide injection, and final video export. This makes it suitable for creators who want to animate more than one static reference and build a guided cinematic sequence instead of relying on a single opening frame.
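For readers who prefer to see the structure as code, here is a minimal Python sketch of the stage order. Every function name below is a placeholder standing in for a group of ComfyUI nodes, not a real API, and the resolution, frame count, fps, and LoRA filename are illustrative assumptions, not the graph's actual settings:

```python
# Conceptual stage order only -- every helper below is a hypothetical
# placeholder for the corresponding ComfyUI node group, not a real API.

def run_pipeline(ref_images, prompt, negative_prompt):
    model = load_ltx_model("ltx-2.3")                   # checkpoint + video/audio VAE
    model = apply_lora(model, "vbvr_i2v.safetensors")   # VBVR I2V LoRA (illustrative name)
    cond = encode_text(prompt)                          # Gemma-style LTX text encoder
    neg = encode_text(negative_prompt)

    latent = init_video_latent(width=768, height=512, frames=97)  # assumed settings
    latent = add_image_guides(latent, ref_images)       # first/middle/last keyframe injection

    latent = sample(model, latent, cond, neg, stage="base")    # pass 1: plan the motion
    latent = upscale_latent(latent, scale=1.5)                 # spatial latent upscale
    latent = sample(model, latent, cond, neg, stage="refine")  # pass 2: recover detail

    frames = decode_video(latent)                       # LTX video VAE decode
    return export_video(frames, fps=24)                 # video combine / export
```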
The key advantage of this workflow is multi-image temporal control. A normal image-to-video workflow usually uses one image as the starting point, so the model has to guess how the scene should evolve. In this workflow, multiple images are loaded and aligned into the same video structure. These images can act as first-frame, middle-frame, and last-frame references, giving the model clearer visual targets across time. This helps reduce random drift and makes the output easier to direct.
The uploaded workflow uses LTXVAddGuideMulti-style logic to inject several image references into the video latent process. Each image can be assigned to a specific frame position and guide strength. This allows the creator to define where the video starts, how the visual action develops, and what kind of final state the animation should reach. For AI video production, this is especially useful when you want controlled action instead of random motion.
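Conceptually, each guide is just an image plus a target frame index plus a strength. The sketch below shows one plausible way to model that. It is a hand-written illustration of the idea behind LTXVAddGuide-style injection, not the node's actual implementation; the file names, frame indices, strengths, and the 8x temporal compression factor are all assumptions:

```python
from dataclasses import dataclass

@dataclass
class ImageGuide:
    image_path: str   # reference image acting as a keyframe
    frame_index: int  # which pixel-space video frame it anchors
    strength: float   # 0..1, how hard the latent is pulled toward it

# Hypothetical 97-frame clip with first-, middle-, and last-frame anchors.
# Reusing the first image as the last guide is what produces a clean loop.
guides = [
    ImageGuide("bike_start.png", frame_index=0,  strength=1.00),  # opening frame
    ImageGuide("bike_stunt.png", frame_index=48, strength=0.85),  # mid-action keyframe
    ImageGuide("bike_start.png", frame_index=96, strength=1.00),  # loop back to start
]

def add_image_guides(latent, guides, vae):
    """Illustration of guide injection: encode each reference image and
    blend it into the video latent at its target temporal position.
    `vae` and `load_image` are placeholders, not real APIs."""
    for g in guides:
        ref = vae.encode(load_image(g.image_path))  # one latent frame per image
        t = g.frame_index // 8                      # assuming 8x temporal compression
        latent[:, :, t] = (1 - g.strength) * latent[:, :, t] + g.strength * ref
    return latent
```

Lowering the middle guide's strength leaves the sampler room to interpolate the action naturally, while full-strength first and last guides lock the loop endpoints.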
The workflow example is built around a cyber motorcycle loop storyboard. The prompt describes a high-speed cyberpunk motorcycle sequence with wheelie motion, aerial stunt movement, hard landing, water splash, sharp side-drift, sparks, smoke, and a return-to-start loop structure. This shows the intended strength of the workflow: it is not only a simple I2V graph, but a multi-keyframe action planning setup for cinematic motion, looping video, and storyboard-driven animation.
VBVR I2V LoRA is an important part of the pipeline. It helps strengthen image-to-video behavior, visual consistency, and motion adherence. For multi-image first-frame / last-frame generation, this matters because the model must connect several reference states while preserving the subject, style, environment, and action logic. Without stronger I2V guidance, multi-reference video can easily flicker, drift, or ignore the intended end frame.
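Mechanically, a LoRA is a low-rank weight update merged into (or applied alongside) the base model's attention and projection layers. The snippet below shows the generic arithmetic only; the rank, scale, and tensor sizes are illustrative, and this is not VBVR's internal structure:

```python
import torch

def apply_lora_weight(base_weight, lora_down, lora_up, scale=1.0):
    """Merge one LoRA pair into a base weight: W' = W + scale * (B @ A)."""
    # lora_down (A): (rank, in_features); lora_up (B): (out_features, rank)
    delta = lora_up @ lora_down          # low-rank product expands to full size
    return base_weight + scale * delta

# Illustrative sizes for a single attention projection, rank 16:
W = torch.randn(1024, 1024)
A = torch.randn(16, 1024) * 0.01   # "lora_down"
B = torch.randn(1024, 16) * 0.01   # "lora_up"
W_patched = apply_lora_weight(W, A, B, scale=0.8)  # LoRA strength is a workflow knob
```

The scale parameter corresponds to the LoRA strength setting in the workflow: too high can over-constrain motion, too low and the intended end frame may be ignored.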
The workflow also includes NAG (Normalized Attention Guidance) and manual sigma sampling. These components improve control during generation and reduce unwanted randomness. The negative prompt suppresses common video failures such as low resolution, blurry frames, static output, no movement, watermarks, subtitles, scene cuts, scene transitions, warping, extra limbs, extra hands, and unstable body parts. This makes the workflow more suitable for publishable AI video output.
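To make "manual sigma sampling" concrete: instead of letting the sampler derive its own noise schedule, the graph feeds it an explicit list of sigma values. Below is a Karras-style schedule builder, plus the failure-mode negative prompt assembled from the list above. The step count, sigma range, and rho are assumptions, not the workflow's actual numbers:

```python
import torch

def manual_sigmas(n_steps=20, sigma_max=1.0, sigma_min=0.01, rho=7.0):
    """Karras-style sigma schedule, descending and ending at 0.
    Front-loaded large steps settle the motion early; the small
    tail steps polish detail."""
    ramp = torch.linspace(0, 1, n_steps)
    inv = sigma_max ** (1 / rho) + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))
    return torch.cat([inv ** rho, torch.zeros(1)])

negative_prompt = (
    "low resolution, blurry, static, no movement, watermark, subtitles, "
    "scene cut, scene transition, warping, extra limbs, extra hands, "
    "unstable body parts"
)

print(manual_sigmas())  # 21 descending values from 1.0 down to 0.0
```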
The pipeline is also structured with multiple refinement stages. The first pass builds the base motion from the input references and prompt. Later passes use latent upscaling and additional sampler routes to improve detail, texture, and final visual quality. The final video can then be exported through the video combine route, making it ready for RunningHub demos, Civitai previews, YouTube tutorials, Bilibili showcases, and social media short video production.
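The latent upscale between passes is a plain spatial resize of the video latent. Here is a minimal sketch, assuming a (batch, channels, time, height, width) layout, which is how LTX-style video latents are commonly shaped; the channel count and sizes in the example are illustrative:

```python
import torch
import torch.nn.functional as F

def upscale_video_latent(latent, scale=1.5):
    """Bilinearly upscale H and W of a (B, C, T, H, W) video latent,
    leaving the temporal axis untouched."""
    b, c, t, h, w = latent.shape
    flat = latent.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)  # fold time into batch
    up = F.interpolate(flat, scale_factor=scale, mode="bilinear",
                       align_corners=False)
    return up.reshape(b, t, c, up.shape[-2], up.shape[-1]).permute(0, 2, 1, 3, 4)

lat = torch.randn(1, 128, 13, 16, 24)   # illustrative LTX-ish latent shape
print(upscale_video_latent(lat).shape)  # torch.Size([1, 128, 13, 24, 36])
```

The refinement pass then only needs to partially re-noise this upscaled latent and denoise the remainder, so it adds texture without re-planning the motion established in the first pass.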
This workflow is ideal for creators who want to connect multiple AI images into one controlled video, build first-frame / last-frame animation, create cyberpunk action loops, test multi-image motion continuity, or produce more directed LTX 2.3 cinematic clips. If you want to see how the reference images, VBVR guidance, multi-image keyframe control, NAG enhancement, and final video export work together, watch the full tutorial from the YouTube link above.
⚙️ Try the Workflow Online
👉 Workflow: https://www.runninghub.ai/post/2045068937519439873/?inviteCode=rh-v1111
Open the link above to run the workflow directly online and view the generation results in real time.
If the results meet your expectations, you can also deploy it locally for further customization.
🎁 Fan Benefits: Register now to get 1000 points, plus 100 daily login points, and enjoy 4090-level performance with 48 GB of VRAM!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV17Td5BgETn/
I will continue updating model resources on Quark Drive:
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly prepared for local users, making creation and learning more convenient.