# 360° Equirectangular Outpainting — LTX-2.3 IC-LoRA · v0.1
**Proof-of-concept IC-LoRA** for [Lightricks LTX-2.3-22B](https://huggingface.co/Lightricks) that turns standard
**widescreen footage** into a full **360° equirectangular** video you can view in a VR/360 player.
> Early v0.1 release. Expect rough edges outside the sweet spot below — a bigger, more diverse next version is planned.
## What it does
- **Input** — a flat 2.39:1 (cinemascope) clip, plus an equirectangular reference (your clip projected into the
  equirect canvas with the unknown regions masked black; the mapping is sketched below).
- **Output** — the model fills the masked regions, giving you a plausible 360° equirect video viewable in a VR/360
player.
Designed for repurposing existing live-action or cinematic footage as immersive content.
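For reference, this sketch uses the standard equirect convention (y up, source camera looking along +z at longitude/latitude zero); the exact convention used here is an assumption, and the companion nodes below handle it for you. Each pixel $(u, v)$ of a $W \times H$ equirect canvas maps to a longitude/latitude pair and a unit viewing ray:

$$
\lambda = 2\pi\frac{u}{W} - \pi, \qquad \varphi = \frac{\pi}{2} - \pi\frac{v}{H}, \qquad \mathbf{d} = \left(\cos\varphi\sin\lambda,\ \sin\varphi,\ \cos\varphi\cos\lambda\right).
$$

Rays that fall inside the source camera's frustum are filled from the flat clip; everything else is masked black and left for the model to outpaint.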
## Sweet spot (v0.1)
The v0.1 model was tuned on a deliberately narrow domain to validate the approach:
- Semi-static establishing **city / urban** scenes (no heavy camera motion)
- **~100° horizontal FOV** on the source clip
- **2.39:1 source aspect** (standard cinemascope)
It will generalize poorly outside these conditions — fast action, extreme close-ups, heavily stylised imagery, or very
different FOVs are not reliably handled yet.
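To get a feel for how much the model has to invent even inside the sweet spot, here is a quick back-of-the-envelope in Python. The exact coverage depends on the projection, so treat the numbers as approximate:

```python
import math

h_fov = math.radians(100)                           # sweet-spot horizontal FOV
v_fov = 2 * math.atan(math.tan(h_fov / 2) / 2.39)   # vertical FOV from the 2.39:1 aspect
print(round(math.degrees(v_fov)))                   # ~53 degrees

# The known region spans ~100/360 of the longitudes and ~53/180 of the
# latitudes, so roughly 90% of the equirect canvas is outpainted.
print(1 - (100 / 360) * (53 / 180))                 # ~0.92
```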
## Usage
Tested only with the **ComfyUI + LTX-2 video_to_video** pipeline. Load it on top of `ltx-2.3-22b-dev.safetensors` and set:
- **Trigger word**: `equirectangular` (optional). The model works without a prompt, but a descriptive prompt lets you
steer the content of the outpainted region.
- **Reference video**: your source clip projected into the equirect canvas with unknown regions masked.
- **Resolution**: 1920×960, 121 frames, 24 fps.
A ready-to-run workflow (`Equirect-Outpaint.json`) ships alongside this LoRA on the Hugging Face mirror:
<https://huggingface.co/TheBurgstall/VR-360-Outpaint-LTX2.3-IC-LoRA>. Note that the workflow's padding node crops your
input footage to 2.39:1 (center / top / bottom selectable); other aspect ratios work poorly in this early version.
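If you want to reproduce that crop outside ComfyUI, the logic is just an aspect-ratio crop. A minimal sketch on a numpy frame; the function name and anchor options are mine, not the padding node's API:

```python
def crop_to_scope(frame, anchor="center"):
    """Crop an H x W x 3 numpy frame to 2.39:1, keeping the chosen region.

    anchor: "center", "top", or "bottom" -- mirrors the workflow's
    padding-node options. Hypothetical helper, not the node's actual API.
    """
    h, w = frame.shape[:2]
    target_h = round(w / 2.39)
    if target_h > h:                      # source is wider than 2.39:1: crop width
        target_w = round(h * 2.39)
        x0 = (w - target_w) // 2
        return frame[:, x0:x0 + target_w]
    y0 = {"center": (h - target_h) // 2, "top": 0, "bottom": h - target_h}[anchor]
    return frame[y0:y0 + target_h]
```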
### Companion ComfyUI nodes
A small ComfyUI helper pack —
**[ComfyUI-EquirectProjector](https://github.com/Burgstall-labs/ComfyUI-EquirectProjector)** — was written alongside
this LoRA to produce the masked equirect reference from a flat clip. The included workflow shows the exact wiring.
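For intuition, here is a minimal numpy sketch of what the projector nodes do: every equirect pixel becomes a viewing ray, rays inside the source camera's frustum sample the flat frame, and the rest stay black. Names and defaults are illustrative, not the nodes' API:

```python
import numpy as np

def masked_equirect_reference(frame, h_fov_deg=100.0, out_w=1920, out_h=960):
    """Project a flat H x W x 3 frame into an equirect canvas; the rest is black."""
    src_h, src_w = frame.shape[:2]
    # Pinhole focal length (in pixels) from the source clip's horizontal FOV.
    f = (src_w / 2) / np.tan(np.radians(h_fov_deg) / 2)

    # Longitude/latitude of every equirect pixel (camera looks at lon = lat = 0).
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit viewing rays (y up, z forward).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Project rays in front of the camera onto the source image plane.
    front = z > 0
    zs = np.where(front, z, 1.0)          # avoid division by zero behind the camera
    u = f * x / zs + src_w / 2
    v = -f * y / zs + src_h / 2
    inside = front & (u >= 0) & (u < src_w) & (v >= 0) & (v < src_h)

    out = np.zeros((out_h, out_w, 3), dtype=frame.dtype)   # masked = black
    out[inside] = frame[v[inside].astype(int), u[inside].astype(int)]
    return out
```

This is nearest-neighbor sampling for brevity; the actual nodes may interpolate.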
## Training
| | |
|--|--|
| Base model | LTX-2.3-22B (dev) |
| Strategy | IC-LoRA (video_to_video) |
| Rank / alpha | 128 / 128 |
| Target modules | video self + cross attention + FFN |
| Resolution | 1024×512, 41 frames @ 24 fps |
| Optimizer | Prodigy (D-Adaptation), lr=1.0, constant |
| Precision | bf16, gradient checkpointing |
| Steps | 3500 |
| Hardware | 1× NVIDIA H100 80GB |
| Dataset | Small curated POC set (not released) — semi-static city establishing clips |
The final **step 3500** checkpoint is what's uploaded here.
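To sanity-check a download against the table above, the file can be inspected with `safetensors`. The filename is whatever you saved from the mirror, and the usual `lora_A`/`lora_B` key naming is an assumption about this checkpoint:

```python
from safetensors.torch import load_file

state = load_file("VR-360-Outpaint-LTX2.3-IC-LoRA.safetensors")  # your local path
for name, tensor in list(state.items())[:6]:
    print(name, tuple(tensor.shape))   # rank-128 adapters show a 128 dimension
```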
## What's next
A follow-up version trained on a significantly larger and more diverse dataset is planned:
- Broader subject matter (interiors, landscapes, crowds, vehicles, …)
- Varied input FOVs and focal lengths
- A wider range of camera motion — not just static establishing shots
- Better handling of the polar regions (top / bottom caps of the equirect canvas)
## Limitations
- Does not model the top/bottom caps of the sphere well — expect stretching or repetition at the poles.
- Struggles with busy motion and fast cuts.
- Prompt adherence is weak; conditioning is dominated by the reference video.
- Outputs are a creative re-projection, not a reconstruction — not a substitute for natively captured 360° footage.
## Links
- **Hugging Face mirror**: <https://huggingface.co/TheBurgstall/VR-360-Outpaint-LTX2.3-IC-LoRA>
- **ComfyUI helper nodes**: <https://github.com/Burgstall-labs/ComfyUI-EquirectProjector>
## License
Apache-2.0. Inherits any base-model conditions from LTX-2.3-22B.