EvernightWiggle

Updated: Feb 20, 2026

Tags: poses, pose, evernightwiggle

File: SafeTensor (588.37 MB)

Type: LoRA


Published: Jan 26, 2026

Base Model: LTXV2

Hash (AutoV2): F0C20A8649
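If you want to confirm your download matches the hash above, an AutoV2-style hash is (as far as I understand the convention) the first 10 hex characters of the file's SHA-256 digest, uppercased. A minimal sketch under that assumption; the filename is hypothetical:

```python
import hashlib

def autov2_hash(path: str) -> str:
    """Compute an AutoV2-style hash: the first 10 hex characters of the
    file's SHA-256 digest, uppercased (assumed convention)."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large safetensors files don't load into RAM.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()[:10].upper()

# Usage (filename is illustrative): if the download is intact,
# autov2_hash("evernightwiggle.safetensors") should match the listed hash.
```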

Trigger Words

A girl is dancing the Cry0EvernWig
Dancing the Cry0EvernWig.

LTX-2 T2V & I2V

EvernightWiggle

You're welcome to try it out; have fun, everyone! Tell me what you think if you like!

If you'd like to try the model quickly in the cloud, or want to refer to my workflow, check the links below. The prompts are included, along with more detailed information about the model if you're interested:

T2V quick experience:
https://www.runninghub.ai/ai-detail/2015780374596558849/?inviteCode=rh-v1395
T2V workflow:
https://www.runninghub.ai/post/2015424907605184514/?inviteCode=rh-v1395

I2V quick experience:
https://www.runninghub.ai/ai-detail/2015778443622879234/?inviteCode=rh-v1395
I2V workflow:
https://www.runninghub.ai/post/2015342019056508929/?inviteCode=rh-v1395

Applications

This model can be used for both T2V and I2V.

Trigger word

There is no separate, dedicated trigger word; instead, work the phrase directly into your prompt.

For T2V: Please start with a description like "A girl is dancing the Cry0EvernWig."

For I2V: Please use a description like "Dancing the Cry0EvernWig."

For more examples, please refer to the prompts I use; they can be found in the workflow.

Workflow

During testing, I found that multiple samplings introduce unwanted noise into the audio. Therefore, I recommend decoding the audio after just a single sampling. For more specific details, please refer to my workflow.

Tips

For I2V: I included some anime material during training, which gives the model a certain ability to handle anime-style images. However, it still tends to "realify" them (push them toward realism), so results may vary; a bit of luck is involved.

For T2V: Regarding stacking multiple LoRAs together, the visual results have been promising so far. However, interference between the audio tracks still needs further experimentation. Perhaps a variant trained without any audio information is needed; I'm not entirely sure, though.