Updated: May 12, 2026
Base model: RDBT [Anima]
A distilled version of anima (aka "turbo"). It delivers faster generation and higher aesthetics with only 12 NFEs, 5x faster than the base model (60 NFEs: 30 steps with CFG).
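The speedup arithmetic can be checked directly: with CFG enabled, every sampling step runs the model twice (one conditional and one unconditional pass), so NFE = steps x 2. A minimal sketch (the `nfe` helper name is mine, for illustration only):

```python
def nfe(steps: int, cfg_scale: float = 1.0) -> int:
    """Number of function evaluations (model forward passes) per image.

    With CFG enabled (scale > 1) each step runs the model twice:
    once conditioned on the prompt and once unconditioned.
    """
    return steps * 2 if cfg_scale > 1 else steps

base = nfe(30, cfg_scale=4.0)   # base model: 30 steps with CFG -> 60 NFEs
turbo = nfe(12, cfg_scale=1.0)  # turbo: 12 steps, no CFG -> 12 NFEs
print(base, turbo, base // turbo)  # 60 12 5
```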

See Update Log section for version info.
See this page for the original LoRA, and this page for the PoC LoRA.
All cover images are "raw" output at 1024px, with no editing, upscaling, etc. Metadata is included.
Sharing merges using this model is not allowed.
Usage:
Settings (different from anima base model):
Steps: 8, 12, or 24.
CFG scale: 1-4. Cover images were generated without CFG (CFG 1). Enable CFG (CFG > 1) if you need stronger prompt adherence (e.g. a stronger style effect).
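The settings above can be wrapped in a small validation helper. This is a sketch under assumptions: the `turbo_settings` function and the kwarg names (`num_inference_steps`, `guidance_scale`, as commonly used by sampler frontends) are mine, not part of this model's release:

```python
RECOMMENDED_STEPS = (8, 12, 24)

def turbo_settings(steps: int = 12, cfg_scale: float = 1.0) -> dict:
    """Build sampler kwargs matching the recommended turbo settings.

    steps: one of 8, 12, or 24; cfg_scale: 1 (CFG off) up to 4.
    Kwarg names are illustrative, not tied to a specific UI.
    """
    if steps not in RECOMMENDED_STEPS:
        raise ValueError(f"steps should be one of {RECOMMENDED_STEPS}, got {steps}")
    if not 1 <= cfg_scale <= 4:
        raise ValueError("cfg_scale should be in the 1-4 range")
    return {"num_inference_steps": steps, "guidance_scale": cfg_scale}

print(turbo_settings())  # {'num_inference_steps': 12, 'guidance_scale': 1.0}
```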
Prompt
A specific style is required! This model does not provide a default style. Always prompt a specific style, or use a style LoRA; otherwise you will get a random/mixed style. This is a feature, not a bug: I use this model as a starting point for stacking style LoRAs.
(v0.32+) There are some "rough" trigger words. They are trained, so they do have an effect, but they are not "specific styles":
@anime sketch: Low complexity. Rough outlines.
@digital anime illustration: Typical "anime". Clear and fine outlines. General complexity.
@digital art: More complex lighting and textures than typical "anime".
@cinematic digital art: More lighting, postprocess effects, semi-realistic, etc.
As well as some traditional media:
@pencil drawing and hatching: Also includes colored pencil.
@watercolor painting: Scanned, on paper. Not many samples; needs CFG.
@ink wash painting: Not many samples; did not learn well.
@oil painting: Not many samples; did not learn well.
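Since the model has no default style, a trigger word should lead every prompt. A minimal sketch of that convention (the `build_prompt` helper and the short style keys are mine; the trigger strings are the documented ones):

```python
# Documented trigger words, keyed by a short name (keys are my own shorthand).
TRIGGERS = {
    "sketch": "@anime sketch",
    "anime": "@digital anime illustration",
    "digital": "@digital art",
    "cinematic": "@cinematic digital art",
    "pencil": "@pencil drawing and hatching",
    "watercolor": "@watercolor painting",
}

def build_prompt(style: str, subject: str) -> str:
    """Prepend a style trigger so the model does not fall back to a random/mixed style."""
    return f"{TRIGGERS[style]}, {subject}"

print(build_prompt("anime", "1girl, silver hair, night city"))
# -> @digital anime illustration, 1girl, silver hair, night city
```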
FAQ: The base model's built-in styles have less effect here. This is a common problem with fringe content + a distilled model + no CFG. You can:
Enable CFG. (2x slower)
Find/train a style LoRA.
Quality tags:
You can omit all quality tags: 1) the training data quality is higher than "masterpiece"; 2) quality tags were reinforced during distillation. Thus they have no noticeable effect.
The same goes for negative tags. If you use CFG, there is no need to dump "score_1, blurry, worst quality, jpeg artifacts, extra arms,... x100 words" into your negative prompt. Those things have been distilled out.
Update Logs
(ETA: 5/12/2026) v0.32.b: Same base model as v0.32. Less step distillation, giving higher diversity (and less stability). Reinforcement learning on the built-in trigger words, so they have stronger effects.
(5/10/2026): v0.32:
No more greenish tint or color shifting.
Second-order sampling; quality should be improved.
Trigger words have been reclassified to avoid the model learning a unified style. See the updated "Usage" section.
Old trigger words for backup (v0.29 and before):
"digital anime illustration": common 2d anime.
"digital art": 2D art but not anime, mostly digital art.
"anime sketch": simplified/unfinished anime drawing.
(4/27/2026): v0.29: The distillation algorithm was almost completely rewritten.
Increased diversity: different seeds now generate more varied images. This also improved the lighting range, styles, and LoRA compatibility.
Better details: this version can squeeze every single pixel out of the VAE.
(4/23/2026) v0.27: Improved stability, details.
(4/18/2026) v0.25: Based on anima p3.
For previous testing versions, see this page.

