
RDBT - Anima

Updated: Apr 17, 2026
Published: Apr 15, 2026
Type: LoRA (SafeTensor, verified)
Base Model: Anima
Hash (AutoV2): 4A9EDD3641

Latest: (4/14/2026) v0.24f dmd2 b:

Compared with "v0.24f dmd2": still 16-step dmd2. Images should be sharper. Cover images still use cfg 1 and 8 steps for demonstration.

See Update Log section for previous versions.


RDBT [Anima]

Finetuned circlestone-labs/Anima.

Dataset: ~60k images. Zero AI slop. Natural-language captions from Gemini. Includes common enhancements such as eyes, faces, hands, clothes, lighting, backgrounds, etc.

No overfitted default style. Still creative, but more stable and aesthetic.

Trained as a LoRA for better training and distribution efficiency, then cfg/dmd2 distilled for better stability and quality.

Sharing merges using this model is not allowed. If you think this LoRA is useful, please share the link or the LoRA file.


Usage:

Base ckpt:

The LoRA was trained on the pretrained Anima base ckpt. Official HF link.

If you just want a "finetuned base ckpt" and don't want to deal with LoRA loading, you can download this ckpt, which has the LoRA merged in.
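Merging a LoRA into a base checkpoint amounts to folding the low-rank update into each target weight. A minimal numpy sketch of the idea (the function name, shapes, and `scale` are illustrative, not the actual Anima layer layout or merge tooling):

```python
import numpy as np

def merge_lora(base_weight, lora_down, lora_up, scale=1.0):
    """Fold a LoRA pair into a base weight: W' = W + scale * (up @ down)."""
    return base_weight + scale * (lora_up @ lora_down)

# Toy shapes: a 4x4 weight with a rank-2 LoRA.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
down = rng.normal(size=(2, 4))   # rank x in_features
up = rng.normal(size=(4, 2))     # out_features x rank
W_merged = merge_lora(W, down, up, scale=1.0)
assert W_merged.shape == W.shape
```

A merged ckpt trades the flexibility of adjusting LoRA strength at load time for simpler, single-file distribution.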

Prompt

Prefer natural-language prompts. Prompt structure: style, subject, action, background.

LLMs understand logical natural language better than tags in random order.

There are two "rough" trigger words:

  • "digital anime illustration": 2d anime.

  • "digital art": 2d art but not anime, mostly digital art. (not many samples)
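The recommended structure above (trigger word, then style, subject, action, background) can be sketched as a tiny prompt builder; the helper name and defaults are mine, not part of any tool:

```python
def build_prompt(style, subject, action, background,
                 trigger="digital anime illustration"):
    """Assemble a natural-language prompt in the order:
    trigger, style, subject, action, background."""
    parts = [trigger, style, subject, action, background]
    return ", ".join(p for p in parts if p)  # skip empty fields

print(build_prompt(
    style="soft watercolor style",
    subject="a girl with silver hair",
    action="reading a book under a tree",
    background="sunny park in spring",
))
# -> digital anime illustration, soft watercolor style, a girl with
#    silver hair, reading a book under a tree, sunny park in spring
```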

Style:

This model does not provide a stable default style, so you need to specify style in prompt.

If the style effect is weak without cfg (cfg 1), try cfg 2. Or use a style LoRA, which usually doesn't need trigger words and thus always has full effect.

Quality tags:

You can omit all quality tags: 1) the quality of the training data is higher than "masterpiece"; 2) quality tags have been reinforced during distillation. Thus they don't have noticeable effects.

The same goes for negative tags. If you use cfg, there is no need to dump "score_1, blurry, worst quality, jpeg artifacts, extra arms, ... x100 words" into your negative prompt. Those things have been distilled out.

More effects:

To keep this model stable, I moved some training images to this LoRA. They have very cool effects/features, but are too creative/chaotic to be described, so they had to be trained as a separate LoRA.

Recommended settings:

  • sampler: "euler_a" "euler" "er_sde".

  • steps: dmd2 distilled: 8~16. cfg distilled: 20~30.

  • cfg scale: 1~2. Prefer cfg 1 (cfg disabled: smoother sampling, 2x faster). Enable cfg (cfg > 1) if you need higher prompt adherence (e.g. when a style is too weak). High cfg is not necessary. Cover images all use cfg 1.
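The 2x speedup at cfg 1 follows from the classifier-free guidance formula: the guided prediction is `uncond + cfg * (cond - uncond)`, so at cfg 1 it collapses to the conditional prediction alone and the unconditional forward pass can be skipped entirely. A toy sketch with scalars standing in for the model's noise predictions:

```python
def cfg_mix(cond, uncond, cfg):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward (and past) the conditional one."""
    return uncond + cfg * (cond - uncond)

# At cfg 1 the unconditional term cancels: only one model pass is needed.
assert cfg_mix(0.8, 0.2, 1.0) == 0.8

# Higher cfg pushes further in the conditioned direction.
print(cfg_mix(0.8, 0.2, 2.0))  # 1.4
```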


FAQ:

cfg distillation and dmd2 :

I recommend dmd2 to most users: it has higher stability and overall quality, and is also faster.

N-step dmd2:

N means the model can output a noise-free image after N steps. It's not a mandatory fixed setting; it's a lower limit. You can always use more than N steps. Lower N = stronger distillation.


Update log

[base model version] [finetuned model version] [distillation method] [distillation version]

f = finetuned, cfg = cfg distilled, dmd2 = distribution matching distillation.

Recommended versions:

  • p3 v0.24f dmd2 b: 16-step dmd2.

  • p3 v0.24f cfg b: close to vanilla anima.

  • p2 v0.23f dmd2 b: 4-step dmd2. +200% stability

===============

(4/14/2026) p3 v0.24f dmd2 b: Compared with "v0.24f dmd2": Images should be sharper.

(4/11/2026) p3 v0.24f cfg b: Fixed bug in previous 0.24fd.

(4/10/2026) p3 v0.24f dmd2: roughly 12~16-step distillation. This is intentional: low-step distillation = AI-slop style without complex texture.

(4/8/2026) p3 v0.24f cfg: Rebased on preview3. Finetuned base model trained with 40% fewer steps than v0.23. Less overfitted (?).

Update: There is a bug in distillation that causes a huge quality downgrade when used without cfg.

(4/4/2026) p2 v0.23f dmd2 b: Different distillation settings. Almost a 4-step dmd2. Maximum stability.

(4/4/2026) p2 v0.23f dmd2: 8 steps dmd2. First dmd2 anima (?).

(3/28/2026) p2 v0.23f cfg: Rebased on preview2. Distillation: improved small details and stability (removed a regularization in distillation target and changed to second-order method).

Voting result: v0.20fd won. Thanks for the feedback.

(3/24/2026) preview1 v0.20f cfg b: Distillation: Different settings optimized for anime, high contrast and saturation.

(3/23/2026) p1 v0.20f cfg: Dataset: More furry. Finetuned base model: from v0.12 with 100% more steps. Distillation: Fixed noisy pixels this time, really.

(3/14/2026) preview v0.19f cfg b:

Updated dataset. Some private datasets have been dropped. You might notice the style changed.

Fixed the high-freq artifacts in v0.12; now you should get a clear image without noisy pixels.

b: Testing new distillation settings. Higher contrast. Aligned with common anime art.

(2/19/2026) preview v0.12f cfg:

Better stability and details, extended dataset.

(3/8/2026) preview v0.11f cfg 512px:

Proof-of-concept version for v0.12. Same dataset and settings as v0.12, except it was trained at 512px resolution.

Released by request, as it might be very useful. Running the model at 512px and cfg 1 is extremely fast (10x faster, e.g. 30s -> 3s). If you don't have a beefy GPU, you can use this version to test your ideas/prompts in a few seconds.

(2/12/2026) preview v0.6 cfg:

CFG distilled only. No finetuning. Cover images are using Animeyume v0.1.

(2/3/2026) preview v0.2f cfg:

Speedrun attempt, mainly for testing the training script. Limited training dataset: only covered "1 person" images plus a little bit of "furry". But it works, and way better than I expected.