A very quick article to explain how I train ANIMA LoRAs with relative success. That's my own take on this, not an absolute truth 😆
1) The tool
I am using https://github.com/gazingstars123/Anima-Standalone-Trainer because it has a nice frontend. It uses Kohya sd-scripts under the hood and does not expose all the parameters (some of which, from my experience training IXL LoRAs, I feel could be added), but it still does a very nice job.
Installing the tool is out of scope for this tutorial, but please note that you'll need:
npm 10.9 and node 22
python 3.10
the proper GPU driver installed (CUDA 12 or 13, ROCm, or whatever).
The only "hack" i did is, since i am using it not locally but on a remote server (in my garage), i had to change the "localhost" into the proper IP in the server.js file:
sed -i 's/localhost/192.168.1.81/g' training-ui/server.js
2) The dataset
For this example, I'll be re-using, as-is, a dataset made for Illustrious. It was a bunch of generated pictures made for my Puffy Lips LoRA (as an intermediary step, before using this unreleased LoRA to get the concept locked in on another dataset). It's 50 pictures, already tagged.

One thing I can add is that I have been using https://huggingface.co/SmilingWolf/wd-eva02-large-tagger-v3 for tagging now, instead of https://huggingface.co/SmilingWolf/wd-v1-4-moat-tagger-v2
To do so, I changed a file in the Forge extension I have been using:
In extensions/stable-diffusion-webui-wd14-tagger/tagger/utils.py, in the interrogators dict, I added this:
'wd3-eva02-large': WaifuDiffusionInterrogator(
    'WD3 EVA02 Large',
    repo_id='SmilingWolf/wd-eva02-large-tagger-v3'
),
3) Configuring the tool
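(Small aside: if you want to sanity-check that tagger outside of Forge, here is a minimal standalone sketch of mine, not part of the extension. The model.onnx/selected_tags.csv filenames, the 448x448 BGR float input and the 0.35 threshold are assumptions based on how SmilingWolf's WD taggers are usually packaged; sample.png is a placeholder.)
from huggingface_hub import hf_hub_download
import onnxruntime as ort
import numpy as np
from PIL import Image
import csv

repo = "SmilingWolf/wd-eva02-large-tagger-v3"
model_path = hf_hub_download(repo, "model.onnx")        # assumed filename
tags_path = hf_hub_download(repo, "selected_tags.csv")  # assumed filename

session = ort.InferenceSession(model_path)
inp = session.get_inputs()[0]
size = inp.shape[1]  # NHWC layout; 448 for the WD taggers I have used

# WD taggers take raw float32 BGR pixels, no normalization
img = Image.open("sample.png").convert("RGB").resize((size, size))
x = np.asarray(img, dtype=np.float32)[:, :, ::-1]  # RGB -> BGR
x = np.ascontiguousarray(x[np.newaxis, ...])

probs = session.run(None, {inp.name: x})[0][0]
with open(tags_path, newline="") as f:
    names = [row["name"] for row in csv.DictReader(f)]
print([n for n, p in zip(names, probs) if p > 0.35])  # crude threshold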
The first step after installing the tool is pointing it to the proper files and folders:

In the bottom left, click on Global Settings and add the path to the ANIMA model components (DiT, LLM, VAE):

Also, don't forget to add the path to the python venv folder where the libraries have been installed:

Don't forget to save your settings. (NB: you can also add a pretty picture as a background image; for the sake of readability, I did not do it here)
4) Creating a job
Now you can click on "New" in the top left to create a job. After the first one is created and configured, you will be able to clone it for easier setup.

Once a job is created, a folder is created for it in the training_ui/jobs folder of the tool. That's where you will find the resulting logs/samples and safetensors models (which will need cleanup afterwards to avoid clogging your HDD).
5) The training tab

Here you'll find the usual suspects: Learning Rate, LR Scheduler and Optimizer. To be honest, I am using the defaults for ANIMA; I tried Prodigy and Adafactor but they did not yield good results in my tests. You also set your "output name" here, which goes into the metadata and is used for the LoRA filename (I put the same thing as the job name to avoid confusion).

You can also set the number of epochs and the max number of steps. I am sticking with epochs out of habit (between 10 and 15), but setting the number of steps between 1500 and 2000 could be easier than doing some multiplication to hit your intended target.
Remember the usual formula:
number of steps = number of epochs x number of repeats x number of pictures / batch size
The goal is usually to target 1000 to 2500 steps depending on what you try to achieve. NB: the number of repeats and the batch size are set elsewhere.
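As a quick sanity check, here is that arithmetic with the numbers I end up using in this very article (50 pictures, 2 repeats, batch size 1):
pictures, repeats, batch_size = 50, 2, 1
steps_per_epoch = pictures * repeats // batch_size  # 100 steps per epoch
for epochs in (10, 15):
    print(f"{epochs} epochs -> {epochs * steps_per_epoch} steps")  # 1000 and 1500
Both land in the 1000-2500 sweet spot mentioned above.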
If you have manually installed flash attention (this can be done by just adding flash-attn to the requirements.txt file before running the setup command), you can turn it on here to speed up training a bit.
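A quick way to verify flash attention actually landed in the training venv (this check is mine, not part of the tool):
try:
    import flash_attn
    print("flash-attn", flash_attn.__version__, "is available")
except ImportError:
    print("flash-attn missing: add it to requirements.txt and rerun the setup command")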

This is also where you can activate CPU offloading. In a previous version of the tool it failed on my setup, and I did not test it again, but it should reduce VRAM consumption.

The last part in this tab (left as-is, but this is important) is the caching of latents and prompts:

This will add files to the folder where your dataset is (two folders: cache_text_encoder and latent_cache), which you may want to clean up later too if needed.
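If you want, here is a small cleanup sketch (mine, not part of the tool; the dataset path is a placeholder):
import shutil
from pathlib import Path

dataset = Path("/path/to/your/dataset")  # placeholder: point at your dataset folder
for name in ("cache_text_encoder", "latent_cache"):
    target = dataset / name
    if target.is_dir():
        shutil.rmtree(target)
        print(f"removed {target}")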
6) The dataset tab

Nothing really fancy here: you just point it to the folder where your pictures are, without bothering with the subfolder naming stuff like in the good ol' kohya_ss frontend (that was a pain to explain and use 😅). It is also here that you set the number of repeats. Since I had 50 pictures and usually go for at least around 100 steps per epoch, 2 it is for the value 😊
PS: what's nice is that you can add multiple folders and group your data that way. It could be great for multi-character LoRAs

Two important parameters are also at the top of the tab:
Resolution(s): by default it is set to 1536. I don't think ANIMA is able to handle that, and if you are using pictures made for an Illustrious LoRA, you probably have smaller images anyway. So I set it to 1024 here, but I also did some training with good results using larger pictures and 1280 as the resolution. Feel free to experiment.
Batch Size(s): if you are low on VRAM, keep it at 1. With 1, I usually consume around 6 GB of VRAM; I did a test at 2 and almost ran out on my RTX 3060, with 11.7 GB of VRAM used.
7) The network tab
Here I go for a classic 32/16 LoRA. You can also enable a full finetune of the model with the latest version of the tool, but let's wait for a V1 of ANIMA instead of a preview 3 😉

I am also keeping "Train UNet Only" enabled to avoid messing with the llm_adapter part.
NB: this is also where you can point to a previous version of the LoRA to continue training or finetuning it, for example after changing the dataset a bit

8) The prompt tab
That's the tab for getting samples during training. You first set up the global negative prompt, then you can add a few positive prompts using "Add prompt". You can set the resolution, seed, CFG and number of steps globally, either beforehand or afterwards.

For reference, here is the full negative prompt (I believe it was built-in):
worst quality, low quality, score_1, score_2, score_3, blurry, jpeg artifacts, sepia, low quality, worst quality, blurry, bad anatomy, extra limbs, deformed, watermark, text, signature, bareness, artifacts, hands, copyrights name, jpeg_artifacts, scan_artifacts, bad hands, missing fingers, extra digit, fewer digits, artistic error, ye-pop, deviantart, logo, patreon logo
For my positive prompts, I start them with:
masterpiece, best quality, score_9, detailed, newest,
PS: those tags were not in my dataset captions, by the way; I avoid them during training.
With everything set up, it is now time to start the training. Remember to hit save before anything (in the top right).

9) The training
You can now click on Train to get it started. You can follow along in the console tab: watch the steps advance once everything is loaded and cached, and take a glance at your hardware in the bottom right (temperature, load, power).
When a sample is generated, you'll see the "triangle" display shown in the screenshot:

In the samples tab, you'll get the pretty pictures for each epoch (or every XXX steps if you are using that method instead).

Once it has ended, you can go get your LoRAs for evaluation in the folder:

PS: the last epoch is simply named after your output name.
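For example, here is a quick way to list what a job produced (the 6-digit epoch suffix matches the filenames in the sd_mecha example further down; the "demo" job name is a placeholder):
from pathlib import Path

job_dir = Path("training_ui/jobs/demo")  # placeholder: your job's folder
for f in sorted(job_dir.glob("demo-*.safetensors")):
    print(f.name)  # demo-000001.safetensors, demo-000002.safetensors, ...
print(job_dir / "demo.safetensors")  # the final one, named after the output name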
As you can see from the samples, after around 9-10 epochs things stabilize (that's the cosine LR scheduler effect).

At this point, I usually select a few epochs and run a bunch of tests using Forge and an XYZ plot. A good-looking epoch in the samples could still behave strangely on other prompts.
Also, to avoid outliers, I often take a few epochs and average them to smooth the values (if an epoch has a quirk, merging it with others can help). I am using sd_mecha to do so, but I did not do it here.
Nonetheless, here is some example code to merge epochs 9/11/13 of a "demo" LoRA:
import sd_mecha as sdm

# load epochs 9, 11 and 13 of the "demo" LoRA
models = [sdm.model(f"demo-{i:06d}.safetensors") for i in [9, 11, 13]]
# average them and write the result out
sdm.merge(sdm.n_average(*models), output="demo-merge.safetensors")
AND that's all folks. I'll upload the resulting LoRA and link it here. 😘
Don't forget to clean up the various folders or you are going to wonder where your storage went 🤣
Thanks for reading! 💜
10) The result!
You'll find the LoRA here: https://civitai.red/models/2599733/puffy-lips-and-more-for-anima

