Updated: May 8, 2026
This is a LoRA trained from a single image that is processed and trained in one workflow. I am going to leave it at that and let you figure out the rest, with one exception: in the LoRA Trainer, the only setting you should adjust if your computer hangs while training is the learning rate. I already have it set fairly low, but because of the processing the workflow is still extremely demanding, at around 96.3 GB. If you need to adjust the setting, move the 1 back one decimal place, so that 0.0010000 becomes 0.0100000 (ten times higher).
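As a quick sketch of the adjustment described above (the variable names here are illustrative, not the trainer's actual setting names), shifting the 1 back one decimal place multiplies the learning rate by ten:

```python
# Default learning rate as set in the LoRA Trainer node.
default_lr = 0.0010000

# Adjusted value: the 1 moved back one decimal place.
adjusted_lr = 0.0100000

# The change is a plain factor-of-ten increase.
print(adjusted_lr / default_lr)  # → 10.0
```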

