i rewrote the rohitgandikota sliders repository for various reasons. you can now train 'sliders' with fractional 'scales', such that one subset of your data is optimized to appear at '<lora:name:0.15>', another at '<lora:name:1.5>', and another at '<lora:name:-3>', purely as an example.
we suggest using this feature to explore clustering subsets of your training data at 'scale' magnitudes near each other: for example, 3 dataset splits at '0.8, 1.0, 1.2', and another 3 dataset splits, expressing a different visual idea, at '1.8, 2.0, 2.2'.
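as a minimal sketch of the clustering suggestion above: the split names, file names, and the (image, scale) pairing here are all hypothetical illustrations, not the repository's actual config format.

```python
# hypothetical sketch: two 'visual ideas', each clustered as three dataset
# splits at nearby scale magnitudes. the real config format lives in the repo.
def build_scale_labeled_pairs(splits):
    """splits: dict mapping a split name to (list_of_images, target_scale).
    returns a flat list of (image, scale) training pairs."""
    pairs = []
    for _name, (images, scale) in splits.items():
        pairs.extend((img, scale) for img in images)
    return pairs

splits = {
    # one visual idea clustered near scale 1.0 ...
    "idea_a_low":  (["a1.png", "a2.png"], 0.8),
    "idea_a_mid":  (["a3.png"], 1.0),
    "idea_a_high": (["a4.png"], 1.2),
    # ... and a different visual idea clustered near scale 2.0
    "idea_b_low":  (["b1.png"], 1.8),
    "idea_b_mid":  (["b2.png"], 2.0),
    "idea_b_high": (["b3.png"], 2.2),
}
pairs = build_scale_labeled_pairs(splits)
```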
full training code and training dataset available at https://github.com/SQCU/sliders/tree/ambitious_batcher.
this adapter model was trained on an illustriousXL 'slopmerge' over a ~4000-sample dataset at batch size 4 with 8 gradient accumulation steps and dynamic gradient accumulation (see the repository, or read https://arxiv.org/abs/1812.06162 for reference). it responds very eagerly to the terms 'bracketed', 'simple background', and 'chalk drawing cartoon sketch'. if you are curious why the images shrink and turn green at high 'lora scale', i recommend constructing your own scale-labeled training dataset and giving the repository a try: at time of publication it costs under 33 cents to operate a single rtx 4090 for a full hour, far more time than it takes to train one batch-scaled 'lora'.
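for context on 'dynamic gradient accumulation': the linked paper (McCandlish et al.) defines a 'simple noise scale' that predicts a useful batch size from gradient statistics. below is a hedged sketch of that estimator and of picking an accumulation step count from it, using two gradient-norm measurements at different batch sizes. function names and numbers are illustrative assumptions, not the repository's actual implementation.

```python
import math

def simple_noise_scale(g2_small, g2_big, b_small, b_big):
    # unbiased estimates per the linked paper: measure the squared gradient
    # norm at two batch sizes, then separate the 'true' gradient norm from
    # the per-sample noise trace. all inputs here are hypothetical readings.
    g2 = (b_big * g2_big - b_small * g2_small) / (b_big - b_small)
    trace_sigma = (g2_small - g2_big) / (1.0 / b_small - 1.0 / b_big)
    return trace_sigma / g2  # 'simple' noise scale, B_simple

def accumulation_steps(noise_scale, micro_batch, max_steps=64):
    # choose how many micro-batches to accumulate so the effective batch
    # approaches the estimated noise scale, clamped to a sane range.
    return max(1, min(max_steps, math.ceil(noise_scale / micro_batch)))

# example: squared grad norm 10.0 at batch 4, 4.0 at batch 16
b = simple_noise_scale(10.0, 4.0, 4, 16)   # -> 16.0
steps = accumulation_steps(b, 4)           # -> 4 accumulation steps
```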
