✦ Arthemy Merge Model/Clip ✦
INSTRUCTIONS & DOCUMENTATION
Welcome to the Arthemy SDXL workflow!
Standard merging often "flattens" the AI landscape because the resulting model just slides somewhere in between two extremes.
I wanted a way to surgically slice, tune, and repair my models (and their CLIPs) to get the exact aesthetic I had in mind. Since native ComfyUI nodes couldn't do exactly what I wanted, I spent hours coding and debugging this custom suite (with the help of an LLM, because I'm a noob... but the nodes work!).
This workflow is designed to let you mix in memory, test live, and bake later. Here is a breakdown of how the workflow is structured, what the custom nodes do, and how you can use the values to cook your own ultimate model.
Installation
Unzip the file
Move "Arthemy_SDXL_Suite.py" into your "[...]ComfyUI/custom_nodes/" folder.
Drag and drop the workflow into your ComfyUI window and you're ready to go!
📦 The Setup: Live Testing & Saving Safely

The most effective way to do this: mix models in memory, test the combination live, and only save the version that actually improves your output.
The Loaders: Load your baked Checkpoint or model, CLIP, and VAE separately. Pro-tip: in the later stages of your workflow, it might be useful to experiment with how different iterations of your CLIP and Model behave with each other.
The Live Test: This is for your quick-and-dirty test generations. Keep your Seed on 'Fixed' and pick your target prompt. Copy this entire group to test your Model and CLIP at different stages of the process, or to try out different settings and prompts on the fly.
These are the extremes of the workflow you’re going to assemble. Between them, you can stack as many of the following nodes as you want. When the output looks good, you can just wire the output to the “Save Model / CLIP” group and “bake” all of these changes into a new model.
Save Model / CLIP: I know, for many a simple “Save Checkpoint” would be enough, and in theory it should be. Unfortunately, that node doesn’t save the modifications made to the CLIP, so we need a “split” save function to actually preserve everything. Crucial tip: you need to launch ComfyUI with the --force-fp32 flag to make sure the CLIP is saved correctly.
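For the curious, here is a minimal sketch of what a “split” save boils down to, assuming plain PyTorch state dicts and the safetensors library; the split_save function and the file naming are illustrative, not the suite’s actual API:

```python
import torch
from safetensors.torch import save_file

def split_save(unet_sd: dict, clip_sd: dict, prefix: str) -> None:
    # Hypothetical split save: persist the patched UNet and CLIP state
    # dicts as separate files so the CLIP edits are not silently dropped.
    # Casting to fp32 is the point of --force-fp32: if the CLIP was
    # patched in half precision, rounding can erase small merge deltas
    # before they ever reach the disk.
    unet_fp32 = {k: v.to(torch.float32) for k, v in unet_sd.items()}
    clip_fp32 = {k: v.to(torch.float32) for k, v in clip_sd.items()}
    save_file(unet_fp32, f"{prefix}_unet.safetensors")
    save_file(clip_fp32, f"{prefix}_clip.safetensors")
```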
LoRA Loaders (Model & CLIP)

This is one of the simplest groups.
You can use it to inject your LoRAs into a model, baking them into it. Since a LoRA with a very limited scope might ruin your model's flexibility, I highly suggest using this group with LoRAs trained for a style (which have a very wide scope).
To use this, I generally test a lot of LoRAs, one at a time, with extreme positive and negative values on both the Model and the CLIP side (just to see how they behave in any scenario).
Then, I stack all of these LoRAs in the group and start tweaking their weights to make them influence the model without being too destructive. In general, I suggest staying between -0.2 and 0.2.
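If you are curious what “baking” a LoRA actually does to a layer, here is a minimal sketch of the standard low-rank update, assuming PyTorch tensors; bake_lora and the toy shapes are illustrative, not the suite’s node:

```python
import torch

def bake_lora(weight: torch.Tensor, up: torch.Tensor,
              down: torch.Tensor, scale: float) -> torch.Tensor:
    # Standard low-rank update: W' = W + scale * (up @ down).
    # Small scales (e.g. the -0.2 to 0.2 range suggested above)
    # nudge the layer gently instead of overwriting it.
    return weight + scale * (up @ down)

# Toy example: a 768x768 layer patched by a rank-8 LoRA.
w = torch.randn(768, 768)
up, down = torch.randn(768, 8), torch.randn(8, 768)
w_baked = bake_lora(w, up, down, scale=0.2)
```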
Simple Merge (Model & CLIP)

The simplest way to change your model is to merge it with a secondary model.
1.0 -> 100% Your base Model or CLIP.
0.5 -> 50% Your base Model or CLIP / 50% Model or CLIP 2.
0.0 -> 100% Model or CLIP 2.
I know this, you know this, everybody knows this, but you can also use a model as an external ingredient by using weights ABOVE 1.0.
1.2 -> Your base Model or CLIP now subtracts a portion of the other model, mathematically moving away from its values.
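In code, the whole table above is a single linear interpolation per tensor; here is a minimal sketch, assuming PyTorch tensors (lerp_merge is a hypothetical name, not a node in the suite):

```python
import torch

def lerp_merge(a: torch.Tensor, b: torch.Tensor, w: float) -> torch.Tensor:
    # w = 1.0 -> pure A; w = 0.5 -> even blend; w = 0.0 -> pure B.
    # w > 1.0 extrapolates: w*A + (1-w)*B = A + (w-1)*(A - B),
    # which pushes A *away* from B instead of toward it.
    return w * a + (1.0 - w) * b

a, b = torch.tensor([1.0]), torch.tensor([0.0])
print(lerp_merge(a, b, 0.5))  # tensor([0.5000])
print(lerp_merge(a, b, 1.2))  # tensor([1.2000]) -- past A, away from B
```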
🧬 The Block Mergers (Model & CLIP)

What it does: Instead of blending whole models, you can slice them into semantic parts and choose how each slice is affected by the merge individually.
How the values work:
1.0 -> 100% Your Base Model.
0.5 -> 50% Your Base / 50% Secondary Model.
0.0 -> 100% Secondary Model.
Pro-Move: You aren't locked between 0 and 1! You can go above 1.0 or below 0.0 to do an inverse-merge, mathematically pushing your base model away from the secondary model's style.
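Conceptually, a block merge is the same interpolation with a different weight per slice of the state dict; here is a minimal sketch, assuming SDXL-style UNet key prefixes (the BLOCK_WEIGHTS granularity and values are illustrative, not the suite’s actual slicing):

```python
import torch

# Illustrative per-slice weights; real block mergers usually expose
# finer granularity than three coarse groups.
BLOCK_WEIGHTS = {
    "input_blocks":  0.8,  # early layers (composition) lean on A
    "middle_block":  0.5,  # even blend in the bottleneck
    "output_blocks": 0.2,  # late layers (texture/detail) lean on B
}

def block_merge(sd_a: dict, sd_b: dict, default: float = 1.0) -> dict:
    merged = {}
    for key, a in sd_a.items():
        w = default
        for prefix, block_w in BLOCK_WEIGHTS.items():
            if prefix in key:
                w = block_w
                break
        b = sd_b.get(key, a)  # fall back to A when B lacks the key
        merged[key] = w * a + (1.0 - w) * b
    return merged
```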
✨ The Tuners (Model & CLIP)

What it does: You don't need a second model for this. The Tuner works as a Multiplier: it scales the internal math of your model up or down to boost or fade specific visual concepts (like Textures or Composition) or CLIP concepts (like Semantic Focus).
How the values work:
1.0 -> 100% (Leaves the concept exactly as it is).
0.8 -> 80% (Fades the concept, softening its intensity by 20%).
1.2 -> 120% (Boosts the concept, amplifying its intensity by 20%).
Pro-Move: Go way past these numbers (e.g., 2.0 or -0.5) to violently force a style or completely mute a concept out of existence.
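Under the hood, this kind of tuning is just multiplying a subset of weights by a factor; here is a minimal sketch, assuming PyTorch state dicts (the tune function and prefix matching are illustrative):

```python
import torch

def tune(sd: dict, prefixes: tuple, factor: float) -> dict:
    # Scale matching layers by `factor`: 1.0 leaves them untouched,
    # 0.8 fades the concept by 20%, 1.2 boosts it by 20%; extreme
    # values like 2.0 or -0.5 distort or mute a concept outright.
    return {k: (v * factor if k.startswith(prefixes) else v)
            for k, v in sd.items()}
```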
🔧 The Restorers (Model & CLIP)

What it does: This is your Repair Tool. When you push a model or a CLIP too far with merges, it gets "fried" (deep-fried contrast, weird color hazes). Unlike the Tuner, the Restorer acts as a Ceiling and a Subtractor. It irons out the broken math without ruining healthy contrast, attacking only the extreme numbers.
1. Spike Flattening (Variance Ceiling): Use this if your model has burned, overly harsh contrast.
99.0 -> Off (Leaves all the spikes and extreme values untouched).
2.5 -> Mild Flattening (Tames only the worst, most broken spikes).
1.5 -> Aggressive Flattening (Heavily flattens the overall contrast).
2. Offset Flattening (Centering the Drift): Use this if your model is spitting out weird color tints or a foggy baseline.
0.0 -> 0% Flattening (Leaves the model's baseline exactly as it is).
0.5 -> 50% Flattening (Gently centers the wandering numbers).
1.0 -> 100% Flattening (Completely neutralizes the offset, pulling the math back to a healthy zero).
Pro-Move: You can go above these boundaries to over-correct a deeply fried model, or play with them to intentionally distort a healthy one.
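Here is one plausible reading of the two dials in code, assuming PyTorch tensors (restore is a hypothetical stand-in, not the suite’s actual implementation):

```python
import torch

def restore(t: torch.Tensor, spike_ceiling: float = 99.0,
            offset_strength: float = 0.0) -> torch.Tensor:
    # Spike flattening: clamp outliers beyond `spike_ceiling` standard
    # deviations (99.0 is effectively "off", 2.5 tames only broken
    # extremes, 1.5 flattens aggressively).
    mean, std = t.mean(), t.std()
    limit = spike_ceiling * std
    t = t.clamp(mean - limit, mean + limit)
    # Offset flattening: pull the wandering baseline back toward zero
    # (0.0 leaves it alone, 1.0 fully re-centers the mean).
    return t - offset_strength * t.mean()
```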
This workflow is Modular!
You can create any combination of these groups and make this workflow work the way you prefer. Have fun breaking things, testing limits, and building the exact tools you need to get the images you want!