
Workflows: A Beginners Tutorial & Hands on Walkthrough Part 5


This is part five of a series of articles I'm writing on this subject. This one is about image tweaking and upscaling.

You can find part one here:
workflows-a-beginners-tutorial-and-hands-on-walkthrough-part-1

You can find part two here:

workflows-a-beginners-tutorial-and-hands-on-walkthrough-part-2

You can find part three here:

workflows-a-beginners-tutorial-and-hands-on-walkthrough-part-3

You can find part four here:

workflows-a-beginners-tutorial-and-hands-on-walkthrough-part-4

Introduction:

I love building stuff. I'm an engineer and a carpenter, and now I build waterparks for a living. How things work, and creating things, is my passion. I create workflows that get used by a lot of people, and I get asked a lot of questions, especially about ComfyUI in general and how to build with it. So I thought I'd take the time to put together a few articles over the next few (weeks, months, however long) to help people understand the basics, then move on to more advanced workflows.

This part is all about image tweaking and upscaling:

  • Image adjustment

    • Introduction to quality modifiers

    • Latent & model tweakers

    • Understanding post production

  • Style adjustment

    • Understanding LoRAs and their impacts on models

    • Understanding the use of different base models

Image adjustment

Getting started:

Quality Modifiers:

Quality modifiers are specific prompt tags that strongly influence the overall aesthetic, detail level, sharpness, and polish of generated images. Some examples:

  • Core positive quality tags:

    • masterpiece, best quality, newest, very aesthetic / very awa (especially strong in related NoobAI/Illustrious variants)

  • Resolution & detail boosters:

    • highres / absurdres, highly detailed, intricate details, hyper-detailed, 8k, UHD, 4k

  • Negative quality tags:

    • worst quality, low quality, bad quality, blurry, jpeg artifacts, ugly, deformed, bad anatomy, watermark, censorship

These tags are typically placed at the start or end of prompts and work especially well with Illustrious due to its training on rated Danbooru-style datasets with quality/aesthetic scoring. They act like "magic words" to guide the model toward higher-quality outputs.
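If you like scripting your prompt assembly outside ComfyUI, the tag logic above boils down to comma-joining strings, which is exactly what lands in the CLIP Text Encode node. A minimal Python sketch (the tag lists are examples from this article; the subject text and helper name are just illustrative):

```python
# Positive and negative quality tags, comma-joined into final prompt strings.
# Tag lists are examples from the article; the subject line is made up.

POSITIVE_QUALITY = ["masterpiece", "best quality", "very aesthetic",
                    "highres", "highly detailed"]
NEGATIVE_QUALITY = ["worst quality", "low quality", "blurry",
                    "jpeg artifacts", "bad anatomy", "watermark"]

def build_prompt(subject, quality_tags, tags_first=True):
    """Join a subject prompt with quality tags, comma-separated."""
    parts = (quality_tags + [subject]) if tags_first else ([subject] + quality_tags)
    return ", ".join(parts)

positive = build_prompt("a squirrel eating spaghetti in a church", POSITIVE_QUALITY)
negative = ", ".join(NEGATIVE_QUALITY)
print(positive)
print(negative)
```

Whether the tags go first or last is a stylistic choice; both placements work, as noted above.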

I've included a full table in the workflow. Feel free to cut & paste it to yours (leave the credits up top please)

Practical exercise:

  • Open the workflow. ⚠️ Make sure you have made the adjustments in the "getting started" section above

  • Run the image for reference. Our favorite little friend should pop out

    Italian Sqiurrel_00026_.png
  • 💡Here's a cool little trick. To save the reference image so it never disappears, unhook the spaghetti from it. Now hold the ALT button and click on it. It will create a second save image node.

    Screenshot 2026-05-10 095631.png
  • Drag the image over and hook the new one up like this. As long as you make no modifications to that image, it will never disappear. Double click on the name and rename it to "base image".

  • Working with modifiers.

    Screenshot 2026-05-10 094719.png
  • As you can see, I have added a second positive prompt and renamed it "Quality Tokens". Notice how I have used "Join Strings" with a "," as a delimiter, and pay attention to how I have it hooked up to the CLIP Text Encode (Prompt) node.

    • This is extremely useful when you are running multiple iterations, as you really don't modify this.

  • Cut & Paste the "Color Control" tags into the quality tokens prompt box (ctrl+c, ctrl+v), then hit RUN

    Screenshot 2026-05-10 100545.png
  • As you can see with just 3 words, the image is much more colorful and vibrant

    Screenshot 2026-05-10 100720.png
  • Before moving on, play around with different modifiers to get familiar with what they do. Make sure to remove them from the prompt box at the end for the next lesson.

Latent or model tweakers

There are many different nodes that will adjust an image without using LoRAs or other modifiers. In past lessons, I showed you the ModelSamplingAuraFlow node as well as I2I. For this lesson, we are going to use a very powerful (and the OG) adjuster called FreeU_V2.

  • Unbypass the node by using the switch. Note the basic settings in the image below.

    Screenshot 2026-05-10 101754.png
  • Run it once at these settings to see what the baseline is. Pay attention to the blacks that separate the fingers and the spaghetti.

  • Now set s1 to 0.00 and s2 to 1.00. Notice the change. Now flip those settings and run it again. Notice the changes to the fine details? When you are done, set them back to 0.9 and 0.2 respectively.

  • Now to understand settings, let's break it. Set b1 to 0.00 and hit Run

    • Every setting has a sweet spot. Learning where it is and what each function does separates "okay" images from "stellar" images.

  • Now set b1 to 2.0 and hit Run. Notice the details on the church pews in the back? Look at the sauce on the spaghetti.

  • Before moving on, play with these settings. Make sure to BYPASS the node when done for the next exercise
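For intuition on what those four sliders touch: FreeU scales the UNet decoder's "backbone" features (b1/b2) and its skip-connection features (s1/s2) before they are merged. The sketch below is a toy simplification in plain Python, not the real implementation (FreeU_V2 actually attenuates only the low-frequency part of the skips, and b1/s1 vs b2/s2 act at different decoder stages):

```python
# Toy illustration of FreeU's four knobs, not the real implementation:
# in the UNet decoder, b1/b2 amplify "backbone" features at two stages,
# while s1/s2 scale the skip-connection features before the merge.

def freeu_stage(backbone, skip, b, s):
    """Scale backbone by b and skip by s, then merge element-wise."""
    return [b * h + s * x for h, x in zip(backbone, skip)]

backbone = [1.0, 2.0, 3.0]   # coarse structure / denoising signal
skip = [0.5, 0.5, 0.5]       # fine detail carried across the UNet

baseline = freeu_stage(backbone, skip, b=1.0, s=1.0)
boosted = freeu_stage(backbone, skip, b=1.3, s=0.9)   # FreeU-ish values
print(baseline)  # [1.5, 2.5, 3.5]
print(boosted)
```

This is why cranking b boosts global structure and contrast while s mostly changes the fine details you saw in the exercise.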

🎨 Post production

For any of you who know my workflows, you know I am big on post-processing an image. Making an image pop, adding color, changing the hue, adding realism filters, film grain, etc., makes images POP!

Here's an example of one of my post production Suites:

workflow (3).png

For this exercise, we'll stick to ultra simple core nodes so you can understand the concept.

  • Making corrections to an image inline, instead of dragging it out to a program or another workflow, massively saves time. Does it make the workflow colossal and intimidating?

    ..... Ummmmmmm 🤔

  • But I digress....

  • When you take a picture on your phone, don't you make adjustments before you post it?

    • okay, guys maybe not 🙄, but women.....

  • There are many different nodes you can use. The most fundamental ones should do the following:

    • Denoise or JPEG artifact removal

    • Sharpen

    • Color correct

    • Brightness/contrast

    • Hue/saturation

  • I've placed 2 core nodes in the workflow for you to understand the concept. Play around with them before moving on.

image.png
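Under the hood, a brightness/contrast node is just per-pixel arithmetic with clamping. Here's a dependency-free Python sketch of the idea (the function name and the mid-gray pivot are my own simplification; real nodes operate on float tensors and often add gamma):

```python
# Bare-bones brightness/contrast on 0-255 channel values.
# Contrast pivots around mid-gray (128); brightness is a flat offset.

def adjust(pixels, brightness=0, contrast=1.0):
    """Apply contrast then brightness to a list of 0-255 values, clamped."""
    out = []
    for p in pixels:
        v = (p - 128) * contrast + 128 + brightness
        out.append(max(0, min(255, round(v))))
    return out

row = [0, 64, 128, 192, 255]
print(adjust(row, brightness=20))   # lift everything by 20 (clamped at 255)
print(adjust(row, contrast=1.5))    # push values away from mid-gray
```

Hue/saturation nodes work the same way, just in a different color space (HSV instead of RGB).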

Style Adjustment

In this section I am going to discuss the use of LoRAs and models to adjust your style and image, and show you how LoRAs affect your output.

Using LoRAs

  • ⚠️ Verify that you have "dramatic lighting slider" and "add micro details" in the LoRA loader nodes and that they are set to 0, and make sure Juggernaut is loaded in the checkpoint loader.

  • Run the image so that you get a baseline.

  • Now set the dramatic lighting slider weights to 1.0

    • Note:

      • Model weights affect style, composition, and structure

      • Clip weights affect prompt adherence and text comprehension

  • See how it changes the lighting a little bit? Now adjust it to 4.0 and run it again.

    Italian Sqiurrel_00058_.png
  • Do the same with the "add micro details" LoRA.

  • ⚠️ set the LoRA weights to 0.0 before going to the next step

Changing Models

  • Let's change the model to Animosity. Hit Run

  • If you haven't made any other changes, you should get an image like this

    Italian Sqiurrel_00059_.png
  • Wait...the dramatic lighting slider is turned off (even I had to double-check).

    • You will find that on a lot of checkpoint merges, creators will "bake in" certain LoRAs. It adds to the total effect and style of the checkpoint. I have several different LoRAs baked in, as well as a block merge and other things, to pull out the style I wanted.

Effects of LoRA models and weights

  • Set the "add micro details" MODEL WEIGHT ONLY to 1.0. Leave the clip at 0.0 and hit Run

    Italian Sqiurrel_00065_.png
  • Weird, huh?

  • Now set both Model and Clip to 1.0 and hit Run

    Italian Sqiurrel_00066_.png
  • Why did this happen?

    • Most LoRAs are trained on relatively small datasets. Judging by this, I'd assume there is no image of a squirrel in the dataset (which I wouldn't expect there to be). THIS is NORMAL. If I wanted an accurate image of a squirrel, I'd use a LoRA specifically trained on squirrels.

    • Disclaimer: This LoRA is solid. I could have chosen any LoRA with the same result.

  • It is important for you to choose the right LoRAs for what you want to do, and understand how the clip and model weights affect the outcome.
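For the curious, here is the math those two sliders control: a LoRA ships a pair of low-rank matrices per patched layer, and each layer becomes W' = W + weight · (B·A), once for the UNet (the model weight) and once for the text encoder (the clip weight). A toy sketch in plain Python (real LoRAs also scale the update by alpha/rank, and the matrices are thousands of rows, not 2×2):

```python
# Toy LoRA merge: W' = W + weight * (B @ A), on tiny nested-list matrices.

def matmul(B, A):
    """Multiply B (m x r) by A (r x n) into an m x n nested list."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply_lora(W, B, A, weight):
    """Merge a low-rank update into a weight matrix: W + weight * (B @ A)."""
    delta = matmul(B, A)
    return [[w + weight * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]   # original 2x2 layer weight
B = [[1.0], [0.0]]             # rank-1 factors (2x1 and 1x2)
A = [[0.5, 0.5]]

print(apply_lora(W, B, A, weight=0.0))  # weight 0 -> layer unchanged
print(apply_lora(W, B, A, weight=1.0))  # full-strength update applied
```

Setting a slider to 0 really does leave the checkpoint untouched, which is why the baseline run above matched the stock model.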

Summary:

A game:

500 to the first person who can correctly tell me the squirrel's name. There is only one correct answer. Comment below. 💡It has something to do with AI.

I hope this was a valuable article and that you enjoyed it. Please leave me feedback.

Please comment and let me know what you think

Instagram: https://www.instagram.com/synth.studio.models/

Buy me a☕ https://ko-fi.com/lonecatone

This represents many hours of work. If you enjoy it, please 👍like, 💬comment, and feel free to ⚡tip 😉
