
Lonecat's Simple Workflows Guide


May 6, 2026


  • Workflow Guide:

You can find the workflow grouping HERE. They are also attached to this article.

  • ⚠️ check the model page for the latest updates

Introduction:

I've been told numerous times that a lot of the workflows that I put out there are complex and intimidating. I wanted to put something out there that was easy to use and understand, but without sacrificing the quality and level of organization that has become my trademark.

These are simple versions of my workflows, broken down into easy-to-understand sections with plenty of notes and limited options for someone new to Comfy.

If you are brand new to ComfyUI and are looking to learn or get a better understanding, read my workflows-a-beginners-tutorial-and-hands-on-walkthrough to better understand some of the things I am going to discuss.

In this guide we will talk about the following:

  • Downloading the models

  • The Prompt assist section

  • I2I

  • Controlnet

  • Hi-res upscale

  • Detailers

Getting Started:

The different areas (from left to right)

  • Note: some models (such as Anima) will not use all of these functions

  • example.png

    Load Image: Needed for I2I, Prompt Assist, as well as Controlnet.

  • Florence Prompt: analyzes the image and generates a prompt accordingly

  • Controlnet: using a mask (or map, technically), it controls the image either through conditioning or model modification.

  • Main Area: set your model, seed, steps, sampler, etc. here

  • Hi-Res fix: Acts as both a detailer and an upscaler

  • Detailers: face, eyes, hands.

  • Post production: two of my most used image enhancement nodes, plus the save image information

Downloading the models, etc.

  • In the notes section, I have listed out the upscale models and controlnet models.

  • The base models can be taken from the Comfy Manager model area or from CivitAI.

  • Under the detailer area you will find the Ultralytics and SAM models.

Image Load/ Florence Prompt assist/ I2I

  • Screenshot 2026-05-06 171846.png

    The Load Image area is where you place your image for controlnet, Prompt assist, and I2I

    • Use the Aspect ratio node to adjust the size of the image you want to process

    • The "cropped image" area shows you what the Controlnet will be using to set its mask

  • Prompt Assist:

    • You will need to choose one of the prompt-gen models and the precision

    • Instructions are below

  • Aspect Ratio node:

    • This controls the size as well as the upscale size of the final image.
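As a rough illustration of what an aspect-ratio node does (this helper and its defaults are my own sketch, not the actual node's code): it picks a width and height that match the chosen ratio, hit a pixel budget, and stay divisible by 8 so the latent dimensions work out.

```python
def sd_dimensions(ratio_w, ratio_h, megapixels=1.0, multiple=8):
    """Derive width/height for a given aspect ratio and pixel budget,
    rounded to a multiple of 8 as Stable Diffusion latents require.
    Illustrative sketch only."""
    target = megapixels * 1024 * 1024
    scale = (target / (ratio_w * ratio_h)) ** 0.5
    w = int(round(ratio_w * scale / multiple)) * multiple
    h = int(round(ratio_h * scale / multiple)) * multiple
    return w, h
```

For example, a 1:1 ratio at one megapixel lands on the familiar 1024x1024, while 16:9 gives a wide frame of roughly the same pixel count.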

Controlnet:

  • Screenshot 2026-05-06 172439.png

    This varies slightly from model to model, but is basically the same function.

  • AIO Aux Preprocessor: This generates the form of controlnet

    • Below the node, I have listed my favorites for each model

    • ⚠️ I cannot stress enough how important it is to have clear high quality images

    • 🛑 This is not Face Swap or a LoRA. You will only get poses and composition out of this.

⚙️ User settings

  • Screenshot 2026-05-06 172945.png

    Model/ clip/ VAE: this will vary between workflows (it will sometimes be a checkpoint or have a dual clip loader).

    • Make sure to read the instructions below.

    • 💡Most errors regarding mat sizes or tensors occur because of a mismatch right here. Make sure you use the right combination of model/ clip/ VAE.

  • Seed: I like to use a fixed seed. That way I can make modifications without changing the entire thing. This is tied to every part of the workflow.

  • KSampler Settings: This controls the KSampler & refiner

    • 💡These are over in the Get/Set node area

      Screenshot 2026-05-06 173545.png
    • I use two passes for the following reasons:

      • The first pass creates the image and has controlnet hooked up to it. Controlnet has a bad tendency to leave fuzzy or grainy images

      • The second pass does not have controlnet hooked up to it and is set for refinement, leaving a significantly better quality image than one pass, even without using controlnet

    • Instructions for use and preferred settings are in the notes below.

  • Option switches: Pretty self-explanatory. Turns each section on and off

    • 💡When turning off the image you will notice it automatically bypasses I2I, Prompt assist, and controlnet. When you turn it back on the switches sometimes need to be redone.

  • Prompts and LoRAs: Enter your prompts and place your LoRAs here

    • Note: some models use a negative prompt as well. You will find that here.
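The two-pass reasoning above comes down to simple arithmetic: a second KSampler at denoise below 1.0 only re-runs the tail of the step schedule, so it refines the first pass rather than redrawing it. A rough sketch of that intuition (my own helper, not a ComfyUI API, and a simplification of how samplers actually schedule steps):

```python
def refiner_share(total_steps, denoise):
    """Roughly how a refiner pass at denoise < 1.0 behaves: it re-runs
    about denoise * total_steps of the schedule's tail, leaving the
    rest of the first pass's image intact. Illustrative only."""
    rerun = round(total_steps * denoise)
    kept_fraction = 1.0 - denoise
    return rerun, kept_fraction

# e.g. 30 steps at denoise 0.5: re-runs ~15 steps, keeps ~50% of pass one
```

This is why the second pass cleans up controlnet's fuzziness without losing the pose: most of the first image survives.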

Hi res fix:

  • image.png

    This does all of your upscale work as well as makes modifications like sharpening or adding skin detail, color, etc.

  • LOW VRAM: Slide the yellow slider below to the right. To 2x a 1080p image on 6 GB of VRAM you need to go all the way to 2. 8 GB or above can handle 1.25-1.5; larger cards can leave this at 1.0.
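To see why the slider matters, some back-of-the-envelope arithmetic (float32 tensors assumed; real VRAM use also includes model weights and activations, so treat this as illustrative only): doubling both dimensions quadruples the pixel count, and therefore the memory each image tensor needs.

```python
def image_tensor_mb(width, height, channels=3, bytes_per_value=4):
    """Rough size of one float32 image tensor in MB. Illustrative only:
    real VRAM use also includes model weights and activations."""
    return width * height * channels * bytes_per_value / (1024 ** 2)

base = image_tensor_mb(1920, 1080)      # 1080p: ~23.7 MB per tensor
upscaled = image_tensor_mb(3840, 2160)  # 2x upscale: 4x the pixels and memory
```

Tiling splits that upscaled tensor into pieces that are processed one at a time, which is why smaller cards need the slider pushed further right.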

Detailers:

  • Screenshot 2026-05-06 175506.png

    If you have a character LoRA, place it in the main area as well as here and dial this down to about 0.4-0.6

  • Screenshot 2026-05-06 175652.png

    Make sure you load the SAM model (for all) as well as the proper detector (for each section) here.

  • Screenshot 2026-05-06 175827.png

    denoise & feather:

    • denoise is the amount the detailer changes the image. 0.4-0.5 is a good happy medium; lower it if you want less, raise it if you want more.

    • feather is how much the area around the detailed region is blended into the rest of the image. This is usually irrelevant, but when detailing the eyes you should zoom in, as it makes a difference to the area between them.

  • Screenshot 2026-05-06 180122.png

    These tile the image to save on VRAM. You will most likely need to use them unless you have a 16 GB card or above.
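For the curious, the feathered paste a detailer performs can be sketched in NumPy (this is my own minimal version, not the actual detailer node's code): the patch's alpha ramps from 0 at its edges to 1 over `feather` pixels, so the seam fades into the surrounding image instead of leaving a hard square.

```python
import numpy as np

def feathered_blend(base, patch, x, y, feather=8):
    """Paste `patch` into `base` at (x, y), fading the patch's edges
    over `feather` pixels so the seam blends into the original image."""
    h, w = patch.shape[:2]
    # Distance of each row/column from the nearest patch edge, scaled by feather.
    ramp_y = np.minimum(np.arange(h), np.arange(h)[::-1]) / max(feather, 1)
    ramp_x = np.minimum(np.arange(w), np.arange(w)[::-1]) / max(feather, 1)
    # Alpha is 0 at the edges and reaches 1 once `feather` pixels in.
    alpha = np.clip(np.minimum.outer(ramp_y, ramp_x), 0.0, 1.0)[..., None]
    region = base[y:y + h, x:x + w].astype(float)
    blended = alpha * patch.astype(float) + (1 - alpha) * region
    base[y:y + h, x:x + w] = blended.astype(base.dtype)
    return base
```

With a larger `feather`, the transition band widens, which is exactly the setting you tweak around the eyes.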

Luts:

  • Screenshot 2026-05-06 182746.png

    Included in the documentation of this workflow (and in the folders on the model page) you will find a LUTS folder. Follow the instructions on where to place it (in the notes on each workflow).

  • These are basically Instagram filters on steroids. I use them all the time to give a high quality, professional look to the image.
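Under the hood, a LUT is just a lookup table. A minimal 1D sketch in NumPy (the curve here is my own toy example; real .cube LUTs are 3D and also shift hue and saturation, but the lookup principle is the same):

```python
import numpy as np

# Build a simple S-shaped contrast LUT: a table mapping every possible
# 8-bit value to a new value in a single precomputed pass.
x = np.linspace(0.0, 1.0, 256)
curve = (np.tanh(4 * (x - 0.5)) / np.tanh(2.0) + 1.0) / 2.0
lut = np.clip(curve * 255, 0, 255).astype(np.uint8)

def apply_lut(image, lut):
    """Apply the LUT with one indexed lookup per pixel value."""
    return lut[image]
```

Because the whole "filter" is precomputed into 256 entries, applying it is nearly free, which is why LUTs are a standard finishing step in photo and film grading.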

Saving Your image:

Screenshot 2026-05-06 180247.png

The top one is the name of the image. The one below is the name of the folder it goes into. By default all images will be stored in the "output" folder in ComfyUI

Summary:

I hope this article was helpful. Please leave any comments below on what you think or how I can improve it or the workflows. Feedback is always welcome.

Instagram: https://www.instagram.com/synth.studio.models/

Buy me a☕ https://ko-fi.com/lonecatone

This represents many hours of work. If you enjoy it, please 👍 like, 💬 comment, and feel free to ⚡ tip 😉
