
πŸš€ Merging Models with PyTorch in Google Colab πŸŽ¨πŸš€


Yes, this is easier to do with Supermerge in A1111, but I'm practicing on Google Colab, and this works, though only if you're a Colab Pro user, so it came out 50/50 :/

PS: I haven't tested it with lighter models, but it should work. I'll also see if I can refine it. Bye~ 7w7)/

Open In Colab

πŸ›  Step 1: Install Dependencies

Run these commands in Google Colab to prepare the environment.

# 1️⃣ Clone the Stable Diffusion WebUI repository (optional)
%cd /content/
!git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
%cd stable-diffusion-webui

# 2️⃣ Install PyTorch and Safetensors
!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
!pip install safetensors
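Before moving on, you can confirm that both packages are actually visible to the import system. This is a quick sanity check of my own; the helper name `installed` is just for illustration:

```python
import importlib.util

def installed(name):
    """Return True if a package can be found by the import system."""
    return importlib.util.find_spec(name) is not None

# Check the two packages this guide depends on.
for pkg in ("torch", "safetensors"):
    print(pkg, "OK" if installed(pkg) else "MISSING")
```

If either prints `MISSING`, re-run the `pip install` cell before continuing.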

πŸ“‚ Step 2: Upload Models to Google Drive or Download

You can manually upload the checkpoint (.safetensors or .ckpt) to Google Drive or download it from a link.

If you want to download it from a link:

import requests

# Configuration
API_KEY = "Your_API_Key"  # πŸ”Ή Replace with your CivitAI API key
MODEL_ID = "1009529"  # πŸ”Ή Replace with the ID of the model you want to download
OUTPUT_PATH = "/content/model1.safetensors"  # πŸ”Ή Path where the model will be saved

# Download URL
url = f"https://civitai.com/api/download/models/{MODEL_ID}"

# Headers with the API key
headers = {"Authorization": f"Bearer {API_KEY}"}

# Download the file
response = requests.get(url, headers=headers, stream=True)

if response.status_code == 200:
    with open(OUTPUT_PATH, "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            f.write(chunk)
    print(f"βœ… Download complete: {OUTPUT_PATH}")
else:
    print(f"⚠️ Error {response.status_code}: {response.text}")

This only works if the checkpoint allows downloads without being logged in. (I recommend the option above; it's more secure.)

!wget -O model1.safetensors "MODEL_1_URL"
!wget -O model2.safetensors "MODEL_2_URL"
!wget -O model3.safetensors "MODEL_3_URL"

If you already have it on Google Drive, mount it with:

from google.colab import drive
drive.mount('/content/drive')

And then move the file to the working folder:

!cp "/content/drive/My Drive/model.safetensors" "/content/model.safetensors"

πŸ”„ Step 3: Merging Models with PyTorch

  • πŸ”„ Merging Two Models with PyTorch

import torch
from safetensors.torch import load_file, save_file

# πŸ“Œ Model paths (adjust as needed)
modelo_1 = "/content/model1.safetensors"
modelo_2 = "/content/model2.safetensors"
modelo_salida = "/content/modelo_fusionado.safetensors"

# πŸ”„ Merge ratios (must sum to 1.0)
peso_1 = 0.5  # 50% of the first model
peso_2 = 0.5  # 50% of the second model

# πŸ“₯ Load the models (using `safetensors`)
try:
    model1 = load_file(modelo_1)
    model2 = load_file(modelo_2)
    print("βœ… Models loaded successfully")
except Exception as e:
    print(f"❌ Error loading the models: {e}")
    raise

# πŸ” Check that both models have the same keys
keys1, keys2 = set(model1.keys()), set(model2.keys())

if keys1 != keys2:
    print("❌ Error: The models have different weight structures.")
    raise ValueError("The model keys do not match.")

# πŸ”€ Merge the models with the defined ratios
merged_model = {k: peso_1 * model1[k] + peso_2 * model2[k] for k in model1.keys()}

# πŸ’Ύ Save the merged model
try:
    save_file(merged_model, modelo_salida)
    print(f"βœ… Merged model saved to {modelo_salida}")
except Exception as e:
    print(f"❌ Error saving the merged model: {e}")
    raise

If you need to change the proportions (e.g., 70% and 30%), simply modify `peso_1` and `peso_2`, making sure they add up to 1.0. πŸš€πŸ”₯
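Nothing in the script enforces that the ratios actually sum to 1.0, so it's easy to add a guard before merging. A minimal sketch (note the tolerance: floating-point sums like `0.7 + 0.3` are not exactly `1.0`):

```python
peso_1 = 0.7  # 70% of the first model
peso_2 = 0.3  # 30% of the second model

# Guard against ratios that don't sum to 1.0 (use a tolerance for float error).
if abs((peso_1 + peso_2) - 1.0) > 1e-6:
    raise ValueError(f"Ratios sum to {peso_1 + peso_2}, not 1.0")
print("βœ… Ratios are valid")
```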

  • πŸ”„ Merging Three Models with PyTorch

import torch
from safetensors.torch import load_file, save_file

# πŸ“Œ Model paths (adjust as needed)
modelo_1 = "/content/model1.safetensors"
modelo_2 = "/content/model2.safetensors"
modelo_3 = "/content/model3.safetensors"
modelo_salida = "/content/modelo_fusionado.safetensors"

# πŸ”„ Merge ratios (must sum to 1.0)
pesos = [0.4, 0.4, 0.2]

# πŸ“₯ Load the models (using `safetensors`)
try:
    model1 = load_file(modelo_1)
    model2 = load_file(modelo_2)
    model3 = load_file(modelo_3)
    print("βœ… Models loaded successfully")
except Exception as e:
    print(f"❌ Error loading the models: {e}")
    raise

# πŸ” Check that all three models have the same keys
keys1, keys2, keys3 = set(model1.keys()), set(model2.keys()), set(model3.keys())

if not (keys1 == keys2 == keys3):
    print("❌ Error: The models have different weight structures.")
    raise ValueError("The model keys do not match.")

# πŸ”€ Merge the models with the defined ratios
merged_model = {
    k: pesos[0] * model1[k] + pesos[1] * model2[k] + pesos[2] * model3[k]
    for k in model1.keys()
}

# πŸ’Ύ Save the merged model
try:
    save_file(merged_model, modelo_salida)
    print(f"βœ… Merged model saved to {modelo_salida}")
except Exception as e:
    print(f"❌ Error saving the merged model: {e}")
    raise

πŸ–₯ Step 4: Use the Model in AUTOMATIC1111

To test the merged model in Stable Diffusion WebUI:

# Move the merged model into the SD WebUI models folder
!mv /content/modelo_fusionado.safetensors /content/stable-diffusion-webui/models/Stable-diffusion/
%cd /content/stable-diffusion-webui
!python launch.py --share

βœ… How to Adjust the Blending?

You can modify the values in `pesos` (or `peso_1` and `peso_2` in the two-model version) to change the influence of each model.

The total must add up to 1.0, for example:

pesos = [0.5, 0.3, 0.2] β†’ 50%, 30%, 20%

pesos = [0.6, 0.2, 0.2] β†’ 60%, 20%, 20%

You can add more models by repeating the same pattern in the code.
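That pattern generalizes nicely: the per-model terms can be folded into one helper that merges any number of models. This is my own sketch, not part of the scripts above; the demo uses plain numbers standing in for tensors, but the same weighted sum works element-wise on `torch` tensors:

```python
def merge_state_dicts(models, weights):
    """Merge any number of state dicts as a weighted sum.

    models:  list of dicts mapping key -> tensor (or number)
    weights: list of floats that must sum to 1.0
    """
    if len(models) != len(weights):
        raise ValueError("Need exactly one weight per model.")
    if abs(sum(weights) - 1.0) > 1e-6:
        raise ValueError("Weights must sum to 1.0.")
    keys = set(models[0].keys())
    for m in models[1:]:
        if set(m.keys()) != keys:
            raise ValueError("The model keys do not match.")
    # Weighted sum of every key across all models.
    return {k: sum(w * m[k] for w, m in zip(weights, models)) for k in keys}

# Tiny demo with plain numbers in place of tensors:
a = {"w": 1.0}
b = {"w": 3.0}
print(merge_state_dicts([a, b], [0.5, 0.5]))  # {'w': 2.0}
```

With real checkpoints you would call it as `merge_state_dicts([model1, model2, model3], pesos)` and pass the result to `save_file` as before.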
