
Qwen-Image GGUF Q3_K_M - 10G VRAM

Updated: Dec 31, 2025

base model · t2i · text-to-image · qwen

Download: 1 variant available, nf4 GGUF (9.01 GB)

Type: Checkpoint
Stats: 683
Published: Aug 8, 2025
Base Model: Other
Hash (AutoV2): FF96F80B90


This is a 3-bit quantized GGUF conversion of the Qwen/Qwen-Image model, released by City96 and mirrored here for convenience. The Q3_K_M variant is optimized for GPUs with at least 10 GB of VRAM.
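A quick sanity check that the 9.01 GB file size is consistent with a 3-bit K-quant: dividing the file size by the parameter count gives the effective bits per weight. The ~20-billion-parameter figure for Qwen-Image is an assumption not stated on this page.

```python
# Rough bits-per-weight estimate for the Q3_K_M file listed above.
# Assumption (not from this page): Qwen-Image's transformer has ~20B parameters.
# Using 1 GB = 1e9 bytes, as download pages typically report.
file_size_gb = 9.01
params_billion = 20.0  # assumed parameter count

bits_per_weight = file_size_gb * 8 / params_billion
print(f"{bits_per_weight:.2f} bits/weight")  # ~3.60
```

The result lands above a flat 3 bits because K-quants like Q3_K_M keep some tensors (e.g. embeddings and certain attention weights) at higher precision.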

  • Tested VRAM Usage:

    • Ubuntu, Firefox (8 tabs): ~8.8 - 9.2 GB VRAM (CFG 3, 20 Steps, uni_pc, normal, 5.4s/it)

    • Windows 11, Brave (1 tab), MiniConda, GGUF excluded from Windows Defender: ~9.6 GB VRAM

    Tip: Drive your displays from an integrated GPU so the dedicated GPU's VRAM stays free for inference. Runs smoothly on both Linux and Windows.
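For reference, a GGUF file like this one can be loaded through diffusers' GGUF quantization support. This is a minimal sketch, not a tested recipe: the local file path is a placeholder, and the `true_cfg_scale` argument name for the CFG 3 setting mentioned above is an assumption about the Qwen-Image pipeline's API; it requires a recent diffusers release with Qwen-Image and GGUF support, plus a CUDA GPU.

```python
import torch
from diffusers import DiffusionPipeline, GGUFQuantizationConfig, QwenImageTransformer2DModel

# Load the quantized transformer from the downloaded GGUF file.
# "qwen-image-Q3_K_M.gguf" is a placeholder path for the file on this page.
transformer = QwenImageTransformer2DModel.from_single_file(
    "qwen-image-Q3_K_M.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Pull the rest of the pipeline (text encoder, VAE) from the base repo.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps stay inside the ~10 GB budget

# Settings mirroring the tested configuration above (CFG 3, 20 steps).
image = pipe(
    "a watercolor fox in a snowy forest",
    num_inference_steps=20,
    true_cfg_scale=3.0,  # assumed parameter name for the CFG value
).images[0]
image.save("out.png")
```

With CPU offload enabled, peak VRAM should land near the ~9-9.6 GB figures reported above, though exact numbers depend on resolution and what else is using the GPU.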