The FLUX.1 [dev] Model is licensed by Black Forest Labs, Inc. under the FLUX.1 [dev] Non-Commercial License. Copyright Black Forest Labs, Inc.
IN NO EVENT SHALL BLACK FOREST LABS, INC. BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH USE OF THIS MODEL.
FLUX.2 Dev GGUF Workflow for ComfyUI, tested on RTX 3060 (12GB)
Main Diffusion Model (GGUF)
Model: FLUX.2-dev-gguf
Download: https://huggingface.co/city96/FLUX.2-dev-gguf
Put it here: ComfyUI/models/diffusion_models/
Note: Choose the quantization that matches your GPU VRAM (a quick VRAM-check sketch follows the VAE listing below):
Q2_K → ~13 GB file — 4 GB VRAM (slow, ~3 hours on a laptop)
Q3_K_M → ~16 GB file — 6–8 GB VRAM
Q4_K_M → ~20 GB file — 8–10 GB VRAM
Q5_K_M → ~24 GB file — 12 GB VRAM (recommended for RTX 3060)
Q8 → ~38 GB file — 16 GB+ VRAM (best quality)
VAE
flux2-vae.safetensors
Download: https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/vae/flux2-vae.safetensors
Put it here: ComfyUI/models/vae/
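To check which of the quantizations listed above actually fits your card, here is a minimal Python sketch. It is an illustration only: it assumes PyTorch with CUDA is installed (ComfyUI already requires it), and the thresholds simply mirror the table above.

    import torch

    # Rough VRAM (GB) thresholds -> suggested quant, mirroring the table above
    QUANT_BY_VRAM = [(16, "Q8"), (12, "Q5_K_M"), (8, "Q4_K_M"), (6, "Q3_K_M"), (0, "Q2_K")]

    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    suggested = next(q for limit, q in QUANT_BY_VRAM if vram_gb >= limit)
    print(f"Detected ~{vram_gb:.1f} GB VRAM -> try the {suggested} file")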
Text Encoder
mistral_3_small_flux2_fp8.safetensors
Download: https://huggingface.co/Comfy-Org/flux2-dev/resolve/main/split_files/text_encoders/mistral_3_small_flux2_fp8.safetensors
Put it here: ComfyUI/models/text_encoders/
Note: Use the fp8 version to save VRAM. Use bf16 if you have headroom and want slightly better quality.
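If you prefer to script the downloads instead of grabbing the files in a browser, here is a minimal sketch using the huggingface_hub Python package (an assumption; install it with pip if missing). The GGUF filename is a guess for the Q5_K_M variant; check the city96 repo for the exact name of the quant you picked.

    import shutil
    from pathlib import Path
    from huggingface_hub import hf_hub_download

    COMFY = Path("ComfyUI")  # adjust to your ComfyUI install location

    files = [
        # (repo id, file in repo, target ComfyUI folder)
        ("city96/FLUX.2-dev-gguf", "flux2-dev-Q5_K_M.gguf",        # filename is a guess
         COMFY / "models/diffusion_models"),
        ("Comfy-Org/flux2-dev", "split_files/vae/flux2-vae.safetensors",
         COMFY / "models/vae"),
        ("Comfy-Org/flux2-dev",
         "split_files/text_encoders/mistral_3_small_flux2_fp8.safetensors",
         COMFY / "models/text_encoders"),
    ]

    for repo_id, filename, target_dir in files:
        target_dir.mkdir(parents=True, exist_ok=True)
        cached = hf_hub_download(repo_id=repo_id, filename=filename)  # downloads to the HF cache
        shutil.copy(cached, target_dir / Path(filename).name)         # copy into the ComfyUI folder
        print(f"{Path(filename).name} -> {target_dir}")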
Required Custom Nodes
Install via ComfyUI Manager or clone manually into ComfyUI/custom_nodes/
ComfyUI-GGUF https://github.com/city96/ComfyUI-GGUF
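If you would rather script the manual install than use ComfyUI Manager, a minimal sketch follows. It assumes git and pip are on your PATH; the requirements step only runs if the node pack ships a requirements.txt.

    import subprocess
    from pathlib import Path

    custom_nodes = Path("ComfyUI/custom_nodes")  # adjust to your install
    dest = custom_nodes / "ComfyUI-GGUF"

    # Clone the node pack if it is not already present
    if not dest.exists():
        subprocess.run(
            ["git", "clone", "https://github.com/city96/ComfyUI-GGUF", str(dest)],
            check=True,
        )

    # Install its Python dependencies, if it ships a requirements.txt
    req = dest / "requirements.txt"
    if req.exists():
        subprocess.run(["pip", "install", "-r", str(req)], check=True)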

