| Model | Minimum VRAM | Recommended VRAM | Best For |
|---|---|---|---|
| SD 1.5 | 4 GB | 6 GB | Fastest generation, largest LoRA library, older aesthetic |
| SDXL | 6 GB | 8 GB | Best balance of quality, speed, and community content |
| SD3 / 3.5 | 8 GB | 12 GB | Better text rendering and composition accuracy |
| FLUX.1 dev | 8 GB (quantized) | 16 GB | Highest quality prompt adherence and detail |
| FLUX.1 schnell | 8 GB (quantized) | 12 GB | Fast version of FLUX (4-step generation) |
What is VRAM?
VRAM (Video RAM) is the memory on your graphics card. AI models need to fit into VRAM to generate images quickly. If a model is too large for your VRAM, ComfyUI can still run it by offloading parts to regular RAM, but generation will be much slower. The VRAM numbers above assume you want reasonable generation speed (under 60 seconds per image). Quantized model versions reduce VRAM requirements by compressing the model weights, at the cost of a small quality tradeoff.
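As a quick sanity check, the table above can be turned into a small helper that lists which models your card can run at reasonable speed. This is a hypothetical sketch, not part of ComfyUI; the dictionary simply encodes the "Minimum VRAM" column, with the FLUX entries counted at their quantized minimums.

```python
# Hypothetical helper (not part of ComfyUI): map each model family to the
# "Minimum VRAM" figure from the table above. FLUX variants are listed at
# their quantized minimums.
MODEL_MIN_VRAM_GB = {
    "SD 1.5": 4,
    "SDXL": 6,
    "SD3 / 3.5": 8,
    "FLUX.1 dev (quantized)": 8,
    "FLUX.1 schnell (quantized)": 8,
}

def models_that_fit(vram_gb: float) -> list[str]:
    """Return the models whose minimum VRAM requirement fits in vram_gb."""
    return [name for name, need in MODEL_MIN_VRAM_GB.items() if need <= vram_gb]

print(models_that_fit(6))  # a 6 GB card covers SD 1.5 and SDXL
```

Remember that "fits" here means fast generation; as noted above, larger models can still run via RAM offloading, just much more slowly.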
With more than 104K GitHub stars, ComfyUI is one of the most popular open source AI projects in the world, backed by a massive and active community.
Choosing your first model
If you are new to ComfyUI, start with SDXL. It has the best balance of quality, speed, and community resources. The LoRA ecosystem for SDXL is enormous, and most tutorial workflows use SDXL. Once you are comfortable, try FLUX.1 dev for higher quality output, or SD 1.5 for faster iteration when experimenting.