NativeAIHub

Supported Models: FLUX, SDXL, SD3, and More

| Model | Minimum VRAM | Recommended VRAM | Best For |
| --- | --- | --- | --- |
| SD 1.5 | 4 GB | 6 GB | Fastest generation, largest LoRA library, older aesthetic |
| SDXL | 6 GB | 8 GB | Best balance of quality, speed, and community content |
| SD3 / 3.5 | 8 GB | 12 GB | Better text rendering and composition accuracy |
| FLUX.1 dev | 8 GB (quantized) | 16 GB | Highest quality prompt adherence and detail |
| FLUX.1 schnell | 8 GB (quantized) | 12 GB | Fast version of FLUX (4-step generation) |
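The recommended-VRAM column above can be read as a simple lookup. Here is a minimal sketch of that idea; the `models_for` helper and the dictionary are illustrative, not part of ComfyUI:

```python
# Recommended VRAM per model, in GB, taken from the table above.
RECOMMENDED_VRAM_GB = {
    "SD 1.5": 6,
    "SDXL": 8,
    "SD3 / 3.5": 12,
    "FLUX.1 schnell": 12,
    "FLUX.1 dev": 16,
}

def models_for(vram_gb: float) -> list[str]:
    """Return the models whose recommended VRAM fits on a card with vram_gb of memory."""
    return [name for name, need in RECOMMENDED_VRAM_GB.items() if vram_gb >= need]

print(models_for(8))  # an 8 GB card comfortably covers SD 1.5 and SDXL
```

Cards below each threshold can still run the larger models via the minimum-VRAM (quantized or offloaded) configurations, just more slowly.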

What is VRAM?

VRAM (Video RAM) is the memory on your graphics card. AI models need to fit into VRAM to generate images quickly. If a model is too large for your VRAM, ComfyUI can still run it by offloading parts to regular RAM, but generation will be much slower. The VRAM numbers above assume you want reasonable generation speed (under 60 seconds per image). Quantized model versions reduce VRAM requirements by compressing the model with a small quality tradeoff.
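To see why quantization helps, note that the weights of a model occupy roughly its parameter count times the bits stored per weight. A quick sketch of that arithmetic (the function name is ours, and the 12-billion figure is FLUX.1 dev's approximate published parameter count; real VRAM use adds activation and overhead memory on top):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate VRAM needed for the weights alone, in GB.

    params_billion * 1e9 params * (bits / 8) bytes-per-param / 1e9 bytes-per-GB
    simplifies to params_billion * bits / 8.
    """
    return params_billion * bits_per_weight / 8

# FLUX.1 dev is roughly a 12B-parameter model:
print(weight_memory_gb(12, 16))  # fp16: 24.0 GB of weights -- hence the quantized builds
print(weight_memory_gb(12, 8))   # 8-bit quantized: 12.0 GB
print(weight_memory_gb(12, 4))   # 4-bit quantized: 6.0 GB
```

This is why the table lists FLUX.1 minimums "(quantized)": at full 16-bit precision the weights alone would exceed most consumer cards.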

With more than 104K stars on GitHub, ComfyUI is one of the most popular open source AI projects in the world, backed by a massive and active community.

Choosing your first model

If you are new to ComfyUI, start with SDXL. It has the best balance of quality, speed, and community resources. The LoRA ecosystem for SDXL is enormous, and most tutorial workflows use SDXL. Once you are comfortable, try FLUX.1 dev for higher quality output, or SD 1.5 for faster iteration when experimenting.