| | NVIDIA (CUDA) | AMD (ROCm) | Apple Silicon (MPS) |
|---|---|---|---|
| Support level | Best (first class) | Good on Linux, limited on Windows | Good on macOS |
| Minimum GPU | GTX 1060 6GB | RX 6700 XT | M1 (8GB unified) |
| Recommended GPU | RTX 3060 12GB or RTX 4070 | RX 7900 XTX | M2 Pro or M3 Max |
| SDXL performance | Fast (10 to 30 sec) | Moderate (20 to 60 sec) | Moderate (20 to 60 sec) |
| FLUX support | Yes (12GB+ recommended) | Yes (16GB+ recommended) | Yes (16GB+ unified) |
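Before installing anything, you can check which backend from the table above your machine actually exposes. This is a minimal sketch using PyTorch's standard detection calls (ComfyUI bundles its own PyTorch, so this mirrors what it does at startup); it assumes nothing beyond an optional PyTorch install.

```python
# Sketch: detect which GPU backend PyTorch can use on this machine.
# Assumes PyTorch may or may not be installed; falls back gracefully.
def detect_backend():
    try:
        import torch
    except ImportError:
        return "cpu (PyTorch not installed)"
    if torch.cuda.is_available():  # NVIDIA (also reports on ROCm builds)
        name = torch.cuda.get_device_name(0)
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        return f"cuda: {name} ({vram_gb:.0f} GB VRAM)"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():  # Apple Silicon
        return "mps (Apple Silicon unified memory)"
    return "cpu"

print(detect_backend())
```

If this prints `cpu`, skip ahead to the "No GPU? No problem." section below.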
1. **Check your GPU.** Verify your GPU has at least 6GB of VRAM (NVIDIA) or 8GB (AMD/Apple). Check with GPU-Z on Windows or System Information on macOS.
2. **Download ComfyUI Desktop.** Visit comfy.org and download the installer for your platform. The Desktop app handles all dependencies automatically.
3. **Download your first model.** Get SDXL or FLUX from CivitAI or Hugging Face, then place the checkpoint file in ComfyUI's models/checkpoints folder.
4. **Launch and generate.** Open ComfyUI, select your model in the Load Checkpoint node, type a prompt, and click Queue Prompt.
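The model-placement part of step 3 amounts to moving one file. Here is a minimal sketch; the install directory and the checkpoint filename (`sd_xl_base_1.0.safetensors`, the SDXL base checkpoint) are examples only, so substitute your own paths.

```python
# Sketch of step 3: move a downloaded checkpoint into ComfyUI's model folder.
# Paths below are example assumptions -- adjust to your actual install.
from pathlib import Path
import shutil

comfy_dir = Path.home() / "ComfyUI"  # assumed install location
ckpt = Path.home() / "Downloads" / "sd_xl_base_1.0.safetensors"  # example file

dest = comfy_dir / "models" / "checkpoints"
dest.mkdir(parents=True, exist_ok=True)  # create the folder if missing
if ckpt.exists():
    shutil.move(str(ckpt), dest / ckpt.name)

print(sorted(p.name for p in dest.iterdir()))  # list installed checkpoints
```

After a restart (or a refresh in the Load Checkpoint node), the file appears in the model dropdown.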
**No GPU? No problem.**
If you do not have a compatible GPU, you have two options: use Comfy Cloud ($9.99+/month), which runs your workflows on remote GPUs, or run ComfyUI on CPU (very slow, at 5 to 15 minutes per image, but functional for testing). Some users also rent cloud GPUs from services like RunPod or Vast.ai for a few cents per hour.
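Choosing between the subscription and hourly rental comes down to how many GPU-hours you use per month. A quick break-even sketch, using the article's $9.99/month figure and a purely illustrative rental rate (not a current RunPod or Vast.ai price):

```python
# Break-even sketch: Comfy Cloud subscription vs. renting a cloud GPU.
SUBSCRIPTION = 9.99   # $/month, Comfy Cloud entry tier (from the article)
RENTAL_RATE = 0.40    # $/hour, illustrative assumption -- check current prices

breakeven_hours = SUBSCRIPTION / RENTAL_RATE
print(f"Renting is cheaper below {breakeven_hours:.1f} GPU-hours per month")
```

Light users generating a handful of images a week typically stay well under the break-even point, which is why hourly rental is popular for experimentation.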