
Local Setup: Hardware and Installation

| Platform | NVIDIA (CUDA) | AMD (ROCm) | Apple Silicon (MPS) |
| --- | --- | --- | --- |
| Support level | Best (first class) | Good on Linux, limited on Windows | Good on macOS |
| Minimum GPU | GTX 1060 6GB | RX 6700 XT | M1 (8GB unified) |
| Recommended GPU | RTX 3060 12GB or RTX 4070 | RX 7900 XTX | M2 Pro or M3 Max |
| SDXL performance | Fast (10 to 30 sec) | Moderate (20 to 60 sec) | Moderate (20 to 60 sec) |
| FLUX support | Yes (12GB+ recommended) | Yes (16GB+ recommended) | Yes (16GB+ unified) |
1. Check your GPU

Verify your GPU has at least 6GB VRAM (NVIDIA) or 8GB (AMD/Apple). Check with GPU-Z on Windows or System Information on Mac.
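On the command line, each vendor's standard tool reports total VRAM; as a sketch, the comparison helper below (`vram_ok` is hypothetical, not a real utility) shows the check against the minimums in the table above:

```shell
# Platform-specific queries for total VRAM (vendor tools; only the NVIDIA
# one emits a bare number):
#   NVIDIA: nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits
#   AMD:    rocm-smi --showmeminfo vram
#   Apple:  system_profiler SPDisplaysDataType   (unified memory = system RAM)

# Hypothetical helper: compare a reported total (MiB) against a minimum (GiB).
vram_ok() {
  total_mib=$1; min_gib=$2
  if [ "$total_mib" -ge $((min_gib * 1024)) ]; then
    echo "ok: ${total_mib} MiB >= ${min_gib} GiB"
  else
    echo "too small: ${total_mib} MiB < ${min_gib} GiB"
  fi
}

# A GTX 1060 6GB reports 6144 MiB, which meets the 6 GiB minimum:
vram_ok 6144 6
```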

2. Download ComfyUI Desktop

Visit comfy.org and download the installer for your platform. The Desktop app handles all dependencies automatically.

3. Download your first model

Get SDXL or FLUX from CivitAI or Hugging Face. Place the checkpoint file in ComfyUI's models/checkpoints folder.
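As a sketch, fetching SDXL base from Hugging Face with curl; the install path is an assumption (ComfyUI Desktop lets you choose where it lives), and the download URL follows the usual Hugging Face layout, so verify it before running:

```shell
# Assumed install location; override with COMFYUI_DIR if yours differs.
CKPT_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}/models/checkpoints"
mkdir -p "$CKPT_DIR"
echo "checkpoints go in: $CKPT_DIR"

# Example fetch (SDXL base is ~6.5 GB, so run this deliberately):
#   curl -L -o "$CKPT_DIR/sd_xl_base_1.0.safetensors" \
#     "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"
```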

4. Launch and generate

Open ComfyUI, select your model in the Load Checkpoint node, type a prompt, and click Queue Prompt.
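Once the app is open, you can confirm the backend is actually listening: ComfyUI serves a small JSON status page at `/system_stats` on its default port 8188. A sketch (the `check_comfy` helper is hypothetical):

```shell
# Hypothetical helper: probe ComfyUI's /system_stats endpoint for a given
# base URL; prints a status line either way.
check_comfy() {
  if curl -sf --max-time 2 "$1/system_stats" >/dev/null 2>&1; then
    echo "ComfyUI is up at $1"
  else
    echo "ComfyUI not reachable at $1; launch the app first"
  fi
}

check_comfy "http://127.0.0.1:8188"
```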

No GPU? No problem.

If you do not have a compatible GPU, you still have options: use Comfy Cloud ($9.99+/month), which provides remote GPUs; run ComfyUI on CPU (very slow, 5 to 15 minutes per image, but workable for testing); or rent a cloud GPU from services like RunPod or Vast.ai for a few cents per hour.
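For the CPU route, a source checkout of ComfyUI accepts a `--cpu` flag that disables GPU acceleration entirely; as a sketch (the checkout path and the helper function are placeholders, and ComfyUI Desktop manages its own launch):

```shell
# Hypothetical helper: build the CPU-only launch command for a source
# checkout of ComfyUI (main.py lives at the repo root).
cpu_launch_cmd() {
  echo "python $1/main.py --cpu"
}

# Expect minutes per image in this mode, not seconds:
cpu_launch_cmd "$HOME/ComfyUI"
```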