NativeAIHub

The Node Editor: How ComfyUI Works

All plans (free self-hosted, Comfy Cloud) · 2 min read
1. Load a Checkpoint

Select the AI model you want to use: FLUX, SDXL, SD3, or any compatible checkpoint file.
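Under the hood, every node in the editor is an entry in a workflow graph. A minimal sketch of a checkpoint loader in ComfyUI's API-format workflow JSON, written here as a Python dict (the node id "1" and the checkpoint filename are placeholders, not values the article specifies):

```python
# One node of an API-format ComfyUI workflow. The key ("1") is an
# arbitrary node id; "class_type" names the node type shown in the editor.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {
            # Placeholder filename; use any model in your checkpoints folder.
            "ckpt_name": "sd_xl_base_1.0.safetensors",
        },
    },
}
```

This node exposes three outputs (model, CLIP, VAE) that downstream nodes reference by id and slot index.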

2. Encode Your Prompt

Type what you want to see (positive prompt) and what you want to avoid (negative prompt). The CLIP text encoder converts your words into conditioning the model can use.
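In graph terms, the positive and negative prompts are two separate text-encode nodes wired to the same CLIP output. A sketch, assuming a checkpoint loader with node id "1" whose CLIP output is slot 1 (the ids and prompt strings are illustrative):

```python
# Two CLIPTextEncode nodes: one positive, one negative.
# ["1", 1] means "output slot 1 of node 1", i.e. the loader's CLIP model.
prompt_nodes = {
    "2": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "a lighthouse at dawn, oil painting",  # positive prompt
            "clip": ["1", 1],
        },
    },
    "3": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "blurry, low quality",  # negative prompt
            "clip": ["1", 1],
        },
    },
}
```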

3. Configure the Sampler

Choose a sampler algorithm (Euler, DPM++, UniPC), the number of steps, and the CFG scale. Each combination produces different results.
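These settings live on a single sampler node. A sketch of a KSampler configuration, assuming upstream nodes with ids "1" (checkpoint), "2"/"3" (positive/negative conditioning), and "4" (an empty latent image); the specific values are examples, not recommendations from the article:

```python
# KSampler: the core generation node. The sampler algorithm, scheduler,
# step count, CFG scale, and seed are all set here.
sampler_node = {
    "5": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["1", 0],          # MODEL output of the loader
            "positive": ["2", 0],       # positive conditioning
            "negative": ["3", 0],       # negative conditioning
            "latent_image": ["4", 0],   # e.g. an EmptyLatentImage node
            "seed": 42,                 # fixed seed for reproducibility
            "steps": 25,
            "cfg": 7.0,
            "sampler_name": "euler",    # or "dpmpp_2m", "uni_pc", ...
            "scheduler": "normal",
            "denoise": 1.0,             # 1.0 = full generation from noise
        },
    },
}
```

Changing only `sampler_name`, `steps`, or `cfg` while keeping the seed fixed is an easy way to compare combinations side by side.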

4. Generate the Image

Hit Queue Prompt. The sampler runs the diffusion process step by step. You can watch a live preview as the image forms.
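Queue Prompt sends the workflow graph to the ComfyUI server. A minimal sketch of doing the same thing programmatically against a locally running instance, assuming the default address of 127.0.0.1:8188 (the helper names are ours):

```python
import json
import urllib.request

def build_payload(workflow: dict) -> bytes:
    """Wrap an API-format workflow in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """POST the workflow to a running ComfyUI server; returns its JSON
    response, which includes a prompt_id for tracking the job."""
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The live preview you see in the editor is the same diffusion process reported back over the server's websocket connection.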

5. Post-Process and Save

Optionally upscale, apply face restoration, adjust colors, or run any other post-processing node, then save your final image.
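Saving requires decoding the sampler's latent output back into pixels first. A sketch of the final two nodes, assuming a KSampler with id "5" and a checkpoint loader with id "1" whose VAE output is slot 2:

```python
# VAEDecode turns the sampled latent into an image; SaveImage writes it
# to ComfyUI's output folder with the given filename prefix.
output_nodes = {
    "6": {
        "class_type": "VAEDecode",
        "inputs": {
            "samples": ["5", 0],  # latent from the KSampler
            "vae": ["1", 2],      # VAE from the checkpoint loader
        },
    },
    "7": {
        "class_type": "SaveImage",
        "inputs": {
            "images": ["6", 0],
            "filename_prefix": "ComfyUI",
        },
    },
}
```

Any post-processing node (an upscaler, for instance) would be wired between the decode and save steps in the same slot-reference style.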

Key node categories

🧩 Loaders: Load checkpoints, LoRAs, VAEs, ControlNet models, and other assets into your workflow.
✏️ Conditioning: Encode text prompts, apply ControlNet guidance, set regional prompts, and blend conditions.
🎲 Sampling: The core generation step. Configure sampler, scheduler, steps, CFG scale, and seed for reproducible results.
🖼️ Image Processing: Upscale, crop, resize, blend, composite, mask, and transform images within the pipeline.
🎭 Masking: Draw masks to define regions for inpainting, outpainting, or selective processing.
💾 Output: Preview, save to disk, or send results to other nodes for further processing.

Starting with nodes

The default workflow that appears when you first open ComfyUI is the best starting point. Do not try to build complex workflows from scratch right away. Instead, modify the default workflow one node at a time: try a different sampler, change the step count, add a LoRA. Learning by incremental modification is much faster than trying to understand every node type at once.