NativeAIHub

Core Generation: Text, Image, and Voice to UI


1. Provide your input

Type a description, speak through voice mode, upload a sketch or wireframe, or combine multiple input types for richer context.

2. Gemini analyzes and plans

The model reasons about layout, components, colors, and typography. For complex apps, it outlines planned screens for your approval before generating.

3. Screens are generated on the canvas

Stitch renders polished UI screens on the infinite canvas with proper component structure, spacing, and visual hierarchy.

4. Iterate with the design agent

Refine through chat, voice, or direct edits. Generate variants, apply theme changes across screens, and let the agent suggest improvements.

Getting the best results from Stitch

Be specific about the user experience you want, not just the visual appearance. Instead of "make a shopping app," try "a mobile shopping app for sustainable fashion where first-time visitors immediately understand what makes us different, with a homepage featuring product cards that emphasize material sourcing." For image inputs, clear, well-lit photos of wireframes with distinct sections work better than abstract sketches.