[Stats: total downloads · community variants · Elo rating (27B on LMArena) — animated counter values not captured]
| Model | VRAM (Q4) | Multimodal | Best For |
|---|---|---|---|
| Gemma 3 270M | < 0.5GB (CPU) | Text only | Classification, extraction, edge devices |
| Gemma 3 1B | < 1GB (CPU) | Text only | Mobile, simple Q&A, fine-tuned tasks |
| Gemma 3 4B | ~3.4GB | Text + Image | Local dev, conversation, coding, docs |
| Gemma 3 12B | ~8GB | Text + Image | Professional local, quality sensitive tasks |
| Gemma 3 27B | ~16GB | Text + Image | Frontier open model, complex reasoning |
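The VRAM figures above roughly follow from 4-bit quantization: about 0.5 bytes per parameter for the weights, plus runtime overhead (KV cache, buffers, and for the multimodal models a vision encoder). The sketch below illustrates that back-of-the-envelope math; the 1.2× overhead factor is an assumption for illustration, not a published figure, and smaller models carry proportionally more overhead than it predicts.

```python
def q4_vram_gb(params_billion: float, overhead: float = 1.2) -> float:
    """Rough Q4 memory estimate: ~0.5 bytes per parameter
    (4-bit weights) times a heuristic overhead factor for the
    KV cache and runtime buffers (the 1.2x factor is an assumption)."""
    return params_billion * 0.5 * overhead

# 27B: 27 * 0.5 * 1.2 = 16.2 GB, close to the ~16GB in the table above
print(round(q4_vram_gb(27), 1))
```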
Recommended hardware by model:

- 📱 **270M / 1B:** Any modern device. Smartphones, Raspberry Pi, browsers via WebAssembly, or any CPU. No GPU needed.
- 💻 **4B:** Modern laptop or desktop. Apple M1+, or any machine with 8GB+ RAM. An optional GPU accelerates generation but is not required.
- 🖥️ **12B:** Dedicated GPU recommended. RTX 3060 12GB, RTX 4060 Ti 16GB, Apple M2 Pro+, or equivalent.
- ⚡ **27B:** High-end GPU required. RTX 4090 24GB, A6000 48GB, Apple M3 Max, or a multi-GPU setup.
Start with 4B
For most developers exploring Gemma for the first time, the 4B model is the ideal starting point. It offers multimodal capabilities, fits on virtually any modern laptop at Q4 quantization, and provides strong enough quality for general tasks. Scale up to 12B or 27B only if you need higher quality and have the hardware to support it.
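The sizing advice above can be captured as a small helper that picks the largest Gemma 3 variant fitting a given memory budget, using the Q4 figures from the table. This is a sketch: the thresholds come straight from the table's approximations, and the model name strings are illustrative labels, not official identifiers.

```python
def recommend_gemma(vram_gb: float, need_vision: bool = False) -> str:
    """Pick the largest Gemma 3 size whose approximate Q4 footprint
    (per the table above) fits in vram_gb. The 270M and 1B models
    are text-only; multimodality starts at 4B."""
    if vram_gb >= 16:
        return "gemma-3-27b"
    if vram_gb >= 8:
        return "gemma-3-12b"
    if vram_gb >= 3.4:
        return "gemma-3-4b"
    if need_vision:
        raise ValueError("smallest multimodal variant (4B) needs ~3.4GB")
    return "gemma-3-1b" if vram_gb >= 1 else "gemma-3-270m"
```

For example, a laptop with 8GB of memory lands on the 12B model only if a dedicated GPU backs it; for shared-memory machines the 4B remains the safer default.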