
Building AI Agents in n8n

All plans (LLM API keys required)

n8n's AI capabilities are built on LangChain integration, exposed through roughly 70 AI-specific nodes. The architecture follows a modular pattern: you compose agents by connecting a root AI Agent node with sub-nodes for the LLM, memory, and tools.

1. Choose a Trigger

Chat trigger (conversational), webhook (API calls), schedule (periodic), or manual (testing).

2. Add an AI Agent Node

The root node that orchestrates the agent's behavior and reasoning loop.

3. Configure the LLM

Attach a language model sub-node. Supported providers: OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Azure OpenAI, Mistral, Ollama (local models), and more.

4. Add Memory

Attach a memory sub-node for conversation persistence: window memory, buffer memory, summary memory, or external storage via Zep.

5. Add Tools

Attach tool sub-nodes that the agent can call during execution to interact with external services.

6. Connect to Data

Use vector store nodes for RAG (Pinecone, Qdrant, Supabase, Zep), or any n8n integration as a data source.

7. Output Results

Send results via chat response, webhook, email, Slack, or any connected service.
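Outside the visual editor, the same composition — one agent loop with an LLM, memory, and tools attached — can be sketched in plain Python. All names below are illustrative stand-ins, not n8n's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class WindowMemory:
    """Keeps only the last `k` turns, like a window-memory sub-node."""
    k: int = 4
    turns: list = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.turns.append((role, text))
        self.turns = self.turns[-self.k:]

@dataclass
class Agent:
    llm: Callable[[str], str]         # stands in for the language-model sub-node
    memory: WindowMemory              # stands in for the memory sub-node
    tools: dict = field(default_factory=dict)  # stands in for tool sub-nodes

    def run(self, user_input: str) -> str:
        self.memory.add("user", user_input)
        context = "\n".join(f"{r}: {t}" for r, t in self.memory.turns)
        reply = self.llm(context)
        # A real agent would parse tool calls out of `reply` and loop;
        # this sketch just records the answer.
        self.memory.add("assistant", reply)
        return reply

# Echo "model" so the sketch runs without any API key.
agent = Agent(llm=lambda ctx: f"saw {ctx.count('user:')} user turn(s)",
              memory=WindowMemory(k=4))
print(agent.run("hello"))   # → saw 1 user turn(s)
```

The point of the structure mirrors the steps above: swapping the lambda for a real model, or adding entries to `tools`, changes the agent's capabilities without touching the loop itself.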

The Workflow Tool: n8n's unique advantage

Any n8n workflow can be used as a custom tool for an AI agent. Your agent can trigger complex multi-step automations (send emails, update databases, call APIs, process files) as part of its reasoning. No other visual automation platform offers this level of integration between AI agents and workflow automation.
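Conceptually — with made-up names, not n8n's API — exposing a workflow as a tool means the agent sees an entire multi-step automation as one callable action:

```python
# Illustrative sketch only: the workflow and tool names are hypothetical.
def onboarding_workflow(email: str) -> str:
    """Stand-in for a multi-step n8n workflow; in n8n each step
    would be its own node (send email, update CRM, post to Slack)."""
    steps = [f"sent welcome email to {email}",
             f"created CRM record for {email}",
             f"notified #onboarding about {email}"]
    return "; ".join(steps)

# Registering the workflow as a tool: the agent can now invoke the
# whole automation as a single step in its reasoning loop.
tools = {"onboard_user": onboarding_workflow}

result = tools["onboard_user"]("ada@example.com")
```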
Building a RAG pipeline

1. Ingest Documents

Load documents using document loader nodes and split them into chunks with text splitters.

2. Create Embeddings

Generate vector embeddings using OpenAI, Cohere, or Hugging Face embedding models.

3. Store in Vector DB

Save embeddings to Pinecone, Qdrant, Supabase, or another supported vector database.

4. User Asks a Question

A query arrives via chat trigger, webhook, or another entry point.

5. Retrieve Context

The vector store returns the most relevant document chunks based on semantic similarity.

6. LLM Generates Answer

The language model combines retrieved context with the question to produce a grounded response.
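The six steps above can be sketched end to end in a few lines of Python. A bag-of-words counter stands in for a real embedding model, and an in-memory list stands in for Pinecone/Qdrant/Supabase; only the shape of the pipeline is the point:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline would call an
    OpenAI, Cohere, or Hugging Face embedding model instead."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Steps 1-3: ingest, embed, and store document chunks.
chunks = ["n8n supports local models via Ollama",
          "vector stores return relevant chunks",
          "bananas are yellow"]
store = [(c, embed(c)) for c in chunks]

# Steps 4-5: a question arrives; the store returns the closest chunks.
def retrieve(question: str, k: int = 1) -> list:
    q = embed(question)
    ranked = sorted(store, key=lambda cv: cosine(q, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# Step 6: the retrieved context is combined with the question in the prompt.
question = "which local models does n8n support?"
prompt = f"Context: {retrieve(question)[0]}\nQuestion: {question}"
```

In n8n, each of these stages is a node you wire together visually; the retrieval quality then depends on the real embedding model and chunking strategy, not on this toy similarity function.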

Local LLM support for full data privacy

n8n supports running local models via Ollama. Combined with self-hosting, this enables a fully air-gapped AI automation setup where no data leaves your infrastructure. Ideal for teams in healthcare, finance, government, or any environment with strict data sovereignty requirements.
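A minimal self-hosted pairing might look like the following docker-compose sketch. The image names and ports are the publicly documented defaults for n8n and Ollama; volumes, versions, and networking are assumptions to adapt for your own deployment:

```yaml
# Sketch of a self-hosted n8n + Ollama pairing; verify details
# against the official images before deploying.
services:
  n8n:
    image: n8nio/n8n            # official n8n image
    ports:
      - "5678:5678"             # n8n editor/UI
    depends_on:
      - ollama
    # In n8n, point the Ollama credential's base URL at
    # http://ollama:11434 so requests stay on the internal network.
  ollama:
    image: ollama/ollama        # official Ollama image
    ports:
      - "11434:11434"           # Ollama API (default port)
    volumes:
      - ollama_models:/root/.ollama
volumes:
  ollama_models:
```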