
n8n OpenAI Integration: Build AI-Powered Workflows (2026)
Quick Summary
- The n8n OpenAI node authenticates via API key and calls Chat Completions, Assistants, DALL-E, or fine-tuned models
- Use gpt-4o-mini for cost-efficient production automations, gpt-4o for complex reasoning tasks
- Temperature 0.0-0.3 for structured tasks like classification; 0.4-0.7 for summarisation and drafting
- Synta can generate complete OpenAI workflow templates from plain English descriptions
- Always store your OpenAI API key in n8n credentials, never hardcode it
- Common uses: chatbots, content summarizers, text classifiers, and data extraction pipelines
Introduction
Connecting OpenAI to n8n unlocks a powerful combination: the flexibility of n8n's automation framework with the intelligence of large language models. Whether you want to automatically summarise customer emails, classify incoming support tickets, generate content based on a trigger, or build a chatbot that responds to form submissions, the n8n OpenAI node makes it possible without writing application code.
This guide walks you through setting up the OpenAI node in n8n, configuring your API credentials, and building three complete workflow examples you can deploy today.
Synta builds on top of n8n by generating production-ready workflows from plain English. If you want to skip the configuration entirely, describe the AI workflow you need at synta.io and Synta generates it for you.
How Do I Connect OpenAI to n8n?
**You connect OpenAI to n8n by creating an OpenAI API key in your OpenAI account and storing it as a credential in n8n. Once stored, the OpenAI node in n8n can authenticate and make API calls to any OpenAI model you have access to.**
Here is what you need before starting:
- An OpenAI account with API access (openai.com/api)
- An n8n instance (cloud or self-hosted)
- A workflow where you want to add AI capabilities
Step 1: Get Your OpenAI API Key
Log in to platform.openai.com and go to the API section. Click "Create new secret key" and copy the key. Give it a descriptive name like "n8n-production" so you can track usage.
**Important:** Never share this key or commit it to version control. It gives full access to your OpenAI account.
Step 2: Store the Key in n8n Credentials
In n8n, go to Credentials and click "New Credential". Select "OpenAI API". Paste your API key and save. Name it something recognizable like "OpenAI - Production".
Step 3: Add the OpenAI Node to Your Workflow
Add a new node and search for "OpenAI". You will see several options:
- **Chat Completions:** Send a prompt and get a text response (most common)
- **Assistants:** Interact with OpenAI Assistants with persistent context
- **Images:** Generate images using DALL-E
- **Audio:** Transcribe or translate audio
For most automation workflows, the Chat Completions node is what you need.
How Do I Configure the n8n Chat Completions Node?
**The n8n Chat Completions node sends a prompt (or message history) to your chosen OpenAI model and returns the model's text response. You configure the model (like gpt-4o or gpt-4o-mini), the temperature (creativity level), and the messages array.**
The messages array is the core of Chat Completions. Each message has a role and content:
- **System:** Sets the AI's behaviour and personality
- **User:** The actual prompt or question
- **Assistant:** Previous AI responses (for conversation context)
Basic Configuration Example
```
Model: gpt-4o-mini
Temperature: 0.7
Messages:
- Role: system, Content: "You are a helpful support assistant."
- Role: user, Content: "{{ $json.customer_email }}"
```
In this example, the workflow takes a customer's email from the previous node and sends it to GPT-4o-mini with a system prompt that sets the context. The AI response is then passed to the next node for further processing.
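The Chat Completions response nests the reply under `choices[0].message.content`. A minimal sketch of pulling it out defensively, as you might in a Code node (the object shape below follows the OpenAI API; the exact fields n8n surfaces can vary slightly by node version):

```javascript
// Extract the assistant's reply from a Chat Completions response object.
// Throws rather than silently passing undefined downstream.
function extractReply(response) {
  const choice = response.choices && response.choices[0];
  if (!choice || !choice.message) {
    throw new Error("No completion in response");
  }
  return choice.message.content.trim();
}

// Example response, shaped like the OpenAI API's output
const response = {
  choices: [
    { message: { role: "assistant", content: " Thanks for reaching out! " } }
  ]
};

console.log(extractReply(response)); // "Thanks for reaching out!"
```

In a plain n8n expression the equivalent is `{{ $json.choices[0].message.content }}`; the helper above just adds a guard for empty or malformed responses.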
Temperature Settings Explained
Temperature controls how creative or random the model's output is:
- **0.0 - 0.3:** Deterministic, factual responses. Best for classification, extraction, and structured tasks.
- **0.4 - 0.7:** Balanced. Good for most general purpose uses like summarisation and drafting.
- **0.8 - 1.0:** Creative, varied outputs. Use for brainstorming, storytelling, or generating ideas.
For production automation, keep temperature between 0.0 and 0.5 for consistent, predictable results.
Use Case 1: Build an Email Summariser with n8n and OpenAI
**You can use n8n and OpenAI to automatically summarise incoming emails, letting your team triage quickly without reading full messages. The workflow triggers on a new email, sends it to a model like gpt-4o-mini for summarisation, and stores or forwards the summary.**
What You Need
- An email trigger (Gmail, IMAP, or any email service n8n supports)
- An OpenAI Chat Completions node
- A storage node (Notion, Google Sheets, Slack, etc.)
The Workflow
1. **Trigger:** Gmail node watches for new emails with a specific label or from a specific sender
2. **OpenAI node:** System prompt sets context ("You are an email triage assistant. Summarise the following email in 3 bullet points."), user message is the email body
3. **Output:** Send the summary to a Slack channel, add it to a Notion database, or reply to the sender with a summary
Example Workflow JSON
```json
{
  "nodes": [
    {
      "name": "Gmail Trigger",
      "type": "n8n-nodes-base.gmailTrigger",
      "parameters": {
        "labels": ["newsletters"]
      }
    },
    {
      "name": "Summarise Email",
      "type": "n8n-nodes-base.openAi",
      "parameters": {
        "resource": "chat",
        "operation": "complete",
        "model": "gpt-4o-mini",
        "messages": [
          {
            "role": "system",
            "content": "You are an expert email triage assistant. Summarise the following email in exactly 3 bullet points. First point: the main request or topic. Second point: key details and context. Third point: recommended action or urgency."
          },
          {
            "role": "user",
            "content": "={{ $json.body }}"
          }
        ],
        "temperature": 0.3
      }
    },
    {
      "name": "Send to Slack",
      "type": "n8n-nodes-base.slack",
      "parameters": {
        "channel": "#email-triage",
        "text": "={{ $json.choices[0].message.content }}"
      }
    }
  ]
}
```
Use Case 2: Classify Incoming Leads with n8n and OpenAI
**You can use OpenAI's classification capabilities to automatically categorise incoming leads from web forms, CRMs, or emails. The AI assigns a score or category, and n8n routes the lead to the appropriate team or workflow.**
What You Need
- A trigger for new leads (webhook, form submission, CRM update)
- An OpenAI Chat Completions node with a structured output prompt
- Conditional routing based on the classification
The Workflow
1. **Trigger:** Webhook receives a new lead from your website form
2. **OpenAI node:** Prompt asks the model to classify the lead as Hot / Warm / Cold and suggest the best follow-up action
3. **IF node:** Route based on classification:
- Hot: Add to high-priority CRM list, notify sales Slack channel
- Warm: Add to nurture sequence, send follow-up email
- Cold: Add to long-term follow-up, update CRM status
Classification Prompt Template
```
You are a B2B sales analyst. Classify the following lead and return a JSON object with:
- "classification": one of "Hot", "Warm", or "Cold"
- "reason": a 1-sentence explanation of the classification
- "next_action": a specific recommended next step
Lead info:
Company: {{ company_name }}
Industry: {{ industry }}
Team size: {{ team_size }}
Message: {{ message }}
Budget mentioned: {{ budget }}
Return ONLY the JSON object, no additional text.
```
For lead scoring or routing, a simple label like Hot, Warm, or Cold is usually enough for downstream automation logic. Setting temperature to 0.0 gives consistent, structured outputs for reliable automation.
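To keep the IF-node routing reliable, parse the model's JSON defensively before branching. A Code-node-style sketch in plain JavaScript (the field names are assumptions to match to your own prompt, and the fallback to "warm" is a design choice you may want to change):

```javascript
// Parse the classifier's output. Models sometimes wrap JSON in ```json
// fences despite instructions, so strip fences first; on any parse
// failure, fall back to "warm" so the workflow never dead-ends.
function parseClassification(raw) {
  const cleaned = raw.replace(/```json|```/g, "").trim();
  try {
    const parsed = JSON.parse(cleaned);
    const label = String(parsed.classification || "").toLowerCase();
    if (["hot", "warm", "cold"].includes(label)) {
      return {
        classification: label,
        reason: parsed.reason || "",
        next_action: parsed.next_action || ""
      };
    }
  } catch (e) {
    // fall through to the safe default below
  }
  return { classification: "warm", reason: "unparseable model output", next_action: "manual review" };
}

const modelOutput = '```json\n{"classification": "Hot", "reason": "Large team with stated budget", "next_action": "Book a demo call"}\n```';
console.log(parseClassification(modelOutput).classification); // "hot"
```

Routing on the normalised `classification` field means a stray code fence or malformed response sends the lead to a review queue instead of crashing the run.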
Use Case 3: Build a Content Generator with n8n and OpenAI
**Use n8n and OpenAI to automatically generate social media posts, product descriptions, or blog outlines from a content brief. The workflow triggers on a schedule or a new entry in a spreadsheet, and OpenAI generates the content for review or direct posting.**
What You Need
- A trigger (Schedule node for recurring content, Google Sheets for bulk briefs, or a webhook for on-demand)
- An OpenAI Chat Completions node with a detailed content generation prompt
- An output node (Slack for review, Notion for publishing queue, or direct to social media)
The Workflow
1. **Trigger:** Google Sheets row added with a content brief (title, keywords, target audience, tone)
2. **OpenAI node:** Generate a LinkedIn post, Twitter thread, or blog outline based on the brief
3. **Output:** Post to a review Slack channel for approval, or directly to the social platform
Example Content Generation Prompt
```
Write a LinkedIn post based on the following brief.
Title: {{ title }}
Target audience: {{ audience }}
Tone: {{ tone }} (choose from: professional, casual, witty, educational)
Key message: {{ key_message }}
Call to action: {{ cta }}
Requirements:
- 150-200 words
- Start with a hook that grabs attention in the first line
- Include 2-3 short paragraphs
- End with the call to action
- No emojis in the main text (add 2-3 relevant hashtags at the end)
- Match the specified tone
```
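When the brief arrives as a spreadsheet row, a small helper can assemble the prompt before the OpenAI node. A sketch (the column names here are assumptions; map them to your actual sheet headers):

```javascript
// Fill the prompt template from a sheet row, mirroring the
// {{ placeholder }} fields in the template above.
function buildPrompt(row) {
  return [
    "Write a LinkedIn post based on the following brief.",
    `Title: ${row.title}`,
    `Target audience: ${row.audience}`,
    `Tone: ${row.tone}`,
    `Key message: ${row.key_message}`,
    `Call to action: ${row.cta}`
  ].join("\n");
}

const row = {
  title: "Why automation wins",
  audience: "ops leads",
  tone: "professional",
  key_message: "Automate the boring parts",
  cta: "Try it free"
};
console.log(buildPrompt(row));
```

Building the prompt in one place keeps the OpenAI node's message field to a single expression, which is easier to audit than template fragments scattered across nodes.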
How Do I Handle OpenAI Errors in n8n?
**The most common OpenAI errors in n8n are rate limit errors (429), invalid API key (401), context window exceeded (400), and model not found. Handle these by adding an Error Trigger node to your workflow and setting up retry logic or fallback responses.**
Error Handling Workflow
Add an Error Trigger node connected to your OpenAI node. Configure it to:
1. Catch specific error codes (429, 401, 400)
2. Retry on rate limit errors (wait and retry up to 3 times)
3. Log all errors to a monitoring system
4. Return a user-friendly error response
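The retry step above amounts to exponential backoff on 429s while failing fast on errors that retries cannot fix. A sketch you could adapt in a Code node or external script (`callOpenAI` is a placeholder for whatever actually performs the request):

```javascript
// Retry rate-limited calls with exponential backoff: 1s, 2s, 4s ...
// Auth errors (401) and bad requests (400) won't succeed on retry,
// so they are rethrown immediately.
async function withRetry(callOpenAI, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await callOpenAI();
    } catch (err) {
      if (err.status !== 429 || attempt === maxRetries) throw err;
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

Capping retries matters: unbounded retry loops against a rate-limited key just extend the outage and inflate your bill.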
Rate Limit Best Practices
OpenAI has per-minute and per-day API limits depending on your plan. For high-volume automations:
- Use gpt-4o-mini instead of gpt-4o for cost and rate limit efficiency
- Add a Wait node between requests if processing many items
- Implement a queue system for large batches
- Monitor your usage at platform.openai.com/usage
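The Wait-node pattern above is proactive pacing: process items one at a time with a delay instead of firing every request at once. A rough standalone equivalent (`processItem` stands in for your OpenAI call; tune `delayMs` to your plan's limits):

```javascript
// Process items sequentially with a fixed gap between requests.
// In n8n this corresponds to a Loop Over Items batch plus a Wait node.
async function processPaced(items, processItem, delayMs = 500) {
  const results = [];
  for (const item of items) {
    results.push(await processItem(item));
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return results;
}
```

Sequential pacing trades throughput for predictability, which is usually the right trade for batch jobs that run unattended.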
FAQ
**Q: Which OpenAI model should I use for n8n automations?**
For most automation workflows, gpt-4o-mini is the best choice: near-GPT-4-level quality at a fraction of the cost, with higher rate limits. Use gpt-4o when you need the strongest reasoning for complex tasks, and consider a small reasoning model like o3-mini when a task needs step-by-step reasoning on a budget.
**Q: How do I avoid high costs with OpenAI in n8n?**
Set a max token limit on every request, use gpt-4o-mini for routine tasks, and add monitoring to track token usage per workflow run. Review your OpenAI usage dashboard weekly to identify any runaway workflows.
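Alongside a max token limit on the output, a cheap guard is capping input size before it reaches the model. A sketch using the common ~4-characters-per-token rule of thumb (a heuristic for English text, not an exact count; use a tokenizer library if you need precision):

```javascript
// Truncate input text to a rough token budget before sending it to
// the model, so one oversized email can't blow up the cost of a run.
function truncateToTokenBudget(text, maxTokens) {
  const maxChars = maxTokens * 4; // heuristic: ~4 chars per token
  if (text.length <= maxChars) return text;
  return text.slice(0, maxChars) + "\n[truncated]";
}

const email = "x".repeat(10000);
console.log(truncateToTokenBudget(email, 500).length); // 2012
```

A visible `[truncated]` marker also tells the model (and anyone reading the logs) that the input was cut, rather than silently dropping the tail.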
**Q: Can I use OpenAI Assistants with the n8n OpenAI node?**
Yes, n8n supports the Assistants API. This lets you use Assistants with persistent memory across conversations. However, for simple single-request automations, the Chat Completions node is easier and more cost-effective.
**Q: How do I stream OpenAI responses in n8n?**
The n8n OpenAI node supports streaming for Chat Completions. Enable the "Streaming" option in the node settings. The output will be a stream of tokens rather than a single response. Streaming is useful for chatbot UIs but adds complexity for pure automation workflows.
**Q: Can Synta generate an OpenAI workflow for me automatically?**
Yes. Describe what you want to build in plain English to Synta. For example: "Build a workflow that triggers on new Gmail emails and uses OpenAI to classify each email as urgent, normal, or spam, then routes them accordingly." Synta generates the full n8n workflow with all node configurations. Try it at synta.io.