n8n Self-Hosted Deployment Guide
Status: Complete Deployment Guide
Version: 1.0
Purpose: Step-by-step n8n self-hosting procedures on Google Cloud Run
Applicable To: Any workflow automation deployment requiring cost efficiency
Overview
This guide provides comprehensive procedures for deploying n8n as a self-hosted workflow automation engine on Google Cloud Run. The approach delivers unlimited workflow executions at ~$8/month base cost by leveraging Cloud Run's scale-to-zero capabilities.
Key Benefits
- Cost Efficiency: ~$8/month vs $50-500/month for n8n Cloud
- Unlimited Executions: No workflow limits
- Auto-scaling: Scales from 0 to N instances automatically
- Production Ready: Enterprise-grade reliability and monitoring
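As a sanity check on the ~$8/month figure: with scale-to-zero you pay only for active instance time. A rough sketch with illustrative gen2 prices (verify against current Cloud Run pricing; the free tier and request charges are ignored here):

```shell
# Assumed unit prices (illustrative only -- check current Cloud Run pricing)
vcpu_price=0.000024      # USD per vCPU-second
mem_price=0.0000025      # USD per GiB-second
# Assume ~2 hours/day of active execution at 1 vCPU / 1 GiB
active_seconds=$((2 * 3600 * 30))
cost=$(awk -v s="$active_seconds" -v c="$vcpu_price" -v m="$mem_price" \
  'BEGIN { printf "%.2f", s * (c + m) }')
echo "Estimated compute cost: \$${cost}/month"
```

Add Cloud SQL and egress costs on top; the total lands near the $8/month estimate for light-to-moderate usage.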
Prerequisites
Before beginning the deployment:
Required Resources
- Google Cloud Project with billing enabled
- Cloud SQL instance running PostgreSQL (an existing instance can be reused)
- Domain or subdomain for n8n interface
- Service account with appropriate permissions
Required Permissions
# Grant necessary IAM roles
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:n8n-sa@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/cloudsql.client"
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:n8n-sa@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
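The bindings can be verified before moving on. A minimal sketch, assuming the n8n-sa service account name used throughout this guide:

```shell
# List the roles granted to the n8n service account in a project
check_n8n_roles() {
  local project="$1"
  gcloud projects get-iam-policy "$project" \
    --flatten="bindings[].members" \
    --filter="bindings.members:serviceAccount:n8n-sa@${project}.iam.gserviceaccount.com" \
    --format="value(bindings.role)"
}
# Usage: check_n8n_roles PROJECT_ID
# Expect roles/cloudsql.client and roles/secretmanager.secretAccessor in the output.
```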
Database Setup
Step 1: Create n8n Database
Connect to your existing Cloud SQL instance and create the n8n database:
-- Create dedicated database for n8n
CREATE DATABASE n8n_prod;
-- Create dedicated user
CREATE USER n8n_user WITH PASSWORD 'your-secure-password';
-- Grant permissions
GRANT ALL PRIVILEGES ON DATABASE n8n_prod TO n8n_user;
-- Switch to the n8n database (psql meta-command; no trailing semicolon)
\c n8n_prod
-- Grant schema permissions
GRANT ALL ON SCHEMA public TO n8n_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO n8n_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO n8n_user;
Step 2: Database Connection Configuration
# Get your Cloud SQL instance connection name
gcloud sql instances describe INSTANCE_NAME --format='value(connectionName)'
# Note the IP address (ipAddresses[0] may be the public IP;
# use the PRIVATE address for VPC connections)
gcloud sql instances describe INSTANCE_NAME --format='value(ipAddresses[0].ipAddress)'
Secret Management
Store sensitive configuration in Google Secret Manager:
# Store database password
echo -n "your-secure-password" | gcloud secrets create n8n-db-password --data-file=-
# Generate and store encryption key
openssl rand -base64 32 | gcloud secrets create n8n-encryption-key --data-file=-
# Store admin password
echo -n "your-admin-password" | gcloud secrets create n8n-admin-password --data-file=-
# Grant Cloud Run access to secrets
for secret in n8n-db-password n8n-encryption-key n8n-admin-password; do
gcloud secrets add-iam-policy-binding $secret \
--member="serviceAccount:n8n-sa@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
done
Container Configuration
Step 1: Create Dockerfile
# Dockerfile for n8n Cloud Run deployment
FROM n8nio/n8n:latest
# Install additional nodes if needed
# RUN npm install -g n8n-nodes-postmark
# Run as the non-root node user for security
USER node
# Health check (Cloud Run ignores HEALTHCHECK; still useful for local runs)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:5678/healthz || exit 1
# Expose port
EXPOSE 5678
# Use exec form for proper signal handling
CMD ["n8n", "start"]
Step 2: Build and Push Container
# Build container image
docker build -t gcr.io/PROJECT_ID/n8n:latest .
# Push to Google Container Registry
docker push gcr.io/PROJECT_ID/n8n:latest
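The image can also be smoke-tested locally before relying on it. A sketch assuming Docker is installed; with no DB_* variables set, n8n falls back to SQLite, so no Cloud SQL connection is needed:

```shell
# Run the built image locally on port 5678 (Ctrl+C to stop)
run_local() {
  docker run --rm -p 5678:5678 \
    -e N8N_ENCRYPTION_KEY="$1" \
    gcr.io/PROJECT_ID/n8n:latest
}
# Usage: run_local "some-test-key"   # then open http://localhost:5678
```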
Cloud Run Service Configuration
Step 1: Create Service Definition
Create a comprehensive service configuration:
# service.yaml - Complete Cloud Run configuration
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: n8n-automation
  annotations:
    run.googleapis.com/ingress: all
    run.googleapis.com/launch-stage: GA
spec:
  template:
    metadata:
      annotations:
        # Scaling configuration
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
        # Cost and performance tuning
        run.googleapis.com/cpu-throttling: "true"
        run.googleapis.com/startup-cpu-boost: "true"
        run.googleapis.com/execution-environment: gen2
        # For private-IP Cloud SQL access, also attach a Serverless VPC
        # Access connector, for example:
        # run.googleapis.com/vpc-access-connector: CONNECTOR_NAME
    spec:
      # Request timeout for long-running workflows
      timeoutSeconds: 900
      containerConcurrency: 10
      serviceAccountName: n8n-sa
      containers:
        - image: gcr.io/PROJECT_ID/n8n:latest
          ports:
            - containerPort: 5678
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "0.1"
              memory: "256Mi"
          env:
            # Core n8n configuration
            - name: NODE_ENV
              value: "production"
            - name: N8N_PROTOCOL
              value: "https"
            - name: N8N_HOST
              value: "n8n.your-domain.com"
            - name: N8N_PORT
              value: "5678"
            - name: WEBHOOK_URL
              value: "https://n8n.your-domain.com"
            # Database configuration
            - name: DB_TYPE
              value: "postgresdb"
            - name: DB_POSTGRESDB_HOST
              value: "YOUR_CLOUD_SQL_IP"
            - name: DB_POSTGRESDB_PORT
              value: "5432"
            - name: DB_POSTGRESDB_DATABASE
              value: "n8n_prod"
            - name: DB_POSTGRESDB_USER
              value: "n8n_user"
            # Execution settings (EXECUTIONS_PROCESS is ignored on n8n v1+)
            - name: EXECUTIONS_PROCESS
              value: "main"
            - name: EXECUTIONS_TIMEOUT
              value: "300"
            - name: EXECUTIONS_DATA_SAVE_ON_ERROR
              value: "all"
            - name: EXECUTIONS_DATA_SAVE_ON_SUCCESS
              value: "none"
            - name: EXECUTIONS_DATA_PRUNE
              value: "true"
            - name: EXECUTIONS_DATA_MAX_AGE
              value: "168"
            # Performance tuning (keep the Node heap below the memory limit)
            - name: N8N_PAYLOAD_SIZE_MAX
              value: "16"
            - name: NODE_OPTIONS
              value: "--max-old-space-size=1024"
            # Security (basic-auth variables apply to older n8n versions;
            # v1+ ships built-in user management instead)
            - name: N8N_BASIC_AUTH_ACTIVE
              value: "true"
            - name: N8N_BASIC_AUTH_USER
              value: "admin"
            # Database connection optimization
            - name: DB_POSTGRESDB_POOL_SIZE
              value: "5"
            - name: DB_POSTGRESDB_POOL_CONNECTION_TIMEOUT_MILLIS
              value: "3000"
            - name: DB_POSTGRESDB_POOL_IDLE_TIMEOUT_MILLIS
              value: "10000"
            # Secrets from Secret Manager (key is the secret version)
            - name: DB_POSTGRESDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: n8n-db-password
                  key: latest
            - name: N8N_ENCRYPTION_KEY
              valueFrom:
                secretKeyRef:
                  name: n8n-encryption-key
                  key: latest
            - name: N8N_BASIC_AUTH_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: n8n-admin-password
                  key: latest
Step 2: Deploy Service
# Deploy the service
gcloud run services replace service.yaml --region=us-central1
# Get the service URL
gcloud run services describe n8n-automation --region=us-central1 --format='value(status.url)'
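Once deployed, a quick smoke test against the service URL confirms the container is serving. A sketch assuming n8n's /healthz endpoint, which is typically served without basic auth (verify for your version):

```shell
# Return the HTTP status code of the health endpoint (expect 200)
smoke_test() {
  curl -fsS -o /dev/null -w '%{http_code}' "$1/healthz"
}
# Usage:
# smoke_test "$(gcloud run services describe n8n-automation \
#   --region=us-central1 --format='value(status.url)')"
```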
Domain Configuration
Step 1: Set up Custom Domain
# Map custom domain to Cloud Run service
gcloud run domain-mappings create \
--service=n8n-automation \
--domain=n8n.your-domain.com \
--region=us-central1
# Verify domain mapping
gcloud run domain-mappings list --region=us-central1
Step 2: DNS Configuration
Add DNS records for your domain:
# Add CNAME record in your DNS provider
CNAME n8n ghs.googlehosted.com.
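DNS propagation can be checked before relying on the domain. A sketch using dig; n8n.your-domain.com is the same placeholder used above:

```shell
# Resolve the CNAME; expect ghs.googlehosted.com. once propagated
check_cname() {
  dig +short CNAME "$1"
}
# Usage: check_cname n8n.your-domain.com
```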
Database Optimization
Step 1: Create Indexes
Optimize n8n database performance:
-- Connect to the n8n database (psql meta-command; no trailing semicolon)
\c n8n_prod
-- Create performance indexes. Note: n8n's columns are camelCase and must be
-- quoted, and names vary by n8n version -- verify against your schema first.
-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_executions_stopped_at
  ON execution_entity ("stoppedAt")
  WHERE "stoppedAt" IS NOT NULL;
CREATE INDEX CONCURRENTLY idx_executions_workflow_id
  ON execution_entity ("workflowId", "stoppedAt");
CREATE INDEX CONCURRENTLY idx_executions_status
  ON execution_entity (status, "startedAt");
Step 2: Automated Cleanup
-- Create cleanup function. Column names are camelCase and quoted, and the
-- set of status values varies by n8n version -- verify against your schema.
CREATE OR REPLACE FUNCTION cleanup_old_executions()
RETURNS void AS $$
BEGIN
  DELETE FROM execution_entity
  WHERE "stoppedAt" < NOW() - INTERVAL '7 days'
    AND status IN ('success', 'error', 'crashed');
  RAISE NOTICE 'Cleaned up executions older than 7 days';
END;
$$ LANGUAGE plpgsql;
-- Invoke from a scheduled job or Cloud Function:
-- SELECT cleanup_old_executions();
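One way to schedule the cleanup is a Cloud Scheduler job that calls an n8n webhook, whose workflow runs the cleanup query against Postgres. A sketch; the job name, schedule, and webhook path are assumptions:

```shell
# Create a daily 03:00 UTC Scheduler job hitting a cleanup webhook
schedule_cleanup() {
  gcloud scheduler jobs create http n8n-execution-cleanup \
    --schedule="0 3 * * *" \
    --time-zone="Etc/UTC" \
    --uri="https://n8n.your-domain.com/webhook/cleanup-executions" \
    --http-method=POST
}
```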
Workflow Optimization
Step 1: Efficient Workflow Patterns
Create optimized workflow templates:
// Example: Optimized batch processing workflow
// (illustrative: node types abbreviated, code shown as a template literal)
{
  "name": "Efficient Batch Processor",
  "nodes": [
    {
      "name": "Webhook Trigger",
      "type": "webhook",
      "parameters": {
        "path": "batch-process",
        "responseMode": "immediately"
      }
    },
    {
      "name": "Get Data Batch",
      "type": "postgres",
      "parameters": {
        "operation": "executeQuery",
        "query": "SELECT * FROM items WHERE processed = false LIMIT 500"
      }
    },
    {
      "name": "Process in Batches",
      "type": "splitInBatches",
      "parameters": {
        "batchSize": 100
      }
    },
    {
      "name": "Process Items",
      "type": "code",
      "parameters": {
        "language": "javascript",
        "code": `
          // Efficient in-memory processing without external calls
          // (processItem is a placeholder for your own logic)
          const items = $input.all();
          return items.map(({ json: item }) => ({
            json: {
              id: item.id,
              processed: true,
              result: processItem(item),
              processed_at: new Date().toISOString()
            }
          }));
        `
      }
    },
    {
      "name": "Update Status",
      "type": "postgres",
      "parameters": {
        "operation": "executeQuery",
        "query": "UPDATE items SET processed = true WHERE id = ANY($1)",
        "additionalFields": {
          "queryParams": "={{ $json.map(item => item.id) }}"
        }
      }
    }
  ]
}
Step 2: Error Handling
// Robust error handling node: classify the error, then retry, skip, or fail
{
  "name": "Error Handler",
  "type": "code",
  "parameters": {
    "language": "javascript",
    "code": `
      const error = $input.first().json.error;

      // Rate limits: exponential backoff, capped at 5 minutes
      if (error.message.includes('rate limit')) {
        const delay = Math.min(300000, Math.pow(2, error.retryCount || 0) * 1000);
        return [{ json: { action: 'retry', delay, error } }];
      }

      // Timeouts: simple fixed-delay retry
      if (error.message.includes('timeout')) {
        return [{ json: { action: 'retry', delay: 30000, error } }];
      }

      // Invalid data: skip rather than retry
      if (error.message.includes('invalid')) {
        return [{ json: { action: 'skip', reason: error.message } }];
      }

      // Default: retry once, then fail
      if ((error.retryCount || 0) < 1) {
        return [{ json: { action: 'retry', delay: 10000, error } }];
      }
      return [{ json: { action: 'fail', error } }];
    `
  }
}
Monitoring Setup
Step 1: Cloud Monitoring Dashboard
Create monitoring dashboard for n8n metrics:
# Create custom dashboard via CLI or Console
# Monitor these key metrics:
# - Container instances count
# - Request count and latency
# - Error rate
# - Memory and CPU utilization
# - Cold start frequency
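As a starting point, a minimal dashboard definition can be created from a JSON file via the CLI. The single request-count chart below is illustrative; extend it with the metrics listed above:

```shell
# Write a minimal dashboard definition (field names per the Monitoring
# dashboards API; the single widget here is only a starting point)
cat > dashboard.json <<'EOF'
{
  "displayName": "n8n Cloud Run",
  "gridLayout": {
    "columns": "2",
    "widgets": [
      {
        "title": "Request count",
        "xyChart": {
          "dataSets": [
            {
              "timeSeriesQuery": {
                "timeSeriesFilter": {
                  "filter": "metric.type=\"run.googleapis.com/request_count\" resource.type=\"cloud_run_revision\""
                }
              }
            }
          ]
        }
      }
    ]
  }
}
EOF
# gcloud monitoring dashboards create --config-from-file=dashboard.json
```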
Step 2: Alerting Configuration
# Alert policy examples (abridged: real conditionThreshold filters also
# need a metric.type clause)
alerting:
  policies:
    - displayName: "n8n High Error Rate"
      conditions:
        - displayName: "Error rate > 5%"
          conditionThreshold:
            filter: 'resource.type="cloud_run_revision" AND resource.label.service_name="n8n-automation"'
            comparison: COMPARISON_GT
            thresholdValue: 0.05
            duration: "300s"
    - displayName: "n8n High Memory Usage"
      conditions:
        - displayName: "Memory > 80%"
          conditionThreshold:
            filter: 'resource.type="cloud_run_revision"'
            comparison: COMPARISON_GT
            thresholdValue: 0.8
Cost Optimization
Step 1: Right-size Resources
# Optimize container resources based on usage
gcloud run services update n8n-automation \
--memory=512Mi \
--cpu=0.5 \
--concurrency=5 \
--max-instances=5 \
--region=us-central1
Step 2: Execution Data Management
# Environment variables for cost optimization
env:
  - name: EXECUTIONS_DATA_PRUNE
    value: "true"
  - name: EXECUTIONS_DATA_MAX_AGE
    value: "72"   # 3 days retention
  - name: EXECUTIONS_DATA_SAVE_ON_SUCCESS
    value: "none" # Don't save success data
  - name: EXECUTIONS_DATA_SAVE_ON_ERROR
    value: "all"  # Keep errors for debugging
Deployment Checklist
Pre-Launch Verification
- Database created with proper permissions
- Secrets stored in Secret Manager
- Container image built and pushed
- Service deployed and accessible
- Custom domain configured
- SSL certificate active
- Basic authentication working
Post-Launch Monitoring
- Service health check passing
- Workflows executing successfully
- Database connections stable
- Error rates within acceptable limits
- Cost monitoring alerts configured
- Backup procedures in place
Performance Optimization
- Database indexes created
- Execution cleanup scheduled
- Resource limits optimized
- Cold start times acceptable (<5s)
- Workflow patterns optimized
- Error handling implemented
Troubleshooting
Common Issues
Service Won't Start
# Check recent service logs (logs tail requires the beta component)
gcloud run services logs read n8n-automation --region=us-central1 --limit=50
# Check container build logs
gcloud builds list --limit=5
Database Connection Issues
# Test database connectivity
gcloud sql connect INSTANCE_NAME --user=n8n_user --database=n8n_prod
Secret Access Issues
# Verify secret access
gcloud secrets versions access latest --secret=n8n-db-password
Domain Issues
# Check domain mapping status
gcloud run domain-mappings describe n8n.your-domain.com --region=us-central1
This guide provides a complete production-ready n8n deployment on Google Cloud Run with cost optimization and monitoring. Customize the configuration based on your specific workflow requirements and scaling needs.