Docker Compose for Development: Local Infrastructure
The difference between a pleasant onboarding experience ("clone and run pnpm dev:infra") and a painful one ("first set up PostgreSQL, then configure Redis, then...") comes down to how well your Docker Compose setup captures your infrastructure requirements. A well-crafted docker-compose.dev.yml lets any developer on any machine have the exact same infrastructure running in minutes.
This guide covers the patterns for a production-quality local development stack: service configuration, health checks, networking, volume management, and the integration with your application's startup sequence.
Key Takeaways
- Use a non-default port for PostgreSQL locally (5433) to avoid conflicts with system installations
- Health checks on service dependencies prevent "connection refused" startup errors
- Named volumes persist database data between container restarts — bind mounts don't work reliably on Windows
- Use `env_file` to load environment variables from your `.env.local` file into containers
- Separate `docker-compose.dev.yml` from `docker-compose.prod.yml` — they serve different purposes
- The `depends_on` `condition: service_healthy` pattern waits for actual readiness, not just container start
- Use `profiles` to make optional services (email, monitoring) opt-in
- Run `docker compose` (v2), not `docker-compose` (v1) — the plugin syntax is current
The Complete Development Stack
```yaml
# infrastructure/docker-compose.dev.yml
name: ecosire-dev

services:
  # ─── PostgreSQL ─────────────────────────────────────────────────
  postgres:
    image: postgres:17-alpine
    container_name: ecosire-postgres
    environment:
      POSTGRES_DB: ecosire_dev
      POSTGRES_USER: ecosire
      POSTGRES_PASSWORD: dev_password_change_in_prod
    ports:
      - "5433:5432" # 5433 externally — avoids conflicts with system PostgreSQL
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d # Run SQL on first start
    command: >
      postgres
      -c shared_buffers=256MB
      -c effective_cache_size=1GB
      -c work_mem=16MB
      -c maintenance_work_mem=128MB
      -c checkpoint_completion_target=0.9
      -c wal_buffers=16MB
      -c max_connections=100
      -c log_min_duration_statement=100
      -c log_statement=ddl
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ecosire -d ecosire_dev"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    restart: unless-stopped

  # ─── Redis ──────────────────────────────────────────────────────
  redis:
    image: redis:7-alpine
    container_name: ecosire-redis
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: >
      redis-server
      --maxmemory 512mb
      --maxmemory-policy allkeys-lru
      --appendonly yes
      --appendfsync everysec
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # ─── Authentik (Identity Provider) ──────────────────────────────
  authentik-server:
    image: ghcr.io/goauthentik/server:2024.12
    container_name: ecosire-authentik
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgres
      AUTHENTIK_POSTGRESQL__USER: ecosire
      AUTHENTIK_POSTGRESQL__PASSWORD: dev_password_change_in_prod
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_SECRET_KEY: dev-secret-key-change-in-production-32chars
      AUTHENTIK_ERROR_REPORTING__ENABLED: "false"
      AUTHENTIK_DISABLE_STARTUP_ANALYTICS: "true"
    volumes:
      - authentik_media:/media
      - authentik_certs:/certs
    ports:
      - "9000:9000" # HTTP
      - "9443:9443" # HTTPS
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ["CMD-SHELL", "ak healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 60s # Authentik takes time to initialize
    restart: unless-stopped

  authentik-worker:
    image: ghcr.io/goauthentik/server:2024.12
    container_name: ecosire-authentik-worker
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgres
      AUTHENTIK_POSTGRESQL__USER: ecosire
      AUTHENTIK_POSTGRESQL__PASSWORD: dev_password_change_in_prod
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_SECRET_KEY: dev-secret-key-change-in-production-32chars
    volumes:
      - authentik_media:/media
      - authentik_certs:/certs
      - /var/run/docker.sock:/var/run/docker.sock # For Authentik's proxy
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped

  # ─── Mailpit (Email Testing) ────────────────────────────────────
  mailpit:
    image: axllent/mailpit:latest
    container_name: ecosire-mailpit
    ports:
      - "1025:1025" # SMTP
      - "8025:8025" # Web UI
    environment:
      MP_MAX_MESSAGES: 200
      MP_SMTP_AUTH_ACCEPT_ANY: "true" # quote booleans — compose environment values must be strings
      MP_SMTP_AUTH_ALLOW_INSECURE: "true"
    restart: unless-stopped
    profiles:
      - email # Optional — use `docker compose --profile email up`

  # ─── pgAdmin (Database GUI) ─────────────────────────────────────
  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: ecosire-pgadmin
    environment:
      PGADMIN_DEFAULT_EMAIL: [email protected]
      PGADMIN_DEFAULT_PASSWORD: admin
      PGADMIN_CONFIG_SERVER_MODE: "False"
      PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: "False"
    ports:
      - "5050:80"
    volumes:
      - pgadmin_data:/var/lib/pgadmin
    depends_on:
      postgres:
        condition: service_healthy
    restart: unless-stopped
    profiles:
      - tools # Optional

networks:
  default:
    name: ecosire-dev-network

volumes:
  postgres_data:
    name: ecosire-postgres-data
  redis_data:
    name: ecosire-redis-data
  authentik_media:
    name: ecosire-authentik-media
  authentik_certs:
    name: ecosire-authentik-certs
  pgadmin_data:
    name: ecosire-pgadmin-data
```
Package.json Scripts
Wire the Docker Compose commands into your monorepo scripts:
```json
{
  "scripts": {
    "dev:infra": "docker compose -f infrastructure/docker-compose.dev.yml up -d",
    "dev:infra:down": "docker compose -f infrastructure/docker-compose.dev.yml down",
    "dev:infra:logs": "docker compose -f infrastructure/docker-compose.dev.yml logs -f",
    "dev:infra:reset": "docker compose -f infrastructure/docker-compose.dev.yml down -v && pnpm dev:infra",
    "dev:infra:email": "docker compose -f infrastructure/docker-compose.dev.yml --profile email up -d",
    "dev:infra:tools": "docker compose -f infrastructure/docker-compose.dev.yml --profile tools up -d"
  }
}
```
The --profile flag lets optional services (email testing with Mailpit, database GUI with pgAdmin) stay dormant until explicitly requested.
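Spelled out as commands (a sketch; the file path matches the scripts above):

```shell
# Base services only (postgres, redis, authentik)
docker compose -f infrastructure/docker-compose.dev.yml up -d

# Base services plus Mailpit (the "email" profile)
docker compose -f infrastructure/docker-compose.dev.yml --profile email up -d

# Profiles can be combined
docker compose -f infrastructure/docker-compose.dev.yml --profile email --profile tools up -d
```

One caveat: depending on your Compose version, `docker compose down` may skip services behind profiles that aren't currently enabled, so pass the same `--profile` flags when tearing down.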
Database Initialization Scripts
Place SQL files in `infrastructure/init-scripts/` — they run on the first container start:
```sql
-- infrastructure/init-scripts/01-create-databases.sql
-- Create all databases Authentik needs separately from the app DB
CREATE DATABASE authentik;
GRANT ALL PRIVILEGES ON DATABASE authentik TO ecosire;

-- Create test database for CI
CREATE DATABASE ecosire_test;
GRANT ALL PRIVILEGES ON DATABASE ecosire_test TO ecosire;
```

```sql
-- infrastructure/init-scripts/02-extensions.sql
-- Enable PostgreSQL extensions
\c ecosire_dev;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";   -- Trigram search
CREATE EXTENSION IF NOT EXISTS "btree_gin"; -- Composite GIN indexes

\c ecosire_test;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```
Initialization scripts run in alphabetical order. The `\c database_name` psql metacommand switches the active database.
Environment Variables Integration
Your application reads from .env.local in the monorepo root. The Docker services need to know how to connect to each other using service names (not localhost):
```bash
# .env.local (monorepo root)

# PostgreSQL — use 5433 externally (host) or 5432 internally (container network)
DATABASE_URL=postgresql://ecosire:dev_password_change_in_prod@localhost:5433/ecosire_dev

# Redis
REDIS_URL=redis://localhost:6379

# Authentik — use 9000 for external calls from your dev machine
AUTHENTIK_URL=http://localhost:9000
# Use service name for server-to-server calls within Docker network
AUTHENTIK_INTERNAL_URL=http://authentik-server:9000

# Email (Mailpit SMTP)
SMTP_HOST=localhost
SMTP_PORT=1025
SMTP_SECURE=false

# Application
NODE_ENV=development
```
For applications running inside Docker that need to talk to other services, use service names. For applications running on your host machine (NestJS, Next.js in dev mode), use localhost with the host-mapped ports.
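As a sketch of that rule in application code (the helper and names here are hypothetical, not part of the stack above): pick the Docker service name when the caller itself runs on the compose network, and `localhost` with the host-mapped port otherwise.

```typescript
// Sketch: resolve a base URL depending on where the caller runs.
// "inDocker" would typically come from an env flag you set yourself.
interface ServiceAddress {
  serviceName: string;  // DNS name on the compose network
  internalPort: number; // port inside the container
  hostPort: number;     // host-mapped port from docker-compose.dev.yml
}

function resolveBaseUrl(addr: ServiceAddress, inDocker: boolean): string {
  return inDocker
    ? `http://${addr.serviceName}:${addr.internalPort}`
    : `http://localhost:${addr.hostPort}`;
}

const authentik: ServiceAddress = {
  serviceName: "authentik-server",
  internalPort: 9000,
  hostPort: 9000,
};

console.log(resolveBaseUrl(authentik, false)); // from the host: http://localhost:9000
console.log(resolveBaseUrl(authentik, true));  // on the network: http://authentik-server:9000
```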
Health Checks Deep Dive
Health checks prevent cascading startup failures. The `depends_on` `condition: service_healthy` setting waits for actual readiness, not just container start:
```yaml
# Without health checks — can fail because PostgreSQL isn't ready
depends_on:
  - postgres

# With health checks — waits for PostgreSQL to accept connections
depends_on:
  postgres:
    condition: service_healthy
```
Custom health checks for your own services:
```typescript
// apps/api/src/health/health.controller.ts
@Get()
@Public()
@HealthCheck()
async check() {
  return this.health.check([
    () => this.db.isHealthy('database'),
    () => this.redis.isHealthy('redis'),
  ]);
}
```
```yaml
# If your API is also dockerized
api:
  image: ecosire-api:latest
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:3001/api/health"]
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 30s
```
Volume Management
Named volumes persist data between restarts. Understand when to use each volume type:
| Type | Persistence | Performance | Use For |
|---|---|---|---|
| Named volume | Yes | Excellent | Database data |
| Bind mount | Yes | Good (Linux), Poor (macOS) | Source code hot-reload |
| tmpfs | No | Excellent | Temporary files, secrets |
```yaml
# Use bind mounts for source code (enables hot reload)
volumes:
  - ./apps/api/src:/app/src # Code changes reflected immediately

# Use named volumes for data
volumes:
  - postgres_data:/var/lib/postgresql/data

# Use tmpfs for ephemeral data
volumes:
  - type: tmpfs
    target: /tmp
```
On macOS, Docker Desktop routes bind mounts through a file-sharing layer (VirtioFS on current versions, gRPC FUSE on older ones) that is significantly slower than native Linux bind mounts. For the NestJS and Next.js dev servers, run them directly on your host machine (not in Docker) to get native file system performance.
Production Docker Compose
The production compose file is structurally different: no published host ports, `restart: always` policies, and production resource limits:
```yaml
# infrastructure/docker-compose.prod.yml
name: ecosire-prod

services:
  postgres:
    image: postgres:17-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    # No port mapping — only accessible within Docker network
    restart: always
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 1gb
    volumes:
      - redis_data:/data
    restart: always
    # No port mapping — internal only

volumes:
  postgres_data:
  redis_data:
```
Production doesn't expose ports externally — applications connect via the internal Docker network. Nginx handles external traffic.
Common Pitfalls and Solutions
Pitfall 1: Port conflicts with system services
PostgreSQL, Redis, and other services often run as system services. Always map to non-standard ports in development:
- PostgreSQL: `5433:5432` (not `5432:5432`)
- Redis: keep `6379:6379` (rarely conflicts)
- Run `lsof -i :5432` to check what's using the default port
Pitfall 2: Volume permission issues on Linux
Docker volumes on Linux use root ownership by default. If your container user is non-root, set the correct ownership:
```yaml
postgres:
  image: postgres:17-alpine
  user: "999:999" # postgres UID:GID (999 on Debian-based images; 70 on alpine)
  # Or use an init container to fix permissions
```
Pitfall 3: Authentik initialization takes 60+ seconds
Authentik runs database migrations on first start. The start_period: 60s in the health check gives it time. If dependent services start before Authentik is ready, they'll fail. Use the service_healthy condition and give it enough start_period.
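If you script your own bootstrap (for example, seeding Authentik after `pnpm dev:infra`), the same wait-for-readiness idea is easy to express in code. A minimal generic sketch, not tied to any Authentik API:

```typescript
// Poll an async probe until it reports ready, with a delay between attempts.
// "probe" is any function that resolves true once the dependency is up.
async function waitForReady(
  probe: () => Promise<boolean>,
  { retries = 30, delayMs = 2000 }: { retries?: number; delayMs?: number } = {},
): Promise<void> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    if (await probe().catch(() => false)) return; // treat thrown errors as "not ready yet"
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`dependency not ready after ${retries} attempts`);
}
```

You might probe Authentik with something like `() => fetch("http://localhost:9000/-/health/ready/").then((r) => r.ok)`; treat that path as an assumption and verify it against your Authentik version's docs.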
Pitfall 4: Docker Desktop resource limits on Mac
Default Docker Desktop allocates 2 CPUs and 2GB RAM — not enough for PostgreSQL + Redis + Authentik running simultaneously. Increase in Docker Desktop Settings > Resources to at least 4 CPUs and 6GB RAM.
Pitfall 5: docker-compose vs docker compose
The old `docker-compose` (v1, written in Python) is deprecated. Use `docker compose` (v2, the plugin). Check your version with `docker compose version` — if you see `Docker Compose version v2.x.x`, you're using v2.
Frequently Asked Questions
Should I run my application services (NestJS, Next.js) in Docker during development?
Generally no — for active development, run your application services on your host machine for faster hot-reload and easier debugging. Use Docker only for infrastructure services (databases, caches, identity providers) that are stable and don't need frequent restarting. The exception is if your application has native dependencies that differ between your development OS and the production environment.
How do I handle database migrations in the Docker Compose workflow?
Run migrations from your host machine after starting the infrastructure: `pnpm dev:infra && pnpm db:migrate`. Don't run migrations inside a Docker container during development — you lose the type checking and IDE integration that make Drizzle migrations safe. For initial database creation, use Docker's `docker-entrypoint-initdb.d` scripts.
How do I back up and restore my local Docker volumes?
Use docker run --rm -v postgres_data:/data -v $(pwd):/backup alpine tar czf /backup/postgres-backup.tar.gz /data to back up. Restore with the same approach using tar xzf. For development, you can also dump with pg_dump and restore with psql since you have the port exposed.
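Spelled out in both directions, assuming the stack is stopped first so the data directory is quiescent (note the volume's external name in the compose file above is `ecosire-postgres-data`):

```shell
# Back up the PostgreSQL volume to a tarball in the current directory
docker run --rm -v ecosire-postgres-data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/postgres-backup.tar.gz -C / data

# Restore: extract the tarball back into the (ideally empty) volume
docker run --rm -v ecosire-postgres-data:/data -v "$(pwd)":/backup alpine \
  tar xzf /backup/postgres-backup.tar.gz -C /

# Logical alternative over the host-mapped port
pg_dump postgresql://ecosire:dev_password_change_in_prod@localhost:5433/ecosire_dev > dump.sql
```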
How do I share Docker Compose state with other team members?
The Docker Compose file is shared via git, but the data in volumes is local. Each developer starts with an empty database and runs migrations/seeds to populate it. Use seed scripts (committed to the repo) to create consistent test data. The shared docker-compose.dev.yml ensures everyone uses the same service versions and configuration.
Why use Mailpit instead of real email in development?
Mailpit is a local SMTP server that captures all outgoing email and provides a web UI to view the messages. It prevents accidentally sending real emails to real users during development, doesn't require SMTP credentials, and lets you verify email templates without checking your inbox. Configure your app with `SMTP_HOST=localhost` and `SMTP_PORT=1025`, then visit http://localhost:8025 to see captured emails.
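As a small illustration, a hypothetical helper that derives transport options from those environment variables; the resulting object happens to match the shape nodemailer's `createTransport` accepts, but nothing below imports nodemailer:

```typescript
// Build SMTP transport options from environment variables.
// Defaults target the Mailpit container from the compose file above.
interface SmtpOptions {
  host: string;
  port: number;
  secure: boolean; // Mailpit speaks plain SMTP, so false in dev
}

function smtpOptionsFromEnv(env: Record<string, string | undefined>): SmtpOptions {
  return {
    host: env.SMTP_HOST ?? "localhost",
    port: Number(env.SMTP_PORT ?? 1025),
    secure: env.SMTP_SECURE === "true",
  };
}

console.log(smtpOptionsFromEnv({ SMTP_HOST: "localhost", SMTP_PORT: "1025", SMTP_SECURE: "false" }));
// → { host: 'localhost', port: 1025, secure: false }
```

In an app you would call `smtpOptionsFromEnv(process.env)` once at startup and pass the result to your mailer of choice.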
Next Steps
A well-crafted Docker Compose setup for local development is an investment that pays dividends every time a new developer joins or you spin up a new machine. ECOSIRE runs PostgreSQL 17, Redis 7, and Authentik in Docker Compose for local development across the entire team.
Need help designing your local development infrastructure or containerizing your application for production? Explore our DevOps services to see how we can help.
Written by
ECOSIRE Team, Technical Writing
The ECOSIRE technical writing team covers Odoo ERP, Shopify eCommerce, AI agents, Power BI analytics, GoHighLevel automation, and enterprise software best practices. Our guides help businesses make informed technology decisions.