Docker for Production ERP Deployment: A Complete Operations Guide

Deploy ERP systems with Docker in production. Covers multi-stage builds, Docker Compose orchestration, volume management, networking, and scaling strategies.

ECOSIRE Research and Development Team
March 16, 2026 | 8 min read | 1.8k words

Organizations running ERP systems in Docker containers report 73% faster deployment cycles and 45% fewer environment-related incidents compared to traditional bare-metal deployments. Docker transforms ERP deployment from a multi-day, error-prone process into a repeatable, version-controlled operation that any team member can execute.

This guide covers the full lifecycle of running enterprise ERP systems --- including Odoo, custom NestJS backends, and Next.js frontends --- in production Docker environments.

Key Takeaways

  • Multi-stage Docker builds reduce ERP container image sizes by 60-80%, improving deployment speed
  • Docker Compose orchestrates ERP, database, reverse proxy, and cache services as a single deployable unit
  • Named volumes and bind mounts ensure data persistence across container restarts and upgrades
  • Health checks and restart policies provide automatic recovery from transient failures

Architecture of a Dockerized ERP Stack

A production ERP deployment typically involves five or more interconnected services. Docker Compose defines these services declaratively, ensuring consistent deployment across environments.

Service Topology

The standard Dockerized ERP stack:

  1. Application server: The ERP runtime (Odoo, NestJS, or similar)
  2. Database: PostgreSQL with persistent volume storage
  3. Reverse proxy: Nginx handling SSL termination, static files, and request routing
  4. Cache layer: Redis for session storage, job queues, and application caching
  5. Background workers: Async job processors for emails, reports, and integrations

Optional services include backup containers (pg_dump on cron), monitoring sidecars (Prometheus exporters), and log shippers (Fluent Bit).


Multi-Stage Builds for ERP Applications

Multi-stage builds are essential for production Docker images. They separate build-time dependencies from runtime, producing lean, secure images.

NestJS Backend Build

# Stage 1: Install dependencies and build
FROM node:20-alpine AS builder
WORKDIR /app

# Install pnpm
RUN corepack enable

# Copy workspace configuration
COPY pnpm-lock.yaml pnpm-workspace.yaml package.json ./
COPY packages/ ./packages/
COPY apps/api/package.json ./apps/api/

# Install dependencies
RUN pnpm install --frozen-lockfile

# Copy source and build
COPY apps/api/ ./apps/api/
RUN pnpm --filter @ecosire/db build
RUN pnpm --filter @ecosire/types build
RUN pnpm --filter @ecosire/validators build
RUN pnpm --filter @ecosire/api build

# Stage 2: Production runtime
FROM node:20-alpine AS runner
WORKDIR /app

RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

COPY --from=builder --chown=appuser:appgroup /app/apps/api/dist ./dist
# Note: node_modules still contains dev dependencies at this point; pruning
# them (e.g. `pnpm prune --prod` in the builder stage) yields the smaller
# "pruned deps" image size quoted later in this guide
COPY --from=builder --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/apps/api/package.json ./

USER appuser
EXPOSE 3001

HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3001/health || exit 1

CMD ["node", "dist/main.js"]

Next.js Frontend Build

FROM node:20-alpine AS builder
WORKDIR /app
RUN corepack enable

COPY pnpm-lock.yaml pnpm-workspace.yaml package.json ./
COPY packages/ ./packages/
COPY apps/web/package.json ./apps/web/

RUN pnpm install --frozen-lockfile

COPY apps/web/ ./apps/web/
RUN pnpm --filter @ecosire/web build

FROM node:20-alpine AS runner
WORKDIR /app

RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup

# Requires `output: "standalone"` in next.config.js, which emits a
# self-contained server.js plus a minimal node_modules
COPY --from=builder --chown=appuser:appgroup /app/apps/web/.next/standalone ./
COPY --from=builder --chown=appuser:appgroup /app/apps/web/.next/static ./.next/static
COPY --from=builder --chown=appuser:appgroup /app/apps/web/public ./public

USER appuser
EXPOSE 3000
ENV NODE_ENV=production
CMD ["node", "server.js"]

Image Size Comparison

| Build Type | Image Size | Build Time |
|---|---|---|
| Single-stage (full node image) | 1.8 GB | 4 min |
| Single-stage (Alpine) | 650 MB | 3.5 min |
| Multi-stage (Alpine) | 180 MB | 5 min |
| Multi-stage + pruned deps | 120 MB | 5.5 min |

The 5.5 minute build time is acceptable because it happens in CI, not on developer machines.


Docker Compose for Production

services:
  api:
    build:
      context: .
      dockerfile: apps/api/Dockerfile
    environment:
      - DATABASE_URL=postgresql://app:${DB_PASSWORD}@db:5432/ecosire
      - REDIS_URL=redis://redis:6379
      - NODE_ENV=production
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    restart: unless-stopped
    networks:
      - backend
      - frontend

  web:
    build:
      context: .
      dockerfile: apps/web/Dockerfile
    environment:
      - API_URL=http://api:3001
      - NODE_ENV=production
    depends_on:
      - api
    restart: unless-stopped
    networks:
      - frontend

  db:
    image: postgres:17-alpine
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ecosire
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app -d ecosire"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - backend

  redis:
    image: redis:7-alpine
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped
    networks:
      - backend

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./infrastructure/nginx/production.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certbot/conf:/etc/letsencrypt:ro
      - ./certbot/www:/var/www/certbot:ro
    depends_on:
      - web
      - api
    restart: unless-stopped
    networks:
      - frontend

volumes:
  postgres-data:
  redis-data:

networks:
  frontend:
  backend:

Network Isolation

The configuration above uses two networks:

  • frontend: Nginx, web, and API (Nginx proxies to both)
  • backend: API, database, and Redis

The database and Redis are not accessible from the Nginx container or the external network. This network segmentation is a critical security practice.


Volume Management and Data Persistence

Volumes are the most critical part of a Dockerized ERP deployment. Lose your volumes and you lose your data.

Volume Types

| Type | Use Case | Persistence | Performance |
|---|---|---|---|
| Named volumes | Database, Redis | Survives container removal | Native filesystem speed |
| Bind mounts | Config files, logs | Tied to host filesystem | Native filesystem speed |
| tmpfs mounts | Temp files, secrets | Memory only, lost on restart | Memory speed |

Backup Strategy for Docker Volumes

#!/bin/bash
# backup-volumes.sh - Run via cron every 6 hours

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR="/opt/backups"

# Stop the application briefly so the database and Redis snapshots are
# mutually consistent (pg_dump alone is already transaction-consistent)
docker compose stop api web

# Backup PostgreSQL
docker compose exec -T db pg_dump -U app ecosire | gzip > "$BACKUP_DIR/db_$TIMESTAMP.sql.gz"

# Backup Redis (BGSAVE is asynchronous; the sleep is a crude wait --
# poll LASTSAVE if you need a robust completion check)
docker compose exec -T redis redis-cli -a "$REDIS_PASSWORD" BGSAVE
sleep 5
docker cp $(docker compose ps -q redis):/data/dump.rdb "$BACKUP_DIR/redis_$TIMESTAMP.rdb"

# Restart services
docker compose start api web

# Upload to S3
aws s3 sync "$BACKUP_DIR" "s3://company-backups/docker-volumes/" --exclude "*.tmp"

# Retain 30 days locally
find "$BACKUP_DIR" -name "*.gz" -mtime +30 -delete
find "$BACKUP_DIR" -name "*.rdb" -mtime +30 -delete
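A backup that has never been test-read is a hope, not a backup. A small verification helper, assuming the `db_*.sql.gz` naming used by the script above:

```shell
# verify_latest_backup DIR
# Prints "OK: <file>" and returns 0 if the newest db_*.sql.gz in DIR
# passes gzip's integrity check; returns non-zero otherwise.
verify_latest_backup() {
  dir="$1"
  latest=$(ls -t "$dir"/db_*.sql.gz 2>/dev/null | head -n 1)
  [ -n "$latest" ] || { echo "no backups found in $dir"; return 1; }
  if gzip -t "$latest" 2>/dev/null; then
    echo "OK: $latest"
  else
    echo "CORRUPT: $latest"
    return 1
  fi
}

# Example: verify_latest_backup /opt/backups
```

For full confidence, periodically restore the newest dump into a scratch database as well; `gzip -t` only proves the file is intact, not that the dump is loadable.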

Health Checks and Restart Policies

Production containers must self-report their health and recover from failures automatically.

Application Health Check Endpoint

// health.controller.ts
@Controller('health')
export class HealthController {
  constructor(
    private readonly db: DatabaseService,
    private readonly redis: RedisService,
  ) {}

  @Get()
  @Public()
  async check() {
    const checks = {
      database: await this.checkDatabase(),
      redis: await this.checkRedis(),
      uptime: process.uptime(),
      memory: process.memoryUsage(),
    };

    const healthy = checks.database && checks.redis;
    return { status: healthy ? 'ok' : 'degraded', checks };
  }

  private async checkDatabase(): Promise<boolean> {
    try {
      await this.db.execute('SELECT 1');
      return true;
    } catch {
      return false;
    }
  }

  private async checkRedis(): Promise<boolean> {
    try {
      await this.redis.ping();
      return true;
    } catch {
      return false;
    }
  }
}
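The Docker HEALTHCHECK in the Dockerfile only confirms the endpoint answers; the JSON body distinguishes `ok` from `degraded`. A tiny shell gate on the body, reading the response on stdin and assuming the response shape returned by the controller above:

```shell
# require_ok - succeed (and print "healthy") only if stdin contains a
# "status":"ok" field, as returned by the health controller above
require_ok() {
  grep -q '"status" *: *"ok"' && echo "healthy"
}

# Example (run from the host; the API port is not published):
# docker compose exec -T api wget -qO- http://localhost:3001/health | require_ok
```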

Restart Policy Selection

| Policy | Behavior | Use Case |
|---|---|---|
| no | Never restart | Development, one-off tasks |
| on-failure | Restart only on non-zero exit | Workers, batch jobs |
| always | Always restart (including on Docker daemon restart) | Production services |
| unless-stopped | Like always, but respects manual stops | Most production services |

Use unless-stopped for production services. This ensures containers restart after server reboots or Docker daemon restarts, but respects manual docker compose stop commands during maintenance.


Deployment Workflow

Rolling Updates with Docker Compose

#!/bin/bash
# deploy.sh - Zero-downtime deployment

set -e

echo "Pulling latest code..."
git pull origin main

echo "Building new images..."
docker compose build --no-cache api web

echo "Rolling update - API first..."
docker compose up -d --no-deps api
sleep 10

# Verify API health from inside the container (the compose file does
# not publish the API port on the host)
if ! docker compose exec -T api wget -q --spider http://localhost:3001/health; then
  echo "API health check failed -- fix or redeploy the previous image tag"
  exit 1
fi

echo "Rolling update - Web..."
docker compose up -d --no-deps web
sleep 5

# Verify Web health from inside the container
if ! docker compose exec -T web wget -q --spider http://localhost:3000; then
  echo "Web health check failed -- fix or redeploy the previous image tag"
  exit 1
fi

echo "Deployment complete!"
docker compose ps
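The fixed `sleep` calls in the script above are fragile: containers may need more (or less) time to become ready. A retry loop is sturdier; a minimal sketch, with the actual health probe passed in as a command string:

```shell
# wait_for_health CHECK_CMD [TRIES] [DELAY]
# Re-runs CHECK_CMD until it succeeds, at most TRIES times, DELAY seconds apart.
wait_for_health() {
  cmd="$1"; tries="${2:-10}"; delay="${3:-3}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# Example:
# wait_for_health "docker compose exec -T api wget -q --spider http://localhost:3001/health" 10 3
```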

Database Migration Safety

Never run migrations inside the application startup. Instead, run them as a separate step:

# Run migrations before deploying new containers
docker compose run --rm api npx drizzle-kit push

# Then deploy the new version
docker compose up -d

This pattern ensures that if a migration fails, the old version continues running unaffected.
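This gate can be scripted so the deploy step literally cannot run when migrations fail. A sketch with the two commands injected as arguments (which also makes the function testable without Docker):

```shell
# migrate_then_deploy MIGRATE_CMD DEPLOY_CMD
# Runs DEPLOY_CMD only if MIGRATE_CMD succeeds; a failed migration
# leaves the currently running version untouched.
migrate_then_deploy() {
  if sh -c "$1"; then
    sh -c "$2"
  else
    echo "migration failed; previous version left running" >&2
    return 1
  fi
}

# Example, matching the commands above:
# migrate_then_deploy \
#   "docker compose run --rm api npx drizzle-kit push" \
#   "docker compose up -d"
```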


Logging and Debugging

Centralized Logging

# Add to docker-compose.yml
services:
  api:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
        labels: "service"
    labels:
      service: "ecosire-api"

Common Debugging Commands

# View logs for a specific service
docker compose logs -f api --tail 100

# Execute a shell inside a running container
docker compose exec api sh

# View resource usage
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"

# Inspect container networking
docker compose exec api ping db

# View container environment variables
docker compose exec api env | sort
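When triaging, a quick error count per service narrows things down before reading full logs. A small filter over whatever log stream you pipe in (the error patterns are illustrative; adjust them to your log format):

```shell
# count_errors - read log lines on stdin, print how many look like errors
count_errors() {
  grep -ciE 'error|fatal|exception' || true
}

# Example: docker compose logs --tail 1000 api | count_errors
```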

Frequently Asked Questions

How do we handle database migrations in Docker?

Run migrations as a separate step before deploying new application containers. Use docker compose run --rm api npx drizzle-kit push (or your ORM's migration command) as a pre-deployment step. Never embed migration execution in the container startup command --- a failed migration should not prevent the current version from continuing to run.

What is the performance overhead of Docker?

On Linux, Docker's performance overhead is negligible --- typically less than 2% for CPU-bound workloads and no measurable difference for I/O-bound workloads. On macOS and Windows, Docker runs inside a virtual machine, adding 5-15% overhead. For production (which should be Linux), Docker is not a meaningful performance concern.

How do we manage secrets in Docker?

Never put secrets in Dockerfiles or docker-compose.yml files. Use environment variable files (.env) excluded from version control, Docker secrets (for Swarm mode), or external secret managers (AWS Secrets Manager, HashiCorp Vault). For Docker Compose, an .env file at the project root is the simplest approach.
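Compose reads the project `.env` automatically; ad-hoc scripts (like the backup script above, which needs `REDIS_PASSWORD`) do not. A minimal sketch for exporting the same file, assuming plain `KEY=value` lines without quoting edge cases:

```shell
# load_env FILE - export every KEY=value assignment in FILE
load_env() {
  set -a   # auto-export every variable assigned while this flag is on
  . "$1"
  set +a
}

# Example: load_env .env && ./backup-volumes.sh
```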

Should we use Docker Swarm or Kubernetes?

For most SMB ERP deployments, Docker Compose is sufficient. Docker Swarm adds multi-host orchestration with minimal complexity overhead. Kubernetes is appropriate when you need auto-scaling, complex networking policies, or service mesh capabilities. See our Kubernetes scaling guide and microservices architecture guide for decision frameworks.

How do we handle Odoo custom modules in Docker?

Mount custom modules as a bind mount volume pointing to your addons directory. In the Dockerfile, ensure the addons path is configured in odoo.conf. For CI/CD, build a custom Docker image that bakes in your modules, ensuring version consistency. See our existing Docker Odoo deployment guide for Odoo-specific configuration.


What Comes Next

Docker is the foundation for modern ERP deployment. Once your containerized stack is stable, explore zero-downtime deployment strategies, production monitoring, and infrastructure as code to build a fully automated operations pipeline.

Contact ECOSIRE for Docker deployment consulting, or explore our Odoo implementation services for fully managed containerized ERP deployment.


Published by ECOSIRE -- helping businesses deploy enterprise software with confidence.
