Docker Compose in production: what works and what doesn't

Practical tips for running Docker Compose in production environments. From health checks to logging and zero-downtime deploys.

Jean-Pierre Broeders

Freelance DevOps Engineer

February 24, 2026 · 5 min. read


The common wisdom says Docker Compose isn't suited for production. Partially true. For large distributed systems with hundreds of services, Kubernetes makes more sense. But for smaller projects — an API with a database, Redis, and a reverse proxy — Compose works just fine. When set up properly.

Getting the basics right

A common mistake: copying the development docker-compose.yml to the server as-is. That breaks things. Volumes pointing to local directories, ports exposed directly, no resource limits. All convenient during development, all problematic in production.

A production compose file looks different:

services:
  api:
    image: registry.example.com/myapp:${VERSION:-latest}
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    environment:
      - NODE_ENV=production
    networks:
      - internal
      - web

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    networks:
      - internal

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    networks:
      - internal

volumes:
  pgdata:

secrets:
  db_password:
    file: ./secrets/db_password.txt

networks:
  internal:
  web:
    external: true

A few things stand out. No build steps — production uses pre-built images. The database runs on a named volume, not a bind mount. Secrets go through Docker secrets, not plaintext environment variables. And there are separate networks: internal for inter-service communication, web as an external network for the reverse proxy.

Health checks aren't optional

Without health checks, Docker has no idea whether a container is actually functional. The container might be running while the application inside has crashed or is hanging. With a health check, Docker at least detects this and reports the container as unhealthy. One caveat: restart: unless-stopped only kicks in when the process actually exits. Plain Docker won't restart a container just for failing its health check; acting on that status takes a health-aware reverse proxy, Swarm mode, or a helper container like autoheal.

The start_period matters a lot. Many applications need startup time — database migrations, cache warming, that sort of thing. Without a start period, Docker marks the container as unhealthy before it even finishes booting.
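The status Docker assigns (starting, healthy, unhealthy) can be read back from the shell. A small helper like this (a sketch; the api service name comes from the compose file above) is useful in deploy scripts:

```shell
#!/bin/bash
# Print the health status Docker reports for a container:
# one of "starting", "healthy", or "unhealthy".
health_status() {
  docker inspect --format '{{.State.Health.Status}}' "$1"
}

# Usage (commented out so the file can be sourced as-is):
# health_status "$(docker compose ps -q api)"
```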

Get logging right

By default, Docker logs to JSON files that grow without limit. On a server with 50GB of disk, that's a ticking time bomb.

services:
  api:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

This rotates logs: maximum 3 files of 10MB each. For serious setups, an external logging driver like fluentd or gelf that forwards logs to a central system works better.
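With the gelf driver, that could look like the fragment below. The endpoint address is a placeholder for your own log collector:

```yaml
services:
  api:
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://logs.example.com:12201"
        tag: "myapp-api"
```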

Zero-downtime deploys

This is where things get tricky. Docker Compose doesn't have built-in rolling update functionality like Kubernetes. But with a workaround it's still possible:

#!/bin/bash
set -e

# VERSION must be exported, or the docker compose subprocess won't see it
export VERSION=$1

# Remember the currently running container
OLD_ID=$(docker compose ps -q api)

# Pull new image
docker compose pull api

# Start a second instance on the new image
docker compose up -d --no-deps --scale api=2 --no-recreate api

# Wait until the new container reports healthy
NEW_ID=$(docker compose ps -q api | grep -v "$OLD_ID")
until [ "$(docker inspect -f '{{.State.Health.Status}}' "$NEW_ID")" = "healthy" ]; do
  sleep 2
done

# Stop and remove the old container
docker stop "$OLD_ID" && docker rm "$OLD_ID"

This only works when the reverse proxy (Traefik, nginx) is health-aware and automatically routes traffic to healthy containers. With Traefik and Docker labels, that's fairly straightforward.
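With Traefik, that comes down to a few labels on the api service. A sketch (router and service names, hostname, and port are assumptions; the port matches the health check above):

```yaml
services:
  api:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`example.com`)"
      - "traefik.http.services.myapp.loadbalancer.server.port=3000"
```

Traefik's Docker provider skips containers that report an unhealthy status, which is what makes the scale-up-then-scale-down dance above work without dropped requests.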

Managing secrets

Keeping environment variables in an .env file on the server is common but not ideal. Those files get forgotten, are never rotated, and sometimes end up in version control by accident.

Better options:

  • Docker secrets (as shown above) — works out of the box
  • HashiCorp Vault with an init script that fetches secrets at startup
  • SOPS for encrypted secrets in Git — decrypt at deploy time
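The _FILE convention in the compose file above boils down to the application reading the secret from disk instead of from an environment variable. In shell terms, the pattern the postgres image follows looks roughly like this (a sketch; variable names are illustrative):

```shell
#!/bin/bash
# Resolve a secret the way *_FILE variables are handled:
# if DB_PASSWORD_FILE is set, read the file it points to;
# otherwise fall back to the plain DB_PASSWORD variable.
read_secret() {
  local var=$1 file_var="${1}_FILE"
  if [ -n "${!file_var:-}" ]; then
    cat "${!file_var}"
  else
    printf '%s' "${!var:-}"
  fi
}

# Usage (commented out so the file can be sourced as-is):
# DB_PASSWORD_FILE=/run/secrets/db_password
# read_secret DB_PASSWORD
```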

The choice depends on the complexity of the setup. For a single server with three services, Docker secrets is more than enough.

Don't forget backups

Named volumes are nice, but they don't back themselves up. A simple script running periodically:

#!/bin/bash
# pipefail matters here: without it, a failing pg_dump still
# produces a valid-looking (but empty) gzip file
set -euo pipefail

BACKUP_DIR="/opt/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

docker compose exec -T db pg_dump -U postgres mydb | \
  gzip > "${BACKUP_DIR}/db_${TIMESTAMP}.sql.gz"

# Clean up old backups (older than 30 days)
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

Put this in a cron job and regularly test whether the backups can actually be restored. A backup that doesn't restore isn't a backup.
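A matching cron entry could look like this (the script path and schedule are assumptions):

```
# /etc/cron.d/db-backup — run the backup script nightly at 03:30
30 3 * * * root /opt/scripts/backup.sh >> /var/log/db-backup.log 2>&1
```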

When to move on to something else

Docker Compose on a single server has limits. When any of these situations arise, it's time to consider alternatives:

  • More than ~10 interdependent services
  • Horizontal scaling across multiple servers needed
  • Complex network policies required
  • Multiple teams deploying independently

At that point, Docker Swarm (simpler) or Kubernetes (more powerful) are better options. But for plenty of projects, that point is never reached, and that's perfectly fine. The best tooling is tooling that fits the scale of the problem.

Want to stay updated?

Subscribe to my newsletter or get in touch for freelance projects.
