Docker Compose: Run Multi-Container Apps Without Losing Your Mind
Learn Docker Compose from scratch. Build a Node.js + PostgreSQL + Redis stack, manage services, networks, volumes, and environment configs.
You know how to run a single container. You can docker run postgres and feel pretty good about yourself. But then your project needs PostgreSQL, Redis, a Node.js API, and maybe a worker process. Suddenly you're juggling four terminal tabs, remembering port numbers, and wondering why the API can't connect to the database.
Docker Compose exists because managing multi-container applications by hand is a nightmare. One YAML file, one command, everything starts together. Let's learn how it actually works.
Why Compose Exists
Without Compose, running a typical web app looks like this:
docker network create myapp

docker run -d --name db --network myapp -e POSTGRES_PASSWORD=secret postgres:16

docker run -d --name cache --network myapp redis:7-alpine

docker run -d --name api --network myapp -p 3000:3000 \
  -e DATABASE_URL=postgresql://postgres:secret@db:5432/myapp \
  -e REDIS_URL=redis://cache:6379 \
  myapp-api
Four commands, some with a dozen flags each. Now multiply that by every developer on your team who needs to remember these exact incantations. Compose replaces all of this with a declarative file.
The docker-compose.yml File
Here's the same setup as a Compose file:
version: "3.8"

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:secret@db:5432/myapp
      REDIS_URL: redis://cache:6379
    depends_on:
      - db
      - cache

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine

volumes:
  pgdata:
Then you run:
docker compose up
That's it. All three services start, connected on the same network, with the database volume persisted between restarts.
Understanding the Structure
A Compose file has three top-level sections that matter: services, volumes, and networks.
Services
Each service is a container. The key is the service name (like api, db, cache), and that name becomes the hostname on the internal network. Your API can reach the database at db:5432 because Compose creates DNS resolution for service names automatically.
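You can see this at work in the connection string itself. A quick sketch using only Node's built-in WHATWG URL parser shows that the host portion is nothing more than the service name:

```javascript
// Take apart the DATABASE_URL from the Compose file above. The hostname is
// simply the Compose service name "db" -- Compose's internal DNS resolves it
// to the right container at runtime.
const url = new URL('postgresql://postgres:secret@db:5432/myapp');

console.log(url.hostname); // "db"     <- the service name
console.log(url.port);     // "5432"
console.log(url.pathname); // "/myapp" <- the database name
```

Rename the service in the YAML and every connection string referencing that hostname has to change with it, which is why service names are worth choosing deliberately.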
A service either builds from a Dockerfile or pulls a pre-built image:
services:
  # Build from local Dockerfile
  api:
    build:
      context: .
      dockerfile: Dockerfile

  # Pull from Docker Hub
  db:
    image: postgres:16
Volumes
Volumes persist data beyond the container lifecycle. Without a volume, your database data disappears every time you docker compose down.
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data                  # Named volume
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql  # Bind mount

volumes:
  pgdata:  # Declare the named volume
Named volumes (like pgdata) are managed by Docker. Bind mounts (like ./init.sql:...) map a file or folder from your host machine directly into the container. Bind mounts are great for development because changes on your host appear inside the container instantly.
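One classic bind-mount gotcha in Node projects: if you mount the entire project directory, the host folder hides the node_modules that npm ci built into the image. A common workaround (a sketch, not part of the stack above) is to layer an anonymous volume on top of the bind mount:

```yaml
services:
  api:
    build: .
    volumes:
      - .:/app            # Bind mount the whole project for live editing
      - /app/node_modules # Anonymous volume shadows the host's node_modules
```

The more specific mount point wins, so the container keeps the node_modules baked into the image while everything else stays live-edited from the host.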
Networks
Compose creates a default network for your project automatically. Every service joins it. You usually don't need to configure networks unless you're doing something specific, like isolating a frontend from a database:
services:
  frontend:
    build: ./frontend
    networks:
      - frontend-net

  api:
    build: ./api
    networks:
      - frontend-net
      - backend-net

  db:
    image: postgres:16
    networks:
      - backend-net

networks:
  frontend-net:
  backend-net:
Here the frontend can talk to the API, and the API can talk to the database, but the frontend cannot reach the database directly.
Building a Real Stack: Node.js + PostgreSQL + Redis
Let's build something practical. A task management API with a database and a caching layer.
Project Structure
task-api/
  src/
    index.js
    db.js
    cache.js
  Dockerfile
  docker-compose.yml
  package.json
The Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["node", "src/index.js"]
The Application Code
// src/index.js
const express = require('express');
const { pool } = require('./db');
const { redis } = require('./cache');

const app = express();
app.use(express.json());

app.get('/tasks', async (req, res) => {
  // Check cache first
  const cached = await redis.get('tasks');
  if (cached) {
    return res.json(JSON.parse(cached));
  }

  const result = await pool.query('SELECT * FROM tasks ORDER BY created_at DESC');
  await redis.set('tasks', JSON.stringify(result.rows), 'EX', 60);
  res.json(result.rows);
});

app.post('/tasks', async (req, res) => {
  const { title } = req.body;
  const result = await pool.query(
    'INSERT INTO tasks (title) VALUES ($1) RETURNING *',
    [title]
  );
  await redis.del('tasks'); // Invalidate cache
  res.status(201).json(result.rows[0]);
});

app.listen(3000, () => console.log('API running on port 3000'));
// src/db.js
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

module.exports = { pool };

// src/cache.js
const Redis = require('ioredis');

const redis = new Redis(process.env.REDIS_URL);

module.exports = { redis };
The Compose File
version: "3.8"

services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgresql://postgres:devpassword@db:5432/taskdb
      REDIS_URL: redis://cache:6379
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started
    volumes:
      - ./src:/app/src  # Hot reload in development

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: devpassword
      POSTGRES_DB: taskdb
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"  # Expose for local DB tools
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"  # Expose for local debugging

volumes:
  pgdata:
And the initialization SQL:
-- init.sql
CREATE TABLE IF NOT EXISTS tasks (
  id SERIAL PRIMARY KEY,
  title TEXT NOT NULL,
  completed BOOLEAN DEFAULT false,
  created_at TIMESTAMP DEFAULT NOW()
);
Run docker compose up and you have a working API with a database and cache.
Environment Variables: The Right Way
Hardcoding passwords in your Compose file works for tutorials, but in practice you want to use .env files.
# docker-compose.yml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: ${DB_NAME}

# .env (same directory as docker-compose.yml)
DB_PASSWORD=devpassword
DB_NAME=taskdb
Compose automatically reads .env from the same directory. You can also specify a different env file per service:
services:
  api:
    build: .
    env_file:
      - .env
      - .env.api
Add .env to your .gitignore and commit a .env.example with placeholder values. Every developer copies it and fills in their own values.
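On the application side, it's worth failing fast when a required variable never made it into the container. A small hypothetical helper (requireEnv is my name for it, not a pg or ioredis API) might look like:

```javascript
// Hypothetical fail-fast helper for required environment variables.
// Throwing at startup beats letting "undefined" leak into a connection string.
function requireEnv(name) {
  const value = process.env[name];
  if (value === undefined) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup -- values come from the Compose `environment:` block:
// const databaseUrl = requireEnv('DATABASE_URL');
// const redisUrl = process.env.REDIS_URL || 'redis://cache:6379'; // optional, with a default
```

A crash with a clear message at boot is far easier to debug than a connection timeout three layers deep.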
depends_on and Health Checks
depends_on controls startup order, but there's a catch. By default, it only waits for the container to start, not for the service inside to be ready. Your API might start before PostgreSQL has finished initializing.
The fix is health checks:
services:
  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  api:
    build: .
    depends_on:
      db:
        condition: service_healthy  # Wait until healthy
Now Compose waits until pg_isready succeeds before starting the API. This eliminates the race condition.
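The cache can get the same treatment. The Redis image ships with redis-cli, so a ping makes a serviceable health check (the interval values here are illustrative):

```yaml
cache:
  image: redis:7-alpine
  healthcheck:
    test: ["CMD", "redis-cli", "ping"]
    interval: 5s
    timeout: 3s
    retries: 5
```

With this in place, the api service can use condition: service_healthy for the cache as well, instead of the weaker service_started.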
Development vs Production Configs
Your development Compose file needs things like bind mounts for hot reload, exposed database ports, and debug environment variables. Production needs none of that.
The pattern is to use a base file and an override file:
# docker-compose.yml (base)
version: "3.8"

services:
  api:
    build: .
    environment:
      DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@db:5432/taskdb
    depends_on:
      db:
        condition: service_healthy

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
# docker-compose.override.yml (development - loaded automatically)
version: "3.8"

services:
  api:
    ports:
      - "3000:3000"
    volumes:
      - ./src:/app/src
    environment:
      NODE_ENV: development

  db:
    ports:
      - "5432:5432"
# docker-compose.prod.yml (production)
version: "3.8"

services:
  api:
    restart: always
    environment:
      NODE_ENV: production

  db:
    restart: always
When you run docker compose up, it automatically merges docker-compose.yml and docker-compose.override.yml. For production:
docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
Essential Commands
Here's what you'll use daily:
# Start everything
docker compose up
# Start in background (detached)
docker compose up -d
# Rebuild images (after changing Dockerfile or dependencies)
docker compose up --build
# Stop everything
docker compose down
# Stop and remove volumes (WARNING: deletes database data)
docker compose down -v
# View logs
docker compose logs
docker compose logs api # Specific service
docker compose logs -f api # Follow logs
# Run a one-off command in a service
docker compose exec api sh # Shell into running container
docker compose run api npm test # Run tests
# See what's running
docker compose ps
# Restart a specific service
docker compose restart api
Common Patterns
Wait-for-it Scripts
Even with health checks, sometimes you need the application itself to retry connections. A simple retry loop in Node.js:
async function connectWithRetry(pool, maxRetries = 5) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      await pool.query('SELECT 1');
      console.log('Database connected');
      return;
    } catch (err) {
      console.log(`Database not ready, retrying (${i + 1}/${maxRetries})...`);
      await new Promise(resolve => setTimeout(resolve, 2000));
    }
  }
  throw new Error('Could not connect to database');
}
Running Migrations
Add a migration service that runs once and exits:
services:
  migrate:
    build: .
    command: npx knex migrate:latest
    environment:
      DATABASE_URL: postgresql://postgres:${DB_PASSWORD}@db:5432/taskdb
    depends_on:
      db:
        condition: service_healthy
Run it with docker compose run migrate.
Multiple Dockerfiles
If your frontend and backend live in the same repo:
services:
  frontend:
    build:
      context: .
      dockerfile: frontend/Dockerfile

  api:
    build:
      context: .
      dockerfile: api/Dockerfile
Setting context: . means both Dockerfiles can access the full repo (useful for shared code).
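A sketch of what api/Dockerfile might look like in that layout, assuming a hypothetical shared/ folder at the repo root (the folder names are illustrative):

```dockerfile
# api/Dockerfile -- COPY paths are relative to the build context (the repo root),
# not to the Dockerfile's own directory, so shared/ is reachable.
FROM node:20-alpine
WORKDIR /app
COPY shared ./shared
COPY api/package*.json ./
RUN npm ci
COPY api ./api
CMD ["node", "api/index.js"]
```

The trade-off is a larger build context to upload; a .dockerignore at the repo root keeps that under control.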
Watching for File Changes
Docker Compose v2.22+ supports watch mode for development:
services:
  api:
    build: .
    develop:
      watch:
        - action: sync
          path: ./src
          target: /app/src
        - action: rebuild
          path: package.json
docker compose watch
File changes in ./src sync to the container instantly. Changes to package.json trigger a full rebuild.
Common Mistakes
Forgetting to rebuild after dependency changes. If you add a new npm package, you need docker compose up --build. The old image still has the old node_modules.
Not using volumes for database data. Without a named volume, docker compose down wipes your database. You'll learn this the hard way exactly once.
Exposing database ports in production. Those ports: - "5432:5432" lines are for development convenience. In production, only the API should be able to reach the database through the internal network.
Using latest tags. Always pin your image versions. postgres:16-alpine, not postgres:latest. Today's latest is tomorrow's breaking change.
Putting secrets in the Compose file. Use .env files or Docker secrets. Never commit passwords to version control.
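For the Compose-native alternative to .env files, file-based secrets are worth knowing. The official postgres image reads *_FILE variants of its environment variables, so a sketch might look like this (file names are illustrative):

```yaml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./db_password.txt  # Keep this file out of version control too
```

The password never appears in the Compose file, in docker inspect output, or in the container's environment; the application reads it from /run/secrets/db_password at startup.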
What's Next
You now know enough Compose to handle most development workflows. The natural next steps are learning about Docker Compose profiles for optional services, using Compose with a reverse proxy like Traefik or Nginx, and eventually understanding when you've outgrown Compose and might need Kubernetes (spoiler: later than you think).
For hands-on practice building containerized applications with real deployment pipelines, check out CodeUp.