March 27, 2026 -- 10 min read

Dockerfiles Explained: Write Containers That Actually Work in Production

Master Dockerfiles with multi-stage builds, layer caching, security best practices, and real Node.js and Python examples for production.

docker dockerfile containers devops tutorial

You can use Docker without understanding Dockerfiles. Pull an image, run a container, move on. But the moment you need to containerize your own application -- and you will -- you need to write a Dockerfile. And the difference between a Dockerfile that "works" and one that works well in production is significant. Image sizes, build times, security, and reliability all depend on how you write it.

This isn't a reference manual. It's a guide to writing Dockerfiles that don't embarrass you when someone reviews them. We'll cover the essential instructions, multi-stage builds, layer caching, security, and real examples for Node.js and Python apps.

What a Dockerfile Does

A Dockerfile is a recipe for building a Docker image. Each instruction creates a layer. Layers are cached and reused. The final image is a stack of read-only layers that contains your application and everything it needs to run.

Here's the simplest possible Dockerfile:

FROM node:20-alpine
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]

This works. But it has problems we'll fix throughout this tutorial.

Essential Instructions

FROM -- Your Starting Point

Every Dockerfile starts with FROM. It sets the base image:

FROM node:20-alpine     # Node.js on Alpine Linux (~140MB)
FROM python:3.12-slim   # Python on Debian slim (~150MB)
FROM ubuntu:22.04       # Full Ubuntu (~80MB)
FROM scratch            # Empty image (for compiled binaries)

Always use specific tags, never latest. FROM node:latest means your image changes unpredictably whenever the Node.js team pushes a new version. Pin to a specific version.

Alpine images are much smaller but use musl instead of glibc, which can cause issues with some native modules. Slim images (Debian-based) are a good middle ground.
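If a native module fails to build on Alpine, a common workaround is installing the build toolchain that node-gyp expects. A sketch under the assumption your dependency compiles with the standard toolchain (exact packages vary by module):

```dockerfile
# Sketch: toolchain for node-gyp native modules on Alpine
# (exact packages vary by module -- check the module's docs)
FROM node:20-alpine
RUN apk add --no-cache python3 make g++
```

If you find yourself installing many such packages, switching to a slim (Debian-based) image is often simpler than fighting musl.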

WORKDIR -- Set the Working Directory

WORKDIR /app

This creates the directory if it doesn't exist and sets it as the working directory for all subsequent instructions. Don't use RUN mkdir -p /app && cd /app -- that's what WORKDIR is for.

COPY vs ADD

COPY package.json .           # Copy file from host to container
COPY . .                      # Copy everything
ADD archive.tar.gz /app/      # Copy AND extract archives
ADD https://example.com/f .   # Download from URL (don't use this)

Use COPY almost always. ADD has two extra features (auto-extraction and URL downloads), but both are surprising behaviors. If you need to extract an archive, use COPY + RUN tar. If you need to download something, use RUN curl or RUN wget. Be explicit.
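For example, here is what the explicit replacements look like (the archive name and URL are placeholders, not from any real project):

```dockerfile
# Explicit alternative to ADD's auto-extraction
COPY archive.tar.gz /tmp/
RUN tar -xzf /tmp/archive.tar.gz -C /app \
    && rm /tmp/archive.tar.gz

# Explicit alternative to ADD's URL download (URL is a placeholder)
RUN wget -qO /app/f https://example.com/f
```

Anyone reading this can see exactly what happens to the file, which is the point.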

RUN -- Execute Commands

RUN npm install
RUN apt-get update && apt-get install -y curl

Each RUN creates a new layer. Combine related commands to reduce layers:

# Bad -- 3 layers, and apt cache persists
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# Good -- 1 layer, clean cache in same layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

The rm -rf /var/lib/apt/lists/* must be in the same RUN instruction as apt-get update. If it's in a separate RUN, the apt cache still exists in the previous layer, bloating your image.

EXPOSE -- Document the Port

EXPOSE 3000

This doesn't actually publish the port. It's documentation -- it tells anyone reading the Dockerfile which port the app listens on. You still need -p 3000:3000 when running the container.

CMD vs ENTRYPOINT

CMD ["node", "server.js"]           # Default command, can be overridden
ENTRYPOINT ["node", "server.js"]    # Fixed command, args are appended

# Combine: fixed executable, default args
ENTRYPOINT ["node"]
CMD ["server.js"]

Use CMD for most applications. Use ENTRYPOINT when your container is meant to behave like a single executable.
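A sketch of the combined pattern, using Python's built-in http.server as a stand-in executable so the override behavior is easy to see:

```dockerfile
FROM python:3.12-slim

# Fixed executable; callers can't accidentally replace it
ENTRYPOINT ["python", "-m", "http.server"]

# Default argument, replaced by anything after the image name
CMD ["8000"]

# docker run image        -> python -m http.server 8000
# docker run image 9090   -> python -m http.server 9090
```

Arguments passed after the image name replace CMD and get appended to ENTRYPOINT, which is what makes this pattern work for CLI-style containers.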

Always use the exec form (JSON array) instead of the shell form:

# Good -- exec form, PID 1, receives signals properly
CMD ["node", "server.js"]

# Bad -- shell form, runs as /bin/sh -c "node server.js"
# Node.js is not PID 1, doesn't receive SIGTERM, can't gracefully shut down
CMD node server.js

ENV and ARG

# Build-time variable (not in final image)
ARG NODE_ENV=production

# Runtime environment variable (persists in image)
ENV NODE_ENV=production
ENV PORT=3000

Use ARG for things needed only during the build (version numbers, build flags). Use ENV for things your application reads at runtime.
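A sketch of how the two interact -- a build arg can seed an ENV so the value survives into the running container (APP_VERSION is a made-up example variable):

```dockerfile
# Build-time only; gone after the build unless captured
ARG APP_VERSION=dev

# Capture the build arg into a runtime variable the app can read
ENV APP_VERSION=$APP_VERSION

# Override at build time:
#   docker build --build-arg APP_VERSION=1.4.2 -t myapp .
```

Without the ENV line, the ARG value is visible only to instructions during the build, not to the running process.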

Layer Caching -- The Key to Fast Builds

Docker caches each layer. If a layer's instruction and all its inputs haven't changed, Docker reuses the cache. But here's the rule: if one layer changes, all subsequent layers are invalidated.

This is why instruction order matters enormously:

# Slow -- any code change invalidates npm install cache
FROM node:20-alpine
WORKDIR /app
COPY . .                    # Code changes? Cache busted from here down.
RUN npm ci                  # Re-installs ALL dependencies every time.
CMD ["node", "server.js"]

# Fast -- dependencies cached unless package.json changes
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./   # Only these two files
RUN npm ci                               # Cached until dependencies change
COPY . .                                 # Code changes only invalidate from here
CMD ["node", "server.js"]

The rule of thumb: copy things that change less frequently first. Dependencies change less often than source code, so install dependencies before copying source code.

The .dockerignore File

Just like .gitignore, .dockerignore prevents files from being sent to the Docker daemon during builds:

node_modules
.git
.env
*.md
dist
coverage
.nyc_output
.DS_Store
Dockerfile
docker-compose.yml

This is critical for two reasons:


  1. Build speed. Sending node_modules (often 500MB+) to the daemon is slow and pointless since you're installing fresh inside the container.

  2. Security. You don't want .env files with secrets ending up in your image.


Multi-Stage Builds

This is the single most important Dockerfile technique for production. Multi-stage builds let you use one image for building and a different (smaller) image for running:

Node.js Multi-Stage Build

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build   # TypeScript compilation, bundling, etc.

# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
ENV NODE_ENV=production

# Only copy what's needed to run
COPY --from=builder /app/package.json /app/package-lock.json ./
RUN npm ci --omit=dev   # Production dependencies only
COPY --from=builder /app/dist ./dist

# Don't run as root
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

EXPOSE 3000
CMD ["node", "dist/server.js"]

The builder stage has TypeScript, dev dependencies, build tools -- everything needed to compile. The production stage has only the compiled output and production dependencies. All the build-time bloat stays behind.

Python Multi-Stage Build

# Stage 1: Build
FROM python:3.12-slim AS builder
WORKDIR /app

# Install build dependencies
RUN apt-get update && apt-get install -y --no-install-recommends gcc \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Stage 2: Production
FROM python:3.12-slim AS production
WORKDIR /app

# Copy installed packages from builder
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH

COPY . .

# Don't run as root
RUN useradd --create-home appuser
USER appuser

EXPOSE 8000
CMD ["gunicorn", "app:app", "--bind", "0.0.0.0:8000", "--workers", "4"]

Reducing Image Size

Image size affects pull times, storage costs, and attack surface. Here's how to keep it small:

1. Use slim or Alpine base images:
node:20          → ~1.1GB
node:20-slim     → ~250MB
node:20-alpine   → ~140MB
2. Multi-stage builds (covered above) -- don't ship build tools.
3. Minimize layers and clean up in the same layer:
RUN apt-get update \
    && apt-get install -y --no-install-recommends build-essential \
    && pip install --no-cache-dir -r requirements.txt \
    && apt-get purge -y build-essential \
    && apt-get autoremove -y \
    && rm -rf /var/lib/apt/lists/*
4. Use --no-cache-dir for pip:
RUN pip install --no-cache-dir -r requirements.txt
5. Use npm ci instead of npm install:
RUN npm ci --omit=dev   # Exact versions from lock file, no dev deps

Check your image size:

docker images myapp
docker history myapp:latest # See size of each layer

Security Best Practices

Don't run as root:
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

If your process runs as root inside the container and an attacker exploits it, they have root access to the container filesystem. Running as a non-root user limits the blast radius.

Don't store secrets in the image:
# NEVER do this
ENV DATABASE_URL=postgres://user:password@host/db
COPY .env .

# Instead, pass secrets at runtime
# docker run -e DATABASE_URL=... myapp
# Or use Docker secrets / environment files

Secrets baked into image layers are visible to anyone with docker history or access to the image.
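If you genuinely need a secret during the build (say, a token for a private npm registry), BuildKit secret mounts expose it to a single RUN instruction without writing it into any layer. A sketch, assuming the token lives in an .npmrc file on the host:

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./

# The secret is mounted only for this RUN; it never lands in a layer
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm ci

# Build with:
#   docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .
```

docker history on the resulting image shows the RUN instruction but not the secret's contents.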

Use specific image digests for maximum reproducibility:
FROM node:20-alpine@sha256:abc123...
Scan your images:
docker scout cves myapp:latest
# or
trivy image myapp:latest

Complete Node.js Production Dockerfile

FROM node:20-alpine AS builder

WORKDIR /app

COPY package.json package-lock.json ./
RUN npm ci

COPY tsconfig.json ./
COPY src ./src
RUN npm run build

# ---

FROM node:20-alpine

WORKDIR /app
ENV NODE_ENV=production

COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force

COPY --from=builder /app/dist ./dist

RUN addgroup -S app && adduser -S app -G app
USER app

EXPOSE 3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget --quiet --tries=1 --spider http://localhost:3000/health || exit 1

CMD ["node", "dist/server.js"]

Complete Python Production Dockerfile

FROM python:3.12-slim AS builder

WORKDIR /app

RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc libpq-dev \
&& rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# ---

FROM python:3.12-slim

WORKDIR /app

RUN apt-get update \
&& apt-get install -y --no-install-recommends libpq5 \
&& rm -rf /var/lib/apt/lists/*

RUN useradd --create-home app

# Copy installed packages into the non-root user's home so the app user can
# read them (/root is mode 700, so /root/.local would be unreadable after USER app)
COPY --from=builder --chown=app:app /root/.local /home/app/.local
ENV PATH=/home/app/.local/bin:$PATH

COPY . .

USER app

EXPOSE 8000

HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')" || exit 1

CMD ["gunicorn", "app.main:app", "--bind", "0.0.0.0:8000", "--workers", "4"]

Debugging Builds

When a build fails, use --target to stop at a specific stage:

docker build --target builder -t myapp:debug .
docker run -it myapp:debug sh   # Explore the filesystem

Check what's in each layer:

docker history myapp:latest --no-trunc

Use docker build --progress=plain to see full output (BuildKit hides output by default):

docker build --progress=plain -t myapp .

Common Mistakes

Copying node_modules from the host. Your host's node_modules might have different binaries (compiled for macOS when the container runs Linux). Always install inside the container, and use .dockerignore to exclude node_modules.

Using npm install instead of npm ci. npm install can modify package-lock.json and install different versions. npm ci installs exactly what's in the lock file. Use npm ci in Dockerfiles.

Not pinning base image versions. FROM python:3 could be Python 3.12 today and 3.13 next month, with breaking changes. Pin to python:3.12-slim.

Ignoring HEALTHCHECK. Without a health check, Docker considers a container healthy as long as the process is running -- even if it's deadlocked and not serving requests. Add a HEALTHCHECK that actually verifies your app is responding.

Building images with Docker Compose. docker-compose build is fine for development, but for production, build images in CI with docker build, tag them properly, and push to a registry.

What's Next

Dockerfiles are the foundation. Once you're comfortable with them:

  • Docker Compose -- Multi-container applications (app + database + cache)
  • CI/CD integration -- Build and push images in GitHub Actions or GitLab CI
  • Registry management -- Push to Docker Hub, GitHub Container Registry, or ECR
  • Kubernetes -- Deploy your containers at scale with orchestration
  • BuildKit features -- Cache mounts, secret mounts, SSH forwarding during builds
  • Distroless images -- Google's minimal images without a shell or package manager
The goal is images that are small, secure, fast to build, and reproducible. Start with the production Dockerfiles in this tutorial, adjust for your stack, and iterate from there.

For more DevOps and containerization tutorials, check out CodeUp.
