March 26, 2026 · 5 min read

Docker for Developers: You Don't Need to Be a DevOps Person

Docker explained from a developer's perspective. Images, containers, Dockerfile basics, docker-compose for local dev, and why you should care even if you never touch production infrastructure.

docker devops containers tools backend

There's a misconception that Docker is a DevOps tool. That it's for deployment pipelines and Kubernetes clusters and things you don't need to think about. Wrong. Docker solves a very direct developer problem: setting up development environments sucks, and Docker makes it suck less.

Think about onboarding onto a new project. You clone the repo, then spend two hours installing PostgreSQL, Redis, the right version of Node, some Python tool, maybe Elasticsearch. Now imagine: docker-compose up and everything is running. That's the pitch.

Images vs Containers

This trips everyone up at first, so let's be clear:

  • An image is a blueprint. It's a read-only snapshot of a filesystem plus some metadata (what command to run, what ports to expose, etc.). Think of it like a class definition.
  • A container is a running instance of an image. You can have multiple containers from the same image. Think of it like an object instantiated from a class.
docker pull python:3.12       # Download an image
docker run python:3.12        # Create and start a container from it
docker ps                     # See running containers
docker ps -a                  # See ALL containers (including stopped ones)

When you docker run, Docker creates a new container every time. Old containers pile up. Clean them with docker container prune.
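If you'd rather not let stopped containers accumulate in the first place, a few cleanup commands are worth knowing (the container name here is just an example):

```shell
docker run --rm python:3.12   # Container is removed automatically when it exits
docker stop my-container      # Stop a running container
docker rm my-container        # Remove a stopped container
docker container prune        # Remove ALL stopped containers in one go
```

The `--rm` flag is the habit worth building for throwaway runs; `prune` is the periodic sweep for everything else.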

Writing a Dockerfile

A Dockerfile is a recipe for building an image. Here's one for a Node.js app:

FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY . .

EXPOSE 3000
CMD ["node", "server.js"]

Line by line:

  • FROM -- start from an existing base image (Node 20 on Alpine Linux)
  • WORKDIR -- set the working directory inside the container
  • COPY package*.json -- copy just the package files first
  • RUN npm ci -- install dependencies (this layer gets cached)
  • COPY . . -- copy the rest of the source code
  • EXPOSE -- document which port the app uses
  • CMD -- the command to run when the container starts
The order matters for caching. Docker caches each layer. By copying package.json and running npm ci before copying source files, Docker only re-installs dependencies when package.json changes. Your source code changes (which happen constantly) only rebuild the last few layers. This makes rebuilds fast.

Build and run:

docker build -t myapp .
docker run -p 3000:3000 myapp
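Once the container is up, two commands cover most day-to-day debugging (replace the placeholder with a container ID or name from docker ps):

```shell
docker logs -f <container>      # Stream the app's stdout/stderr
docker exec -it <container> sh  # Open an interactive shell inside the running container
```

On Alpine-based images sh is what's available; Debian-based images usually have bash too.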

Common Dockerfiles

Python (Flask/FastAPI):
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Go:
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o server .

FROM alpine:3.19
COPY --from=builder /app/server /server
EXPOSE 8080
CMD ["/server"]

That Go example uses a multi-stage build -- it compiles in a full Go image, then copies just the binary into a tiny Alpine image. The final image is maybe 15MB instead of 800MB.

docker-compose for Local Development

This is where Docker really shines for developers. Instead of installing PostgreSQL, Redis, and your app's dependencies separately, you define everything in one file:

# docker-compose.yml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/app
      - /app/node_modules
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/myapp
    depends_on:
      - db
      - redis

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

Now docker-compose up starts your entire stack. docker-compose down tears it all down. New developer joins the team? They run one command instead of following a 47-step setup guide that's already outdated.
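A handful of docker-compose subcommands cover the daily workflow (service names here match the compose file above):

```shell
docker-compose up -d           # Start the whole stack in the background
docker-compose logs -f app     # Follow the app service's logs
docker-compose exec app sh     # Shell into the running app container
docker-compose down            # Stop and remove containers (named volumes survive)
docker-compose down -v         # ...and wipe named volumes too (fresh database)
```

The up/down cycle is cheap, so treat the whole stack as disposable.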

Volumes for Hot Reload

Notice the volumes section for the app:

volumes:
  - .:/app              # Mount your local source code into the container
  - /app/node_modules   # But keep node_modules from the container

The first line means your local file changes appear inside the container immediately. Combined with a file watcher (nodemon, Next.js dev server, etc.), you get hot reload working inside Docker. You edit files in your IDE, the container picks up changes, the app reloads. It feels like developing locally.

The second line is important -- it prevents the container's node_modules from being overwritten by your local (possibly empty or OS-mismatched) node_modules.
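One caveat with depends_on as used in the compose file above: it only waits for the db container to start, not for Postgres to actually accept connections. If your app crashes on boot because the database isn't ready yet, a healthcheck plus the long depends_on form fixes it. A sketch (tune the intervals for your setup):

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  app:
    build: .
    depends_on:
      db:
        condition: service_healthy   # Wait until the healthcheck passes
```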

Why You Should Care

Even if you never write a Kubernetes manifest, Docker is worth learning because:

  • Consistent environments. "It works on my machine" stops being a thing. Everyone runs the same OS, same versions, same configuration inside containers.
  • Easy service dependencies. Need PostgreSQL 16 for one project and MySQL 8 for another? Two separate compose files, no conflicts, no need to install either database on your actual machine.
  • Disposable environments. Messed up your database? docker-compose down -v && docker-compose up. Fresh database in seconds.
  • Onboarding speed. Going from "I just cloned this repo" to "the entire app is running" should take minutes, not hours. Docker makes that possible.

Tips From Actual Usage

Use .dockerignore. Just like .gitignore, it tells Docker what NOT to copy. Always exclude node_modules, .git, .env, and build artifacts:

node_modules
.git
.env
*.log
dist

Don't run as root. Add a non-root user in your Dockerfile:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

Use specific image tags. node:20-alpine is better than node:latest. You want builds to be reproducible, not silently change when a new version drops.

Keep images small. Use -alpine or -slim variants. Less stuff installed means faster pulls and smaller attack surface.

If you're learning backend development or want to practice building apps that use databases and APIs, CodeUp can help you get comfortable with the fundamentals before adding Docker to your workflow. Once you know what your app needs, containerizing it is straightforward.
