Kubernetes for Developers: The Concepts That Actually Matter
Pods, Deployments, Services, ConfigMaps, and kubectl essentials — Kubernetes from a developer's perspective, without the ops deep-dive.
Kubernetes has a reputation problem. The docs read like they were written for infrastructure engineers, the YAML is verbose, and every tutorial wants to teach you about etcd and the control plane before you've deployed a single container.
Here's the thing: as a developer, you don't need to understand the control plane. You need to understand about six concepts, a dozen kubectl commands, and how to write a deployment manifest. That's it.
Why Kubernetes Exists
You've got containers (Docker). They work great on one machine. But when you need to run 50 containers across 10 machines, you need something to handle placement (which container goes where?), scaling (spin up more when traffic spikes), networking (how do containers find each other?), and recovery (restart crashed containers automatically).
That's container orchestration. Kubernetes does it. It's the industry standard because Google open-sourced it in 2014 and every cloud provider adopted it.
The Six Concepts You Need
Pods
A Pod is the smallest deployable unit. It wraps one or more containers that share networking and storage. In practice, most Pods contain a single container.
You rarely create Pods directly. But understanding them helps you read logs and debug issues.
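For reference, here is what a bare Pod manifest looks like. This is a sketch for illustration only (the nginx image and names are hypothetical); in practice you'd let a Deployment create Pods for you:

```yaml
# pod.yaml -- a standalone Pod, shown for illustration only
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```

A Pod created this way is not self-healing: if it crashes or its node dies, nothing recreates it. That gap is exactly what Deployments fill.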
Deployments
A Deployment manages a set of identical Pods. You tell it "I want 3 replicas of my web server" and it makes sure 3 Pods are always running. If one crashes, it spins up a replacement. If you push a new image, it does a rolling update — replacing Pods one at a time so there's zero downtime.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: myregistry/my-api:1.2.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: url
That's a complete Deployment. Three replicas of your API, pulling the database URL from a Secret. Apply it with kubectl apply -f deployment.yaml and you're running.
Services
Pods get random IP addresses that change when they restart. A Service gives your Pods a stable network endpoint. Other services in the cluster reach your API through the Service name, not individual Pod IPs.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
Now anything in the cluster can reach your API at http://my-api:80. The Service load-balances across all three Pods automatically.
ClusterIP is internal only. Use LoadBalancer to get an external IP (on cloud providers) or NodePort to expose on a specific port on every node.
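Switching the type is a one-line change. A sketch of the same Service exposed externally (the name is illustrative; on a cloud provider, the platform provisions the external IP):

```yaml
# service-external.yaml -- same selector, but reachable from outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: my-api-public
spec:
  selector:
    app: my-api
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer   # cloud provider assigns an external IP
```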
ConfigMaps and Secrets
ConfigMaps hold non-sensitive configuration. Secrets hold sensitive data (base64-encoded, not encrypted by default — use something like Sealed Secrets or an external secrets manager for real security).
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
Reference them in your Deployment as environment variables or mount them as files.
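A sketch of both styles in a Deployment's Pod template (the envFrom form imports every key in app-config as an environment variable; the volume form mounts each key as a file):

```yaml
# Fragment of a Deployment's Pod template spec
spec:
  containers:
    - name: my-api
      image: myregistry/my-api:1.2.0
      envFrom:
        - configMapRef:
            name: app-config       # LOG_LEVEL, MAX_CONNECTIONS become env vars
      volumeMounts:
        - name: config-volume
          mountPath: /etc/app      # each key appears as a file here
  volumes:
    - name: config-volume
      configMap:
        name: app-config
```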
Namespaces
Namespaces are virtual clusters within your cluster. Use them to separate environments (dev, staging, prod) or teams. Resources in different namespaces are isolated by default.
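A Namespace is itself just a resource, and any manifest targets one via metadata.namespace. A minimal sketch, assuming a hypothetical staging environment:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: staging   # this resource lives in staging, isolated from other namespaces
data:
  LOG_LEVEL: "debug"
```

kubectl commands default to the current namespace; add -n staging (or set it on your context) to work with resources there.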
kubectl: The Commands You'll Actually Use
# See what's running
kubectl get pods
kubectl get deployments
kubectl get services
# Deploy or update
kubectl apply -f deployment.yaml
# Check why something isn't working
kubectl describe pod my-api-7d9f8b6c4-x2kl9
kubectl logs my-api-7d9f8b6c4-x2kl9
kubectl logs my-api-7d9f8b6c4-x2kl9 --previous # logs from crashed container
# Shell into a running container
kubectl exec -it my-api-7d9f8b6c4-x2kl9 -- /bin/sh
# Port-forward for local debugging
kubectl port-forward svc/my-api 8080:80
# Scale up or down
kubectl scale deployment my-api --replicas=5
# Watch pods in real-time
kubectl get pods -w
kubectl describe and kubectl logs are your debugging bread and butter. When a Pod won't start, describe shows you events — image pull failures, resource limits, crash loops. logs shows you application output.
Local Development
You don't need a cloud cluster to learn Kubernetes. Two good options:
minikube runs a single-node cluster in a VM or container on your machine. Simple to set up, good for learning.
minikube start
kubectl apply -f deployment.yaml
minikube service my-api # opens in browser
kind (Kubernetes in Docker) runs cluster nodes as Docker containers. Faster than minikube, supports multi-node clusters, and is what many CI pipelines use for testing.
kind create cluster
kubectl apply -f deployment.yaml
kubectl port-forward svc/my-api 8080:80
Both are free, both run locally, both are good enough for development and testing.
When You Don't Need Kubernetes
This is the part most tutorials skip.
If you're running a single application with a database and maybe a Redis instance, Kubernetes is overkill. A single VPS with Docker Compose does the same thing with a fraction of the complexity. Seriously — docker-compose up and you're done.
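For comparison, here's roughly what that single-app stack looks like in Compose. A sketch with illustrative image names and credentials, not a production config:

```yaml
# compose.yaml -- illustrative; swap in your own images, ports, and secrets
services:
  api:
    image: myregistry/my-api:1.2.0
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://app:changeme@db:5432/app
    depends_on:
      - db
      - cache
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  db-data:
```

One file, one machine, no orchestrator: that's the whole point of the comparison.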
Kubernetes makes sense when you have:
- Multiple services that need to scale independently
- Teams deploying different services on different schedules
- Enough traffic that you need horizontal scaling and automated failover
- Requirements for zero-downtime deployments
For a startup with two developers and one API? Docker Compose on a $20/month VPS. For a company with 15 microservices and three teams? Kubernetes starts earning its keep.
Managed Kubernetes (EKS, GKE, AKS) removes the operational burden of running the control plane, but you still need to understand the concepts above to use it effectively.
If you want to practice deploying containers and writing Kubernetes manifests without cloud costs, CodeUp has interactive exercises that walk you through real cluster operations in a sandbox environment.
The best way to learn Kubernetes is to deploy something you've built. Start with one Deployment, one Service, and iterate from there. You don't need to understand the entire system before it's useful.