March 27, 2026 · 11 min read

Microservices Architecture: A Practical Introduction (Not Just Theory)

Learn microservices architecture hands-on -- service boundaries, communication patterns, API gateways, Docker Compose example, and anti-patterns to avoid.

Tags: microservices, architecture, backend, docker, tutorial

Every few months, a blog post goes viral arguing that microservices are a mistake. Then another one goes viral saying they're essential. The truth is boring: microservices are a tool. Like any tool, they solve specific problems and create new ones. The question isn't whether microservices are good or bad -- it's whether your situation justifies the complexity.

This guide is going to be practical. We'll cover the concepts, but we'll also build something: three services communicating with each other, running in Docker Compose, doing something realistic. No hand-waving.

Monolith vs Microservices

A monolith is a single deployable unit. One codebase, one database, one deployment. Everything lives together.

[Monolith]
  - User management
  - Product catalog
  - Order processing
  - Payment handling
  - Email notifications
  - All share one database

Microservices split these into independent services, each with its own database, deployed separately:

[User Service]     -> Users DB
[Product Service]  -> Products DB
[Order Service]    -> Orders DB
[Payment Service]  -> Payments DB
[Email Service]    -> (no DB, consumes events)

Start with a monolith. This isn't a cop-out -- it's genuinely good advice. Microservices add operational complexity (networking, deployment, monitoring, data consistency). That complexity is worth it when you have specific problems a monolith can't solve. Not before.

When to Split

Good reasons to move to microservices:

  • Independent scaling: Your product search handles 100x the traffic of order processing. Scaling the entire monolith to handle search load wastes resources.
  • Team autonomy: You have 50 developers and they keep stepping on each other's code. Separate services let teams own and deploy independently.
  • Technology diversity: One service needs Python for ML, another needs Go for performance. Microservices let you pick the right tool per problem.
  • Fault isolation: A bug in email sending shouldn't bring down the payment system.

Bad reasons:

  • "Netflix does it" (you're not Netflix)
  • "It's modern" (complexity isn't modern, it's just complex)
  • "We might need to scale" (premature optimization)
  • Resume-driven development

Service Boundaries

The hardest part of microservices isn't the technology. It's deciding what goes where.

Domain-Driven Design (DDD) provides the best framework for this. Look for "bounded contexts" -- areas of your business that have clear boundaries and their own language.

In an e-commerce system:

  • User Service: authentication, profiles, preferences
  • Catalog Service: products, categories, search, inventory counts
  • Order Service: shopping carts, orders, order history
  • Payment Service: charges, refunds, payment methods
  • Notification Service: emails, push notifications, SMS

Each service owns its data and its business logic. The Order Service doesn't query the Users table directly -- it asks the User Service for what it needs.

A good boundary test: if you can describe what a service does in one sentence without using "and," you've probably got a good boundary. "Manages user authentication and profiles" is fine. "Handles products, orders, and sends emails" is three services pretending to be one.

Communication Patterns

Services need to talk to each other. There are two fundamental approaches:

Synchronous (Request/Response)

One service calls another and waits for a response. Like REST or gRPC.

Order Service --HTTP POST /payments--> Payment Service
              <--200 OK, payment_id--

REST is the most common. Simple, well-understood, debuggable. Use it when:

  • You need an immediate response
  • The calling service can't proceed without the result

gRPC is faster (binary serialization, HTTP/2). Use it for:

  • High-throughput service-to-service calls
  • Streaming data between services

Asynchronous (Event-Driven)

One service publishes an event. Other services consume it when they're ready. Uses a message broker (RabbitMQ, Kafka, Redis Streams).

Order Service --"order.created" event--> Message Broker
                                            |
                                            +--> Payment Service (charges card)
                                            +--> Inventory Service (reserves stock)
                                            +--> Email Service (sends confirmation)

Use async when:

  • The caller doesn't need an immediate response
  • Multiple services need to react to the same event
  • You want loose coupling (services don't need to know about each other)
  • Operations can tolerate eventual consistency

Most real systems use both. Synchronous for queries ("get user details"), asynchronous for commands that trigger workflows ("order placed").
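To make the event-driven side concrete, here's a minimal in-process publish/subscribe sketch -- a toy stand-in for a real broker like RabbitMQ or Kafka, with illustrative handler behavior:

```python
from collections import defaultdict
from typing import Callable

class InMemoryBroker:
    """Toy stand-in for a real message broker (RabbitMQ, Kafka, Redis Streams)."""

    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # The publisher doesn't know (or care) who is listening -- that's the loose coupling.
        for handler in self._subscribers[topic]:
            handler(event)

broker = InMemoryBroker()
log = []

# Three independent "services" react to the same event.
broker.subscribe("order.created", lambda e: log.append(f"charge card for {e['order_id']}"))
broker.subscribe("order.created", lambda e: log.append(f"reserve stock for {e['order_id']}"))
broker.subscribe("order.created", lambda e: log.append(f"email confirmation for {e['order_id']}"))

broker.publish("order.created", {"order_id": "a1b2c3d4"})
```

A real broker adds persistence, delivery guarantees, and consumer groups on top of this shape, but the decoupling idea is exactly the same.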

API Gateway

Clients (browsers, mobile apps) shouldn't call individual microservices directly. An API gateway sits in front of your services and provides:

  • Single entry point: clients hit one URL, the gateway routes to the right service
  • Authentication: verify tokens once at the gateway, not in every service
  • Rate limiting: protect services from abuse
  • Request aggregation: combine data from multiple services into one response

Client --> API Gateway --> User Service
                      --> Product Service
                      --> Order Service

Popular options: Kong, Traefik, AWS API Gateway, or a custom Node.js/Express gateway for simpler setups.

Let's Build: Three Services with Docker Compose

Enough theory. Here's a practical example: a simplified order system with three services.

Project structure:
microservices-demo/
  user-service/
    app.py
    Dockerfile
    requirements.txt
  product-service/
    app.py
    Dockerfile
    requirements.txt
  order-service/
    app.py
    Dockerfile
    requirements.txt
  docker-compose.yml

User Service

# user-service/app.py
from flask import Flask, jsonify

app = Flask(__name__)

users = {
    1: {"id": 1, "name": "Alice Johnson", "email": "alice@example.com"},
    2: {"id": 2, "name": "Bob Smith", "email": "bob@example.com"},
}

@app.route("/users/<int:user_id>")
def get_user(user_id):
    user = users.get(user_id)
    if not user:
        return jsonify({"error": "User not found"}), 404
    return jsonify(user)

@app.route("/health")
def health():
    return jsonify({"status": "healthy", "service": "user-service"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)

# user-service/Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5001
CMD ["python", "app.py"]

# user-service/requirements.txt
flask==3.1.0

Product Service

# product-service/app.py
from flask import Flask, jsonify

app = Flask(__name__)

products = {
    101: {"id": 101, "name": "Wireless Keyboard", "price": 49.99, "stock": 150},
    102: {"id": 102, "name": "USB-C Hub", "price": 34.99, "stock": 75},
    103: {"id": 103, "name": "Monitor Stand", "price": 89.99, "stock": 30},
}

@app.route("/products/<int:product_id>")
def get_product(product_id):
    product = products.get(product_id)
    if not product:
        return jsonify({"error": "Product not found"}), 404
    return jsonify(product)

@app.route("/products/<int:product_id>/reserve", methods=["POST"])
def reserve_stock(product_id):
    product = products.get(product_id)
    if not product:
        return jsonify({"error": "Product not found"}), 404
    if product["stock"] <= 0:
        return jsonify({"error": "Out of stock"}), 409
    product["stock"] -= 1
    return jsonify({"reserved": True, "remaining_stock": product["stock"]})

@app.route("/health")
def health():
    return jsonify({"status": "healthy", "service": "product-service"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5002)

Order Service

This service calls both the User and Product services:

# order-service/app.py
import os
import uuid
from flask import Flask, jsonify, request
import requests

app = Flask(__name__)

USER_SERVICE = os.getenv("USER_SERVICE_URL", "http://user-service:5001")
PRODUCT_SERVICE = os.getenv("PRODUCT_SERVICE_URL", "http://product-service:5002")

orders = {}

@app.route("/orders", methods=["POST"])
def create_order():
    data = request.json
    user_id = data.get("user_id")
    product_id = data.get("product_id")
    quantity = data.get("quantity", 1)

    # Verify user exists (synchronous call to User Service)
    user_resp = requests.get(f"{USER_SERVICE}/users/{user_id}", timeout=5)
    if user_resp.status_code != 200:
        return jsonify({"error": "User not found"}), 400

    # Verify product exists (synchronous call to Product Service)
    product_resp = requests.get(f"{PRODUCT_SERVICE}/products/{product_id}", timeout=5)
    if product_resp.status_code != 200:
        return jsonify({"error": "Product not found"}), 400

    product = product_resp.json()

    # Reserve stock
    reserve_resp = requests.post(
        f"{PRODUCT_SERVICE}/products/{product_id}/reserve",
        timeout=5,
    )
    if reserve_resp.status_code != 200:
        return jsonify({"error": "Could not reserve stock"}), 409

    # Create the order
    order_id = str(uuid.uuid4())[:8]
    order = {
        "id": order_id,
        "user": user_resp.json(),
        "product": product["name"],
        "price": product["price"],
        "quantity": quantity,
        "total": product["price"] * quantity,
        "status": "confirmed",
    }
    orders[order_id] = order

    return jsonify(order), 201

@app.route("/orders/<order_id>")
def get_order(order_id):
    order = orders.get(order_id)
    if not order:
        return jsonify({"error": "Order not found"}), 404
    return jsonify(order)

@app.route("/health")
def health():
    return jsonify({"status": "healthy", "service": "order-service"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5003)

# order-service/requirements.txt
flask==3.1.0
requests==2.32.3

Docker Compose

# docker-compose.yml
version: "3.8"

services:
  user-service:
    build: ./user-service
    ports:
      - "5001:5001"
    healthcheck:
      # python:3.12-slim ships without curl, so probe with Python instead
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5001/health')"]
      interval: 10s
      timeout: 5s
      retries: 3

  product-service:
    build: ./product-service
    ports:
      - "5002:5002"
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5002/health')"]
      interval: 10s
      timeout: 5s
      retries: 3

  order-service:
    build: ./order-service
    ports:
      - "5003:5003"
    environment:
      - USER_SERVICE_URL=http://user-service:5001
      - PRODUCT_SERVICE_URL=http://product-service:5002
    depends_on:
      user-service:
        condition: service_healthy
      product-service:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:5003/health')"]
      interval: 10s
      timeout: 5s
      retries: 3

Run It

docker-compose up --build

Test the flow:

# Create an order
curl -X POST http://localhost:5003/orders \
  -H "Content-Type: application/json" \
  -d '{"user_id": 1, "product_id": 101, "quantity": 1}'

# Response:
# {
#   "id": "a1b2c3d4",
#   "user": {"id": 1, "name": "Alice Johnson", "email": "alice@example.com"},
#   "product": "Wireless Keyboard",
#   "price": 49.99,
#   "quantity": 1,
#   "total": 49.99,
#   "status": "confirmed"
# }

One curl command triggers three services. The Order Service verified the user, checked the product, reserved stock, and created the order. Each service is independently deployable, scalable, and replaceable.

Service Discovery

In Docker Compose, services find each other by name (http://user-service:5001). In production, you need service discovery:

  • DNS-based: Kubernetes provides this automatically. Service names resolve to the right pods.
  • Registry-based: Tools like Consul or etcd maintain a registry of service locations.
  • Load balancer: Put services behind a load balancer with a stable URL.

The key principle: services shouldn't hardcode each other's addresses. Use environment variables, DNS, or a service registry.
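As a sketch of that principle, here's how a service might resolve its dependencies' base URLs from the environment with a sensible default -- the same pattern the Order Service above uses with os.getenv (the helper function is illustrative):

```python
import os

def service_url(name: str, default: str) -> str:
    """Resolve a service's base URL from the environment, falling back to a default.

    In Docker Compose the default (the Compose DNS name) just works; in other
    environments an operator overrides it, e.g. USER_SERVICE_URL=http://10.0.0.5:8080.
    """
    env_key = name.upper().replace("-", "_") + "_URL"
    return os.getenv(env_key, default)

# With no override set, the Compose DNS name is used.
user_service = service_url("user-service", "http://user-service:5001")
```

DNS-based discovery (Kubernetes) makes even the default dynamic; a registry like Consul replaces the environment lookup with a network call.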

Data Management

This is where microservices get hard. Each service owns its database. That means:

No shared databases. The Order Service doesn't query the Users table directly. It calls the User Service's API. This sounds wasteful, but it's the entire point -- services can change their database schema without breaking other services.

Eventual consistency. In a monolith, you wrap everything in a database transaction. With microservices, you can't. If the Order Service creates an order but the Payment Service fails to charge, you need a strategy:

  • Saga pattern: a sequence of local transactions with compensating transactions for rollback
  • Outbox pattern: write events to an outbox table in the same transaction as the data change, then publish them asynchronously

Data duplication is okay. The Order Service might store the user's name and email alongside the order. If the user later changes their email, the order keeps the email that was current at order time. This is often the correct behavior.
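The outbox pattern is easy to sketch with SQLite: the business row and the event row commit in one local transaction, so a relay process can publish events later without ever losing one (table and column names are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, total REAL);
    CREATE TABLE outbox (
        id INTEGER PRIMARY KEY,
        topic TEXT,
        payload TEXT,
        published INTEGER DEFAULT 0
    );
""")

def create_order(order_id: str, total: float) -> None:
    # One local transaction covers both writes: if either fails, both roll back,
    # so the event can never exist without the order (or vice versa).
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, total))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("order.created", json.dumps({"order_id": order_id, "total": total})),
        )

create_order("a1b2c3d4", 49.99)

# A separate relay polls unpublished rows, hands them to the broker,
# then marks them published.
pending = conn.execute(
    "SELECT topic, payload FROM outbox WHERE published = 0"
).fetchall()
```

The relay gives you at-least-once delivery; consumers should therefore be idempotent.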

Monitoring and Observability

With a monolith, you check one set of logs. With microservices, a single user request might touch five services. You need:

Distributed tracing: tools like Jaeger or Zipkin trace a request across services. Each service adds its span to the trace, so you can see the full journey.

Centralized logging: aggregate logs from all services into one place (ELK stack, Grafana Loki, Datadog). Include a correlation ID in every log entry so you can filter all logs for a specific request:
import uuid
from flask import request

@app.before_request
def add_correlation_id():
    request.correlation_id = request.headers.get(
        "X-Correlation-ID", str(uuid.uuid4())
    )

@app.after_request
def log_request(response):
    app.logger.info(
        f"[{request.correlation_id}] {request.method} {request.path} -> {response.status_code}"
    )
    response.headers["X-Correlation-ID"] = request.correlation_id
    return response

Health checks: every service should expose a /health endpoint. Your orchestrator (Kubernetes, Docker Compose) uses these to restart unhealthy services automatically.

Metrics: track request rate, error rate, and latency for every service. The RED method (Rate, Errors, Duration) gives you a quick overview of system health.
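A minimal sketch of RED-style bookkeeping -- in practice you'd use a metrics library such as the Prometheus client, but the arithmetic is just this (class and field names are illustrative):

```python
from collections import defaultdict

class RedMetrics:
    """Track Rate, Errors, and Duration per endpoint."""

    def __init__(self):
        self.requests = defaultdict(int)     # R: request counts
        self.errors = defaultdict(int)       # E: server-error counts
        self.durations = defaultdict(list)   # D: per-request latencies (seconds)

    def record(self, endpoint: str, status: int, seconds: float) -> None:
        self.requests[endpoint] += 1
        if status >= 500:
            self.errors[endpoint] += 1
        self.durations[endpoint].append(seconds)

    def error_rate(self, endpoint: str) -> float:
        total = self.requests[endpoint]
        return self.errors[endpoint] / total if total else 0.0

metrics = RedMetrics()
metrics.record("/orders", 201, 0.120)
metrics.record("/orders", 502, 0.050)
```

A real setup exports these as counters and histograms so a dashboard can compute rates over time windows instead of lifetime totals.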

Common Anti-Patterns

Distributed monolith. You split into microservices, but every change requires deploying all services together. This is worse than a monolith -- you have all the complexity of distributed systems with none of the benefits.

Chatty services. If creating one order requires 20 API calls between services, your boundaries are wrong. Redesign so each service has the data it needs to do its job with minimal external calls.

Shared libraries with business logic. A shared utility library for logging or HTTP is fine. A shared library containing domain models creates tight coupling -- you're back to a distributed monolith.

No retries or circuit breakers. If the Payment Service goes down, the Order Service keeps sending requests, gets timeouts, exhausts its thread pool, and goes down too. Retry transient failures with backoff (tenacity in Python), and use circuit breakers (pybreaker in Python, Polly in .NET) to fail fast when a dependency stays down. Here's retrying with exponential backoff using tenacity:
import requests
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(
    stop=stop_after_attempt(3),
    wait=wait_exponential(multiplier=1, min=1, max=10),
)
def call_user_service(user_id):
    response = requests.get(
        f"{USER_SERVICE}/users/{user_id}",
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
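tenacity retries transient failures; a circuit breaker adds the fail-fast behavior once a dependency looks dead. A minimal sketch of the idea (production code would use a library such as pybreaker; the thresholds here are illustrative):

```python
import time

class CircuitBreaker:
    """Stop calling a failing dependency instead of hammering it with doomed requests."""

    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold  # consecutive failures before opening
        self.reset_timeout = reset_timeout          # seconds to stay open
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Open: fail immediately, no network call, no thread-pool exhaustion.
                raise RuntimeError("circuit open: dependency marked unavailable")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Wrapping the retried call from the tenacity example in `breaker.call(...)` combines both behaviors: bounded retries per request, and fast failure once the dependency is confirmed down.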

Starting with microservices. Build the monolith first. Understand your domain. Then split along the boundaries that actually cause pain. Premature decomposition leads to wrong boundaries, which leads to the distributed monolith.

What's Next

Microservices are a deep topic. Once you've built something basic, explore:

  • Kubernetes for orchestration at scale
  • Event sourcing for services that need complete audit trails
  • CQRS (Command Query Responsibility Segregation) for services with different read/write patterns
  • Service mesh (Istio, Linkerd) for managing service-to-service communication
  • Contract testing (Pact) to verify services stay compatible

The technology choices matter less than getting the boundaries right. Get the boundaries wrong, and no amount of Kubernetes or Kafka will save you.

For more architecture and backend tutorials, check out CodeUp.
