March 26, 2026 · 9 min read

CI/CD for Developers: Pipelines Explained for People Who Just Want to Ship Code

CI/CD demystified for developers who aren't DevOps engineers. Covers continuous integration, continuous deployment, GitHub Actions, pipeline stages, and practical advice for setting up your first pipeline.

cicd devops github-actions automation

You write code. You push it. And then... things happen. Tests run. Linters check your formatting. Somehow your code ends up on a server. If you've been treating CI/CD as a black box that someone else set up, this is where we crack it open.

CI/CD isn't complicated. It's just automation for the stuff you'd otherwise do manually: running tests, building your app, and putting it somewhere users can reach it. The pipeline is a script that does these things in order, triggered by a git push. That's the whole concept.

CI vs CD: The Actual Difference

Continuous Integration (CI) means every time you push code to a shared branch, automated checks run. Tests execute. The build compiles. Linters verify formatting. If anything fails, you know immediately -- not three days later when someone manually tests the feature.

Continuous Delivery (CD) means your code is always in a deployable state. After CI passes, the artifact (your built app) is ready to be released at any time with a single click.

Continuous Deployment (also CD, confusingly) goes one step further: after CI passes, the code automatically deploys to production. No human approval step.

Most teams do Continuous Integration and Continuous Delivery. Fully automated deployment to production without human approval is less common and requires high confidence in your test suite.
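In GitHub Actions terms, the gap between Delivery and Deployment can be as small as one setting. Here's a sketch (assuming a `production` environment with required reviewers configured in the repository settings; the `build` job and `./deploy.sh` script are placeholders):

```yaml
deploy:
  runs-on: ubuntu-latest
  needs: build
  # With required reviewers on this environment, the job pauses until a
  # human approves -- that's Continuous Delivery. Remove the reviewers
  # and the same job becomes Continuous Deployment.
  environment: production
  steps:
    - run: ./deploy.sh
```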

Why Developers Should Care

You might think "the DevOps team handles that." But here's why it matters to you directly:

  • Faster feedback. You push code and know within minutes if it breaks anything. Without CI, you find out when someone manually tests days later, and now you've forgotten what you changed.
  • Fewer merge conflicts. When everyone integrates frequently (multiple times a day instead of once a week), conflicts are small and easy to resolve.
  • Confidence to refactor. A solid CI pipeline with good tests means you can refactor aggressively. If you break something, you'll know immediately.
  • Deploy on Friday. Controversial, but teams with good CI/CD can deploy on Friday because they trust their pipeline to catch problems. Teams without CI/CD can't deploy on Friday because they're terrified.

GitHub Actions: The Most Common Starting Point

If your code is on GitHub, GitHub Actions is the most natural CI/CD tool. It's free for public repos and has generous free-tier minutes for private repos. Let's build a pipeline from scratch.

Create .github/workflows/ci.yml:

name: CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"

      - run: npm ci
      - run: npm run lint
      - run: npm test

That's a complete CI pipeline. Let's break it down:

  • on -- triggers. This runs on pushes to main and on pull requests targeting main.
  • jobs -- the work to do. Each job runs on a fresh virtual machine.
  • runs-on -- which OS. Usually ubuntu-latest unless you need Windows or macOS.
  • steps -- sequential commands. Check out code, set up Node, install dependencies, lint, test.

When you push this file, GitHub will run the pipeline automatically. You'll see a green checkmark or red X on every commit and PR.

Pipeline Stages: The Standard Flow

Most CI/CD pipelines follow this pattern:

Push Code → Install Dependencies → Lint → Test → Build → Deploy

Each stage acts as a gate. If linting fails, tests don't run. If tests fail, the build doesn't happen. If the build fails, nothing deploys. This catches problems early and avoids wasting time on later stages.

Here's a more complete pipeline:

name: CI/CD

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"
      - run: npm ci
      - run: npm run lint

  test:
    runs-on: ubuntu-latest
    needs: lint
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"
      - run: npm ci
      - run: npm test -- --coverage

  build:
    runs-on: ubuntu-latest
    needs: test
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: "npm"
      - run: npm ci
      - run: npm run build
      - uses: actions/upload-artifact@v4
        with:
          name: build-output
          path: dist/

  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/
      - name: Deploy to production
        run: |
          # Your deployment command here
          echo "Deploying to production..."

Key additions:

  • needs creates dependencies between jobs. test waits for lint. build waits for test.
  • if adds conditions. Deploy only runs on pushes to main, not on pull requests.
  • Artifacts pass built files between jobs. The build job uploads the compiled output; the deploy job downloads it.

A Python Pipeline

Same concepts, different tools:

name: Python CI

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12"]

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r requirements-dev.txt

      - name: Lint with ruff
        run: ruff check .

      - name: Type check with mypy
        run: mypy src/

      - name: Test with pytest
        run: pytest --cov=src --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          file: coverage.xml

The matrix strategy runs the pipeline against multiple Python versions in parallel. If your code breaks on 3.12 but works on 3.11, you'll catch it.
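A matrix can also cross more than one dimension. A sketch (the OS axis, `fail-fast`, and the `exclude` entry are illustrative additions, not part of the pipeline above):

```yaml
strategy:
  fail-fast: false  # let the other combinations finish even if one fails
  matrix:
    python-version: ["3.11", "3.12"]
    os: [ubuntu-latest, macos-latest]
    exclude:
      - os: macos-latest
        python-version: "3.11"  # skip a combination you don't support
```

With `runs-on: ${{ matrix.os }}`, this expands to three parallel jobs: the four OS/version combinations minus the excluded one.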

Environment Variables and Secrets

Your pipeline needs API keys, database URLs, and other secrets. Never put these in your YAML file.

GitHub has a built-in secrets manager:

steps:
  - name: Deploy
    env:
      API_KEY: ${{ secrets.PRODUCTION_API_KEY }}
      DATABASE_URL: ${{ secrets.DATABASE_URL }}
    run: ./deploy.sh

You add secrets in your repository settings under Settings > Secrets and Variables > Actions. They're encrypted and only exposed to the pipeline at runtime. They never appear in logs (GitHub automatically masks them).

Use environments for different stages:

deploy-staging:
  runs-on: ubuntu-latest
  environment: staging
  steps:
    - run: ./deploy.sh
      env:
        API_URL: ${{ vars.API_URL }}  # Different per environment

deploy-production:
  runs-on: ubuntu-latest
  environment: production
  needs: deploy-staging
  steps:
    - run: ./deploy.sh
      env:
        API_URL: ${{ vars.API_URL }}  # Different value in production

Environments can also have protection rules -- like requiring manual approval before production deploys.

Caching: Make Pipelines Fast

The number one complaint about CI/CD: it's slow. Most of the time is spent installing dependencies. Caching fixes this.

- uses: actions/setup-node@v4
  with:
    node-version: 20
    cache: "npm"   # This line enables caching

The setup-node action handles npm caching automatically. For more control:

- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-node-

This caches the npm directory. The cache key includes a hash of package-lock.json, so the cache automatically invalidates when dependencies change.
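An alternative pattern, sketched here as an option rather than a recommendation: cache `node_modules` itself and skip the install entirely on an exact lockfile match, using the `cache-hit` output of `actions/cache`:

```yaml
- uses: actions/cache@v4
  id: node-modules
  with:
    path: node_modules
    key: ${{ runner.os }}-modules-${{ hashFiles('**/package-lock.json') }}

# Only install when the lockfile hash didn't match an existing cache
- run: npm ci
  if: steps.node-modules.outputs.cache-hit != 'true'
```

The tradeoff: it's faster on a hit, but `node_modules` contents can depend on the OS and Node version, so the cache key needs to account for anything that changes the install.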

For Python with pip:

- uses: actions/cache@v4
  with:
    path: ~/.cache/pip
    key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements.txt') }}

A pipeline that takes 8 minutes without caching often drops to 2-3 minutes with it.

Branch Protection: Enforcing the Pipeline

A CI pipeline is useless if people can bypass it. Set up branch protection rules:

  1. Go to Settings > Branches > Add rule
  2. Apply to main
  3. Enable "Require status checks to pass before merging"
  4. Select your CI jobs as required checks
  5. Enable "Require pull request reviews before merging"

Now nobody (including you) can merge to main without the pipeline passing. This is the real power of CI -- it's not optional. Every change goes through the same checks.

Common Pipeline Patterns

Monorepo with path filters:

on:
  push:
    paths:
      - "frontend/**"
      - "package.json"

# Only runs when frontend code changes

Scheduled runs (cron):

on:
  schedule:
    - cron: "0 6 * * 1"  # Every Monday at 6 AM UTC

# Good for dependency audits, stale test detection

Manual triggers:

on:
  workflow_dispatch:
    inputs:
      environment:
        description: "Deploy target"
        required: true
        default: "staging"
        type: choice
        options:
          - staging
          - production

This adds a "Run workflow" button in the GitHub UI with a dropdown to select the environment.
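The chosen value is then available to jobs through the `inputs` context. A sketch (the echo step stands in for a real deploy command):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}  # staging or production, from the dropdown
    steps:
      - run: echo "Deploying to ${{ inputs.environment }}"
```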

Other CI/CD Tools

GitHub Actions is the most common, but you'll encounter others:

  • GitLab CI/CD -- built into GitLab, uses .gitlab-ci.yml. Very similar concepts.
  • Jenkins -- the old workhorse. Self-hosted, endlessly configurable, can be a pain to maintain.
  • CircleCI -- cloud-based, fast, good caching. Uses .circleci/config.yml.
  • Travis CI -- was the standard for open source. Less popular now.
The concepts transfer directly between all of them. Triggers, stages, caching, secrets, artifacts -- same ideas, slightly different syntax.
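To make that concrete, here's a rough GitLab CI equivalent of the first GitHub Actions example (a sketch; the `node:20` image and script names assume the same Node project):

```yaml
# .gitlab-ci.yml
stages: [lint, test]

lint:
  stage: lint
  image: node:20
  script:
    - npm ci
    - npm run lint

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test
```

Same triggers-stages-scripts shape; GitLab just declares stages up front and runs each job in a container image instead of a VM preset.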

Common Mistakes

  • Pipeline is too slow. If your pipeline takes 20 minutes, developers stop waiting for it and merge anyway. Cache aggressively, parallelize where possible, and only run what's necessary.
  • Not failing fast. Put the fastest checks first. Linting takes 10 seconds; integration tests take 5 minutes. If linting fails, don't waste 5 minutes on integration tests.
  • Flaky tests in CI. A test that fails randomly in CI but passes locally destroys trust. People start re-running pipelines "just in case" and ignoring failures. Fix flaky tests immediately.
  • Too many manual steps. If deploying requires running the pipeline, then SSH-ing into a server, then running a script, then clearing a cache -- automate all of it. The point of CD is that deployment is one step.
  • No pipeline for pull requests. Some teams only run CI on main. This means broken code gets merged and then the pipeline fails. Run CI on PRs so you catch issues before they reach the main branch.

Your First Pipeline: A Checklist

Starting from zero? Here's the minimum viable pipeline:

  1. Create .github/workflows/ci.yml
  2. Trigger on push and pull request to your main branch
  3. Install dependencies
  4. Run your linter
  5. Run your tests
  6. Set up branch protection to require the checks

That's enough to get massive value. You can add build steps, deployment, caching, and matrix testing incrementally as your project grows.

If you're learning to code and want to practice building projects that are worth adding CI/CD to, CodeUp helps you build real skills that scale from toy projects to production-grade codebases with proper automation.
