March 27, 2026 · 10 min read

How to Deploy a Web App to AWS (Without a PhD in Cloud)

Deploy a web app to AWS step by step. EC2, S3, Route 53, HTTPS with ACM, environment variables, and basic monitoring explained clearly.

aws deployment cloud devops tutorial

AWS has over 200 services. The console has more buttons than a spaceship cockpit. You just want to put your web app on the internet. This tutorial cuts through the noise and shows you the handful of services you actually need, how they connect, and how to deploy a Node.js app from zero to "it's live."

We're not going to cover Lambda, ECS, EKS, Fargate, or any of the other fifteen ways to run code on AWS. We're going to do the simplest thing that works: an EC2 instance running your app, a domain pointed to it, and HTTPS. Once you understand this foundation, the fancier options make a lot more sense.

What You Actually Need

For a typical web app deployment, you'll use five AWS services:

  • EC2 (Elastic Compute Cloud): A virtual server to run your app
  • S3 (Simple Storage Service): For static files, backups, or hosting a static site
  • Route 53: DNS management (pointing your domain to your server)
  • ACM (AWS Certificate Manager): Free SSL certificates for HTTPS
  • ALB (Application Load Balancer): Sits in front of EC2, handles HTTPS termination
That's it. Five services. The rest can wait.

Part 1: Deploying a Node.js App to EC2

Launch an EC2 Instance

Go to the EC2 dashboard and click "Launch Instance."

  1. Choose an AMI: Select "Amazon Linux 2023" or "Ubuntu Server 24.04 LTS". Ubuntu is more familiar if you're coming from local development, and the commands in this tutorial assume it.
  2. Choose an instance type: t2.micro is free tier eligible. It has 1 vCPU and 1 GB RAM, which is enough for a small app.
  3. Create a key pair: Download the .pem file. You'll need it to SSH into the server. Don't lose it.
  4. Configure security group: Create a new security group with these inbound rules:
  Type        Port   Source
  SSH         22     Your IP (for security)
  HTTP        80     Anywhere (0.0.0.0/0)
  HTTPS       443    Anywhere (0.0.0.0/0)
  Custom TCP  3000   Anywhere (for testing)
Storage: 8 GB is the default. Bump it to 20 GB if you'll have build artifacts or logs.

Click "Launch Instance."

Connect to Your Instance

# Set permissions on the key file (required on Mac/Linux)
chmod 400 your-key.pem

# SSH into the instance
ssh -i your-key.pem ubuntu@YOUR_PUBLIC_IP

Replace YOUR_PUBLIC_IP with the IPv4 address shown in the EC2 console. (If you chose Amazon Linux instead of Ubuntu, the default user is ec2-user, not ubuntu.)

Set Up the Server

# Update packages
sudo apt update && sudo apt upgrade -y

# Install Node.js 20
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs

# Verify
node --version   # v20.x.x
npm --version    # 10.x.x

# Install Git
sudo apt install -y git

# Install PM2 (process manager)
sudo npm install -g pm2

Deploy Your App

# Clone your repository
git clone https://github.com/yourusername/your-app.git
cd your-app

# Install dependencies
npm ci --omit=dev   # (--production on older npm versions)

# Set environment variables
sudo nano /etc/environment
# Add: DATABASE_URL="postgresql://user:pass@host:5432/db"
# Add: NODE_ENV="production"
# Save and exit, then reload:
source /etc/environment

# Or use a .env file
cp .env.example .env
nano .env

Run with PM2

Never run your app with a bare node process in production. If the process crashes, it stays dead. PM2 restarts it automatically.

# Start your app
pm2 start src/index.js --name my-app

# View logs
pm2 logs my-app

# Monitor resources
pm2 monit

# Set up auto-restart on server reboot
pm2 startup   # prints a command; copy and run it once
pm2 save

Your app should now be running on http://YOUR_PUBLIC_IP:3000.
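If you don't have an app handy yet, a throwaway server is enough to verify the pipeline end to end. A minimal sketch (the path src/index.js is an assumption to match the PM2 command above; replace it with your real entry point):

```shell
# Create a minimal placeholder app at the assumed path src/index.js
mkdir -p src
cat > src/index.js <<'EOF'
const http = require('http');

// Bare HTTP server on port 3000, the port used throughout this guide
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from EC2\n');
}).listen(3000);
EOF
```

Start it with pm2 start src/index.js --name my-app as above, then curl http://localhost:3000 from the instance to confirm it responds.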

Set Up Nginx as a Reverse Proxy

You don't want users typing :3000 in the URL. Nginx listens on port 80 and forwards requests to your Node.js app.

sudo apt install -y nginx

Create a config file:

sudo nano /etc/nginx/sites-available/my-app

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

Enable the site and restart Nginx:

sudo ln -s /etc/nginx/sites-available/my-app /etc/nginx/sites-enabled/
sudo nginx -t                    # Test configuration
sudo systemctl restart nginx

Now http://YOUR_PUBLIC_IP (port 80) serves your app.

Part 2: Static Sites with S3

If your app is a static site (React, Vue, Next.js static export), you can skip EC2 entirely and host it on S3 for practically nothing.

Create an S3 Bucket

# Install the AWS CLI v2 (the awscli apt package is the older v1)
sudo apt install -y unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip
sudo ./aws/install

# Configure (or use IAM roles on EC2)
aws configure
# Enter your Access Key ID, Secret, region (e.g., us-east-1)

# Create bucket (name must be globally unique)
aws s3 mb s3://my-app-static-site

# Enable static website hosting
aws s3 website s3://my-app-static-site \
  --index-document index.html \
  --error-document 404.html

Set Bucket Policy for Public Access

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-static-site/*"
    }
  ]
}

New buckets block public bucket policies by default, so lift that restriction first. Then save the policy above as policy.json and apply it:

aws s3api delete-public-access-block --bucket my-app-static-site
aws s3api put-bucket-policy --bucket my-app-static-site --policy file://policy.json

Upload Your Static Files

# Build your app
npm run build

# Upload to S3
aws s3 sync ./dist s3://my-app-static-site --delete

# With cache headers: a long cache for fingerprinted assets...
aws s3 sync ./dist s3://my-app-static-site \
  --delete \
  --cache-control "public, max-age=31536000" \
  --exclude "*.html" \
  --exclude "*.json"

# ...and a short cache for HTML/JSON. The --exclude "*" is required:
# --include flags only re-include files that were excluded first.
aws s3 sync ./dist s3://my-app-static-site \
  --cache-control "public, max-age=0, must-revalidate" \
  --exclude "*" \
  --include "*.html" \
  --include "*.json"

Your site is live at: http://my-app-static-site.s3-website-us-east-1.amazonaws.com

Part 3: Custom Domain with Route 53

Register or Transfer a Domain

You can buy a domain directly through Route 53, or transfer one from another registrar. Either way, Route 53 needs to manage the DNS.

Create a Hosted Zone

If you bought the domain elsewhere:

  1. Go to Route 53 and create a hosted zone for yourdomain.com
  2. Route 53 gives you four nameserver (NS) records
  3. Go to your domain registrar and set these as the nameservers
  4. Wait for DNS propagation (can take up to 48 hours, usually faster)
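You can check propagation from your terminal instead of guessing (assumes dig, from the dnsutils package, with yourdomain.com standing in for your real domain and ns-123.awsdns-45.com for one of the nameservers Route 53 assigned you):

```shell
# Which nameservers does the world see? Should list the four awsdns-* servers
dig +short NS yourdomain.com

# Does the A record resolve yet?
dig +short A yourdomain.com

# Query a Route 53 nameserver directly to confirm the record itself is right,
# independent of propagation
dig +short A yourdomain.com @ns-123.awsdns-45.com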

Point the Domain to Your EC2 Instance

Create an A record:

Type: A
Name: yourdomain.com
Value: YOUR_EC2_PUBLIC_IP
TTL: 300

For www:

Type: CNAME
Name: www.yourdomain.com
Value: yourdomain.com
TTL: 300

For an S3 static site, create an alias record pointing to the S3 website endpoint instead of an IP address. Note that this only works if the bucket name exactly matches the domain name (e.g., a bucket named yourdomain.com).
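Both records above can also be created from the CLI with change-resource-record-sets (the hosted zone ID Z0123456789ABCDEFGHIJ is a placeholder; find yours with aws route53 list-hosted-zones):

```shell
# Describe both records in one change batch
cat > change-batch.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "yourdomain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "YOUR_EC2_PUBLIC_IP" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.yourdomain.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "yourdomain.com" }]
      }
    }
  ]
}
EOF

# Apply the batch (requires AWS credentials with Route 53 access)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch file://change-batch.json
```

UPSERT creates the record if it doesn't exist and updates it if it does, so the same batch is safe to re-run.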

Part 4: HTTPS with ACM

AWS Certificate Manager provides free SSL certificates. The catch: they only work with AWS services like ALB and CloudFront, not directly on EC2.

Option A: ALB with an ACM Certificate

  1. Request a certificate in ACM for yourdomain.com and *.yourdomain.com
  2. Validate via DNS (ACM gives you a CNAME record to add in Route 53)
  3. Create an Application Load Balancer:
     • Listener on port 443 (HTTPS) using your ACM certificate
     • Listener on port 80 (HTTP) redirecting to HTTPS
     • Target group pointing to your EC2 instance on port 3000
  4. Update Route 53 to point your domain to the ALB (Alias record)
  5. Update the EC2 security group to only allow traffic from the ALB
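The certificate request and validation lookup can be done from the CLI too (the certificate ARN below is a placeholder; the request command prints the real one):

```shell
# Request a cert covering the apex domain and all subdomains.
# Use the region your ALB lives in (CloudFront would require us-east-1).
aws acm request-certificate \
  --domain-name yourdomain.com \
  --subject-alternative-names "*.yourdomain.com" \
  --validation-method DNS

# Fetch the CNAME validation record to add in Route 53
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE \
  --query "Certificate.DomainValidationOptions[0].ResourceRecord"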

Option B: Certbot on EC2 (Free, No ALB Needed)

If you don't want to set up an ALB, use Let's Encrypt directly on the EC2 instance:

sudo apt install -y certbot python3-certbot-nginx

sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com

Certbot automatically modifies your Nginx config to handle HTTPS. It also sets up auto-renewal:

# Test auto-renewal
sudo certbot renew --dry-run

This is simpler and costs nothing extra, but you don't get the scalability benefits of an ALB.

Part 5: Environment Variables Done Right

Hardcoding secrets in your code or checking .env files into Git is how breaches happen. On AWS, you have better options.

AWS Systems Manager Parameter Store

Free, encrypted storage for configuration values:

# Store a secret
aws ssm put-parameter \
  --name "/my-app/production/DATABASE_URL" \
  --value "postgresql://user:pass@host:5432/db" \
  --type SecureString

# Retrieve it
aws ssm get-parameter \
  --name "/my-app/production/DATABASE_URL" \
  --with-decryption \
  --query "Parameter.Value" \
  --output text

In your deployment script:

#!/bin/bash
export DATABASE_URL=$(aws ssm get-parameter --name "/my-app/production/DATABASE_URL" --with-decryption --query "Parameter.Value" --output text)
export JWT_SECRET=$(aws ssm get-parameter --name "/my-app/production/JWT_SECRET" --with-decryption --query "Parameter.Value" --output text)

pm2 restart my-app --update-env

Your EC2 instance needs an IAM role with ssm:GetParameter permission to access these.
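That role's policy can be scoped to just this app's parameters. A minimal sketch (region, account ID, and parameter path are placeholders matching the examples above):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ssm:GetParameter", "ssm:GetParameters"],
      "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/my-app/production/*"
    }
  ]
}
```

If you encrypt parameters with a customer-managed KMS key instead of the default aws/ssm key, the role also needs kms:Decrypt on that key.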

Part 6: Basic Monitoring

CloudWatch Basics

EC2 instances automatically send basic metrics to CloudWatch: CPU utilization, network traffic, disk reads/writes. Set up alarms for critical thresholds:

# Create an alarm for high CPU
aws cloudwatch put-metric-alarm \
  --alarm-name "High CPU - my-app" \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --evaluation-periods 2 \
  --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alerts

Application-Level Monitoring

Install the CloudWatch agent for memory and disk metrics (not included by default):

# The agent isn't in Ubuntu's apt repositories; install the package from AWS
wget https://amazoncloudwatch-agent.s3.amazonaws.com/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i amazon-cloudwatch-agent.deb

Or use PM2's built-in monitoring:

pm2 install pm2-logrotate    # Prevent logs from filling the disk
pm2 monit                     # Real-time CPU/memory per process

Log Management

Ship your app logs to CloudWatch Logs so you can search them without SSHing into the server:

// In your PM2 ecosystem file (ecosystem.config.js)
module.exports = {
  apps: [{
    name: 'my-app',
    script: 'src/index.js',
    error_file: '/var/log/my-app/error.log',
    out_file: '/var/log/my-app/output.log',
    log_date_format: 'YYYY-MM-DD HH:mm:ss Z',
  }]
};

Then configure the CloudWatch agent to tail those log files.
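A minimal agent config for that looks like this (the log group name my-app is an assumption; save it as /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json):

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/my-app/output.log",
            "log_group_name": "my-app",
            "log_stream_name": "{instance_id}-output"
          },
          {
            "file_path": "/var/log/my-app/error.log",
            "log_group_name": "my-app",
            "log_stream_name": "{instance_id}-error"
          }
        ]
      }
    }
  }
}
```

Then start the agent with: sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s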

Deployment Script

Here's a simple deployment script that puts it all together:

#!/bin/bash
set -e

APP_DIR=/home/ubuntu/my-app

echo "Pulling latest code..."
cd $APP_DIR
git pull origin main

echo "Installing dependencies..."
npm ci --omit=dev

echo "Loading environment..."
export DATABASE_URL=$(aws ssm get-parameter --name "/my-app/production/DATABASE_URL" --with-decryption --query "Parameter.Value" --output text)

echo "Running migrations..."
npm run migrate

echo "Restarting app..."
pm2 restart my-app --update-env

echo "Deployment complete!"

Run it manually, or trigger it from GitHub Actions after tests pass.

Common Mistakes

  • Leaving SSH open to the world. Restrict port 22 to your IP address in the security group. Better yet, use AWS Session Manager, which eliminates SSH entirely.
  • Using the root account. Create an IAM user with only the permissions it needs. Never use your root account for day-to-day work.
  • Not setting up auto-scaling. A single EC2 instance is a single point of failure. For production, put at least two instances behind an ALB in different availability zones.
  • Forgetting to set up billing alerts. Go to Billing, create a budget, and set an alert. A misconfigured service can rack up charges fast.
  • Not backing up your database. If you're running a database on EC2, you need snapshots. If you're using RDS, enable automated backups. Data loss is not a hypothetical.
  • Hardcoding the instance IP. EC2 public IPs change when you stop/start the instance. Use an Elastic IP (free while attached to a running instance) or, better, a domain name pointing to an ALB.
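The billing-alert advice takes about a minute from the CLI (account ID, the $20 limit, and the email address are placeholders; adjust to taste):

```shell
# Define a $20/month cost budget
cat > budget.json <<'EOF'
{
  "BudgetName": "monthly-cap",
  "BudgetLimit": { "Amount": "20", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
EOF

# Email when actual spend crosses 80% of the budget
cat > notifications.json <<'EOF'
[
  {
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 80,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [
      { "SubscriptionType": "EMAIL", "Address": "you@example.com" }
    ]
  }
]
EOF

# Create the budget (requires credentials with budgets:CreateBudget)
aws budgets create-budget \
  --account-id 123456789012 \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json
```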

What's Next

You've deployed a web app to AWS the straightforward way. From here, explore RDS for managed databases instead of running PostgreSQL on EC2, CloudFront as a CDN for faster global delivery, auto-scaling groups to handle traffic spikes, and Infrastructure as Code with Terraform or AWS CDK to make this all reproducible.

For hands-on cloud deployment projects and guided practice, check out CodeUp.
