AWS EC2 Deployment Guide for Web Applications
EC2 remains the most flexible compute option in AWS for web applications that need consistent performance, custom software stacks, and predictable pricing. While ECS, EKS, and Lambda get more attention in the cloud-native world, EC2 gives you a server you control completely — no container orchestration complexity, no cold start latency, no surprise invocation costs.
This guide covers deploying a production Node.js web application on EC2: instance selection, security group configuration, application deployment, Nginx reverse proxy, SSL with Cloudflare, monitoring with CloudWatch, and the ongoing maintenance patterns that keep an EC2 deployment healthy.
Key Takeaways
- t3.large is the right starting point for a full-stack Node.js + PostgreSQL deployment
- Use Ubuntu 24.04 LTS — supported until 2029, widely documented, excellent package availability
- Elastic IP is mandatory — your EC2 IP changes on every stop/start without it
- Security groups are stateful — you only need inbound rules; outbound is typically allow-all
- Store your deployment SSH key in a separate .pem file; never commit it to git
- Use EC2 Instance Connect or Session Manager instead of direct SSH when possible (zero key management)
- CloudWatch agent gives you memory and disk metrics (not available by default)
- Reserved Instances or Savings Plans reduce EC2 costs by 40-60% vs. on-demand
Instance Selection
The right instance type depends on your workload:
| Workload | Recommended Instance | vCPU | RAM | Cost/month |
|---|---|---|---|---|
| Light (blog, small app) | t3.small | 2 | 2GB | ~$18 |
| Medium (full-stack app) | t3.medium | 2 | 4GB | ~$35 |
| Production (multi-service) | t3.large | 2 | 8GB | ~$70 |
| Heavy (high traffic API) | c6i.xlarge | 4 | 8GB | ~$140 |
| Memory-heavy (ML/cache) | r6i.large | 2 | 16GB | ~$120 |
For a monorepo with 5 Node.js applications (Next.js, NestJS, Docusaurus, 2 brand sites) plus Docker infrastructure (PostgreSQL, Redis, Authentik), a t3.large is the minimum viable configuration. The t3 family uses "burstable" performance — performance is excellent during normal operation but sustained high CPU triggers throttling.
For consistently high CPU workloads (video processing, ML inference, heavy cryptography), use c6i (compute-optimized) instances instead.
Initial Server Setup
After launching your EC2 instance with Ubuntu 24.04:
# Connect via SSH
ssh -i your-key.pem [email protected]
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install essential tools
sudo apt install -y \
git curl wget unzip \
build-essential \
nginx \
certbot python3-certbot-nginx \
docker.io docker-compose-v2 \
htop ncdu iotop
# Install Node.js via NVM (allows easy version management)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc
nvm install 22
nvm use 22
nvm alias default 22
# Install pnpm
curl -fsSL https://get.pnpm.io/install.sh | sh -
source ~/.bashrc
# Install PM2 globally
npm install -g pm2
# Install PM2 log rotation immediately
pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 50M
pm2 set pm2-logrotate:retain 7
pm2 set pm2-logrotate:compress true
Security Group Configuration
The security group is your EC2 instance's firewall. Configure it carefully:
Inbound Rules:
┌─────────┬──────────┬─────────────┬──────────────────────────────────┐
│ Type    │ Protocol │ Port Range  │ Source                           │
├─────────┼──────────┼─────────────┼──────────────────────────────────┤
│ SSH     │ TCP      │ 22          │ Your IP only (not 0.0.0.0/0!)    │
│ HTTP    │ TCP      │ 80          │ Cloudflare IP ranges (see below) │
│ HTTPS   │ TCP      │ 443         │ Cloudflare IP ranges (see below) │
└─────────┴──────────┴─────────────┴──────────────────────────────────┘
Note: Internal app ports (3000, 3001, 3002, etc.) should NOT be
in the security group — traffic goes through Nginx only
For Cloudflare-proxied domains, restrict HTTP/HTTPS to Cloudflare IP ranges:
# Cloudflare IPv4 ranges — restrict port 80/443 source to these
103.21.244.0/22
103.22.200.0/22
103.31.4.0/22
104.16.0.0/13
104.24.0.0/14
108.162.192.0/18
131.0.72.0/22
141.101.64.0/18
162.158.0.0/15
172.64.0.0/13
173.245.48.0/20
188.114.96.0/20
190.93.240.0/20
197.234.240.0/22
198.41.128.0/17
Restricting the source to these ranges stops attackers from bypassing Cloudflare's WAF and DDoS protection by connecting to your server directly.
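Adding fifteen ranges across two ports by hand is tedious. A sketch that generates the AWS CLI calls instead — the security group ID is a placeholder, and the ranges (copied from the list above) should be refreshed from Cloudflare's published list periodically:

```shell
#!/bin/bash
# Sketch: generate the security-group ingress calls for Cloudflare's IPv4 ranges.
# SG_ID is a hypothetical placeholder — substitute your own group ID.
SG_ID="sg-0123456789abcdef0"
RANGES="103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 104.16.0.0/13 \
104.24.0.0/14 108.162.192.0/18 131.0.72.0/22 141.101.64.0/18 \
162.158.0.0/15 172.64.0.0/13 173.245.48.0/20 188.114.96.0/20 \
190.93.240.0/20 197.234.240.0/22 198.41.128.0/17"
for cidr in $RANGES; do
  for port in 80 443; do
    # Printed as a dry run — remove the leading echo to apply the rules for real
    echo aws ec2 authorize-security-group-ingress \
      --group-id "$SG_ID" --protocol tcp --port "$port" --cidr "$cidr"
  done
done
```

Reviewing the dry-run output before applying it is a cheap way to catch a typo that would otherwise lock out legitimate traffic.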
Application Deployment
# Create application directory
sudo mkdir -p /opt/ecosire/app
sudo chown ubuntu:ubuntu /opt/ecosire/app
# Clone the repository
git clone https://github.com/your-org/your-repo.git /opt/ecosire/app
cd /opt/ecosire/app
# Create .env.local from template
cp .env.example .env.local
# Edit with production values
nano .env.local
# Install dependencies
pnpm install --frozen-lockfile
# Build everything
npx turbo run build
# Run database migrations
pnpm --filter @ecosire/db db:migrate
# Start infrastructure (PostgreSQL, Redis, Authentik)
docker compose -f infrastructure/docker-compose.dev.yml up -d
# Wait for services to report healthy (or use: docker compose up -d --wait)
sleep 30
# Start Node.js applications
pm2 start ecosystem.config.cjs
# Save process list for reboot persistence
pm2 save
# Configure PM2 to start on system boot
pm2 startup
# Run the command it outputs
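The `pm2 start ecosystem.config.cjs` step assumes an ecosystem file at the repo root. If your repository does not already ship one, a minimal sketch — every app name, path, port, and script below is illustrative, not the actual repo layout:

```shell
# Hypothetical minimal ecosystem file for two of the five apps.
cat > ecosystem.config.cjs << 'EOF'
module.exports = {
  apps: [
    {
      name: 'ecosire-api',               // NestJS API — cluster mode enables
      script: 'dist/main.js',            // zero-downtime `pm2 reload`
      cwd: '/opt/ecosire/app/apps/api',
      instances: 2,
      exec_mode: 'cluster',
      node_args: '--max-old-space-size=1024',
      max_memory_restart: '1200M',       // restart just above the heap cap
      env: { NODE_ENV: 'production', PORT: 3001 },
    },
    {
      name: 'ecosire-web',               // Next.js runs in fork mode
      script: 'node_modules/.bin/next',
      args: 'start -p 3000',
      cwd: '/opt/ecosire/app/apps/web',
      exec_mode: 'fork',
      max_memory_restart: '1200M',
      env: { NODE_ENV: 'production' },
    },
  ],
};
EOF
```

Keeping this file in the repo (it contains no secrets — those live in .env.local) means every deployment starts the same process set.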
Elastic IP and DNS
An EC2 instance's public IP changes every time you stop and start it. Elastic IP provides a permanent IP:
# In AWS Console:
# 1. EC2 > Network & Security > Elastic IPs
# 2. Allocate Elastic IP address
# 3. Associate it with your instance
# Your IP is now permanent — update Cloudflare DNS to point to it
# A record: ecosire.com → 13.223.116.181 (your Elastic IP)
# A record: api.ecosire.com → 13.223.116.181
# A record: auth.ecosire.com → 13.223.116.181
In Cloudflare, set these records to "Proxied" (orange cloud) for web traffic. The Cloudflare proxy hides your actual EC2 IP, providing DDoS protection.
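Nginx Reverse Proxy
With DNS pointing at the instance, Nginx routes each domain to its internal app port. A minimal sketch for one domain, written to a local file first so you can review it — the domain and port are examples; create one file per application:

```shell
# Hypothetical server block for the main app on port 3000.
cat > ecosire.com.conf << 'EOF'
server {
    listen 80;
    server_name ecosire.com;

    location / {
        proxy_pass http://127.0.0.1:3000;   # app bound to localhost only
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket support
        proxy_set_header Connection "upgrade";
    }
}
EOF
# Then, on the instance:
# sudo cp ecosire.com.conf /etc/nginx/sites-available/ecosire.com
# sudo ln -s /etc/nginx/sites-available/ecosire.com /etc/nginx/sites-enabled/
# sudo nginx -t && sudo systemctl reload nginx
```

certbot --nginx (installed earlier) can then add the matching SSL server block for each domain.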
Storage: EBS Volume Management
EC2 instances include a root EBS volume. For production, you need enough space for build artifacts, logs, and Docker data:
# Check current disk usage
df -h
# Check which directories are consuming space
ncdu /
# Typical space requirements for a 5-app monorepo:
# - /opt/ecosire/app: ~2GB (code + node_modules + .next builds)
# - Docker data (/var/lib/docker): ~5GB
# - PM2 logs (/var/log/pm2): ~1GB (with rotation)
# - System: ~5GB
# Total: ~13GB minimum, recommend 30GB+ root volume
# If you need to resize an EBS volume (no downtime needed):
# 1. In AWS Console: EC2 > Volumes > Modify Volume
# 2. After resize completes, grow the filesystem:
sudo growpart /dev/nvme0n1 1     # Nitro instances (t3, c6i, r6i) expose the root disk as nvme0n1
sudo resize2fs /dev/nvme0n1p1    # for ext4 (Ubuntu default); use xfs_growfs / for XFS
# On older Xen-based instance types the device is /dev/xvda and /dev/xvda1 instead
CloudWatch Monitoring
EC2 provides basic CPU and network metrics by default. For memory and disk metrics, install the CloudWatch agent:
# Download and install CloudWatch agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i amazon-cloudwatch-agent.deb
# Create configuration
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
// /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
{
"agent": {
"metrics_collection_interval": 60,
"run_as_user": "cwagent"
},
"metrics": {
"append_dimensions": {
"AutoScalingGroupName": "${aws:AutoScalingGroupName}",
"ImageId": "${aws:ImageId}",
"InstanceId": "${aws:InstanceId}",
"InstanceType": "${aws:InstanceType}"
},
"metrics_collected": {
"mem": {
"measurement": ["mem_used_percent"],
"metrics_collection_interval": 60
},
"disk": {
"measurement": ["used_percent"],
"metrics_collection_interval": 60,
"resources": ["/", "/opt/ecosire"]
}
}
},
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"file_path": "/var/log/pm2/ecosire-api.err.log",
"log_group_name": "/ec2/ecosire/api-errors",
"log_stream_name": "{instance_id}"
},
{
"file_path": "/var/log/nginx/ecosire-error.log",
"log_group_name": "/ec2/ecosire/nginx-errors",
"log_stream_name": "{instance_id}"
}
]
}
}
}
}
# Start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
-a fetch-config \
-m ec2 \
-c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json \
-s
Automated Backups
Set up automated PostgreSQL backups to S3:
# Create backup script
mkdir -p /opt/ecosire/scripts
cat > /opt/ecosire/scripts/backup-db.sh << 'EOF'
#!/bin/bash
set -e
DATE=$(date +%Y-%m-%d-%H%M%S)
BACKUP_FILE="/tmp/ecosire-db-${DATE}.sql.gz"
S3_BUCKET="s3://your-backups-bucket/postgres"
# Dump the database (connects via Docker network)
docker exec ecosire-postgres pg_dump \
-U ecosire \
-d ecosire_dev \
--no-owner \
--no-privileges \
| gzip > "$BACKUP_FILE"
# Upload to S3
aws s3 cp "$BACKUP_FILE" "$S3_BUCKET/"
# Clean up local file
rm "$BACKUP_FILE"
# Keep only the 30 most recent backups in S3 (delete the rest)
aws s3 ls "$S3_BUCKET/" \
| awk '{print $4}' \
| sort \
| head -n -30 \
| xargs -I {} aws s3 rm "$S3_BUCKET/{}" 2>/dev/null || true
echo "Backup complete: ${DATE}"
EOF
chmod +x /opt/ecosire/scripts/backup-db.sh
# Schedule daily backups at 3 AM UTC
(crontab -l 2>/dev/null; echo "0 3 * * * /opt/ecosire/scripts/backup-db.sh >> /var/log/db-backup.log 2>&1") | crontab -
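A backup is only useful if you can restore it. Because the filenames embed a sortable timestamp, picking the newest dump is a one-liner; the usage below reuses the bucket, container, and database names from the backup script and assumes the instance's IAM role grants S3 read access:

```shell
# Picks the newest backup file name from `aws s3 ls` output
# (column 4 is the object key; names sort chronologically).
latest_backup() {
  awk '{print $4}' | sort | tail -n 1
}

# Usage sketch — run on the instance:
# S3_BUCKET="s3://your-backups-bucket/postgres"
# aws s3 cp "$S3_BUCKET/$(aws s3 ls "$S3_BUCKET/" | latest_backup)" - \
#   | gunzip \
#   | docker exec -i ecosire-postgres psql -U ecosire -d ecosire_dev
```

Test a restore into a scratch database periodically — an unverified backup routine is a common failure mode.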
IAM Role Configuration
Attach an IAM role to your EC2 instance for AWS service access (S3, CloudWatch, SES):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::your-products-bucket",
"arn:aws:s3:::your-products-bucket/*",
"arn:aws:s3:::your-backups-bucket",
"arn:aws:s3:::your-backups-bucket/*"
]
},
{
"Effect": "Allow",
"Action": [
"ses:SendEmail",
"ses:SendRawEmail"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData",
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
With the IAM role attached to your instance, AWS SDK calls use instance credentials automatically — no access key/secret key needed in your environment variables.
Common Pitfalls and Solutions
Pitfall 1: No Elastic IP — IP changes on restart
Stopping and starting (not rebooting) an EC2 instance assigns a new public IP. Without an Elastic IP, your DNS breaks. Allocate and associate an Elastic IP immediately after launching your instance.
Pitfall 2: SSH access locked out
If you lose your SSH key or lock yourself out by misconfiguring security groups, use EC2 Instance Connect (browser-based SSH) from the AWS console, or Session Manager (requires SSM agent installed, which comes with Ubuntu by default). As a last resort, detach the root EBS volume, attach it to another instance, fix the authorized_keys file, and reattach.
Pitfall 3: Running out of disk space during deployment
The .next build cache and node_modules grow substantially during development. Monitor disk usage with df -h and set a CloudWatch alarm on disk_used_percent > 80%. The ncdu command (ncdu /) identifies which directories are consuming space.
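A small local check complements the CloudWatch alarm — handy in cron, and it works even if the agent is down. A sketch using GNU df (present on Ubuntu):

```shell
#!/bin/bash
# Warn when root-volume usage crosses a threshold.
THRESHOLD=80
USED=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')   # strip header and '%'
if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "WARNING: root volume at ${USED}% used (threshold ${THRESHOLD}%)"
fi
```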
Pitfall 4: Memory exhaustion from Node.js OOM
Older Node.js versions capped the V8 heap at roughly 1.5GB on 64-bit systems; modern versions size the heap from available memory, but an unbounded heap can still trigger OOM crashes on a shared instance. Set node_args: '--max-old-space-size=1024' in your PM2 ecosystem file to explicitly cap heap usage, and set max_memory_restart slightly above this cap so PM2 auto-restarts the process if it is exceeded.
Pitfall 5: T3 CPU throttling under sustained load
T3 instances use "CPU credits" for burstable performance. Extended high-CPU operations (large builds, heavy database queries) exhaust credits, causing throttling to the "baseline" performance. Monitor CPUCreditBalance in CloudWatch. If credits are consistently depleted, upgrade to a c6i instance or enable "unlimited" mode (additional cost per CPU hour above baseline).
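Some back-of-envelope credit math makes the risk concrete. The figures below are AWS's published numbers for t3.large (36 credits earned per hour, balance capped at 24 hours of accrual); the 80% load is a hypothetical sustained workload:

```shell
# One CPU credit = one vCPU-minute at 100% utilization.
EARN_PER_HR=36                      # t3.large accrual rate
MAX_BALANCE=$((EARN_PER_HR * 24))   # balance caps at 24h of accrual = 864
VCPUS=2
LOAD_PCT=80                         # hypothetical sustained load per vCPU
SPEND_PER_HR=$((VCPUS * LOAD_PCT * 60 / 100))
DRAIN=$((SPEND_PER_HR - EARN_PER_HR))
HOURS=$((MAX_BALANCE / DRAIN))      # integer math; actual value is ~14.4h
echo "Spending ${SPEND_PER_HR} credits/hr, earning ${EARN_PER_HR} -> net drain ${DRAIN}/hr"
echo "A full balance of ${MAX_BALANCE} credits lasts ~${HOURS} hours"
```

So a t3.large can absorb roughly half a day of 80% sustained load from a full balance before throttling — plenty for builds and traffic spikes, not enough for a permanently hot workload.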
Frequently Asked Questions
Should I use EC2 or a managed service like AWS Elastic Beanstalk?
EC2 gives you full control: the exact Node.js version, file system access, ability to run Docker sidecar containers, and custom Nginx configuration. Elastic Beanstalk manages the underlying infrastructure but constrains your options and adds complexity for troubleshooting. For a team comfortable with Linux server management, EC2 with PM2 + Nginx is simpler and more predictable than managed platforms. Use Beanstalk if you want the platform to handle scaling and health management automatically.
How do I handle zero-downtime deployments on EC2?
PM2's pm2 reload command provides zero-downtime reloads for cluster-mode processes (such as the NestJS API). For Next.js (fork mode), build the new version first, then reload PM2. During the few seconds PM2 takes to swap fork-mode processes, the app may briefly refuse connections, so Nginx can return 502s unless it is configured to retry the upstream. For true zero downtime, use two EC2 instances behind an ALB (Application Load Balancer) and deploy to one while the other serves traffic.
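The build-then-reload sequence is worth scripting so it runs the same way every time. A sketch assuming the repo path, pnpm, Turborepo, and PM2 setup from earlier sections:

```shell
# Deploy function: build the new version fully before touching PM2,
# so a failed build never takes down the running processes (set -e aborts).
deploy() {
  set -e
  cd /opt/ecosire/app
  git pull --ff-only
  pnpm install --frozen-lockfile
  npx turbo run build                # build first...
  pm2 reload ecosystem.config.cjs    # ...then reload (zero-downtime in cluster mode)
  pm2 save
}
```

Run it as `deploy` after sourcing the script, or drop the function body into a standalone deploy.sh.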
When should I use auto-scaling?
Auto-scaling adds significant operational complexity — health checks, launch templates, load balancers, and session affinity considerations. For applications with predictable traffic, a properly sized EC2 instance with vertical scaling (bigger instance type) is simpler and often cheaper than horizontal auto-scaling. Consider auto-scaling when you have traffic spikes more than 5x baseline and the cost of running a permanently larger instance exceeds the complexity of auto-scaling.
How do I migrate from EC2 to containers later?
Start by containerizing your application with Docker (write a Dockerfile for each app). Test it locally with Docker Compose. Then choose between ECS Fargate (serverless containers, simpler) or EKS (Kubernetes, more powerful but complex). The migration is non-disruptive if you containerize incrementally — run the containerized version behind the same Nginx/Cloudflare setup, verify behavior, then cut over.
What's the most cost-effective way to run EC2 in production?
Purchase a 1-year Reserved Instance (no upfront or partial upfront) for your baseline instance — 40% cheaper than on-demand. For additional capacity during traffic spikes, use Spot Instances (up to 90% cheaper) if your application can handle interruptions. Set up a CloudWatch billing alarm at 80% of your monthly budget so unexpected cost increases are caught early. For production web applications, Reserved Instances provide the best balance of cost and reliability.
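As a sanity check on the discount math — the on-demand figure is the approximate t3.large monthly price from the instance table above, and the 40% discount is the typical 1-year no-upfront rate:

```shell
ON_DEMAND=70        # ~t3.large, per month, on-demand (approximate)
SAVINGS_PCT=40      # typical 1-year no-upfront Reserved Instance discount
RESERVED=$((ON_DEMAND * (100 - SAVINGS_PCT) / 100))
echo "Reserved: ~\$${RESERVED}/month vs ~\$${ON_DEMAND}/month on-demand"
```

Roughly $28/month saved on a single instance — modest alone, but it compounds across instances and years.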
Next Steps
Running a production web application on EC2 requires ongoing operational attention — security patches, disk management, performance monitoring, and deployment automation. ECOSIRE runs a production EC2 t3.large instance serving 5 applications across multiple domains, with automated backups, CloudWatch monitoring, and zero-downtime PM2 deployments.
Whether you need AWS infrastructure consulting, EC2 deployment setup, or complete DevOps support for your Node.js application, explore our services to see how we can help.
Written by
ECOSIRE Research and Development Team
Building enterprise-grade digital products at ECOSIRE. Sharing insights on Odoo integrations, e-commerce automation, and AI-powered business solutions.