
Cloud Cost Optimization Strategies That Actually Work in 2026

10 min read

78% of companies waste 21-50% of their cloud spend. For a startup on a $10K/mo AWS bill, that’s $2,100-$5,000 going nowhere every month. And nobody on your team has time to figure out where.

Most cloud cost optimization guides assume you have a dedicated FinOps team, a cloud architect, or at least someone whose full-time job is staring at billing dashboards. You don’t. You have engineers building product who occasionally glance at the AWS bill and wince.

This guide is for you. Seven strategies ranked by impact, with specific commands you can run, real dollar examples at startup scale, and a clear priority order so you know what to do first.

Start Here: Find Your Biggest Line Items

Before you optimize anything, you need to know where the money goes. The 80/20 rule applies to cloud bills just like everything else. Three or four services typically make up 80% of your total spend. Everything else is noise.

Run this to see your cost breakdown by service for last month:

aws ce get-cost-and-usage \
  --time-period Start=2026-02-01,End=2026-03-01 \
  --granularity MONTHLY \
  --metrics BlendedCost \
  --group-by Type=DIMENSION,Key=SERVICE

Don’t start by optimizing Lambda or CloudWatch or S3 storage classes. Find the three services eating the most money. For most small SaaS companies, that’s EC2 (compute), RDS (database), and data transfer. Those three are where you’ll find 80% of your savings.

We had a client spending $12K/mo on AWS. They’d never broken down the bill beyond the total. Twenty minutes of digging found three services making up $9,200 of that total. The other 40+ services combined cost $2,800. We ignored those entirely and focused on the big three.

Want us to run this analysis for you? Request a free async audit and we’ll send you a cost breakdown with specific savings recommendations. No call required.

Strategy 1: Kill Idle and Unused Resources

Effort: Low | Typical savings: 5-15% immediately

This is the easiest win and the one most teams skip. Resources get created for a project, the project ends, and the resources keep running. Nobody remembers to turn them off because nobody owns them.

Common culprits:

  • Orphaned EBS volumes — detached from any instance but still billing for storage
  • Unattached Elastic IPs — AWS charges $3.60/mo for each EIP not associated with a running instance
  • Idle load balancers — ALBs with zero traffic still cost ~$16/mo plus hourly charges
  • Forgotten dev/staging environments — that test cluster from three months ago is still running 24/7
  • Old snapshots — EBS snapshots accumulate silently and cost $0.05/GB/mo

Find orphaned EBS volumes:

aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query 'Volumes[*].{ID:VolumeId,Size:Size,Created:CreateTime}' \
  --output table

Find unattached Elastic IPs:

aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].{IP:PublicIp,AllocID:AllocationId}' \
  --output table
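Old snapshots are harder to eyeball. A hedged sketch that lists your own snapshots created before a cutoff date (the date is an example; adjust it, and note the AWS CLI’s JMESPath filter compares ISO timestamps as strings, which works because they sort lexicographically):

```shell
aws ec2 describe-snapshots \
  --owner-ids self \
  --query 'Snapshots[?StartTime<=`2025-12-01`].{ID:SnapshotId,SizeGB:VolumeSize,Started:StartTime}' \
  --output table
```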

We regularly find $500-$1,500/mo in orphaned resources during client audits. One client had a dozen EBS volumes totaling 2TB that had been detached for over a year. That’s $100/mo just sitting there.
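To put a dollar figure on what you find, a quick sketch, assuming gp3 storage at roughly $0.08/GB-month; the sizes here are hypothetical, so substitute the Size column from the describe-volumes output above:

```shell
# Hypothetical detached-volume sizes in GB; substitute your own
SIZES_GB="250 750 500"
TOTAL=0
for s in $SIZES_GB; do TOTAL=$((TOTAL + s)); done
# Detached volumes bill at the full storage rate (~$0.08/GB-month for gp3)
awk -v gb="$TOTAL" 'BEGIN { printf "%d GB detached ~= $%.2f/mo\n", gb, gb * 0.08 }'
```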

Strategy 2: Right-Size Your Instances

Effort: Medium | Typical savings: 20-40% on compute

Overprovisioning is the most expensive mistake in cloud infrastructure. Teams pick instance sizes based on what they think they might need, not what they actually use. Then nobody goes back to check.

The tell is low utilization. If your EC2 instances run at 5-15% CPU average, they’re oversized. Check CloudWatch metrics for the last 30 days. Look at average CPU, memory (if you have the CloudWatch agent), and network throughput.

We had a client running 3x m5.2xlarge instances for a web application. Those are 8 vCPUs and 32GB RAM each. Actual usage: 2GB RAM, 6% average CPU. We moved them to m5.large (2 vCPUs, 8GB RAM). Same application performance. Saved $800/mo across the three instances.

How to find oversized instances:

aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-1234567890abcdef0 \
  --start-time 2026-02-01T00:00:00Z \
  --end-time 2026-03-01T00:00:00Z \
  --period 86400 \
  --statistics Average
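With averages in hand, flagging candidates is a one-liner. A sketch, assuming you’ve collected each instance’s 30-day average CPU into a two-column file; the instance IDs and values here are made up:

```shell
# instance-id  avg-cpu-percent  (sample data; yours comes from CloudWatch)
cat > /tmp/cpu_averages.txt <<'EOF'
i-0aaa111 6.2
i-0bbb222 41.7
i-0ccc333 11.9
EOF
# Anything averaging under 15% CPU is a right-sizing candidate
awk '$2 < 15 { print $1, "averages", $2 "% CPU - likely oversized" }' /tmp/cpu_averages.txt
```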

Also enable AWS Compute Optimizer. It analyzes your CloudWatch metrics and gives ML-powered right-sizing recommendations for EC2, EBS, Lambda, and ECS. It’s free.

Right-sizing isn’t a one-time event. Review quarterly. Traffic patterns change, codebases grow, and what was right-sized in January might be oversized by June.

Strategy 3: Schedule Non-Production Resources

Effort: Low | Typical savings: ~65% on non-prod compute

Your dev and staging environments don’t need to run at 3am on a Sunday. But they do, because nobody set up a schedule.

AWS’s own data says 70% of hours in a week are non-working hours. If you shut down dev/staging instances during nights and weekends, you save roughly 65% on those resources. For a team spending $2K/mo on non-production compute, that’s $1,300/mo.
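The arithmetic is worth sanity-checking against your own schedule. A sketch, assuming a 12-hour weekday window (7am-7pm, Mon-Fri); adjust ON_HOURS to your team’s actual hours:

```shell
WEEK_HOURS=168                  # hours in a week
ON_HOURS=$((12 * 5))            # 12h/day, weekdays only (assumption)
OFF_HOURS=$((WEEK_HOURS - ON_HOURS))
# Integer percent of non-prod compute hours eliminated
SAVED_PCT=$((100 * OFF_HOURS / WEEK_HOURS))
echo "Off ${OFF_HOURS}h of ${WEEK_HOURS}h/week -> ~${SAVED_PCT}% saved on non-prod compute"
```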

Use AWS Instance Scheduler or a simple Lambda function triggered by EventBridge on a cron schedule. Tag your non-production resources with Environment=dev or Environment=staging, and the scheduler handles the rest.

# Tag resources for scheduling
aws ec2 create-tags \
  --resources i-1234567890abcdef0 \
  --tags Key=Schedule,Value=office-hours

One caveat: databases can take 10 minutes or more to come back up. If your team starts work at 9am and expects things to be running, set the start schedule 15 minutes before working hours begin.

Strategy 4: Use Savings Plans Instead of Reserved Instances

Effort: Low | Typical savings: Up to 72% vs on-demand

If you’ve been running the same compute workloads for three or more months and expect to keep them for a year, you should be on a Savings Plan. You’re overpaying by 30-72% otherwise.

Savings Plans are simpler than Reserved Instances. You commit to a dollar amount of compute per hour (not a specific instance type or region). AWS automatically applies the discount to your usage. If your patterns change, the discount follows.

Start conservative. Look at your steady-state compute spend over the last 3 months. Commit to 50-60% of that as a Savings Plan. You can always add more plans later. Over-committing locks you into paying for capacity you might not need.

For a startup spending $4K/mo on EC2 on-demand, a conservative Savings Plan covering $2K/mo of that at a 30% discount saves $600/mo. Over a year, that’s $7,200 for filling out a form.
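That example, worked as a sketch; the spend, commitment, and discount figures are the assumptions from the paragraph above, not quotes from AWS:

```shell
ON_DEMAND_MONTHLY=4000   # current EC2 on-demand spend ($/mo)
COMMIT_MONTHLY=2000      # conservative ~50% commitment
DISCOUNT_PCT=30          # assumed 1-year, no-upfront discount
MONTHLY_SAVINGS=$((COMMIT_MONTHLY * DISCOUNT_PCT / 100))
echo "\$${MONTHLY_SAVINGS}/mo -> \$$((MONTHLY_SAVINGS * 12))/yr saved"
```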

Reserved Instances still make sense for specific use cases (RDS databases, ElastiCache clusters), but for general compute, Savings Plans give you more flexibility with comparable discounts. For a deeper breakdown of costs at startup scale, see our small business guide to AWS bills.

Strategy 5: Use Spot Instances for Non-Critical Workloads

Effort: Medium | Typical savings: Up to 90% vs on-demand

Spot Instances use AWS’s spare capacity at steep discounts. The tradeoff is that AWS can reclaim them with a two-minute warning. That makes them perfect for workloads that can handle interruptions:

  • CI/CD build agents — if a build gets interrupted, just restart it
  • Batch processing jobs — checkpoint your progress, resume on a new instance
  • Test environments — nobody cares if the test cluster goes down briefly
  • Data processing — ETL jobs, log analysis, ML training

We use Spot for CI/CD across most of our client environments. Build costs drop 70-90%. The occasional interrupted build adds maybe five minutes to a developer’s day. The cost savings are worth it.
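To see what the discount looks like in your region right now, a sketch that pulls recent Spot prices; the instance type is an example:

```shell
aws ec2 describe-spot-price-history \
  --instance-types m5.large \
  --product-descriptions "Linux/UNIX" \
  --max-items 5 \
  --query 'SpotPriceHistory[*].{AZ:AvailabilityZone,Price:SpotPrice}' \
  --output table
```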

Don’t use Spot for production web servers, databases, or anything that needs to be available 24/7.

Strategy 6: Tag Everything

Effort: Medium (upfront) | Typical savings: Indirect but essential

Tags are how you answer “where is the money going?” Without them, you’re guessing.

Every resource should have at minimum:

  • Environment — production, staging, dev
  • Team — which team owns this resource
  • Application — which service or app it belongs to
  • CostCenter — for billing allocation (if you have multiple products)

Enforce tagging through your infrastructure as code. In Terraform:

resource "aws_instance" "web" {
  # ... instance config ...

  tags = {
    Environment = "production"
    Team        = "backend"
    Application = "api"
  }
}

Once everything is tagged, enable AWS Cost Allocation Tags. Your billing dashboard goes from “you spent $12K on EC2” to “the backend team spent $6K on production API servers and $3K on staging and $3K on dev.” Now you can optimize with precision instead of guessing.
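Once cost allocation tags are active, the same Cost Explorer call from the top of this guide can group by tag instead of service. A sketch, assuming an Environment tag:

```shell
aws ce get-cost-and-usage \
  --time-period Start=2026-02-01,End=2026-03-01 \
  --granularity MONTHLY \
  --metrics BlendedCost \
  --group-by Type=TAG,Key=Environment
```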

Strategy 7: Automate Cost Monitoring

Effort: Low | Typical savings: Prevents cost spikes

Set up AWS Budgets to alert you before you overspend. Create a monthly budget at your expected spend level, with alerts at 80% and 100%. It takes five minutes and costs nothing.

aws budgets create-budget \
  --account-id 123456789012 \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json
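A minimal budget.json sketch, assuming a $10K/mo limit; the 80% and 100% alert thresholds live in notifications.json:

```json
{
  "BudgetName": "monthly-total",
  "BudgetLimit": { "Amount": "10000", "Unit": "USD" },
  "TimeUnit": "MONTHLY",
  "BudgetType": "COST"
}
```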

Then build a monthly ritual. Spend 15 minutes on the first Monday of each month reviewing your cost breakdown. Look for anomalies, check utilization trends, and verify your Savings Plans are covering what they should. Most teams don’t need fancy FinOps tooling. They need the discipline to look at the bill once a month and ask “did anything change?”

Enable AWS Cost Anomaly Detection as well. It uses ML to flag unusual spending patterns and alerts you before a forgotten resource or misconfigured service runs up a surprise bill.

The Strategy Nobody Talks About: Outsource the Optimization

Every strategy above requires someone on your team to do the work. Find the waste. Right-size the instances. Set up the schedules. Enforce tagging. Review the bill monthly.

If your team is already stretched thin building product, that optimization work won’t happen. It’ll sit on someone’s backlog for months while you keep overpaying.

This is where fractional DevOps pays for itself. A senior consultant on retainer does the optimization, implements the automation, and keeps monitoring the bill month over month. We typically find 30-50% savings in the first audit alone.

The math works out: a $3-5K/mo retainer that finds $3-5K/mo in cloud savings is effectively free. Everything above that is profit. One client had a $15K/mo AWS bill. We found $4,200/mo in waste in the first week. The retainer paid for itself before the first invoice arrived.

If you’re not sure whether your team needs a full-time DevOps engineer or a fractional arrangement, the cloud bill is a good signal. If your bill has never been optimized, there’s almost certainly enough waste to fund the retainer.

What NOT to Optimize

A few things engineers waste time on that aren’t worth the effort:

Don’t optimize $5/mo services. If CloudWatch Logs costs you $8/mo, don’t spend two days redesigning your logging pipeline. Focus on the services that cost $2K+/mo.

Don’t over-commit on Savings Plans before your usage stabilizes. If you’re a fast-growing startup and your compute needs might double in 6 months, committing to a 3-year Reserved Instance at your current usage is a trap. Start with short-term, conservative Savings Plans.

Don’t sacrifice reliability for cost savings. The $500/mo you save by shrinking your RDS instance is worthless when the database crashes under load during a traffic spike. Right-size to your P95, not your average. The cheapest outage is the one that doesn’t happen.

FAQ

How much can I realistically save? Most small teams we audit save 30-50% on their first optimization pass. The biggest wins come from killing idle resources and right-sizing instances. After the initial pass, ongoing optimization typically saves another 5-10% per quarter as usage patterns change.

Should I use Reserved Instances or Savings Plans? Savings Plans for compute (EC2, Fargate, Lambda). They’re more flexible and offer comparable discounts. Reserved Instances for specific services like RDS where Savings Plans don’t apply. Start conservative and stack additional plans as your usage stabilizes.

Is cloud cost optimization worth it for a small bill ($3-5K/mo)? Yes. Even at $5K/mo, a 30% savings is $1,500/mo or $18K/year. That’s real money for a startup. The quick wins (killing idle resources, scheduling non-prod) take less than an hour and save money immediately.

How often should I review cloud costs? Monthly, 15 minutes. Check your cost breakdown by service and tag. Look for anomalies. Verify Savings Plan coverage. That’s it. Don’t turn it into a weekly ceremony. Monthly catches problems before they compound.

Start With Visibility

Cloud cost optimization isn’t a one-time project. It’s a habit. But the habit starts with seeing where the money goes.

Run the cost breakdown command. Find your top 3 services. Check for orphaned resources. Right-size the obvious outliers. Those four steps alone will save most teams 20-30% in the first week.

If you want a senior engineer to go through your entire AWS bill and tell you exactly what to fix, get a free async audit. We’ll send you a Loom walkthrough and a written report with specific savings recommendations, dollar amounts, and priority order. No call. No pressure. Just the numbers.
