AWS cost optimisation tool to reduce cloud costs

AWS’s broad range of services and pricing options gives you the flexibility to get the performance and capacity you need. Enterprises choose AWS for its scalability and security, and cloud adoption on AWS has become one of the dominant technology trends. One of the most appealing aspects of AWS is its pay-as-you-go pricing model.

While AWS offers significant advantages over traditional on-premise infrastructure, its flexibility and scalability often lead to out-of-control costs. AWS bills can be opaque and complicated to analyze. Without dedicated tooling to identify where costs come from and how to manage them, they can quickly erode your profit margins.

It’s not uncommon to see businesses report that they are overspending in the cloud, that a double-digit percentage of their budget is wasted on unused services, or that resources are routinely provisioned with far more capacity than needed.

Failure to reduce AWS costs is not necessarily the fault of businesses. AWS pricing is difficult to analyse. A customer who believes they pay only for what they use, rather than for what they are provisioned, can easily find that cloud bills exceed expectations. There are also additional services associated with instances, such as EBS volumes and Elastic IP addresses, that keep driving up costs even after the instances are terminated.

Our development team has created an AWS cost optimisation solution that can help you reduce AWS costs and ensure that cloud spending is in line with your organisation’s expected budgets. Learn how it can help you in this article.


What is Cost Optimization in AWS?

To help you get started with AWS cost optimization, we built an advanced Amazon cost analyser tool. It lets you visualize, analyse, and manage your AWS costs and usage over time, see spending patterns across different dimensions, and break down costs across resources. Once you understand what drives your AWS costs, you can apply cloud cost optimization measures to reduce them. AWS cost optimization means implementing cost-saving best practices to get the most out of your cloud investment.

Why should you optimize your AWS costs?

Unlike on-premise environments, which often require high initial capital expenditure with low ongoing costs, cloud investments are operating expenditure. As a result, cloud costs can spiral out of control, and it becomes challenging to track their efficiency over time. Cloud auto-scaling gives organizations the flexibility to increase or reduce their cloud storage, networking, compute, and memory capacity, adapting to fluctuating demand at any time. Under the AWS pricing model, businesses should pay only for the resources they use. But without a cost optimization tool to monitor spending and identify cost anomalies, they can quickly face an expensive cost overrun.

Utility to calculate AWS costs

Have you ever wondered what the price is for your logically grouped environments with a cloud provider like AWS, GCP, Azure, etc.? Have you found a tool that can answer this question quickly and for free? In this article, we will create a tool that captures AWS EC2 resources and calculates their price. We will also show one approach to implementing it and leave room for extending the idea. We will use the AWS SDK for JavaScript and Node.js to build this command line utility.


Let us assume, for simplicity, that you have two environments: dev and prod. Each environment consists of two services, backend and frontend, where each service is just a set of static EC2 instances, and each EC2 instance is tagged with at least these tags:

  • Env: dev
  • Service: frontend
  • Name:

The cost optimisation tool we will build

So, by the end of this article we will have a command line tool, show-price, which accepts a single parameter, path. To see the price of all environments, run show-price -p “*”; to check the price of all services, run show-price -p “*.*”. The output looks like:

$ show-price -p "*"

.dev = 0.0812$ per hour
.prod = 0.0116$ per hour

$ show-price -p "*.*"

.dev.frontend = 0.0406$ per hour
.dev.backend = 0.0406$ per hour
.prod.backend = 0.0058$ per hour
.prod.frontend = 0.0058$ per hour
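The article never shows how the -p pattern is turned into the regular expression that the tool matches paths against. A minimal sketch, assuming each “*” matches exactly one dot-separated segment and that node paths start with a leading dot (the function name is ours, not from the article):

```javascript
// Convert a show-price path pattern like "*.*" into a RegExp.
// Assumption: "*" matches exactly one dot-separated segment,
// and every node path starts with a leading dot.
function patternToRegexp(pattern) {
  const body = pattern
    .split('.')
    .map((part) => (part === '*' ? '[^.]+' : part))
    .join('\\.');
  return new RegExp(`^\\.${body}$`);
}

console.log(patternToRegexp('*').test('.dev'));            // true
console.log(patternToRegexp('*.*').test('.dev.frontend')); // true
console.log(patternToRegexp('*').test('.dev.frontend'));   // false
```

With this convention, “prod.*” matches “.prod.frontend” but not “.prod” itself, which mirrors the sample outputs above.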



First of all, we have to configure our local environment and provide AWS credentials. So:

# Create a folder for your AWS IAM access key and secret key
$ mkdir -p ~/.aws/

# Create the credentials file
$ touch ~/.aws/credentials

# Paste your IAM access key and secret key into this file
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA***
aws_secret_access_key = gDJh****

# Clone the project and install the show-price utility
$ git clone && cd show-price
$ npm install

Data structure definition

As we work with hierarchical data, it is best to use a simple tree structure. Our AWS infrastructure can then be represented as a tree of TreeNode objects, as in the example below:

* env name
*   |_ service 1
*          |_ instanceId 1: key: name, value: price
*          |_ instanceId 2: key: name, value: price
*   |_ service 2
*          |_ instanceId 3: key: name, value: price
*          |_ instanceId 4: key: name, value: price

Having this structure, we can easily navigate over it and extract the information we need. More details about the tree implementation can be found here.
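The full tree class lives in the linked implementation; as a reference, here is a minimal sketch of the TreeNode shape that the methods below rely on (name, value, children, path, isLeaf). The constructor signature and the addChild helper are our assumptions, not the article’s exact API:

```javascript
// Minimal TreeNode sketch: a name, a numeric value (price), children,
// a dot-separated path, and an isLeaf() helper. Constructor arguments
// and addChild() are illustrative assumptions.
class TreeNode {
  constructor(name, value = 0, parent = null) {
    this.name = name;
    this.value = value;
    this.children = [];
    // The root has an empty name, so paths come out as ".dev.frontend.i-..."
    this.path = parent ? `${parent.path}.${name}` : name;
  }

  addChild(name, value = 0) {
    const child = new TreeNode(name, value, this);
    this.children.push(child);
    return child;
  }

  isLeaf() {
    return this.children.length === 0;
  }
}

// Build a slice of the example hierarchy: env -> service -> instance.
const root = new TreeNode('');
const dev = root.addChild('dev');
const frontend = dev.addChild('frontend');
frontend.addChild('i-0123456789abcdef0', 0.0058); // hypothetical instance id

console.log(frontend.path);     // .dev.frontend
console.log(frontend.isLeaf()); // false
```

Storing the path on each node keeps the later pattern matching a simple string comparison instead of a tree walk per query.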

Data structure processing

To process our tree, we need two main methods. The first is TreeNode.summarizePrice, which recursively sums the prices of all nodes in a subtree up to its root. Code:

static summarizePrice(node) {
 if (node.isLeaf()) return Number(node.value);
 for (const child of node.children) {
   node.value += TreeNode.summarizePrice(child);
 }
 return Number(node.value);
}

The second is TreeNode.displayPrice, which walks the tree and prints a node if its path matches a given pattern. Code:

static displayPrice(node, pathRegexp) {
 if (node.path.match(pathRegexp)) {
   console.log(`${node.path} = ${node.value}$ per hour`);
 }
 for (const child of node.children) {
   TreeNode.displayPrice(child, pathRegexp);
 }
}
Let’s store prices for all instance types in a simple CSV file, which we can read and attach to every leaf node of the tree (each leaf is basically an AWS instance). And, finally, let’s extract data from the AWS cloud and use the TreeNode class to structure it the way we need.
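A sketch of that extraction step, assuming the AWS SDK for JavaScript v3 (@aws-sdk/client-ec2) is installed. The price table, tag names, and function names are illustrative; the grouping logic is a pure function, so it can be read and tested without AWS access:

```javascript
// Hypothetical per-hour prices, as if read from the CSV file.
const PRICES = { 't2.nano': 0.0058, 't2.micro': 0.0116 };

// Turn DescribeInstances-style records into { path: price } entries
// using the Env/Service tags from the article's tagging scheme.
function instancesToPaths(instances) {
  const result = {};
  for (const inst of instances) {
    const tags = Object.fromEntries(
      (inst.Tags || []).map((t) => [t.Key, t.Value])
    );
    const path = `.${tags.Env}.${tags.Service}.${inst.InstanceId}`;
    result[path] = PRICES[inst.InstanceType] ?? 0;
  }
  return result;
}

// The actual AWS call (requires credentials; not executed here).
async function fetchInstances(region = 'us-east-1') {
  const { EC2Client, DescribeInstancesCommand } =
    require('@aws-sdk/client-ec2');
  const client = new EC2Client({ region });
  const data = await client.send(new DescribeInstancesCommand({}));
  return data.Reservations.flatMap((r) => r.Instances);
}

// Pure-function usage with sample data:
const sample = [{
  InstanceId: 'i-009105b93c431c998',
  InstanceType: 't2.nano',
  Tags: [{ Key: 'Env', Value: 'prod' }, { Key: 'Service', Value: 'front' }],
}];
console.log(instancesToPaths(sample));
// { '.prod.front.i-009105b93c431c998': 0.0058 }
```

From there, each path/price pair is inserted into the tree as a leaf and summarizePrice rolls the totals up to the environment level.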


Final result displays AWS cost optimisation opportunities

After all these steps, we have a tool that can display costs per environment, per service, or even per specific instance. For example:

# Display price per envs only
$ show-price -p "*"
.prod = 0.0174$ per hour
.dev = 0.0116$ per hour

# Display price per envs per services
$ show-price -p "*.*"
.prod.front = 0.0174$ per hour
.dev.front = 0.0058$ per hour
.dev.back = 0.0058$ per hour

# Display price for a specific env
$ show-price -p "prod"
.prod = 0.0174$ per hour

# Display price for a specific env and all its services
$ show-price -p "prod.*"
.prod.front = 0.0174$ per hour

# Display price for all specific services within all envs
$ show-price -p "*.front"
.prod.front = 0.0174$ per hour
.dev.front = 0.0058$ per hour

# Display price for a specific instance in a specific env and service
$ show-price -p "prod.front.i-009105b93c431c998"
.prod.front.i-009105b93c431c998 = 0.005800$ per hour

# Display price of all instances for an env
$ show-price -p "prod.*.*"
.prod.front.i-009105b93c431c998 = 0.005800$ per hour
.prod.front.i-01adbf97655f57126 = 0.005800$ per hour
.prod.front.i-0c6137d97bd8318d8 = 0.005800$ per hour

Main reasons for wasted cloud spend

AWS non-production resources

Non-production resources, such as development, staging, testing, and quality assurance environments, are typically needed only during the work week, about 40 hours. However, AWS on-demand charges accrue for as long as the resources are running. So spending on always-on non-production resources is wasted at night and on weekends, roughly 75% of the week.
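A quick back-of-envelope check of that figure, using an illustrative hourly rate:

```javascript
// Cost of a non-production instance left running 24/7 vs scheduled
// for a 40-hour work week. The hourly rate is illustrative.
const hourlyRate = 0.0116;      // e.g. a small on-demand instance
const hoursPerWeek = 24 * 7;    // 168
const workHours = 40;

const alwaysOn = hourlyRate * hoursPerWeek;
const scheduled = hourlyRate * workHours;
const wastedShare = (hoursPerWeek - workHours) / hoursPerWeek;

console.log(alwaysOn.toFixed(4));                  // 1.9488
console.log(scheduled.toFixed(4));                 // 0.4640
console.log((wastedShare * 100).toFixed(0) + '%'); // 76%
```

Scheduling alone cuts the weekly bill of such an instance by about three quarters.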

AWS oversized resources

Oversized resources are a second common reason for increased AWS costs. AWS offers a range of sizes for each instance type, and many companies default to larger sizes than they need because they don’t know what capacity they will require in the future. A study by ParkMyCloud found that the average utilization of provisioned AWS resources was just 2%, an indication of routine overprovisioning. Shrinking an instance by one size roughly halves its cost; going down two sizes saves about 75%. The easiest way to reduce AWS costs quickly and significantly is to stop paying for unnecessary capacity.
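The 50%/75% figures follow from the fact that instance sizes within a family roughly double in price per step. A small check with illustrative rates:

```javascript
// Instance sizes in a family roughly double in price per step,
// so downsizing by one step halves the cost and by two steps
// cuts it by 75%. Rates below are illustrative, not real prices.
const sizes = { large: 0.0928, medium: 0.0464, small: 0.0232 };

const oneSizeDown = 1 - sizes.medium / sizes.large;
const twoSizesDown = 1 - sizes.small / sizes.large;

console.log((oneSizeDown * 100).toFixed(0) + '%');  // 50%
console.log((twoSizesDown * 100).toFixed(0) + '%'); // 75%
```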


Using our solution, you get a cost optimization process built on a series of techniques such as:

  • Identifying poorly managed resources
  • Eliminating waste
  • Reserving capacity for higher discounts
  • Right-sizing compute services

Monitor and measure your cloud spend

The tips below are some practices you can incorporate into your cost optimization strategy to reduce your AWS spend.

  • See which AWS services are costing you the most and why.
  • Align AWS cloud costs with business metrics that matter to you.
  • Empower engineering to better report on AWS costs to finance.
  • Identify cost optimization opportunities you may not be aware of – such as architectural choices you can make to improve profitability.
  • Identify and track unused instances so you can remove them manually or automatically to eliminate waste.
  • Get cost optimization opportunities – such as instance size recommendations.
  • Detect, track, tag, and delete unallocated persistent storage such as Amazon EBS volumes when you delete an associated instance.
  • Identify soon-to-expire AWS Reserved Instances (RIs) and renew or replace them before they lapse, since workloads that fall back to on-demand pricing become more expensive.
  • Introduce cost accountability by showing your teams how each project impacts the overall business bottom line, competitiveness, and ability to fund future growth. 
  • Tailor your provisioning to your needs.
  • Automate cloud cost management and optimization. Test native AWS tools before using more advanced third-party tools.
  • Schedule on and off times unless workloads need to run all the time.
  • Select the Delete on Termination checkbox when you first create or launch an EC2 instance, so that its attached EBS volumes are automatically removed when you terminate the instance.
  • Decide which workloads should run on Reserved Instances and which on On-Demand pricing.
  • Keep your latest snapshots for a few weeks, then delete older ones as you create more recent snapshots that you can use to recover your data in the event of a disaster.
  • Avoid remapping an Elastic IP address more than 100 times per month; beyond that limit, each remap is charged. If you cannot, use an optimization tool to find and release unassociated Elastic IP addresses after you have terminated the instances they were bound to.
  • Upgrade to the latest generation of AWS instances to improve performance at a lower cost.
  • Use optimization tools to find and remove unused Elastic Load Balancers.
  • Optimize your cloud costs as an ongoing part of your DevOps culture.

AWS cost optimisation is a continuous process

Applying best practices to AWS cost optimisation and using cloud spend optimisation tools is an ongoing process. Cost optimisation should look not only at how you can reduce your AWS spend, but also at how you can align that spend with the business outcomes you care about and optimise your environment to meet your business goals.

A good approach to AWS cost optimization starts with getting a detailed picture of your current costs, identifying opportunities to optimize them, and then making changes. Even with our utility, analyzing the results and implementing changes in your cloud is not an easy task.

While cost optimization has traditionally focused on reducing waste and purchasing plans (such as reserved instances), many forward-thinking organizations are now increasingly focused on technical enablement and architecture optimization.


Enterprises have realised that cost optimisation is not just about reducing AWS costs, but also about giving technical teams the cost information they need to make cost-aware development decisions that lead to profitability. In addition, engineering needs to be able to report cloud spend properly to finance and see how that spend aligns with the business metrics they care about. Engineers should be able to see the cost impact of their work and how code changes affect AWS spend.

Your AWS cloud has to be monitored at all times to find out when assets are underutilised or not used at all. The utility will also help you spot opportunities to reduce costs by terminating, deleting, or releasing zombie assets. It’s also important to monitor Reserved Instances to ensure they are fully utilised. Of course, it’s not possible to manually monitor a cloud environment 24/7/365, so many organisations take advantage of policy-driven automation.

Hire cloud experts to manage and reduce AWS costs

If you are worried about overspending, our solution can automate cost anomaly alerts that notify engineers of cost fluctuations so teams can address any code issues to prevent cost overruns.

Many organisations end up under-resourcing, compromising performance or security, or under-utilising AWS infrastructure. Working with AWS cloud experts is the best way to create an efficient AWS cost optimisation strategy. While a company could continue to analyse its costs and implement improvements on its own, new issues constantly arise.

Our technical team can help you avoid these traps and reduce your AWS cloud costs. With continuous monitoring, you can be sure you aren’t missing any cloud cost optimisation opportunities.
