AWS’s broad range of services and pricing options gives you the flexibility to get the performance and capacity you need. Enterprises choose AWS for its scalability and security, and AWS has become one of the technology trends companies follow. One of its most appealing aspects is the “pay as you go” pricing model.
While AWS offers significant advantages over traditional on-premise infrastructure, its flexibility and scalability often lead to out-of-control costs. AWS bills can be opaque and complicated to analyze, and without dedicated tooling to identify where costs come from and how to manage them, they can quickly erode your profit margins.
It’s common to see businesses admit that they overspend in the cloud, that unused services waste a double-digit percentage of their budget, or that they provision resources with far more capacity than they need.
Failure to reduce AWS costs is not necessarily the fault of businesses: AWS pricing is genuinely hard to analyze. When a cloud customer assumes they only pay for what they use rather than for what is provisioned, the bill can easily exceed expectations. Additional services associated with instances can also drive up costs even after the instances themselves are terminated.
Our development team has created an AWS cost optimization solution to help you reduce AWS costs and ensure that cloud spending aligns with your expected budget. This article shows how it can help you.

What is Cost Optimization in AWS?
To show how you can get started with AWS cost optimization, we built an advanced Amazon cost analyzer tool. It helps you visualize, analyze, and manage your AWS costs and usage over time, your spending patterns across different dimensions, and your cost breakdowns across various resources. Once you understand what drives your AWS costs up, you can apply cloud cost optimization measures and reduce them. AWS cost optimization means implementing cost-saving best practices to get the most out of your cloud investment.
Why should you optimize your AWS costs?
Unlike on-premise environments, which often need high initial capital expenditures with low ongoing costs, cloud investments are operating expenses. As a result, cloud costs can go out of control, while tracking their efficiency over time becomes challenging. Cloud auto-scaling allows organizations to increase or reduce cloud storage, networking, computing, and memory performance. In this way, they can adapt to fluctuating computing demands at any time. Under the AWS costing approach, businesses should pay only for the resources that they use. However, they can quickly face an expensive cost overrun if they don’t have a cost optimization tool to monitor spending and identify cost anomalies.
Utility to calculate AWS costs
Have you ever wondered what your logically grouped environments cost with a cloud provider like AWS, GCP, or Azure? Have you found a tool that answers this question quickly and for free? In this article, we will build a helpful utility that captures AWS EC2 resources and calculates their price in detail. We will also show an approach to implementing it and leave room for extending the idea. We will use the AWS SDK for JavaScript (the NodeJS counterpart of Python’s boto3) and NodeJS to run this command-line utility.
Assumptions
Let us assume you have two environments (for simplicity): dev and prod. Each environment consists of two services, Backend and Frontend, where each service is just a set of static EC2 instances, and each EC2 instance carries at least these tags:
- Env: dev
- Service: frontend
- Name: frontend-service-01.dev
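Given this tagging convention, grouping instances by environment and service reduces to reading each instance’s Tags array. A minimal sketch, where the Tags shape mirrors what EC2’s DescribeInstances returns and the tagsToPath helper and dotted path format are our own illustration, not part of any SDK:

```javascript
// Build a hierarchical path like ".dev.frontend.i-0123..." from EC2-style tags.
// The Tags array shape matches DescribeInstances output; the helper name
// and path format are illustrative assumptions.
function tagsToPath(instanceId, tags) {
  const byKey = Object.fromEntries(tags.map((t) => [t.Key, t.Value]));
  return `.${byKey.Env}.${byKey.Service}.${instanceId}`;
}

const path = tagsToPath('i-009105b93c431c998', [
  { Key: 'Env', Value: 'dev' },
  { Key: 'Service', Value: 'frontend' },
  { Key: 'Name', Value: 'frontend-service-01.dev' },
]);
// path === '.dev.frontend.i-009105b93c431c998'
```

With paths built this way, environments and services become prefixes you can match against the user’s pattern.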

Cost optimization tool that we build
By the end of this article, we will have a command-line tool, show-price, which accepts a single parameter, -p (a path pattern). To see the price of all environments, run show-price -p "*"; to check the price of all services, run show-price -p "*.*". The output will look like the following:
$ show-price -p "*"
.dev = 0.0058$ per hour
.prod = 0.0058$ per hour
$ show-price -p "*.*"
.dev.frontend = 0.0406$ per hour
.dev.backend = 0.0406$ per hour
.prod.backend = 0.0058$ per hour
.prod.frontend = 0.0058$ per hour
Implementation
Configuration
First, we have to configure our local environment and provide AWS credentials:
# Create a folder with the AWS IAM access key and secret key
$ mkdir -p ~/.aws/
# Add credentials file
$ > ~/.aws/credentials
# Paste your IAM access key and secret key into this file
$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIA***
aws_secret_access_key = gDJh****
# Clone the project and install a show-price utility
$ git clone [email protected]:vpaslav/show-price.git && cd show-price
$ npm install

Data structure definition
Since we work with hierarchical data, a simple tree structure fits best. Our AWS infrastructure can be represented as a tree of TreeNode objects, as in the example below:
* env name
* |_ service 1
* |_ instanceId 1: key: name, value: price
* |_ instanceId 2: key: name, value: price
* |_ service 2
* |_ instanceId 3: key: name, value: price
* |_ instanceId 4: key: name, value: price
Having this structure, we can easily navigate it and extract the information we need.
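The snippets below reference a TreeNode class without showing its definition; a minimal version sufficient for them might look like this (the field and method names follow the snippets, the rest is our assumption about the project’s internals):

```javascript
// Minimal tree node: leaves hold an hourly price in `value`; inner nodes
// aggregate their children's prices. `path` is the dotted lookup key,
// e.g. ".dev.frontend.i-0123...". A sketch; the real project may differ.
class TreeNode {
  constructor(path, value = 0) {
    this.path = path;     // e.g. ".dev" or ".dev.frontend"
    this.value = value;   // price per hour (leaf) or aggregate (inner node)
    this.children = [];
  }

  isLeaf() {
    return this.children.length === 0;
  }

  addChild(child) {
    this.children.push(child);
    return child;
  }
}
```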
Data structure processing
To process our tree, we need two main methods. The first is TreeNode.summarizePrice, which recursively sums the prices of all nodes in a subtree up to the root:
static summarizePrice(node) {
  if (node.isLeaf()) return Number(node.value);
  node.value = 0; // reset before accumulating children's prices
  for (const child of node.children) {
    node.value += TreeNode.summarizePrice(child);
  }
  return Number(node.value);
}
The second is TreeNode.displayPrice, which walks the tree and prints every node whose path matches the given pattern:
static displayPrice(node, pathRegexp) {
  if (node.path.match(pathRegexp)) {
    console.log(`${node.path} = ${node.value}$ per hour`);
  }
  for (const child of node.children) {
    TreeNode.displayPrice(child, pathRegexp);
  }
}
Let’s store the prices for all instance types in a simple CSV file, which we can read and attach to every leaf node of the tree — each leaf being an AWS instance. Finally, let’s fetch the data from AWS and use the TreeNode class to structure it the way we need.
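As an illustration, parsing such a price file could look like this, assuming a two-column instanceType,pricePerHour layout (both the layout and the parsePrices name are our assumptions, not the project’s actual format):

```javascript
// Parse a CSV of "instanceType,pricePerHour" lines into a lookup map.
// The file format is an assumption; adjust to your actual pricing source.
function parsePrices(csv) {
  const prices = {};
  for (const line of csv.trim().split('\n')) {
    const [type, price] = line.split(',');
    prices[type.trim()] = Number(price);
  }
  return prices;
}

const prices = parsePrices('t2.nano,0.0058\nt2.small,0.023\n');
// prices['t2.nano'] === 0.0058
```

Each leaf node’s value can then be looked up by the instance type reported by DescribeInstances.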

The final result displays AWS cost optimization opportunities
After all these manipulations, we have a handy tool that displays costs per environment, per service, or even per specific instance. For example:
# Display price per envs only
$ show-price -p "*"
.prod = 0.0174$ per hour
.dev = 0.0116$ per hour
# Display price per envs per services
$ show-price -p "*.*"
.prod.front = 0.0174$ per hour
.dev.front = 0.0058$ per hour
.dev.back = 0.0058$ per hour
# Display price for a specific env
$ show-price -p "prod"
.prod = 0.0174$ per hour
# Display price for a specific env and all its services
$ show-price -p "prod.*"
.prod.front = 0.0174$ per hour
# Display price for all specific services within all envs
$ show-price -p "*.front"
.prod.front = 0.0174$ per hour
.dev.front = 0.0058$ per hour
# Display price for a specific instance in a specific env and service
$ show-price -p "prod.front.i-009105b93c431c998"
.prod.front.i-009105b93c431c998 = 0.005800$ per hour
# Display price of all instances for an env
$ show-price -p "prod.*.*"
.prod.front.i-009105b93c431c998 = 0.005800$ per hour
.prod.front.i-01adbf97655f57126 = 0.005800$ per hour
.prod.front.i-0c6137d97bd8318d8 = 0.005800$ per hour
The main reasons for wasted cloud spend
AWS non-production resources
Non-production resources, such as development, staging, testing, and quality assurance environments, are typically needed only during the work week — about 40 hours. However, AWS on-demand charges accrue for as long as the resources are running, so spending on always-on non-production resources is wasted at night and on weekends (roughly 75% of the week).
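A quick back-of-the-envelope calculation of the idle share for a 40-hour work week:

```javascript
// A 40-hour work week leaves the other 128 of 168 weekly hours idle —
// roughly 76% of on-demand spend wasted if non-production runs 24/7.
const hoursPerWeek = 24 * 7; // 168
const businessHours = 40;
const idleShare = (hoursPerWeek - businessHours) / hoursPerWeek;
// idleShare ≈ 0.76
```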
AWS oversized resources
Oversized resources are often the second reason for increased AWS costs. AWS offers a range of sizes for each instance type, and many companies default to the largest size available because they don’t yet know what capacity they will need. A study by ParkMyCloud found that the average utilization of provisioned AWS resources was just 2%, an indication of routine overprovisioning. Shrinking an instance by one size reduces its cost by about 50%; shrinking by two sizes saves about 75%. The easiest way to reduce AWS costs quickly and significantly is therefore to stop paying for capacity you don’t use.
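Those savings figures follow from the price ladder within an EC2 instance family, where each size step roughly doubles the hourly price. As a sketch of that rule of thumb (an approximation, not an exact pricing formula):

```javascript
// Within an EC2 family, each size step roughly doubles price, so downsizing
// by n sizes saves about 1 - 0.5^n of the instance cost. Approximation only.
function rightSizingSavings(steps) {
  return 1 - Math.pow(0.5, steps);
}
// rightSizingSavings(1) === 0.5  (one size down: ~50% saved)
// rightSizingSavings(2) === 0.75 (two sizes down: ~75% saved)
```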

Using our solution, you get a cost optimization process that reduces cloud costs through a series of optimization techniques such as:
- Identifying poorly managed resources
- Eliminating waste
- Reserving capacity for higher discounts
- Right-sizing computing services for scaling.
Monitor and measure your cloud spend
Below are some practices you can incorporate into your cost optimization strategy to reduce your AWS spend.
- See which AWS services cost you the most and why.
- Align AWS cloud costs with business metrics.
- Empower engineering to better report on AWS costs to finance.
- Identify cost optimization opportunities you may not be aware of, such as architectural choices you can make to improve profitability.
- Identify and track unused instances so you can remove them to eliminate waste.
- Get cost optimization opportunities, such as instance size recommendations.
- Detect, track, tag, and delete persistent unallocated storage, such as Amazon EBS volumes, when you delete an associated instance.
- Identify soon-to-expire AWS Reserved Instances (RIs), since workloads on expired RIs silently fall back to more expensive on-demand rates.
- Introduce cost accountability by showing your teams how each project impacts the business’s overall bottom line, competitiveness, and ability to fund future growth.
- Tailor your provisioning to your needs.
- Automate cloud cost management and optimization. Test native AWS tools before using more advanced third-party tools.
- Schedule on and off times unless workloads need to run all the time.
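The storage point above can be automated: in the response of EC2’s DescribeVolumes, any volume whose State is 'available' is allocated but attached to nothing, and keeps accruing charges. A sketch of the filtering step (the response shape mirrors the EC2 API; the helper name is our own):

```javascript
// An EBS volume in the 'available' state is not attached to any instance
// but is still billed. Object shape mirrors EC2 DescribeVolumes output.
function findUnattachedVolumes(volumes) {
  return volumes.filter((v) => v.State === 'available').map((v) => v.VolumeId);
}

const orphans = findUnattachedVolumes([
  { VolumeId: 'vol-aaa', State: 'in-use' },
  { VolumeId: 'vol-bbb', State: 'available' },
]);
// orphans → ['vol-bbb']
```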
Some additional ways to reduce cloud costs through business analysis:
- Select the Delete on Termination checkbox when creating or launching an EC2 instance; when you terminate the instance, its attached EBS volumes are then automatically removed.
- Decide which workloads you want to run on Reserved Instances and which on On-Demand pricing.
- Keep your latest snapshot for a few weeks, then delete it as you create more recent snapshots you can use to recover your data in the event of a disaster.
- Avoid remapping an Elastic IP address more than 100 times per month, since AWS charges for remaps beyond that limit. If you cannot, use an optimization tool to find and release unassociated IP addresses after terminating the instances they were bound to.
- Upgrade to the latest generation of AWS instances to improve performance at a lower cost.
- Use optimization tools to find and remove unused Elastic Load Balancers.
- Optimize your cloud costs as an ongoing part of your DevOps culture.
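The Elastic IP advice above can likewise be checked programmatically: in EC2’s DescribeAddresses response, an address without an AssociationId is allocated but not in use. A sketch, with an illustrative helper name:

```javascript
// An Elastic IP with no AssociationId is allocated but unused — AWS bills
// for idle addresses. Object shape mirrors EC2 DescribeAddresses output.
function findUnassociatedIps(addresses) {
  return addresses.filter((a) => !a.AssociationId).map((a) => a.PublicIp);
}

const idle = findUnassociatedIps([
  { PublicIp: '3.3.3.3', AssociationId: 'eipassoc-123' },
  { PublicIp: '4.4.4.4' },
]);
// idle → ['4.4.4.4']
```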
AWS cost optimization is a continuous process
Applying best practices to AWS cost optimization and using cloud spend optimization tools is an ongoing process. It should look at reducing your AWS spending, aligning that spending with essential business outcomes, and optimizing your environment to meet your business goals.
An excellent approach to AWS cost optimization starts with getting a detailed picture of your current costs, identifying opportunities to optimize them, and then implementing changes. Doing this manually can be challenging; our utility helps you analyze the results and implement changes in your cloud.
While cost optimization has traditionally focused on reducing waste and purchasing plans (such as reserved instances), many forward-thinking organizations increasingly focus on technical enablement and architecture optimization.

Enterprises have realized that cost optimization is not just about reducing AWS costs but also about providing technical teams with the cost information they need to make cost-driven development decisions that lead to profitability. Engineering also needs to be able to properly report cloud spending to finance and see how that spending aligns with the business metrics they care about, so engineers can see the cost impact of their code changes on AWS spend.
You must monitor the AWS cloud to determine when assets are underutilized or unused. The utility will also help you spot opportunities to reduce costs by terminating, deleting, or releasing zombie assets. Monitoring Reserved Instances is vital to ensure they are fully utilized. Of course, it’s impossible to manually scan a cloud environment 24/7, 365 days per year, so many organizations use policy-driven automation.
Hire cloud experts to manage and reduce AWS costs
If you are worried about overspending, our solution can automate cost anomaly alerts that notify engineers of cost fluctuations so teams can address any code issues to prevent cost overruns.
Many organizations end up under-resourcing, compromising performance or security, or underutilizing their AWS infrastructure. Working with AWS cloud experts is the best way to create an efficient AWS cost optimization strategy: while a company could continue to analyze its costs and implement improvements on its own, new issues keep arising.
Our technical team can help you avoid these traps and reduce your AWS cloud costs. With continuous monitoring, you can be sure you aren’t missing any cloud cost optimization opportunities.
Let’s talk about your project
Drop us a line! We would love to hear from you.