
Amazon RDS is a powerful managed database service that can be created, scaled, and configured with ease. However, like most AWS services, RDS can rack up high (and, more importantly, wasteful) costs very quickly if it is not set up and monitored properly. This overview gives you, the database maintainer, the high-level principles and the tools to get the most for your money.
*In this outline I will focus on the PostgreSQL engine, but the same or very similar configurations and metrics are available for the other DB engines. Only Amazon Aurora Serverless differs substantially.
Main Sources of Cost for Amazon RDS
There are four main cost-related configurations to consider when creating or updating an Amazon RDS instance: Instance Class, Multi-AZ Configuration, Hard Drive Storage, and Provisioned IOPS. On top of those come the supporting services: Snapshots and Snapshot Exports to S3. Let's dive deep into each one.
Instance Class (COST IMPACT: HIGH)

This is the primary driver of cost. It defines the hardware purpose, vCPUs, CPU credits, memory, and network performance. It is billed by the hour (plus burst credits, if applicable), and RDS has a vast array of instance options to pick from. A list of instance types and their purposes is here, and pricing is here.
Multi-AZ Configuration (COST IMPACT: HIGH)

It adds redundancy to your instance in case of failure. It is a must for a production database, but much less so for development and staging. It can almost double the cost, because a warm standby instance is always running, ready to take over if a problem occurs with the main instance. Make sure you don't enable it outside of a production environment.
Enabling the Multi-AZ configuration also increases the cost of your Instance Class, Hard Drive Storage, and Provisioned IOPS.
Hard Drive Storage (COST IMPACT: LOW TO MEDIUM)

It can be Magnetic or SSD, with or without Provisioned IOPS. The choice here is not much of a concern cost-wise, even though magnetic is cheaper than SSD: storage is billed per GB-month at a very low rate. Of course, if your database runs into the hundreds of thousands of GB, you will want to pay closer attention. Provisioned IOPS, however, can get very costly.
Pro Tip: When you set up the storage, check "Storage Autoscaling". Autoscaling itself is free (you pay only for the extra storage used), and it will prevent your DB from going down when major fluctuations of data occur. The cause is not always growth in your data; it can be read replica lag, data not being vacuumed, or indexes not being rebuilt when an influx of requests occurs (to name a few reasons). Of course, you should not rely on this long-term: adjust the "Allocated storage" once the growth proves permanent.
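To make "permanent vs. transient" concrete, here is a minimal sketch of the decision in the tip above. The function name, the week of sample readings, and the 90% threshold are all illustrative assumptions, not anything RDS provides; in practice the usage samples would come from the FreeStorageSpace CloudWatch metric.

```python
# Hypothetical sketch: decide whether autoscaled storage growth looks
# permanent, meaning the "Allocated storage" baseline should be raised.
# The 90% threshold is an illustrative assumption.

def should_raise_allocated_storage(allocated_gb, used_gb_samples, threshold=0.9):
    """Return True if storage use stays above `threshold` of the allocated
    baseline across all recent samples, i.e. the growth is sustained rather
    than a transient spike from replica lag or vacuum debt."""
    return all(used / allocated_gb >= threshold for used in used_gb_samples)

# A week of daily usage readings (GB) against a 100 GB baseline:
print(should_raise_allocated_storage(100, [92, 93, 95, 94, 96, 97, 95]))  # True
print(should_raise_allocated_storage(100, [92, 60, 55, 58, 57, 56, 55]))  # False
```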
Snapshots (RDS backups) (COST IMPACT: LOW)
This is very low and almost insignificant in the overall bill. The thing to watch out for is the retention period, as you are billed for backup storage as well as data transfer. For non-production DBs, set the retention to a short period of 3 to 7 days to eliminate long-term storage costs.
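A back-of-the-envelope way to see what retention costs you: RDS keeps a full snapshot plus incremental daily changes for the retention window, and backup storage up to the size of your provisioned storage is free. The sketch below is a simplification with made-up figures, not real AWS pricing.

```python
# Hedged estimate of how much backup storage a retention period holds:
# one full snapshot plus incremental daily changes kept for the window.

def backup_storage_gb(db_size_gb, daily_change_gb, retention_days):
    return db_size_gb + daily_change_gb * retention_days

# 200 GB database changing ~2 GB/day (illustrative numbers):
print(backup_storage_gb(200, 2, 7))   # 214 GB with 7-day retention
print(backup_storage_gb(200, 2, 35))  # 270 GB with 35-day retention
```

Only the storage above your provisioned size (200 GB here) is billed, which is why short retention on non-production DBs keeps this cost near zero.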
Provisioned IOPS (COST IMPACT: MEDIUM TO HIGH)
Provisioned IOPS are an extra configuration that is not applied by default, but be careful if you use them, as they will impact cost significantly.
Snapshot Exports to S3 (COST IMPACT: LOW)
This is not an option most people use, but be aware of the cost associated with it nonetheless: it grows as your database size grows.
Performance Measurement & Instance Adjustments
Now that you have the Database running, you need to focus on monitoring and optimizations. I have identified the main metrics for evaluation:
- Network Throughput
- CPU Utilization and/or CPU Credit Usage
- Freeable Memory
- Read IOPS and Write IOPS
- Burst Balance and/or CPU Credit Usage

This is not a complete list, but a few of the basic ones you should pay attention to. Those particular metrics will determine if a change is needed. As we are talking about savings here, you are looking for wasted resources or resource utilization that is less than 50% of what the Instance Class allows. The ideal rule is that on average you want to utilize 75% of the allowed resource. Yes, you can even go up to 85%-90%, but that will really be pushing the limits and leaving you cornered if an unexpected surge happens. Let’s dive deep into each metric.
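The 50% / 75% / 90% rule of thumb above can be sketched as a simple classifier. The function and thresholds are illustrative, not an AWS feature; in practice you would feed it the average of a CloudWatch metric (e.g. CPUUtilization over the last few weeks).

```python
# Sketch of the right-sizing rule of thumb described above:
# under 50% average utilization is wasted capacity, ~75% is the ideal,
# and past ~90% leaves no headroom for an unexpected surge.

def rightsizing_signal(avg_utilization_pct, waste_floor=50, ceiling=90):
    if avg_utilization_pct < waste_floor:
        return "scale down"
    if avg_utilization_pct > ceiling:
        return "scale up"
    return "ok"

print(rightsizing_signal(35))  # scale down
print(rightsizing_signal(75))  # ok
print(rightsizing_signal(95))  # scale up
```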
Network Throughput
Usage here will dictate changes in the Instance Class and, specifically, the *Network Performance* specification.
CPU Utilization and/or CPU Credit Usage
Evaluate the CPU usage. If you are using too little, you can scale down to an instance with a lower core count and fewer vCPU units.
Freeable Memory
Evaluate how much memory you use, especially at your peaks; around 75% utilization is normal. If you are using too little, you can scale down to an instance with a lower memory capacity.
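Note that CloudWatch reports FreeableMemory (what is still available), so utilization has to be derived against the instance's total memory. A minimal sketch; the instance size and the sample reading are illustrative:

```python
# Derive memory utilization from the FreeableMemory CloudWatch metric:
# what the instance class provides minus what is still freeable.

def memory_utilization_pct(instance_memory_gb, freeable_memory_gb):
    used = instance_memory_gb - freeable_memory_gb
    return 100 * used / instance_memory_gb

# db.m5.large provides 8 GiB; a peak FreeableMemory reading of 4.4 GiB
# would mean only ~45% utilization -- a candidate to scale down:
print(round(memory_utilization_pct(8, 4.4)))  # 45
```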
Read IOPS and Write IOPS
These metrics measure the speed of access to your hard disk. Changes here will dictate changes to your hard drive type and/or provisioning (or de-provisioning) additional IOPS. More info about IOPS and determining your base limit
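For General Purpose (gp2) SSD storage, the base IOPS limit is tied to volume size: 3 IOPS per GB, with a floor of 100 and a cap of 16,000, and volumes under roughly 1 TB can additionally burst to 3,000 IOPS while burst credits last. A quick sketch of that baseline calculation:

```python
# Baseline IOPS for a gp2 volume: 3 IOPS per GB,
# floored at 100 IOPS and capped at 16,000 IOPS.

def gp2_baseline_iops(volume_gb):
    return min(max(3 * volume_gb, 100), 16000)

print(gp2_baseline_iops(20))    # 100    (floor applies)
print(gp2_baseline_iops(500))   # 1500
print(gp2_baseline_iops(6000))  # 16000  (cap applies)
```

If your observed Read/Write IOPS consistently sit above this baseline, you are living off burst credits and should either grow the volume or consider Provisioned IOPS.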
Burst Balance and/or CPU Credit Usage
*Only applicable to "Burstable Performance Instances". If your Instance Class falls into this category, pay close attention: burstable instances can burst over the instance's capacity, and that will incur extra cost. On the other hand, if you are not utilizing this option, you might want an instance with a lower CPU credit/hour rate. You will have to monitor both metrics, "Burst Balance" and "CPU Credit Usage", in order to evaluate the best fit for you.
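One way to combine the two metrics is to compare credits spent against credits earned for the instance class. The function and its thresholds are illustrative assumptions; the earning rate is published per instance type in the AWS docs (e.g. db.t3.medium earns 24 credits per hour).

```python
# Sketch: classify the fit of a burstable (t-class) instance from its
# minimum Burst Balance (%) and average CPU Credit Usage per hour.
# The 20% and 50% thresholds are illustrative assumptions.

def burstable_fit(burst_balance_pct_min, cpu_credit_usage_avg, credits_earned_per_hour):
    if burst_balance_pct_min < 20:
        return "at risk of throttling or surcharges - consider a larger instance"
    if cpu_credit_usage_avg < 0.5 * credits_earned_per_hour:
        return "credits mostly unused - a smaller t-class instance may do"
    return "ok"

# Sample readings against a db.t3.medium (24 credits/hour):
print(burstable_fit(85, 6, 24))   # credits mostly unused ...
print(burstable_fit(10, 30, 24))  # at risk ...
```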
Warning: None of these metrics works in isolation, because the Instance Class is not configurable per metric; instances come as predefined sets of configurations. You will need to determine which combination is the best fit for your use case.
Pro Tip: Enable Performance Insights, at least for the first few months if not permanently in production. With that tool you will be able to fine-tune queries, connections, and a ton of other things in your stack or directly in the app that uses the database. Optimizations driven by those insights can lead to broader instance optimizations, potentially an Instance Class scale-down, and more savings. More Info about Performance Insights
Additional Ways to Save Costs
Reserved Instances

A Reserved Instance (RI) is the way to go if you are going to run the same instance for a long time. It can save you roughly 30% to 50% if you commit for a year or more. An interesting detail is that you pay for normalized units of a particular Instance Class family, so the commitment lets you scale up or down WITHIN THE SAME INSTANCE CLASS while still utilizing the reservation. More info on Reserved Instances
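The savings math itself is simple. The prices below are illustrative placeholders, not current AWS rates; check the RDS pricing page for the real on-demand and effective RI hourly costs of your instance class.

```python
# Percentage saved by a Reserved Instance over on-demand pricing.

def ri_savings_pct(on_demand_hourly, ri_effective_hourly):
    return 100 * (1 - ri_effective_hourly / on_demand_hourly)

# Illustrative (not current) Single-AZ prices for a mid-size instance:
on_demand = 0.178           # $/hour, on-demand
ri_1yr_no_upfront = 0.113   # effective $/hour, 1-year term, no upfront
print(round(ri_savings_pct(on_demand, ri_1yr_no_upfront)))  # 37
```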
Set Up an Account/Service Budget
In the end, if you are unsure which configuration properties you need, you should at least set up a budget and monitor accordingly. You can accomplish this under My Billing Dashboard -> Budgets. It will not directly save you money, but it is the proactive way to monitor costs.
Conclusion
This has been a very brief, high-level intro to evaluating your Amazon RDS instance, but it should point you in the right direction. If you need a deeper dive, work through the Amazon RDS documentation thoroughly, keep monitoring via CloudWatch and Performance Insights, and examine what works best for your use case.
Also, Read
The Most for Your Money from Amazon S3
Contacts
I am a Software, Data, and Machine Learning Consultant, certified in AWS Machine Learning & AWS Solutions Architect – Professional. Get in touch if you need help with your next Machine Learning project. To stay updated with my latest articles and projects, follow me on Medium.