Taming AWS Costs With Cost and Usage Reports + AWS Athena

The AWS Cost and Usage Report auto-generates hourly or daily billing reports and pushes the data to an S3 bucket.

The Cost and Usage Report, or CUR, is the successor to the older Detailed Billing Report (DBR). Both break down estimated costs by AWS service, usage type, resource tags, and more, and deliver the data to S3, where it can be aggregated and analyzed for better visibility and control of AWS spending.

The difference is that while DBR generates plain CSV files, CUR generates more database-friendly GZIP-compressed CSV or Parquet files. AWS Athena, AWS Redshift, and AWS QuickSight can then ingest these directly for analysis.

Introduction to AWS Athena

Athena is a serverless query service. Data is stored as static files in S3 and read at query time using Presto, a distributed ANSI SQL engine.

Athena reads the data using an approach called schema-on-read, meaning it superimposes the schema on the underlying data when you execute a query. There is no server software or daemon for us to manage, hence the term "serverless."

The advantage of Athena is that it frees us from the usual database maintenance and performance-tuning activities, and we can bank on S3's excellent durability to keep the data safe. The caveat is that the data format and partition structure become critical for query performance.
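
To make the idea of schema-on-read concrete, here is a minimal sketch of what an external, partitioned table definition looks like in Athena. The database, table, columns, and bucket below are purely illustrative; the CUR setup described in the next section creates the real table for us.

SQL

-- Hypothetical example: the schema is declared over files that already
-- exist in S3, and the partition columns (year, month) map to S3 prefixes
-- so Athena can skip irrelevant data at query time.
CREATE EXTERNAL TABLE my_db.billing_lines (
    line_item_id string,
    product_code string,
    cost double
)
PARTITIONED BY (year string, month string)
STORED AS PARQUET
LOCATION 's3://my-example-bucket/billing/';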

Overall, this architecture makes Athena very performant, reliable, and cost-effective. 

Setting up Cost and Usage Reports in AWS

Setting up CUR with Athena is very simple.

AWS provides a CloudFormation template with everything ready to go. Just follow the instructions in the documentation to enable CUR, configure an S3 bucket, and create the CloudFormation stack.

Once the stack is ready, we can check the CUR status by going to our database and running the following query: 

SQL

SELECT status FROM cost_and_usage_data_status;


The above query will return either READY or UPDATING, the latter indicating that Athena may return incomplete results.

If the status is READY, we can do a quick sanity check by counting the total number of line items in our Athena table so far.

SQL

SELECT
    COUNT(identity_line_item_id) AS Count
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4));


 

The MONTH(CURRENT_DATE) expression fetches the current month as an integer, and CAST() converts it to a string so it can be compared against the month partition column.

The above query will return the number of rows currently in the billing table.

Athena CUR Columns for Cost Wrangling

Let's explore our dataset and identify the columns we will need for analyzing our costs. The documentation has details of all the columns that CUR populates. Here are a few examples (a quick way to list every column in your own table is shown right after this list):

Billing Columns

Resource Columns

Configuration Columns

Usage Columns
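
Before writing cost queries, it can help to see exactly which columns our CUR table exposes and what the rows look like. Assuming the database and table names created by the CloudFormation setup (yours may differ), the following two queries do the trick:

SQL

-- List every column Athena knows about in the CUR table
SHOW COLUMNS IN my_cur_db.my_cur_table;

-- Peek at a few rows to get a feel for the data
SELECT * FROM my_cur_db.my_cur_table LIMIT 10;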

Breaking Down the Cost Into Components

First, let’s get the total cost for the current month.

The line_item_blended_cost field contains the charge for a line item, and discount_total_discount contains any discounts that we need to apply to arrive at our net cost.

SQL

SELECT
    SUM(line_item_blended_cost + discount_total_discount)
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4));


 

Now let's find out the total cost incurred by individual accounts. This query is helpful if we have multiple member accounts under a master account.

SQL

SELECT
    line_item_usage_account_id AS AccountID,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
GROUP BY line_item_usage_account_id
ORDER BY Cost DESC;


 

Let’s see the cost incurred by individual AWS services, from highest to lowest.

SQL

SELECT
    line_item_product_code AS ProductCode,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
GROUP BY line_item_product_code
ORDER BY Cost DESC;


 

Putting it all together, we can retrieve the cost incurred by each service within each account as follows.

SQL

SELECT
    line_item_usage_account_id AS AccountID,
    line_item_product_code AS ProductCode,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
GROUP BY line_item_usage_account_id, line_item_product_code
ORDER BY Cost DESC;


 

Another great feature of CUR is that it also populates any cost allocation tags we have activated. We can use these to see costs grouped by individual cost centers.

Let's say we have a Project resource tag whose value identifies the project each resource belongs to. We can group costs by project to identify where our spending is highest.

SQL

SELECT
    resource_tags_user_project AS Project,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
GROUP BY resource_tags_user_project
ORDER BY Cost DESC;


 

Combining the above query with resource IDs, we can see which resources within each project have the highest cost.

SQL

SELECT
    line_item_resource_id AS ResourceID,
    resource_tags_user_project AS Project,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
GROUP BY line_item_resource_id, resource_tags_user_project
ORDER BY Cost DESC;


 

Identifying the Most Expensive Resources Within Services

Let's say we want to find out the most expensive Lambda functions, EC2 instances, RDS instances, etc.

We can do this pretty quickly by using the line_item_product_code column and aggregating based on resource IDs.

For example, to find out the most expensive Lambda functions:

SQL

SELECT
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AWSLambda'
    AND line_item_resource_id != ''
GROUP BY line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;


 

The query above is pretty simple: filter for all resources of the AWSLambda service, aggregate by resource IDs, and sort by the total cost of those resources. By combining this with resource tags, we can automate daily or weekly email reports to keep an eye on our project expenses.
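
For example, here is a sketch of the same Lambda breakdown with the Project tag from earlier added to the grouping, assuming that tag is populated on our functions:

SQL

SELECT
    resource_tags_user_project AS Project,
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AWSLambda'
    AND line_item_resource_id != ''
GROUP BY resource_tags_user_project, line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;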

Let’s keep going. 

Most expensive EC2 resources:

SQL

SELECT
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonEC2'
    AND line_item_resource_id != ''
GROUP BY line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;


 

Most expensive RDS resources:

SQL

SELECT
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonRDS'
    AND line_item_resource_id != ''
GROUP BY line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;


 

Most expensive CloudWatch log groups:

SQL

SELECT
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonCloudWatch'
    AND line_item_resource_id LIKE '%log-group%'
GROUP BY line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;


 

Most expensive DynamoDB tables:

SQL

SELECT
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonDynamoDB'
    AND line_item_resource_id != ''
GROUP BY line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;


 

Most expensive S3 buckets:

SQL

SELECT
    line_item_resource_id AS Resource,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonS3'
    AND line_item_resource_id != ''
GROUP BY line_item_resource_id
ORDER BY Cost DESC
LIMIT 25;


 

Athena can also help us identify the costliest operations within a service. For example, how do we identify runaway Glacier transition costs in S3? Let's start by breaking S3 costs down by operation, and then narrow in on the transitions.

Most expensive S3 bucket operations:

SQL

SELECT
    line_item_resource_id AS Resource,
    line_item_operation AS Operation,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonS3'
    AND line_item_resource_id != ''
GROUP BY line_item_resource_id, line_item_operation
ORDER BY Cost DESC
LIMIT 25;
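
To single out the Glacier transitions from the results above, we can filter on the operation name. This is a rough sketch: the exact line_item_operation values for lifecycle transitions depend on the destination storage class (they generally contain the word Transition), so verify the operation names in your own data first.

SQL

SELECT
    line_item_resource_id AS Resource,
    line_item_operation AS Operation,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND line_item_product_code = 'AmazonS3'
    -- Assumption: transition operations contain "Transition" in their name
    AND line_item_operation LIKE '%Transition%'
GROUP BY line_item_resource_id, line_item_operation
ORDER BY Cost DESC
LIMIT 25;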

Another important use case is to examine the cost of resources by their names.

Let's say I have several resources named after my project, myfrontendproject. These could be ec2-myfrontendproject, rds-myfrontendproject-master, s3-assets-myfrontendproject, etc. We can use the Name tag column to group costs by resource name as well.

SQL

SELECT
    resource_tags_user_name AS ResourceName,
    SUM(line_item_blended_cost + discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table
WHERE
    MONTH = CAST(MONTH(CURRENT_DATE) AS varchar(4))
    AND YEAR = CAST(YEAR(CURRENT_DATE) AS varchar(4))
    AND lower(resource_tags_user_name) LIKE '%myfrontendproject%'
GROUP BY resource_tags_user_name
ORDER BY Cost DESC
LIMIT 25;


 

Things to Remember when Using Athena

The serverless nature of Athena provides tremendous cost and reliability benefits, but there are a few considerations.

First, there is a default concurrency limit of 20, which caps how many queries can run in parallel. Refer to the service limits page for more details.

Second, when writing SELECT queries, we should limit the columns to what we need. Athena charges based on the amount of data scanned per query, so selecting only the required columns is an excellent performance and cost optimization.

Third, the data storage format has a significant impact on query processing time. Parquet is more performant than CSV since it is columnar and supports Snappy compression.
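
For example, if our CUR data had been delivered as CSV, a CTAS (CREATE TABLE AS SELECT) query could rewrite it as Snappy-compressed Parquet. This is only a sketch; the source table, target table, and S3 location below are placeholders:

SQL

-- Rewrite a CSV-backed table as Snappy-compressed Parquet using CTAS.
-- Table names and the S3 location are hypothetical.
CREATE TABLE my_cur_db.my_cur_table_parquet
WITH (
    format = 'PARQUET',
    parquet_compression = 'SNAPPY',
    external_location = 's3://my-bucket/cur-parquet/'
)
AS
SELECT * FROM my_cur_db.my_cur_csv_table;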

Fourth, when joining multiple tables, keep the larger table on the left of the join and the smaller one on the right. Athena distributes the right table to worker nodes and streams the left one for the join.
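
For illustration, here is a sketch with the large CUR table on the left and a small, hypothetical lookup table of account names on the right:

SQL

-- my_cur_db.account_names is a hypothetical dimension table
-- mapping account IDs to friendly names.
SELECT
    a.account_name,
    SUM(c.line_item_blended_cost + c.discount_total_discount) AS Cost
FROM my_cur_db.my_cur_table c
JOIN my_cur_db.account_names a
    ON c.line_item_usage_account_id = a.account_id
GROUP BY a.account_name
ORDER BY Cost DESC;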

Fifth, Athena does not support user-defined functions, stored procedures, indexes, prepared statements, or EXPLAIN statements. A full list of limitations is available in the documentation.

Conclusion

The Athena and CUR combination can help alleviate a lot of my-cloud-bill-is-a-huge-black-box problems. The Cost Explorer and Budget Reports are fine, but there are some problems only a heinous quadruple-join can solve.

CUR data is also available to Redshift and QuickSight. If Athena's concurrency limits are causing issues, or if you need a full-blown RDBMS for cost analysis, then Redshift is the way to go. For visualizations, QuickSight can use the CUR columns directly but not the results of our Athena queries, so something like Redash or Tableau might be better for more complex dashboards.

Further Reading

The 4 AWS Pricing Principles with a Critical Eye

 

 

 

 
