In the previous AWS Insiders podcast episode, author of “The DynamoDB Book” Alex Debrie joined AWS superfan and CTO of ESW Capital Rahul Subramaniam for an in-depth discussion on the merits of AWS DynamoDB. They continued their discussion in episode three, providing fascinating use case examples and additional insider tips for DynamoDB customers.
AWS DynamoDB’s infinite scale possibilities
Most people find it hard to digest the AWS claim that DynamoDB has potentially infinite scale. But the principles beneath AWS DynamoDB are simple, and you can see how it will scale as your data grows. Alex noted that the most important thing to know about DynamoDB is its primary key structure: every single item has what's called a partition key. This partition key determines which shard or node a particular item goes to.
When a request comes into DynamoDB, the first thing it hits is a global request router that is deployed across the entire region and handles all the tables within that region. This global request router:
- Receives the request for that item
- Pulls up the metadata for that table
- Hashes the partition key value that was sent in for that item
- Determines, based on that hash, which node in the fleet to go to
Each node in the fleet holds about 10 gigabytes of data, so if you have a hundred-gig table, you'll have 10 different primaries behind the scenes serving that data. The request router hashes the partition key and says, "Oh, this item is on primary four." That's an O(1), constant-time lookup to figure out which record in the hash map this belongs to. As you go from a hundred gigs to a terabyte, you now have a hundred different partitions.
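The routing described above can be sketched as a toy model. This is a simplification for illustration only: the 10 GB partition size comes from the discussion, but the MD5 hash, the modulo mapping, and all names here are assumptions, not DynamoDB's internal scheme.

```python
import hashlib
import math

PARTITION_SIZE_GB = 10  # each partition holds roughly 10 GB of data


def partition_count(table_size_gb: int) -> int:
    # A 100 GB table spreads across 10 partitions; 1 TB across 100.
    return math.ceil(table_size_gb / PARTITION_SIZE_GB)


def route(partition_key: str, partitions: int) -> int:
    # Hash the partition key and map it onto a partition. The cost of
    # this lookup is O(1) regardless of how many partitions exist.
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % partitions


print(partition_count(100))    # 10 partitions for a 100 GB table
print(partition_count(1000))   # 100 partitions for a 1 TB table
print(route("customer#1234", partition_count(100)))
```

The key property the sketch illustrates: whether `partitions` is 10 or 100, `route` does the same constant amount of work, which is why the lookup stays fast as the table grows.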
With AWS DynamoDB, whether you have 10 partitions, a hundred partitions, or a thousand partitions, it will take the same amount of time to get to the specific partition. That same level of scalable efficiency also applies when you are locating particular items or item ranges within that partition.
AWS DynamoDB use cases
The first use case Alex brings up is a billing and charging system that DevFactory is building for telecoms. Think of all the tiny charging requests generated every second, every minute, by cell phones and internet-connected devices sending 3G, 4G, and 5G data all over the place. These telecoms receive and process hundreds of millions of calls every second and must instantly manage a range of variables without downtime, including whether a call can be authorized, for how long, how much will be billed and to where, as well as post-call information such as duration. The scale of this, and the need to keep track of all those little charge requests, is immense.
A lot of the older-generation systems are built on relational databases, which can do this work but with higher latency, tighter limits on how many concurrent transactions they can handle, and costs that climb quickly at scale. Alex and the DevFactory team designed this system from scratch using AWS DynamoDB. It can handle global telecom requests and pretty significant amounts of traffic at a really low, predictable latency. With this system you can optimize for the important paths that need to be handled: authorizing particular charge requests, making sure they're good, and allowing them. You don't want someone trying to make a call to wait 30 seconds while the system figures out whether they have enough credit on their bill.
Another use case example Alex mentioned is a customer that was in an industry which was positively impacted by the COVID-19 pandemic. There was an increase in usage caused by changing user habits, so this customer foresaw a large scaling event on the horizon. They also had cyclical usage patterns, where usage was higher during the day than at night and usage was higher during the week than the weekend, and there were periods during the year where usage was much higher than other times. The customer was in a low usage period, but recognized that in a few months that was going to increase. Using AWS DynamoDB, they had the predictability and scalability to manage this event and plan for it ahead of time. DynamoDB’s ability to scale up and down during the day, week, or different periods of the year also proved incredibly beneficial and would have been difficult to achieve with a relational database.
Top three tips for AWS DynamoDB customers
Alex divided his tips into three different learning levels, as follows.
Brand new DynamoDB users
If this is your first time doing DynamoDB or NoSQL data modeling, don't treat it like a relational database. Understand how to think about DynamoDB data modeling, the principles of single-table design, and designing around your access patterns first. The common error here is over-normalizing your data: using the relational model and trying to do joins in your application code rather than pre-joining your data to handle your access patterns. So if you're brand new, understand that it's different and understand how single-table design works, because that teaches you that something else is going on here.
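"Pre-joining" can be sketched with a hypothetical single-table layout: a customer profile and that customer's orders share one partition key, so a single query returns them together, no application-side join needed. The item attributes and the local `query` stand-in below are illustrative assumptions, not a real DynamoDB API call.

```python
# Hypothetical single-table layout: one partition key (PK) groups a
# customer with their orders; the sort key (SK) distinguishes items.
items = [
    {"PK": "CUSTOMER#alice", "SK": "PROFILE",          "name": "Alice"},
    {"PK": "CUSTOMER#alice", "SK": "ORDER#2021-01-15", "total": 42},
    {"PK": "CUSTOMER#alice", "SK": "ORDER#2021-02-03", "total": 17},
]


def query(pk: str, sk_prefix: str = "") -> list:
    # Local stand-in for a DynamoDB Query with a begins_with(SK, ...)
    # key condition: one request, items come back already "joined".
    return [i for i in items if i["PK"] == pk and i["SK"].startswith(sk_prefix)]


profile_and_orders = query("CUSTOMER#alice")           # profile + both orders
orders_only = query("CUSTOMER#alice", "ORDER#")        # just the orders
print(len(profile_and_orders), len(orders_only))
```

In a relational model these would be two tables and a join; here the data is stored the way it is read, which is the core idea behind access-pattern-first design.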
Intermediate DynamoDB users
Be careful about going too far on the other end of the spectrum: denormalizing everything can also be a problem. Some users group all sorts of items (customers, addresses, orders, order items) under the same partition key, even though these items are never fetched in the same request. Items that aren't read together should live under different partition keys so you avoid overloading a single partition.
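The distinction can be sketched with hypothetical key schemas: an order and its line items are fetched together, so they share a partition key, while an address that is read on its own gets its own partition key instead of being piled into the same partition. All key names here are illustrative assumptions.

```python
# Items that ARE fetched together share a partition key:
order = {"PK": "ORDER#9001", "SK": "ORDER"}
order_lines = [
    {"PK": "ORDER#9001", "SK": "ITEM#1"},
    {"PK": "ORDER#9001", "SK": "ITEM#2"},
]

# An address read on its own access path gets its OWN partition key,
# rather than overloading the order's (or customer's) partition:
address = {"PK": "ADDRESS#cust42#home", "SK": "ADDRESS"}


def same_partition(a: dict, b: dict) -> bool:
    # Two items land on the same partition (and can be fetched in a
    # single Query) only when their partition keys match.
    return a["PK"] == b["PK"]


print(same_partition(order, order_lines[0]))  # True: one request gets both
print(same_partition(order, address))         # False: separate access path
```

The rule of thumb the tip describes: let access patterns, not entity relationships, decide which items share a partition key.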
Experienced DynamoDB users
Really think about the specifics of your application and do the math: figure out what is going to be optimal for your exact workload, rather than relying on generic advice.
In the last portion of this podcast episode, Rahul and Alex discussed the mistakes to avoid and best practices to follow for AWS DynamoDB cost management. For these valuable AWS cost saving insights, listen to the full podcast episode, available with transcript now.