DynamoDb Incremental Backups – Part Four

Before we start: If you have missed the previous three posts, please check them out here:

Part One
Part Two
Part Three

At this stage, I’m going to assume you are comfortable with DynamoDb Incremental Backups, and the format they are stored in.

In this post, we will walk through the restore step, and I’ll be the first to admit this can be taken much further than I have. I haven’t had the time or the need to take it as far as I would have liked, but don’t let that stop you! I would love to hear from you if you have done something interesting with this, e.g. automating your DR / backup restore testing.

For our DynamoDb Incremental backups solution, we have incremental backups stored in S3. The data is stored in the native DynamoDb format, which is very handy: it allows us to write it back to DynamoDb with no transformation.

Each key (or file) stored in an S3 versioned bucket is a snapshot of a row at a point in time. This allows us to be selective in what we restore. It also provides a human-readable audit log!
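To make that concrete, here is a small sketch in Python with boto3 (the bucket name, key, and attributes are hypothetical, not taken from this series) of what one of those row snapshots looks like in the native format, and how each write to the same key in a versioned bucket adds to that row’s history:

```python
import json
import boto3

# Illustrative only: the bucket name, key, and attributes below are
# hypothetical, not taken from the series.
s3 = boto3.client("s3")

# A row snapshot in DynamoDb's native attribute-value format.
item = {
    "OrderId": {"S": "42"},
    "Total": {"N": "19.99"},
    "Status": {"S": "SHIPPED"},
}

# Each put to the same key in a versioned bucket creates a new object
# version, building up a point-in-time history for that row.
s3.put_object(
    Bucket="my-backup-bucket",
    Key="orders/42",
    Body=json.dumps(item),
)
```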

S3 has an API available that allows us to scan the list of available backups:

Get Bucket Object Versions

Leveraging this, we can build a list of the data that we would like to restore. This could range from a single row at a point in time to an entire table at a point in time.
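As a rough sketch of that idea (the bucket, prefix, and restore point below are made up for illustration), we can page through the object versions and keep, for each key, the newest version at or before the point in time we want:

```python
from datetime import datetime, timezone

import boto3

# A minimal sketch, assuming a hypothetical bucket/prefix and restore point:
# page through every object version under the prefix and keep, per key, the
# newest version at or before the point in time we want to restore.
s3 = boto3.client("s3")
restore_point = datetime(2016, 6, 1, 12, 0, tzinfo=timezone.utc)

latest = {}
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket="my-backup-bucket", Prefix="orders/"):
    for version in page.get("Versions", []):
        if version["LastModified"] > restore_point:
            continue  # newer than the restore point, skip it
        key = version["Key"]
        if key not in latest or version["LastModified"] > latest[key]["LastModified"]:
            latest[key] = version

# `latest` now maps each key to the version to restore. Delete markers are
# ignored here for brevity; a fuller version would honour them so that
# deleted rows stay deleted.
```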

There is a range of tools that allow us to restore these backups in S3 directly:

  1. Dynamo Incremental Restore
    The first option allows you to specify a point in time for a given prefix (folder location) in S3. The workflow is (see the sketch after this list):

    1. Scan all the available versions using the Version List in S3
    2. Build a list of the data that needs to be updated.
    3. Download the file(s) identified in step 2 and push them to DynamoDb
  2. Dynamo Migrator
  3. DynamoDb Replicator
    A snapshot script that scans an S3 folder where incremental backups have been made, and writes the aggregate to a file on S3, providing a snapshot of the backup’s state.
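To illustrate steps 2 and 3 of that workflow, here is a simplified sketch that continues from the version listing above. It is not the actual code of any of the tools listed, and the table and bucket names are hypothetical:

```python
import json

import boto3

def restore(latest, table_name="orders", bucket="my-backup-bucket"):
    """Restore each selected version (the `latest` map from the listing
    sketch above) into a DynamoDb table. Names here are hypothetical."""
    s3 = boto3.client("s3")
    dynamodb = boto3.client("dynamodb")

    for key, version in latest.items():
        obj = s3.get_object(Bucket=bucket, Key=key, VersionId=version["VersionId"])
        item = json.loads(obj["Body"].read())
        # The backup is already in DynamoDb's native format, so it can be
        # written straight back with no transformation.
        dynamodb.put_item(TableName=table_name, Item=item)
```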

We haven’t had any issues with our incremental backups, but the next step would be to automate the DR restore at a regular interval to ensure it provides the protection you are looking for.

DynamoDb Incremental Backups – Part Two

In this next blog post in the series, we will delve into the details of our DynamoDb incremental backup solution.

If you missed the first post, check it out: Part One

I am not going to delve into DynamoDb too much. If you are reading this blog post, I will be assuming you know about DynamoDb, are looking to use it, or are already using it.

DynamoDb Streams

Let’s delve into the DynamoDb Stream. DynamoDb Streams allow you to capture mutations on the data within the table. In other words, they capture item changes at the point in time when they occurred.

DynamoDB Streams – High Level

This feature enables a plethora of possibilities such as data analysis, replication, triggers, and backups. It is very simple to enable (as simple as flipping a switch), and it essentially gives you an ordered list of table events for a 24-hour window.
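To give a feel for what sits on the other end of the stream, here is a minimal sketch of the kind of Lambda handler that could write each change out to a versioned S3 bucket. It is illustrative only, not the exact function from this series, and it assumes the stream is configured to include new images; the bucket name and key scheme are hypothetical:

```python
import json
import boto3

# A minimal sketch of a Lambda handler subscribed to the table's stream (not
# the exact function from this series). It assumes the stream includes new
# images; the bucket name and key scheme are hypothetical.
s3 = boto3.client("s3")
BUCKET = "my-backup-bucket"

def handler(event, context):
    for record in event["Records"]:
        keys = record["dynamodb"]["Keys"]
        # Derive a deterministic S3 key from the item's primary key values.
        key = "orders/" + "/".join(v for attr in keys.values() for v in attr.values())

        if record["eventName"] == "REMOVE":
            s3.delete_object(Bucket=BUCKET, Key=key)
        else:  # INSERT or MODIFY
            new_image = record["dynamodb"]["NewImage"]
            s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(new_image))
```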


DynamoDb Incremental Backups – Part One

DynamoDb is a fully managed AWS NoSQL service, which provides a fast and predictable data store. We’ve been using it for several microservices over the past 18 months, and one feature that is sorely missed is incremental backups.

AWS provides an option to take snapshots of your table using a service called Data Pipeline. At a high level, what this does is:
1. Create an EMR (Elastic MapReduce) cluster
2. Perform a parallel full scan of the table in question (while consuming read units) into JSON data
3. Upload this JSON data to S3 or similar

DynamoDb to S3 Template in Data Pipeline Architect
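Conceptually, the template boils down to something like the following sketch (minus EMR and the parallelism; the table and bucket names are hypothetical):

```python
import json
import boto3

# Conceptually what the snapshot approach does (minus EMR and parallelism):
# scan the whole table, consuming read units as you go, and dump the result
# to S3. Table and bucket names are hypothetical.
dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

items, kwargs = [], {"TableName": "orders"}
while True:
    page = dynamodb.scan(**kwargs)
    items.extend(page["Items"])
    if "LastEvaluatedKey" not in page:
        break
    kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

s3.put_object(
    Bucket="my-snapshot-bucket",
    Key="orders-snapshot.json",
    Body=json.dumps(items),
)
```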

The issue I have with this is that the backup is not a “point in time” snapshot; it is essentially scanning the table (which can take hours) while the table is still live.

Our requirement for DPO (Data Point Objective) is 30 minutes, which basically means that if “shit hits the fan”, we can have at most 30 minutes of data loss (in the worst case). This is our contractual agreement with our clients.

Given this, we have been investigating ways to solve this problem, which has led us to create incremental backups for DynamoDb, stored in an S3 versioned bucket.

DynamoDb Incremental Backups to S3

In the next post, I’ll walk through the details of our implementation, and provide the source code of the Lambda Function.