Before we start: If you have missed the previous three posts, please check them out here:
At this stage, I’m going to assume you are comfortable with DynamoDb Incremental Backups and the format they are stored in.
In this post, we will walk through the restore step, and I’ll be the first to admit, this can be taken much further than I have. I haven’t had the time or the need to take this as far as I would have liked, but don’t let this stop you! I would love to hear from you if you have done something interesting with this, e.g. automating your DR / backup restore testing.
For our DynamoDb Incremental backups solution, we have incremental backups stored in S3. The data is stored in the native DynamoDb format, which is very handy: it allows us to write it back to DynamoDb with no transformation.
Each key (or file) stored in an S3 versioned bucket is a snapshot of a row at a point in time. This allows us to be selective in what we restore. It also provides a human-readable audit log!
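To make that concrete, here is a minimal sketch of what a single backup object might hold. The key layout and attribute names below are assumptions for illustration, not the exact scheme from the earlier posts:

```python
import json

# Hypothetical example of one incremental backup object. The S3 key
# identifies the table and the row's hash key; the body is the row in
# DynamoDb's native attribute-value format (names are assumptions).
backup_key = "backups/MyTable/user-123"
backup_body = '{"userId": {"S": "user-123"}, "score": {"N": "42"}}'

# Because the body is already in native format, it can be handed
# straight to a PutItem call with no transformation:
item = json.loads(backup_body)
print(item["userId"]["S"], item["score"]["N"])  # -> user-123 42
```

Each new version of the same key is simply a later snapshot of the same row, which is what makes the version history double as an audit log.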
S3 has an API available that allows us to scan the list of backup versions available:
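A trimmed version listing might look like the sketch below. The keys and version ids are made up; in boto3 these records would come from the `list_object_versions` paginator (`s3.get_paginator("list_object_versions").paginate(Bucket=..., Prefix=...)`):

```python
from datetime import datetime, timezone

# Hypothetical version records for a versioned backups bucket - every
# stored version of every key, newest first per key:
versions = [
    {"Key": "backups/MyTable/user-123", "VersionId": "v2",
     "LastModified": datetime(2016, 3, 2, tzinfo=timezone.utc)},
    {"Key": "backups/MyTable/user-123", "VersionId": "v1",
     "LastModified": datetime(2016, 3, 1, tzinfo=timezone.utc)},
    {"Key": "backups/MyTable/user-456", "VersionId": "v1",
     "LastModified": datetime(2016, 3, 1, tzinfo=timezone.utc)},
]

# Group the versions by key to see each row's history:
history = {}
for v in versions:
    history.setdefault(v["Key"], []).append(v["VersionId"])

print(history["backups/MyTable/user-123"])  # -> ['v2', 'v1']
```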
Leveraging this, we can build a list of data that we would like to restore. This could range from a single row at a point in time to an entire table at a point in time.
There are a range of tools that allow us to restore these backups directly from S3:
- Dynamo Incremental Restore
The first option allows you to specify a point in time for a given prefix (folder location) in S3. The workflow is:
- Scan all the versions available using the S3 Version List
- Build a list of the data that needs to be restored.
- Download the file(s) identified in #2, and push them to DynamoDb
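The heart of that workflow is step 2: for each key, pick the newest version at or before the restore point. A minimal sketch of that selection on plain tuples (the function name, key layout, and timestamps are assumptions, not the tool’s actual code):

```python
from datetime import datetime, timezone

def versions_to_restore(versions, restore_point):
    """For each key, choose the newest version at or before restore_point."""
    chosen = {}
    for key, version_id, modified in versions:
        if modified > restore_point:
            continue  # this version is newer than the restore point
        if key not in chosen or modified > chosen[key][1]:
            chosen[key] = (version_id, modified)
    return {key: vid for key, (vid, _) in chosen.items()}

versions = [
    ("MyTable/user-123", "v1", datetime(2016, 3, 1, tzinfo=timezone.utc)),
    ("MyTable/user-123", "v2", datetime(2016, 3, 3, tzinfo=timezone.utc)),
    ("MyTable/user-456", "v1", datetime(2016, 3, 2, tzinfo=timezone.utc)),
]
restore_point = datetime(2016, 3, 2, tzinfo=timezone.utc)

print(versions_to_restore(versions, restore_point))
# -> {'MyTable/user-123': 'v1', 'MyTable/user-456': 'v1'}
# Step 3 would then fetch each chosen version (get_object with a
# VersionId) and push the body straight to DynamoDb with PutItem.
```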
- Dynamo Migrator
- DynamoDb Replicator
A snapshot script that scans an S3 folder where incremental backups have been made, and writes the aggregate to a file on S3, providing a snapshot of the backup’s state.
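Conceptually, that snapshot script folds the incremental objects into one aggregate file. A minimal sketch, using an in-memory dict in place of the real S3 listing (the keys and rows are assumptions):

```python
import json

# Stand-in for the latest version of each backup object under the
# prefix - real code would list and fetch these from S3.
latest_rows = {
    "backups/MyTable/user-123": {"userId": {"S": "user-123"}, "score": {"N": "42"}},
    "backups/MyTable/user-456": {"userId": {"S": "user-456"}, "score": {"N": "7"}},
}

# Aggregate: one line of native-format JSON per row, written out as a
# single file that captures the backup's state at this moment.
snapshot = "\n".join(json.dumps(row) for row in latest_rows.values())

print(len(snapshot.splitlines()))  # -> 2
```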
We haven’t had any issues with our incremental backups, but the next step would be to automate the DR restore at a regular interval to ensure it provides the protection you are looking for.