engineering
python
unittests
dynamodb

10 Mar 2025

Exhaustive DynamoDB Tests with DB Delta

I've spent a lot of development time recently on a new Python project, Siloed. Following a more traditional approach, everything was developed in a monolith-like structure. Now, I'm a big proponent of test-driven development, so unit tests are always an important part of my development flow. However, writing exhaustive tests for code that interacts with DynamoDB quickly becomes laborious and verbose.

Previously, my unit tests have always ended up looking something like this:


def test_example_function(mock_dynamo_table):
    # Check the initial state of the item.
    response = mock_dynamo_table.get_item(
        Key={
            "PK": "ITEM_ID#1",
            "SK": "DETAILS"
        }
    )["Item"]
    assert response["name"] == "Item 1"

    update_item_name(mock_dynamo_table, "Item 1 Updated")

    # Re-read the item and check that the name was updated.
    response = mock_dynamo_table.get_item(
        Key={
            "PK": "ITEM_ID#1",
            "SK": "DETAILS"
        }
    )["Item"]
    assert response["name"] == "Item 1 Updated"

The above works, but there are a few issues with it.

  1. Continuously reading and comparing values from a database is long-winded and verbose. A lot of code ends up duplicated, and the boilerplate that compares DB values before and after the function execution can easily become 90% of your test code.
  2. The above isn't a very exhaustive test. In fact, all it checks is that a single attribute on a single item was updated. The update_item_name function could have deleted your entire database, with the exception of ITEM_ID#1, and the above test would still pass (a pathological example of exactly this is sketched below).

The second point is actually fairly important. While the above example is trivial, codebases in the wild, especially those for established apps, tend to be far more complex. Queries become more intricate and multiple mutating functions are often chained together, making it much harder to get a complete picture of what is actually changing in your database.
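To make that concrete, here is a deliberately pathological, entirely hypothetical implementation of update_item_name. It wipes every other item in the table, yet the test above would still pass, because the test only ever inspects ITEM_ID#1.

# Hypothetical, deliberately destructive implementation, purely for illustration.
def update_item_name(table, new_name):
    # Delete everything except the one item the test happens to look at
    # (scan pagination is ignored for brevity).
    for item in table.scan()["Items"]:
        if item["PK"] != "ITEM_ID#1":
            table.delete_item(Key={"PK": item["PK"], "SK": item["SK"]})

    # Perform the single change the test actually asserts on.
    table.update_item(
        Key={"PK": "ITEM_ID#1", "SK": "DETAILS"},
        UpdateExpression="SET #name = :name",
        ExpressionAttributeNames={"#name": "name"},
        ExpressionAttributeValues={":name": new_name},
    )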

A revised approach to testing

With Siloed, I wanted to try something new. Ultimately, since the API itself is stateless, the only thing I really cared about with most of the code I was testing was how the contents of my database changed. This shifts the focus of the tests from the code itself to the changes that occurred in the DynamoDB table.

Ideally, I wanted most of my unit tests to follow the same basic flow.

  1. Declare all the database changes that I expect to see as a result of my function execution, ideally in JSON or a similarly readable format.
  2. Execute my target function.
  3. Validate that only the changes that I declared at the start of the test were actually executed.

The final step is the critical part. I wanted to make sure that everything in my DynamoDB table was left completely unchanged unless it was defined in the changeset.

db_delta to the rescue

Nothing that I could find fit the above criteria, so I decided to write my own open-source Python library to make it happen. It's been a while since I last published a Python package, so I was excited to get something useful off the ground. The result is db_delta, which is now available on PyPI, with the source code on GitHub.

db_delta provides data models that can be used to define your expected DynamoDB changes. It also provides a context manager that does a complete scan operation before and after your executed test code. Once your test is finished running, db_delta compares the initial table state to the final table state, and then validates any changes against the ones that you defined. If any changes have been made that you did not explicitly declare, an exception is raised.
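Conceptually, you can think of the context manager as doing something like the following. This is my own simplified sketch of the idea, not db_delta's actual implementation: the snapshot_table and validate_table_changes names are made up for illustration, and the expected changes are reduced to a plain set of keys.

from contextlib import contextmanager

def snapshot_table(table):
    # Full table scan, keyed by (PK, SK) so individual items are easy to diff.
    # Pagination is ignored for brevity.
    return {(item["PK"], item["SK"]): item for item in table.scan()["Items"]}

@contextmanager
def validate_table_changes(table, expected_keys):
    before = snapshot_table(table)
    yield
    after = snapshot_table(table)

    # Every difference between the two snapshots must correspond to a
    # declared change, otherwise the test fails.
    for key in before.keys() | after.keys():
        if before.get(key) != after.get(key) and key not in expected_keys:
            raise AssertionError(f"Undeclared change to item {key}")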

A basic example looks like this.

Step 1: Define a ChangeSet in JSON format

DynamoDB changes are declared before your test in JSON format. Each change has a change_type attribute that describes what type of change is being declared, as well as a key attribute that tells db_delta which item the change applies to.

// changeset.json
[
  {
    "change_type": "updated_item",
    "key": {
      "PK": "FOO",
      "SK": "BAR"
    },
    "updated_fields": [
      {
        "field": "bar",
        "update_type": "updated",
        "new_value": "bar-foo"
      },
      {
        "field": "foo",
        "update_type": "removed"
      }
    ]
  }
]

Step 2: Execute test function with the context manager

The validate_dynamodb_changeset context manager can then be used to test that only the changes declared in the changeset are actually reflected in your database.

import boto3
import pytest
from moto import mock_aws

from db_delta import ChangeSet, validate_dynamodb_changeset


TABLE_SCHEMA = {
    # your table schema here
}

INITIAL_ITEMS = [
    # define initial DynamoDB items here
]

@pytest.fixture
def mock_dynamodb_resource():
    with mock_aws():
        # An explicit region avoids boto3 region lookup errors inside the mock.
        yield boto3.resource("dynamodb", region_name="us-east-1")

@pytest.fixture
def mock_dynamo_table(mock_dynamodb_resource):
    table = mock_dynamodb_resource.create_table(**TABLE_SCHEMA)
    for item in INITIAL_ITEMS:
        table.put_item(Item=item)
    yield table


def test_example_function(mock_dynamo_table):

    expected_changeset = ChangeSet.from_json("path_to_changeset.json")

    with validate_dynamodb_changeset(mock_dynamo_table, expected_changeset):
        # Call your function that modifies the contents of your database.
        # db_delta will ensure that only the changes specified are executed.
        example_function(mock_dynamo_table, "foo", "bar")
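For reference, the TABLE_SCHEMA and INITIAL_ITEMS placeholders above could be filled in along the following lines for the single-table PK/SK layout used throughout this post. These values are purely illustrative; adjust them to your own table.

TABLE_SCHEMA = {
    "TableName": "example-table",
    "KeySchema": [
        {"AttributeName": "PK", "KeyType": "HASH"},
        {"AttributeName": "SK", "KeyType": "RANGE"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "PK", "AttributeType": "S"},
        {"AttributeName": "SK", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",
}

INITIAL_ITEMS = [
    # Matches the item targeted by the changeset above.
    {"PK": "FOO", "SK": "BAR", "foo": "foo", "bar": "bar"},
]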

There are several advantages to this approach.

  1. The logic for reading and comparing database outputs is abstracted away, making the tests clear, clean and much shorter.
  2. If the code being tested modifies your DynamoDB table in any way that was not declared, the test fails. This means you can be confident that only the changes you expect are actually made.

Finishing up

I won't go into too much detail on db_delta here. If you are interested in using it, you can install it from PyPI:

$ pip install db-delta

The source code lives in a public GitHub repository. For a complete example that demonstrates how to configure pytest and moto to get everything working, see the examples in the repo.

If you enjoyed this article, consider following us on LinkedIn to get updates on future content and what we are working on, or send me a connection request directly. Don't forget to check out the rest of our content and stay tuned for weekly blog posts on tech and business.