
AWS recently published a guide on evolving DynamoDB data models that walks through the steps needed to safely modify your database schema in production. The guide is useful, but it also perfectly illustrates why we built StatelyDB’s Elastic Schema.
Reading through the recommendations from AWS, we couldn’t help but think: this is way too much work for something that should be simple. Why can’t the database do this for you?
Let’s say you want to add a DeliveryDate field to your orders table and be able to query by it. Here’s what AWS recommends:
Create a new global secondary index (GSI) with DeliveryDate as the key, backfill the new attribute onto every existing item, and update your application code to handle records that predate the change. And AWS points out: “DynamoDB doesn’t provide native functionality to perform bulk updates. Developers must build and manage their own solution.” In fact, AWS doesn’t provide built-in tooling for any of this work.
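To make AWS’s point concrete: the “build your own solution” for a backfill is a scan-and-update loop that you write, run, and babysit yourself. Here’s a minimal sketch of such a job (table schema and attribute names are hypothetical; the client is passed in, matching boto3’s `Table.scan`/`Table.update_item` interface, so the loop can be exercised without AWS):

```python
def backfill_delivery_date(table, default_date="2024-01-01"):
    """Scan every item and add DeliveryDate where it's missing.

    `table` is anything with boto3 Table's scan/update_item interface,
    e.g. boto3.resource("dynamodb").Table("Orders"), or a stub in tests.
    """
    updated = 0
    scan_kwargs = {}
    while True:
        page = table.scan(**scan_kwargs)
        for item in page["Items"]:
            if "DeliveryDate" not in item:
                table.update_item(
                    Key={"CustomerId": item["CustomerId"], "OrderId": item["OrderId"]},
                    UpdateExpression="SET DeliveryDate = :d",
                    ExpressionAttributeValues={":d": default_date},
                )
                updated += 1
        # DynamoDB scans paginate; keep going until the last page
        if "LastEvaluatedKey" not in page:
            break
        scan_kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]
    return updated
```

And this sketch omits the parts that make such jobs painful in production: rate limiting against provisioned throughput, retries on throttling, resumability if the job dies mid-scan, and coordinating with writers that race the backfill.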
As you make more and more of these changes, you’re left with a lot to manage: extra indexes, one-off backfill jobs, and compatibility code scattered across your application.
Worse, every migration adds to a legacy that you hope is documented enough for the next engineers. As AWS says:
Document the changes to your table’s data model as the features of your application evolve. […] This will help you maintain consistency across data stored on your DynamoDB tables.
Are you willing to rely on developers reading the documentation to maintain the consistency of your data?
This pain isn’t hypothetical. It’s the reality most teams face when evolving production databases, and we’ve seen it play out countless times. We built StatelyDB to eliminate all of that complexity. StatelyDB’s Elastic Schema solves these problems by making the database responsible for schema evolution, not your application code.
With Elastic Schema, the same change looks like this:
```typescript
// Version 1: Original schema
itemType("Order", {
  keyPath: "/customer-:customerId/order-:orderId",
  fields: {
    customerId: { type: uuid },
    orderId: { type: uuid },
    items: { type: array(orderItem) },
    orderDate: { type: timestampSeconds }
  }
});

// Version 2: Add delivery date and another way of querying
itemType("Order", {
  keyPath: [
    "/customer-:customerId/order-:orderId",
    // Add a new key path to allow listing by delivery date
    "/delivery-:deliveryDate/customer-:customerId/order-:orderId",
  ],
  fields: {
    customerId: { type: uuid },
    orderId: { type: uuid },
    items: { type: array(orderItem) },
    orderDate: { type: timestampSeconds },
    /** The day (without time) the order will be delivered */
    deliveryDate: {
      type: timestampSeconds,
      // Provide a value for older records that didn't have this
      readDefault: "2024-01-01"
    }
  }
});

migrate(1, "Add delivery date tracking", (m) => {
  m.changeType("Order", (t) => {
    t.addField("deliveryDate");
  });
});
```

That’s it. No GSI creation, no backfill scripts, no application code to handle missing fields. The new deliveryDate field is available on the code generated from schema version 2, and a new key path allows us to list all orders by that field to support the new access pattern.
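To see why the second key path buys a new access pattern, here’s a toy illustration (a conceptual model, not StatelyDB’s actual implementation): if each order is addressable under both key paths, listing all orders for a delivery date is just a prefix scan over the second path.

```python
def key_paths(order):
    """Render both key paths from the v2 schema for one order (toy model)."""
    return [
        f"/customer-{order['customerId']}/order-{order['orderId']}",
        f"/delivery-{order['deliveryDate']}"
        f"/customer-{order['customerId']}/order-{order['orderId']}",
    ]

def list_by_prefix(index, prefix):
    """Prefix scan over a sorted mapping of key path -> order."""
    return [order for path, order in sorted(index.items()) if path.startswith(prefix)]

orders = [
    {"customerId": "c1", "orderId": "o1", "deliveryDate": "2024-06-01"},
    {"customerId": "c2", "orderId": "o2", "deliveryDate": "2024-06-02"},
]
# Each order appears in the index once per key path
index = {path: o for o in orders for path in key_paths(o)}

# All orders being delivered on 2024-06-01:
list_by_prefix(index, "/delivery-2024-06-01/")
```

The same item is still reachable by customer and order ID through the first path; the schema change added a second route to it rather than a separate index you maintain by hand.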
StatelyDB automatically handles creating the new key path, generating updated client code, and presenting old and new records consistently. The readDefault on the field tells StatelyDB what value to provide when older records don’t have it, eliminating the need for application-level null handling.

This was a simple example, like the one AWS shared, but Elastic Schema can handle much more. Need to change a field type? Rename a field? Those operations work the same way:
```typescript
// Change field type
migrate(2, "Make priority numeric", (m) => {
  m.changeType("Order", (t) => {
    // Remove old string field, with a default for old clients
    t.removeField("priority", "medium");
    t.addField("priorityLevel"); // Add new numeric field
  });
});

// Rename field
migrate(3, "Clarify field name", (m) => {
  m.changeType("Order", (t) => {
    t.renameField("createdAt", "orderDate");
  });
});
```

Each migration is declarative: you describe what changed, and StatelyDB figures out how to handle the data transformation.
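One rough mental model for how a database can absorb these migrations without a bulk rewrite (a conceptual sketch only, not a description of StatelyDB’s internals): operations like a field rename or a read default can be applied lazily, transforming each stored record into the current shape at read time.

```python
# Each migration step rewrites a raw stored record into the current shape.
def rename_field(record, old, new):
    if old in record:
        record[new] = record.pop(old)
    return record

def read_default(record, field, default):
    record.setdefault(field, default)
    return record

# A chain mirroring the migrations in this post: rename createdAt to
# orderDate, and default deliveryDate for records written before v2.
MIGRATIONS = [
    lambda r: rename_field(r, "createdAt", "orderDate"),
    lambda r: read_default(r, "deliveryDate", "2024-01-01"),
]

def read(record):
    """Apply every migration step on read; up-to-date records pass through."""
    for step in MIGRATIONS:
        record = step(record)
    return record

old_record = {"orderId": "o1", "createdAt": 1714000000}
read(old_record)
# -> {"orderId": "o1", "orderDate": 1714000000, "deliveryDate": "2024-01-01"}
```

The point of the sketch is that the migration chain is data the database owns, so old records never have to be touched in place and application code never sees the pre-migration shape.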
Freedom to Change Your Data Model
For development teams, using StatelyDB instead of raw DynamoDB means none of that overhead: no custom backfill tooling, no migration runbooks, and no compatibility shims scattered through application code.
We chose DynamoDB as StatelyDB’s first storage engine because we’ve experienced its scale and operational benefits firsthand. But we’ve eliminated the schema management complexity that AWS’s guide documents.
AWS’s schema evolution guide is practical advice for DynamoDB users. But the fact that such extensive planning and custom tooling is required for basic schema changes shows there has to be a better way.
We think schema evolution should be as routine as deploying code. Your early architectural decisions shouldn’t require months of engineering effort to change later. The database should handle the complexity of data transformation for you.
If you’re tired of treating every schema change like a major engineering project, try StatelyDB for free and see what database evolution looks like when it’s designed for change from day one.