Get started with Amazon DynamoDB

Step-by-step instructions for ingesting DynamoDB data into Propel.

Architecture

Amazon DynamoDB Data Pools in Propel consume change data capture events from DynamoDB, delivered through a Kinesis data stream and Amazon Data Firehose.

Features

Amazon DynamoDB ingestion supports the following features:

Feature name | Supported | Notes
Event collection | Yes | Collects change data capture events from DynamoDB streams.
Real-time updates | Yes | See the Real-time updates section.
Real-time deletes | Yes | See the Real-time deletes section.
Batch Delete API | Yes | See Batch Delete API.
Batch Update API | Yes | See Batch Update API.
Bulk insert | Yes | Up to 500 events per HTTP request.
API configurable | Yes | See API docs.
Terraform configurable | Yes | See Terraform docs.

How does the Amazon DynamoDB Data Pool work?

The Amazon DynamoDB Data Pool works by consuming the change data capture events that DynamoDB publishes to a Kinesis data stream.

When your DynamoDB tables change, these events are captured by the Kinesis data stream and forwarded through Amazon Data Firehose to a secure Propel HTTP endpoint. This enables real-time data ingestion and synchronization between your DynamoDB database and Propel.

Propel handles the special encoding, data format, and basic authentication required for receiving events via Amazon Data Firehose.
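For illustration, here is a minimal boto3 sketch of the DynamoDB side of this pipeline. The “orders” table, “orders-cdc” stream, and region are placeholders, and the Amazon Data Firehose delivery stream that forwards events to the Propel HTTP endpoint is assumed to be configured separately.

    import boto3

    # Placeholders: substitute your own table, stream, and region.
    REGION = "us-east-1"
    TABLE_NAME = "orders"
    STREAM_NAME = "orders-cdc"

    dynamodb = boto3.client("dynamodb", region_name=REGION)
    kinesis = boto3.client("kinesis", region_name=REGION)

    # Look up the ARN of the existing Kinesis data stream.
    stream_arn = kinesis.describe_stream(StreamName=STREAM_NAME)[
        "StreamDescription"
    ]["StreamARN"]

    # Start publishing the table's change data capture events to the stream.
    # Amazon Data Firehose then reads from this stream and delivers the
    # events to the Propel HTTP endpoint.
    dynamodb.enable_kinesis_streaming_destination(
        TableName=TABLE_NAME,
        StreamArn=stream_arn,
    )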

By default, the Amazon DynamoDB Data Pool has the following columns:

Column | Type | Description
_propel_received_at | TIMESTAMP | The timestamp when the event was collected, in UTC.
_propel_payload | JSON | The JSON payload of the event.
event_id | STRING | The unique event ID.
event_name | STRING | The event name: INSERT, MODIFY, or REMOVE.
event_source | STRING | The event source: aws:dynamodb.
record_format | STRING | The record format: JSON.
user_identity | JSON | The user identity.
aws_region | STRING | The AWS region.
approximate_creation_date_time | INT64 | The approximate event creation time.
approximate_creation_date_time_precision | STRING | The timestamp precision.
keys | JSON | The DynamoDB partition and sort key values.
new_image | JSON | The new JSON object.
old_image | JSON | The old JSON object.
size_bytes | INT64 | The record size in bytes.
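To make the mapping concrete, here is a sketch of how a single MODIFY event might land in these columns, written as a Python dict for illustration. All values are made up; DynamoDB images keep the typed attribute-value encoding (for example, {"S": ...} for strings and {"N": ...} for numbers).

    # Illustrative row, keyed by the default columns above (values are made up).
    row = {
        "_propel_received_at": "2024-06-01 12:00:05",  # collection time in UTC
        "_propel_payload": "{...}",                    # the full event as JSON
        "event_id": "bf32f4a1-...",                    # hypothetical event ID
        "event_name": "MODIFY",                        # INSERT, MODIFY, or REMOVE
        "event_source": "aws:dynamodb",
        "record_format": "application/json",
        "user_identity": None,
        "aws_region": "us-east-1",
        "approximate_creation_date_time": 1717243205000,
        "approximate_creation_date_time_precision": "MILLISECOND",
        "keys": {"order_id": {"S": "1001"}},
        "new_image": {"order_id": {"S": "1001"}, "status": {"S": "shipped"}},
        "old_image": {"order_id": {"S": "1001"}, "status": {"S": "pending"}},
        "size_bytes": 312,
    }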

When creating an Amazon DynamoDB Data Pool, you can flatten top-level or nested JSON keys into specific columns.

See our step-by-step setup guide.

Schema changes

The Amazon DynamoDB Data Pool is designed to handle semi-structured, schema-less JSON data. This flexibility allows you to add new properties to your payload as needed. The entire payload is always stored in the _propel_payload column.

However, Propel enforces the schema for required fields. If you stop providing data for a required field that was previously unpacked into its own column, Propel will return an error.

Adding Columns

1. Go to the Schema tab

Go to the Data Pool and click the “Schema” tab, then click the “Add Column” button to define the new column.

2. Add column

Specify the JSON property to extract, the column name, and the type, then click “Add column”.

3. Track progress

After adding the column, an asynchronous operation begins adding the column to the Data Pool. You can track its progress in the “Operations” tab.

Note that adding a column does not backfill existing rows. To backfill, run a batch update operation.

Column deletions, modifications, and data type changes are not supported as they are breaking changes to the schema. If you need to change the schema, you can create a new Data Pool.

Data Types

The table below shows the default mappings from JSON types to Propel types. You can change these mappings when creating an Amazon DynamoDB Data Pool.

JSON Type | Propel Type
String | STRING
Number | DOUBLE
Object | JSON
Array | JSON
Boolean | BOOLEAN
Null | JSON
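For example, here is a sketch (with made-up property and column names) of how the default mappings would type a payload’s properties:

    # Hypothetical payload; comments show the Propel type each property
    # would get under the default mappings above.
    payload = {
        "customer": "Acme",          # String  -> STRING
        "total": 42.5,               # Number  -> DOUBLE
        "shipped": True,             # Boolean -> BOOLEAN
        "items": [{"sku": "A1"}],    # Array   -> JSON
        "address": {"city": "NYC"},  # Object  -> JSON
        "coupon": None,              # Null    -> JSON
    }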

Limits

  • Each POST request can include up to 500 events (as a JSON array).
  • The payload size can be up to 1 MiB.

Best Practices

  • Maximize the number of events per request, up to 500 events or 1 MiB, to enhance ingestion speed.
  • Implement exponential backoff for retries on 429 (Too Many Requests) and 500 (Internal Server Error) responses to prevent data loss, as sketched after this list.
  • Set up alerts or notifications for 413 (Content Too Large) errors, as these indicate exceeded event count or payload size, and retries will not resolve the issue.
  • Ensure the Amazon DynamoDB Data Pool is created with all necessary fields. Utilize a sample event and the “Extract top-level fields” feature during setup.
  • Assign the correct event timestamp as the default. If your event lacks a timestamp, use the _propel_received_at column.
  • Configure all fields, except the default timestamp, as optional to minimize 400 (Bad Request) errors. This configuration allows Propel to process requests even with missing data, which can be backfilled later.
  • Set up alerts or notifications for 400 errors, as these signify schema issues that require attention, and retries will not resolve them.
  • Confirm that all data types are correct. Objects, arrays, and dictionaries should be designated as JSON in Propel.
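As a sketch of the retry guidance above, the following Python shows one way a producer might batch events and back off on retryable errors. The endpoint URL and authorization header are placeholders; when events flow through Amazon Data Firehose, Firehose performs the delivery and retries on your behalf.

    import json
    import time

    import requests  # third-party HTTP client

    ENDPOINT = "https://<your-propel-http-endpoint>"  # placeholder
    HEADERS = {
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",            # placeholder credentials
    }

    def post_batch(events, max_retries=5):
        """POST up to 500 events as a JSON array, backing off on 429/500."""
        assert len(events) <= 500, "at most 500 events per request"
        body = json.dumps(events)
        assert len(body) <= 1 << 20, "payload must be at most 1 MiB"
        for attempt in range(max_retries):
            response = requests.post(ENDPOINT, data=body, headers=HEADERS, timeout=30)
            if response.status_code in (429, 500):
                # Retryable: wait 1s, 2s, 4s, ... before trying again.
                time.sleep(2 ** attempt)
                continue
            if response.status_code in (400, 413):
                # Schema or size problem: retries will not help, so alert instead.
                raise ValueError(f"rejected with {response.status_code}: {response.text}")
            response.raise_for_status()
            return response
        raise RuntimeError("batch still failing after retries")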

Transforming data

Once your data is in a DynamoDB Data Pool, you can use Materialized Views to transform it.
