
DynamoDB

Support Status: Testing

Important Capabilities

| Capability | Status | Notes |
| --- | --- | --- |
| Classification | ✅ | Optionally enabled via classification.enabled. |
| Detect Deleted Entities | ✅ | Enabled by default via stateful ingestion. |
| Platform Instance | ✅ | By default, platform_instance will use the AWS account id. |

This plugin extracts the following:

AWS DynamoDB table names with their region, along with a schema of attribute names and types inferred by scanning the table

Prerequisites

Notice of breaking change: starting with v0.13.3, aws_region is a required configuration for the DynamoDB connector. The connector no longer loops through all AWS regions; instead, it uses only the region passed into the recipe configuration.

To execute this source, attach the AmazonDynamoDBReadOnlyAccess policy to a user in your AWS account, then create an API access key and secret for that user. Attaching the managed read-only policy future-proofs the setup in case the connector needs further permissions later, but for now the following privileges are sufficient to run this source:

dynamodb:ListTables
dynamodb:DescribeTable
dynamodb:Scan

We need dynamodb:Scan because DynamoDB does not return the attribute schema from dynamodb:DescribeTable, so we sample a few values to infer it.
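
If you prefer a narrowly scoped custom policy over the managed AmazonDynamoDBReadOnlyAccess policy, a minimal sketch granting just these actions could look like the following (the Sid is arbitrary; Resource can be scoped to specific table ARNs, although dynamodb:ListTables only supports "*"):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DataHubDynamoDBRead",
      "Effect": "Allow",
      "Action": ["dynamodb:ListTables", "dynamodb:DescribeTable", "dynamodb:Scan"],
      "Resource": "*"
    }
  ]
}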

Concept Mapping

| Source Concept | DataHub Concept | Notes |
| --- | --- | --- |
| "dynamodb" | Data Platform | |
| DynamoDB Table | Dataset | |
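
For illustration only, assuming the dataset name is composed as region.table and the AWS account id serves as the default platform instance (an assumption based on the notes above, not a documented guarantee), a Reply table in us-west-2 under account 123456789012 would surface with a URN along the lines of:

urn:li:dataset:(urn:li:dataPlatform:dynamodb,123456789012.us-west-2.Reply,PROD)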

CLI based Ingestion
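
Following the standard DataHub CLI flow, the plugin is installed via the dynamodb extra of the acryl-datahub package and the recipe is run with datahub ingest (the recipe filename below is a placeholder):

pip install 'acryl-datahub[dynamodb]'
datahub ingest -c dynamodb_recipe.yaml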

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: dynamodb
  config:
    aws_access_key_id: "${AWS_ACCESS_KEY_ID}"
    aws_secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
    aws_region: "${AWS_REGION}"

    # If there are items whose fields are most representative of the table, you can use the
    # `include_table_item` option to provide a list of those items' primary keys in DynamoDB format.
    # For each `region.table`, the list of primary keys can contain at most 100 entries.
    # These items are included in addition to the first 100 items read when the table is scanned.
    #
    # include_table_item:
    #   region.table_name:
    #     [
    #       {
    #         "partition_key_name": { "attribute_type": "attribute_value" },
    #         "sort_key_name": { "attribute_type": "attribute_value" },
    #       },
    #     ]

sink:
  # sink configs
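
A complete recipe also needs a sink. As a minimal sketch, assuming a DataHub instance reachable at http://localhost:8080 (adjust the server address and add authentication for your deployment), a datahub-rest sink looks like:

sink:
  type: datahub-rest
  config:
    server: "http://localhost:8080"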

Config Details

Note that a . is used to denote nested fields in the YAML recipe.
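
For example, classification.enabled in the list below corresponds to the following YAML nesting:

classification:
  enabled: true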

• aws_access_key_id (string or null)
  AWS access key ID. Can be auto-detected; see the AWS boto3 docs for details. Default: None.
• aws_advanced_config (object)
  Advanced AWS configuration options. These are passed directly to botocore.config.Config.
• aws_endpoint_url (string or null)
  The AWS service endpoint. This is normally constructed automatically, but can be overridden here. Default: None.
• aws_profile (string or null)
  The named profile to use from AWS credentials. Falls back to the default profile if not specified and no access keys are provided. Profiles are configured in ~/.aws/credentials or ~/.aws/config. Default: None.
• aws_proxy (string or null)
  A set of proxy configs to use with AWS. See the botocore.config docs for details. Default: None.
• aws_region (string or null)
  AWS region code. Default: None.
• aws_retry_mode (enum)
  One of: "legacy", "standard", "adaptive". Default: standard.
• aws_retry_num (integer)
  Number of times to retry failed AWS requests. See the botocore.retry docs for details. Default: 5.
• aws_secret_access_key (string or null)
  AWS secret access key. Can be auto-detected; see the AWS boto3 docs for details. Default: None.
• aws_session_token (string or null)
  AWS session token. Can be auto-detected; see the AWS boto3 docs for details. Default: None.
• max_schema_size (integer)
  Maximum number of fields to include in the schema. Default: 300.
• platform_instance (string or null)
  The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details. Default: None.
• read_timeout (number)
  The timeout for reading from the connection, in seconds. Default: 60.
• env (string)
  The environment that all assets produced by this connector belong to. Default: PROD.
• aws_role (string, array, or null)
  AWS roles to assume. If using the string format, the role ARN can be specified directly. If using the object format, the role can be specified in the RoleArn field, and additional available arguments are the same as boto3's STS.Client.assume_role. Default: None.
• aws_role.union (string or AwsAssumeRoleConfig)
• aws_role.union.RoleArn (string, required)
  ARN of the role to assume.
• aws_role.union.ExternalId (string or null)
  External ID to use when assuming the role. Default: None.
• database_pattern (AllowDenyPattern)
  A class to store allow/deny regexes.
• database_pattern.ignoreCase (boolean or null)
  Whether to ignore case sensitivity during pattern matching. Default: True.
• domain (map(str, AllowDenyPattern))
  A class to store allow/deny regexes.
• domain.key.allow (array)
  List of regex patterns to include in ingestion. Default: ['.*'].
• domain.key.allow.string (string)
• domain.key.ignoreCase (boolean or null)
  Whether to ignore case sensitivity during pattern matching. Default: True.
• domain.key.deny (array)
  List of regex patterns to exclude from ingestion. Default: [].
• domain.key.deny.string (string)
• include_table_item (array or null)
  [Advanced] The primary keys of items of a table, in DynamoDB format, to include in the schema. Refer to the "Advanced Configurations" section for more details. Default: None.
• include_table_item.key.object (object)
• table_pattern (AllowDenyPattern)
  A class to store allow/deny regexes.
• table_pattern.ignoreCase (boolean or null)
  Whether to ignore case sensitivity during pattern matching. Default: True.
• classification (ClassificationConfig)
• classification.enabled (boolean)
  Whether classification should be used to auto-detect glossary terms. Default: False.
• classification.info_type_to_term (map(str, string))
• classification.max_workers (integer)
  Number of worker processes to use for classification. Set to 1 to disable. Default: 4.
• classification.sample_size (integer)
  Number of sample values used for classification. Default: 100.
• classification.classifiers (array)
  Classifiers to use to auto-detect glossary terms. If there is more than one classifier, infotype predictions from the classifier defined later in the sequence take precedence. Default: [{'type': 'datahub', 'config': None}].
• classification.classifiers.DynamicTypedClassifierConfig (DynamicTypedClassifierConfig)
• classification.classifiers.DynamicTypedClassifierConfig.type (string, required)
  The type of the classifier to use. For DataHub, use datahub.
• classification.classifiers.DynamicTypedClassifierConfig.config (object or null)
  The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type. Default: None.
• classification.column_pattern (AllowDenyPattern)
  A class to store allow/deny regexes.
• classification.column_pattern.ignoreCase (boolean or null)
  Whether to ignore case sensitivity during pattern matching. Default: True.
• classification.table_pattern (AllowDenyPattern)
  A class to store allow/deny regexes.
• classification.table_pattern.ignoreCase (boolean or null)
  Whether to ignore case sensitivity during pattern matching. Default: True.
• stateful_ingestion (StatefulStaleMetadataRemovalConfig or null)
  Default: None.
• stateful_ingestion.enabled (boolean)
  Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified; otherwise False.
• stateful_ingestion.fail_safe_threshold (number)
  Prevents a large number of soft deletes and the state from committing, due to accidental changes to the source configuration, when the relative change (in percent) in entities compared to the previous state is above the fail_safe_threshold. Default: 75.0.
• stateful_ingestion.remove_stale_metadata (boolean)
  With stateful_ingestion enabled, soft-deletes the entities present in the last successful run but missing in the current run. Default: True.
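
Putting several of these options together, here is a hedged sketch of a fuller recipe (the role ARN and table patterns are illustrative placeholders; stateful ingestion additionally requires a pipeline_name on the pipeline):

source:
  type: dynamodb
  config:
    aws_region: us-west-2
    # Placeholder ARN: assume a role instead of passing static keys.
    aws_role:
      - RoleArn: "arn:aws:iam::123456789012:role/datahub-dynamodb-reader"
    # Only ingest tables matching these regex patterns.
    table_pattern:
      allow:
        - "Prod.*"
      deny:
        - ".*Test$"
    # Sample item values to auto-detect glossary terms.
    classification:
      enabled: true
    # Soft-delete tables seen in the previous run but missing from this one.
    stateful_ingestion:
      enabled: true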

Advanced Configurations

Using include_table_item config

If there are items whose fields are most representative of the table's schema, you can use the include_table_item option to provide a list of those items' primary keys in DynamoDB format. These items are included in addition to the first 100 items read when the table is scanned.

Using the example tables and data from the AWS DynamoDB Developer Guide: if an account has a table Reply in the us-west-2 region with the composite primary key Id and ReplyDateTime, you can use include_table_item to include two items as follows:

Example:

# The table name should be in the format of region.table_name
# The primary keys should be in the DynamoDB format
include_table_item:
  us-west-2.Reply:
    [
      {
        "ReplyDateTime": { "S": "2015-09-22T19:58:22.947Z" },
        "Id": { "S": "Amazon DynamoDB#DynamoDB Thread 1" },
      },
      {
        "ReplyDateTime": { "S": "2015-10-05T19:58:22.947Z" },
        "Id": { "S": "Amazon DynamoDB#DynamoDB Thread 2" },
      },
    ]
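
In these items, "S" is DynamoDB's attribute type descriptor for string values; other common descriptors include "N" (number), "B" (binary), and "BOOL" (boolean).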

Code Coordinates

  • Class Name: datahub.ingestion.source.dynamodb.dynamodb.DynamoDBSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for DynamoDB, feel free to ping us on our Slack.