# sqlalchemy-aurora-data-api - An AWS Aurora Serverless Data API dialect for SQLAlchemy

This is a fork of sqlalchemy-aurora-data-api: https://github.com/chanzuckerberg/sqlalchemy-aurora-data-api

This package provides a SQLAlchemy dialect for accessing PostgreSQL and MySQL databases via the AWS Aurora Data API.
## Installation

```
pip install sqlalchemy-aurora-data-api
```
## Prerequisites

- Set up an AWS Aurora Serverless cluster and enable Data API access for it. If you have previously set up an Aurora Serverless cluster, you can enable the Data API with the following AWS CLI command:

  ```
  aws rds modify-db-cluster --db-cluster-identifier DB_CLUSTER_NAME --enable-http-endpoint --apply-immediately
  ```

- Save the database credentials in AWS Secrets Manager using the format expected by the Data API (a JSON object with the keys `username` and `password`):

  ```
  aws secretsmanager put-secret-value --secret-id MY_DB_CREDENTIALS --secret-string "$(jq -n '.username=env.PGUSER | .password=env.PGPASSWORD')"
  ```

- Configure your AWS command line credentials using standard AWS conventions. You can verify that everything works correctly by running a test query via the AWS CLI:

  ```
  aws rds-data execute-statement --resource-arn RESOURCE_ARN --secret-arn SECRET_ARN --sql "select * from pg_catalog.pg_tables"
  ```
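The credentials secret produced by the `jq` invocation above is just a two-key JSON object, so it can equivalently be assembled in Python with the standard library. This is a sketch; the `build_secret_string` helper is hypothetical, not part of this package:

```python
import json
import os


def build_secret_string(username, password):
    # Return the JSON payload the Data API expects in the credentials secret:
    # an object with exactly the keys "username" and "password".
    return json.dumps({"username": username, "password": password})


# Mirror the jq command above: read the credentials from the environment.
secret_string = build_secret_string(
    os.environ.get("PGUSER", "postgres"),
    os.environ.get("PGPASSWORD", "example-password"),
)
print(secret_string)
```

The resulting string is what you would pass as `--secret-string` to `aws secretsmanager put-secret-value`.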
## Usage

The package registers two SQLAlchemy dialects, `mysql+auroradataapi://` and `postgresql+auroradataapi://`. Two `sqlalchemy.create_engine()` `connect_args` keyword arguments are required to connect to the database:

- `aurora_cluster_arn` (also referred to as `resourceArn` in the Data API documentation)
  - If not given as a keyword argument, this can also be specified using the `AURORA_CLUSTER_ARN` environment variable
- `secret_arn` (the database credentials secret)
  - If not given as a keyword argument, this can also be specified using the `AURORA_SECRET_ARN` environment variable

All connection string contents other than the protocol (dialect) and the database name (the path component, `my_db_name` in the example below) are ignored.
```python
from sqlalchemy import create_engine, text

cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-serverless-cluster"
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:MY_DB_CREDENTIALS"

engine = create_engine('postgresql+auroradataapi://:@/my_db_name',
                       echo=True,
                       connect_args=dict(aurora_cluster_arn=cluster_arn, secret_arn=secret_arn))

with engine.connect() as conn:
    # text() marks the string as a literal SQL statement; required by
    # SQLAlchemy 2.0 and compatible with 1.4.
    for result in conn.execute(text("select * from pg_catalog.pg_tables")):
        print(result)
```
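To illustrate the point above that only the dialect and the database name are consulted, the URL can be inspected with SQLAlchemy's own parser. This is a sketch assuming SQLAlchemy 1.4+ is installed; parsing the URL does not load the dialect or open a connection:

```python
from sqlalchemy.engine import make_url

# Host, port, user, and password in the URL are ignored by this dialect;
# only the drivername and the database (path) component are used.
url = make_url("postgresql+auroradataapi://:@/my_db_name")
print(url.drivername)  # postgresql+auroradataapi
print(url.database)    # my_db_name
```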
## Motivation
The RDS Data API is the link between the AWS Lambda serverless environment and the sophisticated features provided by PostgreSQL and MySQL. The Data API tunnels SQL over HTTP, which has advantages in the context of AWS Lambda:
- It eliminates the need to open database ports to the AWS Lambda public IP address pool
- It uses stateless HTTP connections instead of stateful internal TCP connection pools used by most database drivers (the stateful pools become invalid after going through AWS Lambda freeze-thaw cycles, causing connection errors and burdening the database server with abandoned invalid connections)
- It uses AWS role-based authentication, eliminating the need for the Lambda to handle database credentials directly
## Debugging

This package uses standard Python logging conventions. To enable debug output, set the package log level to DEBUG:

```python
import logging

logging.basicConfig()
logging.getLogger("aurora_data_api").setLevel(logging.DEBUG)
```
## Links
- Project home page (GitHub)
- Documentation (Read the Docs)
- Package distribution (PyPI)
- Change log
- `aurora-data-api`, the Python DB-API 2.0 client that sqlalchemy-aurora-data-api depends on
## Bugs
Please report bugs, issues, feature requests, etc. on GitHub.
## License
Licensed under the terms of the Apache License, Version 2.0.