# DynamoDBClient
The DynamoDBClient is the connection between your code and DynamoDB. It handles authentication, network calls, retries, and timeouts.
## Why use it?
pydynox Models need a client to talk to DynamoDB. You can either:
- Set a default client once at app startup (recommended)
- Pass a client to each model's config
The client wraps the AWS SDK and adds features like rate limiting. It's built in Rust for speed.
## Key features
- Multiple credential sources (env vars, profile, SSO, AssumeRole)
- Timeout and retry configuration
- Rate limiting built-in
- Local development support
## Basic usage
By default, the client uses the AWS credential chain: env vars, profile, instance profile, EKS IRSA, etc.
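A minimal sketch of creating a client with no explicit configuration, assuming credentials are already available via the chain:

```python
from pydynox import DynamoDBClient

# Uses the default AWS credential chain: env vars, profile,
# instance profile, EKS IRSA, etc.
client = DynamoDBClient()

# Or pin a region instead of relying on AWS_REGION
client = DynamoDBClient(region="us-east-1")
```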
## Credentials
pydynox supports multiple ways to authenticate. Pick the one that fits your environment.
### Environment variables
Set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`. The client picks them up automatically. Good for local dev and CI/CD.
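A sketch of what the client reads. Normally you would export these in your shell or CI/CD config; they are set in code here only to illustrate (the values are the standard AWS documentation examples):

```python
import os

from pydynox import DynamoDBClient

# In practice: export these in your shell, .env file, or CI/CD pipeline
os.environ["AWS_ACCESS_KEY_ID"] = "AKIAIOSFODNN7EXAMPLE"
os.environ["AWS_SECRET_ACCESS_KEY"] = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
os.environ["AWS_REGION"] = "us-east-1"

# No explicit credentials needed; the client reads the environment
client = DynamoDBClient()
```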
### Profile (including SSO)
Use a named profile from `~/.aws/credentials` or `~/.aws/config`. Works with SSO profiles too. Good for local dev with multiple AWS accounts.

For SSO profiles, run `aws sso login --profile my-profile` first.
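A short sketch using the `profile` parameter from the constructor reference below (the profile names are placeholders):

```python
from pydynox import DynamoDBClient

# Named profile from ~/.aws/credentials or ~/.aws/config
client = DynamoDBClient(profile="my-profile")

# SSO profiles work the same way, after `aws sso login --profile my-sso-profile`
sso_client = DynamoDBClient(profile="my-sso-profile")
```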
### AssumeRole
Assume an IAM role in another account. Good for cross-account access or when you need temporary elevated permissions.
```python
from pydynox import DynamoDBClient

# AssumeRole for cross-account access
client = DynamoDBClient(
    role_arn="arn:aws:iam::123456789012:role/MyRole",
    role_session_name="my-session",  # optional, defaults to "pydynox-session"
)

# With external ID (for third-party access)
client = DynamoDBClient(
    role_arn="arn:aws:iam::123456789012:role/MyRole",
    external_id="my-external-id",
)
```
### Explicit credentials
Pass credentials directly. Good for testing or when credentials come from a secrets manager. Avoid hardcoding in production.
"""Client with explicit credentials."""
from pydynox import DynamoDBClient
# Hardcoded credentials (not recommended for production)
client = DynamoDBClient(
access_key="AKIAIOSFODNN7EXAMPLE",
secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
region="us-east-1",
)
# With session token (for temporary credentials)
client = DynamoDBClient(
access_key="AKIAIOSFODNN7EXAMPLE",
secret_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
session_token="FwoGZXIvYXdzEBY...",
region="us-east-1",
)
> **Warning:** Don't hardcode credentials. Use env vars or profiles instead.
### EKS IRSA / GitHub Actions OIDC
These work automatically via the default credential chain. The env vars are injected by EKS or GitHub Actions. Just use `DynamoDBClient()` with no config. Good for Kubernetes and CI/CD pipelines.
## Environment variables
pydynox uses the AWS SDK for Rust, which supports standard AWS environment variables:
| Variable | Description |
|---|---|
| `AWS_ACCESS_KEY_ID` | Access key for authentication |
| `AWS_SECRET_ACCESS_KEY` | Secret key for authentication |
| `AWS_SESSION_TOKEN` | Session token for temporary credentials |
| `AWS_REGION` / `AWS_DEFAULT_REGION` | Default region |
| `AWS_PROFILE` | Profile name from `~/.aws/credentials` |
| `AWS_ENDPOINT_URL` | Custom endpoint (for local dev) |
| `AWS_MAX_ATTEMPTS` | Max retry attempts |
| `AWS_RETRY_MODE` | Retry mode: `standard` or `adaptive` |
These work automatically - no code changes needed. Set them in your shell, .env file, or CI/CD pipeline.
For the full list, see the AWS SDK environment variables documentation.
## Configuration
### Timeouts and retries
```python
from pydynox import DynamoDBClient

# Set connection and read timeouts (in seconds)
client = DynamoDBClient(
    connect_timeout=5.0,  # 5 seconds to establish connection
    read_timeout=30.0,    # 30 seconds to read response
)

# Short timeouts for Lambda (fail fast)
lambda_client = DynamoDBClient(
    connect_timeout=2.0,
    read_timeout=10.0,
)
```
### Local development
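Point `endpoint_url` at a local DynamoDB. A sketch using the same LocalStack endpoint and dummy credentials as the examples further down (DynamoDB Local typically listens on `http://localhost:8000` instead):

```python
from pydynox import DynamoDBClient

# Local endpoints accept any credentials, so dummy values are fine
client = DynamoDBClient(
    endpoint_url="http://localhost:4566",  # LocalStack default port
    region="us-east-1",
    access_key="testing",
    secret_key="testing",
)
```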
### Proxy
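Route traffic through an HTTP/HTTPS proxy with the `proxy_url` parameter from the constructor reference below (the proxy address here is a placeholder):

```python
from pydynox import DynamoDBClient

# Send DynamoDB traffic through a corporate proxy
client = DynamoDBClient(
    proxy_url="http://proxy.internal:8080",
)
```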
## Default client
Set a default client once instead of passing it to each model:
"""Setting a default client for all models."""
import asyncio
import os
from pydynox import DynamoDBClient, Model, ModelConfig, set_default_client
from pydynox.attributes import StringAttribute
# Create and set default client once at app startup
# Uses environment variables or default credential chain
client = DynamoDBClient(
endpoint_url=os.environ.get("AWS_ENDPOINT_URL"),
)
set_default_client(client)
# All models use the default client automatically
class User(Model):
model_config = ModelConfig(table="users")
pk = StringAttribute(partition_key=True)
sk = StringAttribute(sort_key=True)
name = StringAttribute()
class Order(Model):
model_config = ModelConfig(table="orders")
pk = StringAttribute(partition_key=True)
sk = StringAttribute(sort_key=True)
total = StringAttribute()
async def main():
# No need to pass client to each model
user = User(pk="USER#1", sk="PROFILE", name="John")
await user.save() # Uses the default client
if __name__ == "__main__":
asyncio.run(main())
Override per model if needed:
```python
set_default_client(prod_client)

# Different client for audit logs
class AuditLog(Model):
    model_config = ModelConfig(table="audit_logs", client=audit_client)
    pk = StringAttribute(partition_key=True)
```
## Rate limiting
"""Client with rate limiting."""
from pydynox import DynamoDBClient
from pydynox.rate_limit import AdaptiveRate, FixedRate
# Fixed rate: constant throughput
client = DynamoDBClient(
rate_limit=FixedRate(rcu=50, wcu=25),
)
# Adaptive rate: adjusts based on throttling
client = DynamoDBClient(
rate_limit=AdaptiveRate(max_rcu=100, max_wcu=50),
)
See rate limiting for details.
## Constructor reference
| Parameter | Type | Description |
|---|---|---|
| `region` | `str` | AWS region |
| `profile` | `str` | AWS profile name (supports SSO) |
| `access_key` | `str` | AWS access key ID |
| `secret_key` | `str` | AWS secret access key |
| `session_token` | `str` | Session token for temporary credentials |
| `endpoint_url` | `str` | Custom endpoint for local dev |
| `role_arn` | `str` | IAM role ARN for AssumeRole |
| `role_session_name` | `str` | Session name for AssumeRole |
| `external_id` | `str` | External ID for AssumeRole |
| `connect_timeout` | `float` | Connection timeout (seconds) |
| `read_timeout` | `float` | Read timeout (seconds) |
| `max_retries` | `int` | Max retry attempts |
| `proxy_url` | `str` | HTTP/HTTPS proxy URL |
| `rate_limit` | `FixedRate` / `AdaptiveRate` | Rate limiter |
## Methods
Most of the time you'll use Models instead of these methods directly. But they're useful for quick operations or when you need more control.
### Table operations
Table operations follow the async-first pattern: async methods have no prefix, sync methods have a `sync_` prefix.
| Async (default) | Sync | Description |
|---|---|---|
| `create_table(...)` | `sync_create_table(...)` | Create a new table |
| `table_exists(table)` | `sync_table_exists(table)` | Check if a table exists |
| `delete_table(table)` | `sync_delete_table(table)` | Delete a table |
| `wait_for_table_active(table)` | `sync_wait_for_table_active(table)` | Wait for a table to be ready |
See table operations for details.
### Item operations
Item operations also follow the async-first pattern: async methods have no prefix, sync methods have a `sync_` prefix.
| Async (default) | Sync | Description |
|---|---|---|
| `put_item(table, item, ...)` | `sync_put_item(...)` | Save an item. Overwrites if the key exists. |
| `get_item(table, key)` | `sync_get_item(...)` | Get an item by primary key. |
| `delete_item(table, key, ...)` | `sync_delete_item(...)` | Delete an item by primary key. |
| `update_item(table, key, updates, ...)` | `sync_update_item(...)` | Update specific attributes. |
| `query(table, key_condition, ...)` | `sync_query(...)` | Find items by partition key. |
| `batch_write(table, put_items, delete_keys)` | `sync_batch_write(...)` | Write up to 25 items at once. |
| `batch_get(table, keys, consistent_read)` | `sync_batch_get(...)` | Get up to 100 items at once. |
| `transact_write(operations)` | `sync_transact_write(...)` | Atomic multi-item write. |
| `transact_get(gets)` | `sync_transact_get(...)` | Atomic multi-item read. |
Write methods (`put_item`, `update_item`, `delete_item`) support these optional parameters:
| Parameter | Description |
|---|---|
| `condition_expression` | Condition that must be true for the write to succeed |
| `expression_attribute_names` | Placeholders for reserved words |
| `expression_attribute_values` | Placeholders for values |
| `return_values_on_condition_check_failure` | If `True`, get the existing item on `ConditionalCheckFailedException` |
| `return_values` | Get item data back from the write. Saves an extra GET call. |
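A sketch of conditional writes, assuming the expression strings follow DynamoDB's native condition expression syntax (the `users` table is a placeholder):

```python
from pydynox import DynamoDBClient

client = DynamoDBClient()

# Create only if the key doesn't already exist
client.sync_put_item(
    "users",
    {"pk": "USER#1", "sk": "PROFILE", "name": "Alice"},
    condition_expression="attribute_not_exists(pk)",
)

# Use placeholders when an attribute name is a DynamoDB reserved word
client.sync_update_item(
    "users",
    {"pk": "USER#1", "sk": "PROFILE"},
    updates={"status": "inactive"},
    condition_expression="#s = :current",
    expression_attribute_names={"#s": "status"},
    expression_attribute_values={":current": "active"},
)
```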
### Returning values
Use return_values to get item data back from a write operation without making a separate GET call.
Each operation supports different values:
| Operation | Allowed values |
|---|---|
| `put_item` | `NONE`, `ALL_OLD` |
| `delete_item` | `NONE`, `ALL_OLD` |
| `update_item` | `NONE`, `ALL_OLD`, `UPDATED_OLD`, `ALL_NEW`, `UPDATED_NEW` |
When `return_values` is set (and not `NONE`), the method returns `dict | None` with the item attributes. Without it, you get `OperationMetrics` as before. Metrics are always available via `get_last_metrics()`.
#### put_item
Get the old item when overwriting an existing key. Returns `None` if the key didn't exist.
"""Get the old item when overwriting with put_item."""
import os
from pydynox import DynamoDBClient, set_default_client
endpoint = os.environ.get("AWS_ENDPOINT_URL", "http://localhost:4566")
client = DynamoDBClient(
endpoint_url=endpoint,
region="us-east-1",
access_key="testing",
secret_key="testing",
)
set_default_client(client)
TABLE = "rv_put_example"
if not client.sync_table_exists(TABLE):
client.sync_create_table(TABLE, partition_key=("pk", "S"), sort_key=("sk", "S"), wait=True)
# Save an item
client.sync_put_item(TABLE, {"pk": "USER#1", "sk": "PROFILE", "name": "Alice", "age": 25})
# Overwrite it and get the old version back
old_item = client.sync_put_item(
TABLE,
{"pk": "USER#1", "sk": "PROFILE", "name": "Bob", "age": 30},
return_values="ALL_OLD",
)
print(f"Old name: {old_item['name']}") # Alice
print(f"Old age: {old_item['age']}") # 25
# Metrics are available via get_last_metrics()
print(f"Duration: {client.get_last_metrics().duration_ms:.1f}ms")
# If the item didn't exist before, old_item is None
new_item = {"pk": "USER#NEW", "sk": "PROFILE", "name": "Charlie"}
old_item = client.sync_put_item(TABLE, new_item, return_values="ALL_OLD")
print(f"Old item for new key: {old_item}") # None
client.sync_delete_table(TABLE)
#### update_item
Get item data back after an update. `ALL_NEW` is the most common: it gives you the full item in one round trip.
"""Get item data back after an update_item call."""
import os
from pydynox import DynamoDBClient, set_default_client
endpoint = os.environ.get("AWS_ENDPOINT_URL", "http://localhost:4566")
client = DynamoDBClient(
endpoint_url=endpoint,
region="us-east-1",
access_key="testing",
secret_key="testing",
)
set_default_client(client)
TABLE = "rv_update_example"
if not client.sync_table_exists(TABLE):
client.sync_create_table(TABLE, partition_key=("pk", "S"), sort_key=("sk", "S"), wait=True)
client.sync_put_item(
TABLE, {"pk": "USER#1", "sk": "PROFILE", "name": "Alice", "age": 25, "status": "active"}
)
# ALL_NEW: get the full item after the update
item = client.sync_update_item(
TABLE,
{"pk": "USER#1", "sk": "PROFILE"},
updates={"name": "Bob", "age": 30},
return_values="ALL_NEW",
)
print(f"Full item after update: {item}")
# {'pk': 'USER#1', 'sk': 'PROFILE', 'name': 'Bob', 'age': 30, 'status': 'active'}
# UPDATED_NEW: only the fields that changed (new values)
changed = client.sync_update_item(
TABLE,
{"pk": "USER#1", "sk": "PROFILE"},
updates={"age": 35},
return_values="UPDATED_NEW",
)
print(f"Changed fields (new): {changed}")
# {'age': 35}
# ALL_OLD: the full item before the update
old = client.sync_update_item(
TABLE,
{"pk": "USER#1", "sk": "PROFILE"},
updates={"name": "Charlie"},
return_values="ALL_OLD",
)
print(f"Full item before update: {old}")
# {'pk': 'USER#1', 'sk': 'PROFILE', 'name': 'Bob', 'age': 35, 'status': 'active'}
# UPDATED_OLD: only the fields that changed (old values)
old_changed = client.sync_update_item(
TABLE,
{"pk": "USER#1", "sk": "PROFILE"},
updates={"age": 40},
return_values="UPDATED_OLD",
)
print(f"Changed fields (old): {old_changed}")
# {'age': 35}
# Metrics are always available via get_last_metrics()
print(f"Duration: {client.get_last_metrics().duration_ms:.1f}ms")
client.sync_delete_table(TABLE)
#### delete_item
Get the deleted item back. Useful when you need to archive or log what was removed.
"""Get the deleted item back from delete_item."""
import os
from pydynox import DynamoDBClient, set_default_client
endpoint = os.environ.get("AWS_ENDPOINT_URL", "http://localhost:4566")
client = DynamoDBClient(
endpoint_url=endpoint,
region="us-east-1",
access_key="testing",
secret_key="testing",
)
set_default_client(client)
TABLE = "rv_delete_example"
if not client.sync_table_exists(TABLE):
client.sync_create_table(TABLE, partition_key=("pk", "S"), sort_key=("sk", "S"), wait=True)
client.sync_put_item(TABLE, {"pk": "USER#1", "sk": "PROFILE", "name": "Alice", "age": 25})
# Delete and get the item that was removed
deleted_item = client.sync_delete_item(
TABLE,
{"pk": "USER#1", "sk": "PROFILE"},
return_values="ALL_OLD",
)
print(f"Deleted: {deleted_item['name']}") # Alice
# Metrics are available via get_last_metrics()
print(f"Duration: {client.get_last_metrics().duration_ms:.1f}ms")
# If the item didn't exist, deleted_item is None
deleted_item = client.sync_delete_item(
TABLE,
{"pk": "USER#GHOST", "sk": "NONE"},
return_values="ALL_OLD",
)
print(f"Deleted non-existent: {deleted_item}") # None
client.sync_delete_table(TABLE)
> **Tip:** `return_values="ALL_NEW"` on `update_item` is the most useful one. You get the full item after the update in one call instead of doing update + get.
### Utility methods
| Method | Description |
|---|---|
| `ping()` | Check if the client can connect to DynamoDB. Returns `True` or `False`. |
| `get_region()` | Get the AWS region this client is configured for. |
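A short sketch of both utility methods, useful as a startup health check:

```python
from pydynox import DynamoDBClient

client = DynamoDBClient()

# Verify connectivity before serving traffic
if client.ping():
    print(f"Connected to DynamoDB in {client.get_region()}")
else:
    print("DynamoDB is unreachable; check credentials and network")
```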
See async operations for examples and best practices.
## Next steps
- Models - Define models with typed attributes
- Rate limiting - Control throughput
- Async - Async operations