Export usage data to S3
The Aperture dashboard shows recent usage data, but it is not designed for long-term retention or integration with external analytics tools. Exporting usage data to S3-compatible storage gives you a durable record of every LLM session for compliance auditing, cost analysis, and custom reporting.
Aperture's S3 exporter periodically writes session logs to the bucket you configure. It supports AWS S3, Google Cloud Storage, MinIO, Backblaze B2, and other S3-compatible services.
Prerequisites
Before you begin, ensure you have the following:
- An Aperture instance with at least one provider configured.
- Admin access to the Aperture configuration.
- An S3-compatible bucket with write access. You need the bucket name, region, and credentials (access key ID and secret).
Configure the S3 exporter
Open the Settings page in the Aperture dashboard and add an exporters section to your configuration:
```json
"exporters": {
  "s3": {
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
    "access_secret": "YOUR_AWS_SECRET_KEY"
  }
}
```
Setting bucket_name to a non-empty value enables the S3 exporter. Aperture begins exporting session logs automatically on the next export cycle.
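The enablement rule above can be sketched as a small validation helper. This is illustrative only, not part of Aperture; the field names come from the configuration snippet above.

```python
# Required fields per the prerequisites: bucket, region, and credentials.
REQUIRED_FIELDS = ("bucket_name", "region", "access_key_id", "access_secret")

def is_s3_exporter_enabled(exporters: dict) -> bool:
    """The exporter runs only when bucket_name is a non-empty string."""
    s3 = exporters.get("s3", {})
    return bool(s3.get("bucket_name"))

def missing_fields(exporters: dict) -> list:
    """List required fields that are absent or empty, for a pre-save check."""
    s3 = exporters.get("s3", {})
    return [f for f in REQUIRED_FIELDS if not s3.get(f)]
```

For example, a configuration with only `bucket_name` set would enable the exporter but fail authentication, so checking for missing credential fields before saving can save a debugging cycle.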
Use a non-AWS S3-compatible service
Aperture also supports S3-compatible storage services beyond AWS. For Google Cloud Storage, MinIO, Backblaze B2, or any other service with an S3-compatible API, use the same configuration and add an endpoint field set to the service's S3-compatible API URL:
```json
"exporters": {
  "s3": {
    "endpoint": "https://storage.googleapis.com",
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "YOUR_ACCESS_KEY_ID",
    "access_secret": "YOUR_SECRET_KEY"
  }
}
```
The region field is required even for non-AWS services because the AWS SDK validates it.
Customize export behavior
You can adjust how frequently Aperture exports data and how many records it includes per batch using the every and limit fields:
```json
"exporters": {
  "s3": {
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "YOUR_AWS_ACCESS_KEY_ID",
    "access_secret": "YOUR_AWS_SECRET_KEY",
    "prefix": "prod",
    "every": 1800,
    "limit": 2000
  }
}
```
The following table summarizes these fields:
| Field | Default | Description |
|---|---|---|
| prefix | "" | (Optional) Path prefix for S3 objects. Must not end with /. |
| every | 3600 | Seconds between export cycles. The example above exports every 30 minutes. |
| limit | 1000 | Maximum records per export batch. Aperture caps this at 2500. |
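The defaults and the cap in the table can be expressed as a short helper that resolves the values Aperture will actually use. This is a sketch for reasoning about the settings, not Aperture's own code.

```python
EVERY_DEFAULT = 3600   # default seconds between export cycles
LIMIT_DEFAULT = 1000   # default records per batch
LIMIT_MAX = 2500       # hard cap documented for limit

def effective_settings(s3: dict) -> dict:
    """Resolve the optional tuning fields, applying defaults and the limit cap."""
    prefix = s3.get("prefix", "")
    if prefix.endswith("/"):
        raise ValueError("prefix must not end with '/'")
    return {
        "prefix": prefix,
        "every": s3.get("every", EVERY_DEFAULT),
        "limit": min(s3.get("limit", LIMIT_DEFAULT), LIMIT_MAX),
    }
```

For instance, configuring "limit": 4000 still results in batches of at most 2500 records, and omitting every falls back to hourly exports.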
Verify the export
After saving the configuration, wait for the next export cycle (based on your every setting) and check the S3 bucket for new objects. The objects appear under the configured prefix (if set) and contain session log records in JSON format.
If no objects appear after the expected interval, check the Aperture server logs for S3-related errors such as authentication failures or permission issues.
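Once an object lands in the bucket, you can download it and inspect the records locally. The sketch below assumes the export is either a single JSON document or newline-delimited JSON; the exact layout may differ by Aperture version, and the export.json path is a hypothetical local copy.

```python
import json

def load_session_records(path: str) -> list:
    """Parse a downloaded export file into a list of session log records.

    Tries the whole file as one JSON document first, then falls back to
    newline-delimited JSON (one record per line).
    """
    with open(path, encoding="utf-8") as f:
        text = f.read()
    try:
        data = json.loads(text)
        return data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        return [json.loads(line) for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    records = load_session_records("export.json")  # hypothetical downloaded object
    print(f"loaded {len(records)} session records")
```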
Next steps
- Build a custom webhook to send real-time event data to your own services.
- Review the Aperture dashboard reference for details on the built-in usage views.
- See the exporters configuration reference for the complete list of fields.