Today, we’re announcing three new capabilities for Amazon S3 Storage Lens that provide deeper insights into your storage performance and utilization patterns. With the addition of performance metrics, support for analyzing billions of prefixes, and direct export to Amazon S3 Tables, you have the tools you need to optimize application performance, reduce costs, and make data-driven decisions about your Amazon S3 storage strategy.
New performance metric categories
S3 Storage Lens now includes eight new performance metric categories that help you identify and resolve performance bottlenecks across your organization. These are available at the organization, account, bucket, and prefix levels. For example, the service helps you identify small objects in a bucket or prefix that can slow down application performance. This can be mitigated by batching small objects or by using the Amazon S3 Express One Zone storage class for higher performance small object workloads.
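As an illustration of the batching mitigation, the sketch below bundles many small objects into a single tar archive held in memory, so one larger upload replaces many small PUT requests. The function name `bundle_objects` is my own for this example, not part of any AWS SDK.

```python
import io
import tarfile

def bundle_objects(objects: dict[str, bytes]) -> bytes:
    """Pack many small objects into one in-memory tar archive.

    Uploading the single archive replaces many small PUT requests with
    one larger request, which generally performs better on Amazon S3.
    """
    buffer = io.BytesIO()
    with tarfile.open(fileobj=buffer, mode="w") as archive:
        for key, body in objects.items():
            info = tarfile.TarInfo(name=key)
            info.size = len(body)
            archive.addfile(info, io.BytesIO(body))
    return buffer.getvalue()

# The resulting bytes could then be uploaded once, for example with boto3:
# s3.put_object(Bucket="my-bucket", Key="batch.tar", Body=archive_bytes)
```

The trade-off is that individual objects must be extracted from the archive on read, so this suits datasets that are written and read together.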
To access the new performance metrics, you need to enable performance metrics in the S3 Storage Lens advanced tier when creating a new Storage Lens dashboard or editing an existing configuration.
| Metric category | Details | Use case | Mitigation |
| --- | --- | --- | --- |
| Read request size | Distribution of read request sizes (GET) by day | Identify datasets with small read request patterns that slow down performance | Small requests: batch small objects or use Amazon S3 Express One Zone for high-performance small object workloads |
| Write request size | Distribution of write request sizes (PUT, POST, COPY, and UploadPart) by day | Identify datasets with small write request patterns that slow down performance | Large requests: parallelize requests, use multipart upload (MPU), or use AWS CRT |
| Storage size | Distribution of object sizes | Identify datasets with small objects that slow down performance | Small object sizes: consider bundling small objects |
| Concurrent PUT 503 errors | Number of 503 errors caused by concurrent PUT operations on the same object | Identify prefixes with concurrent PUT throttling that slows down performance | For a single writer, adjust retry behavior or use Amazon S3 Express One Zone. For multiple writers, use a consensus mechanism or use Amazon S3 Express One Zone |
| Cross-Region data transfer | Bytes transferred and requests sent cross-Region and in-Region | Identify potential performance and cost degradation caused by cross-Region data access | Co-locate compute with data in the same AWS Region |
| Unique objects accessed | Number or percentage of unique objects accessed per day | Identify datasets where a small subset of objects is frequently accessed. These can be moved to a higher performance storage tier for better performance | Consider moving active data to Amazon S3 Express One Zone or other caching solutions |
| FirstByteLatency (existing Amazon CloudWatch metric) | Daily average of the first byte latency metric | The daily average per-request time from the complete request being received to when the response starts to be returned | |
| TotalRequestLatency (existing Amazon CloudWatch metric) | Daily average of total request latency | The daily average elapsed per-request time from the first byte received to the last byte sent | |
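Enabling these metrics can also be done programmatically by passing a Storage Lens configuration document to the `PutStorageLensConfiguration` action of the S3 Control API. The sketch below builds such a document in plain Python; the `"PerformanceMetrics"` key is an assumption on my part for the new category, so verify the exact field name against the current API reference. The other fields are long-standing parts of the configuration schema.

```python
# Sketch of a Storage Lens configuration enabling advanced-tier metrics
# with prefix aggregation. The "PerformanceMetrics" block is an assumed
# field name; check it against the current S3 Control API reference.
def build_storage_lens_config(dashboard_id: str) -> dict:
    return {
        "Id": dashboard_id,
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},    # advanced tier
            "PerformanceMetrics": {"IsEnabled": True},  # assumed field name
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                "PrefixLevel": {                        # prefix aggregation
                    "StorageMetrics": {"IsEnabled": True}
                },
            },
        },
    }

# The document would then be passed to boto3's S3Control client:
# s3control.put_storage_lens_configuration(
#     ConfigId="my-dashboard", AccountId="111122223333",
#     StorageLensConfiguration=build_storage_lens_config("my-dashboard"))
```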
How it works
In the Amazon S3 console, I choose Create Storage Lens dashboard to create a new dashboard. You can also edit an existing dashboard configuration. I then configure general settings such as the Dashboard name, Status, and optional Tags. Then, I choose Next.

Next, I define the scope of the dashboard by selecting Include all Regions and Include all buckets, or by specifying the Regions and buckets to be included.

I opt in to the Advanced tier in the Storage Lens dashboard configuration, select Performance metrics, then choose Next.

Next, I select Prefix aggregation as an additional metrics aggregation, then leave the rest of the settings as default before I choose Next.

I select the Default metrics report, then General purpose bucket as the bucket type, and then select the Amazon S3 bucket in my AWS account as the Destination bucket. I leave the rest of the settings as default, then choose Next.

I review all the information before I choose Submit to finalize the process.

After it’s enabled, I receive daily performance metrics directly in the Storage Lens console dashboard. You can also choose to export the report in CSV or Parquet format to any bucket in your account, or publish it to Amazon CloudWatch. The performance metrics are aggregated and published daily and are available at multiple levels: organization, account, bucket, and prefix. In the dropdown menu, I choose % concurrent PUT 503 error for the Metric, Last 30 days for the Date range, and 10 for the Top N buckets.

The Concurrent PUT 503 error count metric tracks the number of 503 errors generated by simultaneous PUT operations on the same object. Throttling errors can degrade application performance. For a single writer, adjust retry behavior or use a higher performance storage class such as Amazon S3 Express One Zone to mitigate concurrent PUT 503 errors. For a multiple-writer scenario, use a consensus mechanism to avoid concurrent PUT 503 errors, or use a higher performance storage class such as Amazon S3 Express One Zone.
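One way to "adjust retry behavior" is exponential backoff with full jitter around the PUT call, so that colliding writers spread their retries out. The sketch below is a generic helper of my own, not an official AWS client setting; with boto3 you would normally tune the client's built-in retries instead (see the usage note).

```python
import random
import time

def put_with_backoff(do_put, max_attempts: int = 5, base_delay: float = 0.1):
    """Retry a PUT-like callable on 503 (SlowDown) responses.

    `do_put` is any zero-argument callable that raises an exception whose
    message contains "503" when throttled. Delays grow exponentially with
    full jitter, which helps desynchronize concurrent writers.
    """
    for attempt in range(max_attempts):
        try:
            return do_put()
        except Exception as err:
            # Re-raise anything that isn't throttling, or the final failure.
            if "503" not in str(err) or attempt == max_attempts - 1:
                raise
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

With boto3, the equivalent knob is `botocore.config.Config(retries={"max_attempts": 10, "mode": "adaptive"})` passed when creating the S3 client, which applies client-side rate limiting on throttling errors.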
Complete analytics for all prefixes in your S3 buckets
S3 Storage Lens now supports analytics for all prefixes in your S3 buckets through a new Expanded prefixes metrics report. This capability removes previous limitations that restricted analysis to prefixes meeting a 1% size threshold and a maximum depth of 10 levels. You can now track up to billions of prefixes per bucket for analysis at the most granular prefix level, regardless of size or depth.
The Expanded prefixes metrics report includes all existing S3 Storage Lens metric categories: storage usage, activity metrics (requests and bytes transferred), data protection metrics, and detailed status code metrics.
How to get started
I follow the same steps outlined in the How it works section to create or update the Storage Lens dashboard. In Step 4 in the console, where you select export options, you can select the new Expanded prefixes metrics report. After that, I can export the expanded prefixes metrics report in CSV or Parquet format to any general purpose bucket in my account for efficient querying of my Storage Lens data.

Good to know
This enhancement addresses scenarios where organizations need granular visibility across their entire prefix structure. For example, you can identify prefixes with incomplete multipart uploads to reduce costs, track compliance across your entire prefix structure for encryption and replication requirements, and detect performance issues at the most granular level.
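To make the incomplete-multipart-upload example concrete, here is a sketch that scans rows of an exported CSV report for prefixes carrying incomplete MPU bytes. The column names (`record_type`, `record_value`, `metric_name`, `metric_value`) and the metric name follow the general shape of Storage Lens exports, but they are assumptions here; check them against your actual report schema.

```python
import csv
import io

def prefixes_with_incomplete_mpu(report_csv: str, min_bytes: int = 0) -> list[str]:
    """Return prefixes whose incomplete-MPU bytes exceed a threshold.

    Assumes the export uses the columns record_type, record_value,
    metric_name, and metric_value; verify against your own report.
    """
    hits = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if (row["record_type"] == "PREFIX"
                and row["metric_name"] == "IncompleteMultipartUploadStorageBytes"
                and float(row["metric_value"]) > min_bytes):
            hits.append(row["record_value"])
    return hits
```

Flagged prefixes are candidates for an S3 Lifecycle rule that aborts incomplete multipart uploads after a set number of days.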
Export S3 Storage Lens metrics to S3 Tables
S3 Storage Lens metrics can now be automatically exported to S3 Tables, a fully managed AWS feature with built-in Apache Iceberg support. This integration provides daily automatic delivery of metrics to AWS managed S3 Tables for immediate querying without requiring additional processing infrastructure.
How to get started
I start by following the process outlined in Step 5 in the console, where I choose the export destination. This time, I choose Expanded prefixes metrics report. In addition to General purpose bucket, I choose Table bucket.
The new Storage Lens metrics are exported to new tables in an AWS managed bucket, aws-s3.

I select the expanded_prefixes_activity_metrics table to view API usage metrics for expanded prefix reports.

I can preview the table in the Amazon S3 console or use Amazon Athena to query it.
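As a sketch of what such a query might look like, the helper below builds an Athena-style SQL string over the exported table. The table name comes from the step above, but the column names (`record_value`, `metric_name`, `metric_value`) and the `AllRequests` metric name are assumptions to verify against your table's actual schema.

```python
# Build an example Athena-style query over the exported S3 Tables table.
# The table name is from the post; column and metric names are assumed
# and should be checked against the table's actual schema.
def top_request_prefixes_query(limit: int = 10) -> str:
    return (
        "SELECT record_value AS prefix, SUM(metric_value) AS total_requests\n"
        "FROM expanded_prefixes_activity_metrics\n"
        "WHERE metric_name = 'AllRequests'\n"
        "GROUP BY record_value\n"
        f"ORDER BY total_requests DESC\nLIMIT {limit}"
    )

# The string could be submitted with boto3's Athena client, for example:
# athena.start_query_execution(QueryString=top_request_prefixes_query(), ...)
```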

Good to know
S3 Tables integration with S3 Storage Lens simplifies metric analysis using familiar SQL tools and AWS analytics services such as Amazon Athena, Amazon QuickSight, Amazon EMR, and Amazon Redshift, without requiring a data pipeline. The metrics are automatically organized for optimal querying, with custom retention and encryption options to suit your needs.
This integration enables cross-account and cross-Region analysis, custom dashboard creation, and data correlation with other AWS services. For example, you can combine Storage Lens metrics with S3 Metadata to analyze prefix-level activity patterns and identify objects in prefixes with cold data that are eligible for transition to lower-cost storage tiers.
For your agentic AI workflows, you can use natural language to query S3 Storage Lens metrics in S3 Tables with the S3 Tables MCP Server. Agents can ask questions such as ‘which buckets grew the most last month?’ or ‘show me storage costs by storage class’ and get instant insights from your observability data.
Now available
All three enhancements are available in all AWS Regions where S3 Storage Lens is currently offered (except the China Regions and AWS GovCloud (US)).
These features are included in the Amazon S3 Storage Lens Advanced tier at no additional charge beyond standard Advanced tier pricing. For the S3 Tables export, you pay only for S3 Tables storage, maintenance, and queries. There is no additional charge for the export functionality itself.
To learn more about Amazon S3 Storage Lens performance metrics, support for billions of prefixes, and export to S3 Tables, refer to the Amazon S3 User Guide. For pricing details, visit the Amazon S3 pricing page.


