
Amazon EMR Serverless observability, Part 1: Monitor Amazon EMR Serverless workers in near real time using Amazon CloudWatch


Amazon EMR Serverless allows you to run open source big data frameworks such as Apache Spark and Apache Hive without managing clusters and servers. With EMR Serverless, you can run analytics workloads at any scale with automatic scaling that resizes resources in seconds to meet changing data volumes and processing requirements.

We have launched job worker metrics in Amazon CloudWatch for EMR Serverless. This feature allows you to monitor vCPUs, memory, ephemeral storage, and disk I/O allocation and usage metrics at an aggregate worker level for your Spark and Hive jobs.

This post is part of a series about EMR Serverless observability. In this post, we discuss how to use these CloudWatch metrics to monitor EMR Serverless workers in near real time.

CloudWatch metrics for EMR Serverless

At the per-Spark-job level, EMR Serverless emits the following new metrics to CloudWatch for both drivers and executors. These metrics provide granular insights into job performance, bottlenecks, and resource usage (see the sample CLI query after the list).

  • WorkerCpuAllocated – The total number of vCPU cores allocated for workers in a job run
  • WorkerCpuUsed – The total number of vCPU cores used by workers in a job run
  • WorkerMemoryAllocated – The total memory in GB allocated for workers in a job run
  • WorkerMemoryUsed – The total memory in GB used by workers in a job run
  • WorkerEphemeralStorageAllocated – The number of bytes of ephemeral storage allocated for workers in a job run
  • WorkerEphemeralStorageUsed – The number of bytes of ephemeral storage used by workers in a job run
  • WorkerStorageReadBytes – The number of bytes read from storage by workers in a job run
  • WorkerStorageWriteBytes – The number of bytes written to storage by workers in a job run
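
These worker-level metrics can also be pulled directly with the AWS CLI. The following is a minimal sketch that assumes the metrics are published in the AWS/EMRServerless namespace with ApplicationId, JobRunId, and WorkerType dimensions; verify the exact namespace, dimension names, and dimension values for your account in the CloudWatch console before relying on them.

# Sketch: fetch the aggregate executor vCPU usage for one job run, sampled every minute.
# The namespace, dimension names, and the WorkerType value are assumptions used to illustrate the call shape.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EMRServerless \
  --metric-name WorkerCpuUsed \
  --dimensions Name=ApplicationId,Value=<application-id> \
               Name=JobRunId,Value=<job-run-id> \
               Name=WorkerType,Value=Spark_Executors \
  --start-time 2025-10-24T00:00:00Z \
  --end-time 2025-10-24T01:00:00Z \
  --period 60 \
  --statistics Maximum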

The following are the benefits of monitoring your EMR Serverless jobs with CloudWatch:

  • Optimize resource utilization – You can gain insights into resource utilization patterns and optimize your EMR Serverless configurations for better efficiency and cost savings. For example, underutilization of vCPUs or memory can reveal resource wastage, allowing you to optimize worker sizes to achieve potential cost savings.
  • Diagnose common errors – You can identify root causes and mitigations for common errors without log diving. For example, you can monitor the usage of ephemeral storage and mitigate disk bottlenecks by preemptively allocating more storage per worker.
  • Gain near real-time insights – CloudWatch provides near real-time monitoring capabilities, allowing you to track the performance of your EMR Serverless jobs as they are running, for quick detection of any anomalies or performance issues.
  • Configure alerts and notifications – CloudWatch lets you set up alarms using Amazon Simple Notification Service (Amazon SNS) based on predefined thresholds, allowing you to receive notifications through email or text message when specific metrics reach critical levels (see the alarm sketch after this list).
  • Conduct historical analysis – CloudWatch stores historical data, allowing you to analyze trends over time, identify patterns, and make informed decisions for capacity planning and workload optimization.
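
As a concrete example of the alerting bullet above, the following is a minimal sketch that creates a CloudWatch alarm on aggregate executor ephemeral storage usage and notifies an existing SNS topic. The namespace, dimension names, WorkerType value, and the byte threshold are assumptions for illustration; adjust them to match your own metrics and limits.

# Sketch: notify an SNS topic if the executors' aggregate ephemeral storage usage
# for a job run stays above ~1.7 TB (illustrative threshold, in bytes) for 5 minutes.
aws cloudwatch put-metric-alarm \
  --alarm-name emrs-executor-storage-high \
  --namespace AWS/EMRServerless \
  --metric-name WorkerEphemeralStorageUsed \
  --dimensions Name=ApplicationId,Value=<application-id> \
               Name=JobRunId,Value=<job-run-id> \
               Name=WorkerType,Value=Spark_Executors \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 5 \
  --threshold 1700000000000 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions <sns-topic-arn>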

Solution overview

To further enhance this observability experience, we have created a solution that gathers all these metrics on a single CloudWatch dashboard for an EMR Serverless application. You need to launch one AWS CloudFormation template per EMR Serverless application. You can monitor all the jobs submitted to a single EMR Serverless application using the same CloudWatch dashboard. To learn more about this dashboard and deploy this solution into your own account, refer to the EMR Serverless CloudWatch Dashboard GitHub repository.

In the following sections, we walk you through how you can use this dashboard to perform the following actions:

  • Optimize your resource utilization to save costs without impacting job performance
  • Diagnose failures due to common errors without the need for log diving and resolve those errors optimally

Prerequisites

To run the sample jobs provided in this post, you need to create an EMR Serverless application with default settings using the AWS Management Console or AWS Command Line Interface (AWS CLI), and then launch the CloudFormation template from the GitHub repo with the EMR Serverless application ID provided as the input to the template.

You need to submit all the jobs in this post to the same EMR Serverless application. If you want to monitor a different application, you can deploy this template for your own EMR Serverless application ID.
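
The following is a rough sketch of both prerequisite steps using the AWS CLI. The release label, application name, template file name, and the EMRServerlessApplicationID parameter name are illustrative assumptions; use a release label available in your Region and the parameter names that the template in the GitHub repo actually declares.

# 1. Create an EMR Serverless Spark application with default settings.
#    The release label and name below are illustrative.
aws emr-serverless create-application \
  --type SPARK \
  --name emrs-cw-dashboard-demo \
  --release-label emr-7.1.0

# 2. Deploy the dashboard stack, passing the application ID returned above.
#    The template file and parameter name are assumptions; check the repository.
aws cloudformation deploy \
  --stack-name emrs-cw-dashboard \
  --template-file emr_serverless_cloudwatch_dashboard.yaml \
  --parameter-overrides EMRServerlessApplicationID=<application-id>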

Optimize resource utilization

When running Spark jobs, you often start with the default configurations. It can be challenging to optimize your workload without any visibility into actual resource usage. Some of the most common configurations that we've seen customers adjust are spark.driver.cores, spark.driver.memory, spark.executor.cores, and spark.executor.memory.

To illustrate how the newly added CloudWatch dashboard worker-level metrics can help you fine-tune your job configurations for better price-performance and enhanced resource utilization, let's run the following Spark job, which uses the NOAA Integrated Surface Database (ISD) dataset to run some transformations and aggregations.

Use the following command to run this job on EMR Serverless. Provide your Amazon Simple Storage Service (Amazon S3) bucket and the EMR Serverless application ID for which you launched the CloudFormation template. Make sure to use the same application ID to submit all the sample jobs in this post. Additionally, provide an AWS Identity and Access Management (IAM) runtime role.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-1 \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/windycity.py",
      "entryPointArguments": ["s3://noaa-global-hourly-pds/2024/", "s3://<bucket>/emrs-cw-dashboard-test-1/"]
    }
  }'

Now let's check the executor vCPUs and memory from the CloudWatch dashboard.

This job was submitted with default EMR Serverless Spark configurations. From the Executor CPU Allocated metric in the preceding screenshot, the job was allocated 396 vCPUs in total (99 executors * 4 vCPUs per executor). However, the job only used a maximum of 110 vCPUs based on Executor CPU Used. This indicates oversubscription of vCPU resources. Similarly, the job was allocated 1,584 GB of memory in total based on Executor Memory Allocated. However, from the Executor Memory Used metric, we see that the job only used 176 GB of memory during the run, indicating memory oversubscription.

Now let's rerun this job with the following adjusted configurations.

                                        Original Job               Rerun Job
                                        (Default Configuration)    (Adjusted Configuration)
spark.executor.memory                   14 GB                      3 GB
spark.executor.cores                    4                          2
spark.dynamicAllocation.maxExecutors    99                         30
Total Resource Utilization              6.521 vCPU-hours           1.739 vCPU-hours
                                        26.084 memoryGB-hours      3.688 memoryGB-hours
                                        32.606 storageGB-hours     17.394 storageGB-hours
Billable Resource Utilization           7.046 vCPU-hours           1.739 vCPU-hours
                                        28.182 memoryGB-hours      3.688 memoryGB-hours
                                        0 storageGB-hours          0 storageGB-hours

We use the following code:

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-2 \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/windycity.py",
      "entryPointArguments": ["s3://noaa-global-hourly-pds/2024/", "s3://<bucket>/emrs-cw-dashboard-test-2/"],
      "sparkSubmitParameters": "--conf spark.driver.cores=2 --conf spark.driver.memory=3g --conf spark.executor.memory=3g --conf spark.executor.cores=2 --conf spark.dynamicAllocation.maxExecutors=30"
    }
  }'

Let's check the executor metrics from the CloudWatch dashboard again for this job run.

In the second job, we see a lower allocation of both vCPUs (396 vs. 60) and memory (1,584 GB vs. 120 GB), as expected, resulting in better utilization of resources. The original job ran for 4 minutes, 41 seconds. The second job took 4 minutes, 54 seconds. This reconfiguration resulted in roughly 79% cost savings without affecting job performance.
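
The roughly 79% figure follows from the billable usage in the preceding table. As a back-of-the-envelope check, assuming illustrative x86 rates of $0.052624 per vCPU-hour and $0.0057785 per memoryGB-hour (verify current pricing for your Region; storage isn't billed here because both runs stay within the default 20 GB per worker):

# Rough cost comparison of the two runs; the per-unit rates above are assumptions.
echo "scale=4; 7.046*0.052624 + 28.182*0.0057785" | bc   # original run: ~0.53 USD
echo "scale=4; 1.739*0.052624 + 3.688*0.0057785" | bc    # rerun:        ~0.11 USD (~79% lower)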

You can use these metrics to further optimize your job by increasing or decreasing the number of workers or the allocated resources.

Diagnose and resolve job failures

Using the CloudWatch dashboard, you can diagnose job failures due to issues related to CPU, memory, and storage, such as out of memory or no space left on device. This allows you to identify and resolve common errors quickly without having to check the logs or navigate through the Spark History Server. Additionally, because you can check the resource usage from the dashboard, you can fine-tune the configurations by increasing the required resources only as much as needed instead of oversubscribing to resources, which further saves costs.

Driver errors

To illustrate this use case, let's run the following Spark job, which creates a large Spark data frame with a few million rows. Typically, this operation is done by the Spark driver. While submitting the job, we also configure spark.rpc.message.maxSize, because it's required for task serialization of data frames with a large number of columns.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-3 \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/create-large-disk.py",
      "sparkSubmitParameters": "--conf spark.rpc.message.maxSize=2000"
    }
  }'

After a few minutes, the job failed with the error message "Encountered errors when releasing containers," as seen in the Job details section.

When encountering non-descriptive error messages, it becomes crucial to investigate further by analyzing the driver and executor logs. But before diving into the logs, let's first check the CloudWatch dashboard, specifically the driver metrics, because releasing containers is generally performed by the driver.

We can see that Driver CPU Used and Driver Storage Used are well within their respective allocated values. However, checking Driver Memory Allocated and Driver Memory Used, we can see that the driver was using the entire 16 GB of memory allocated to it. By default, EMR Serverless drivers are assigned 16 GB of memory.
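
If you want to script this check instead of reading it off the dashboard, a CloudWatch metric math query can express driver memory utilization as a percentage. The following is a sketch; the namespace, dimension names, and the Spark_Driver WorkerType value are assumptions, so confirm them against the metrics in your account.

# Sketch: compute driver memory utilization (%) as 100 * WorkerMemoryUsed / WorkerMemoryAllocated.
aws cloudwatch get-metric-data \
  --start-time 2025-10-24T00:00:00Z --end-time 2025-10-24T01:00:00Z \
  --metric-data-queries '[
    {"Id": "used", "ReturnData": false, "MetricStat": {"Stat": "Maximum", "Period": 60,
      "Metric": {"Namespace": "AWS/EMRServerless", "MetricName": "WorkerMemoryUsed",
        "Dimensions": [{"Name": "ApplicationId", "Value": "<application-id>"},
                       {"Name": "JobRunId", "Value": "<job-run-id>"},
                       {"Name": "WorkerType", "Value": "Spark_Driver"}]}}},
    {"Id": "alloc", "ReturnData": false, "MetricStat": {"Stat": "Maximum", "Period": 60,
      "Metric": {"Namespace": "AWS/EMRServerless", "MetricName": "WorkerMemoryAllocated",
        "Dimensions": [{"Name": "ApplicationId", "Value": "<application-id>"},
                       {"Name": "JobRunId", "Value": "<job-run-id>"},
                       {"Name": "WorkerType", "Value": "Spark_Driver"}]}}},
    {"Id": "utilization_pct", "Expression": "100 * used / alloc", "Label": "Driver memory utilization (%)"}
  ]'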

Let's rerun the job with more driver memory allocated. Let's set driver memory to 27 GB as the starting point, because spark.driver.memory + spark.driver.memoryOverhead should be less than 30 GB for the default worker type (with the default memory overhead of 10%, 27 GB + 2.7 GB stays just under that limit). spark.rpc.message.maxSize will remain unchanged.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-4 \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/create-large-disk.py",
      "sparkSubmitParameters": "--conf spark.driver.memory=27G --conf spark.rpc.message.maxSize=2000"
    }
  }'

The job succeeded this time. Let's check the CloudWatch dashboard to observe driver memory utilization.

As we can see, the allocated memory is now 30 GB, but the actual driver memory usage didn't exceed 21 GB during the job run. Therefore, we can further optimize costs here by reducing the value of spark.driver.memory. We reran the same job with spark.driver.memory set to 22 GB, and the job still succeeded with better driver memory utilization.

Executor errors

Using CloudWatch for observability is ideal for diagnosing driver-related issues, because there is only one driver per job and the driver resources used reflect the actual resource usage of that single driver. In contrast, executor metrics are aggregated across all the workers. Even so, you can use this dashboard to provide only an adequate amount of resources to make your job succeed, thereby avoiding oversubscription of resources.

To illustrate, let's run the following Spark job, which simulates uniform disk over-utilization across all workers by processing very large NOAA datasets from multiple years. This job also transiently caches a very large data frame on disk.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-5 \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/noaa-disk.py"
    }
  }'

After a few minutes, we can see that the job failed with a "No space left on device" error in the Job details section, which indicates that some of the workers ran out of disk space.

Checking the Running Executors metric from the dashboard, we can identify that there were 99 executor workers running. Each worker comes with 20 GB of storage by default.

Because this is a Spark task failure, let's check the Executor Storage Allocated and Executor Storage Used metrics from the dashboard (the driver won't run any tasks).

As we can see, the 99 executors used up a total of 1,940 GB of the total allocated executor storage of 2,126 GB. This includes both the data shuffled by the executors and the storage used for caching the data frame. We don't see the full 2,126 GB being utilized in this graph because a few of the 99 executors might not have been holding much data when the job failed (before those executors could start processing tasks and store chunks of the data frame).

Let's rerun the same job but with an increased executor disk size using the parameter spark.emr-serverless.executor.disk. Let's try 40 GB of disk per executor as a starting point.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-test-6 \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/noaa-disk.py",
      "sparkSubmitParameters": "--conf spark.emr-serverless.executor.disk=40G"
    }
  }'

This time, the job ran successfully. Let's check the Executor Storage Allocated and Executor Storage Used metrics.

Executor Storage Allocated is now 4,251 GB because we doubled the value of spark.emr-serverless.executor.disk. Although there is now twice as much aggregate executor storage, the job still used only a maximum of 1,940 GB out of 4,251 GB. This indicates that our executors were likely running out of disk space by only a few GBs. Therefore, we can try setting spark.emr-serverless.executor.disk to an even lower value, such as 25 GB or 30 GB instead of 40 GB, to save storage costs, as we did in the previous scenario. In addition, you can monitor Executor Storage Read Bytes and Executor Storage Write Bytes to see whether your job is I/O intensive. In that case, you can use the shuffle-optimized disks feature of EMR Serverless to further improve your job's I/O performance.
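
For example, an I/O-heavy variant of the earlier submission could request shuffle-optimized disks for the executors. The following is a sketch assuming the spark.emr-serverless.executor.disk.type property with the value shuffle_optimized; the disk size shown is illustrative, and the sizes supported for this disk type may differ from standard disks, so check the EMR Serverless documentation.

aws emr-serverless start-job-run \
  --name emrs-cw-dashboard-shuffle-optimized \
  --application-id <application-id> \
  --execution-role-arn <execution-role-arn> \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://<bucket>/scripts/noaa-disk.py",
      "sparkSubmitParameters": "--conf spark.emr-serverless.executor.disk.type=shuffle_optimized --conf spark.emr-serverless.executor.disk=500G"
    }
  }'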

The dashboard is also useful for capturing information about the transient storage used while caching or persisting data frames, including spill-to-disk scenarios. The Storage tab of the Spark History Server records any caching activities, as seen in the following screenshot. However, this data is lost from the Spark History Server after the cache is evicted or when the job finishes. Therefore, Executor Storage Used can be used to analyze a failed job run caused by transient storage issues.

In this particular example, the data was evenly distributed among the executors. However, if you have a data skew (for example, only 1–2 executors out of 99 process the most data and, as a result, your job runs out of disk space), the CloudWatch dashboard won't accurately capture this scenario, because the storage data is aggregated across all the executors for a job. For diagnosing issues at the individual executor level, we need to track per-executor-level metrics. We explore more advanced examples of how per-worker-level metrics can help you identify, mitigate, and resolve hard-to-find issues through EMR Serverless integration with Amazon Managed Service for Prometheus.

Conclusion

In this post, you learned how to effectively manage and optimize your EMR Serverless application using a single CloudWatch dashboard with enhanced EMR Serverless metrics. These metrics are available in all AWS Regions where EMR Serverless is available. For more details about this feature, refer to Job-level monitoring.


About the Authors

Kashif Khan is a Sr. Analytics Specialist Solutions Architect at AWS, specializing in big data services like Amazon EMR, AWS Lake Formation, AWS Glue, Amazon Athena, and Amazon DataZone. With over a decade of experience in the big data domain, he possesses extensive expertise in architecting scalable and robust solutions. His role involves providing architectural guidance and collaborating closely with customers to design tailored solutions using AWS analytics services to unlock the full potential of their data.

Veena Vasudevan is a Principal Partner Solutions Architect and Data & AI specialist at AWS. She helps customers and partners build highly optimized, scalable, and secure solutions; modernize their architectures; and migrate their big data, analytics, and AI/ML workloads to AWS.
