Sunday, March 15, 2026

Reducing costs for shuffle-heavy Apache Spark workloads with serverless storage for Amazon EMR Serverless


At re:Invent 2025, we introduced serverless storage for Amazon EMR Serverless, eliminating the need to provision local disk storage for Apache Spark workloads. Serverless storage for Amazon EMR Serverless reduces data processing costs by up to 20% while helping prevent job failures from disk capacity constraints.

In this post, we explore the cost improvements we observed when benchmarking Apache Spark jobs with serverless storage on EMR Serverless. We take a deeper look at how serverless storage helps reduce costs for shuffle-heavy Spark workloads, and we outline practical guidance on identifying the types of queries that can benefit most from enabling serverless storage in your EMR Serverless Spark jobs.

Benchmark results for EMR 7.12 with serverless storage against standard disks

We conducted the performance and cost savings benchmarking using the TPC-DS dataset at 3TB scale, running 100+ queries that included a mix of high and low shuffle operations. The test configuration used Dynamic Resource Allocation (DRA) with no pre-initialized capacity. The system was set up with 20GB of disk space, and Spark configurations included 4 cores and 14GB memory for both driver and executor, with dynamic allocation starting at 3 initial executors (spark.dynamicAllocation.initialExecutors = 3). A comparative analysis was performed between local disk storage and serverless storage configurations. The goal was to assess both total and average cost implications between these storage approaches.
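As a rough sketch of how this benchmark configuration might be expressed, the following assembles the Spark submit parameters described above into an EMR Serverless job driver. The entry point, application ID, and role ARN are hypothetical placeholders, and enabling serverless storage itself is configured separately per the feature documentation:

```python
# Sketch of the benchmark's Spark configuration as EMR Serverless
# spark-submit parameters. Script path and identifiers in the comments
# are hypothetical placeholders, not values from the benchmark.
spark_submit_params = " ".join([
    "--conf spark.driver.cores=4",
    "--conf spark.driver.memory=14g",
    "--conf spark.executor.cores=4",
    "--conf spark.executor.memory=14g",
    "--conf spark.emr-serverless.driver.disk=20g",
    "--conf spark.emr-serverless.executor.disk=20g",
    "--conf spark.dynamicAllocation.enabled=true",
    "--conf spark.dynamicAllocation.initialExecutors=3",
])

job_driver = {
    "sparkSubmit": {
        "entryPoint": "s3://my-bucket/tpcds-queries.py",  # hypothetical
        "sparkSubmitParameters": spark_submit_params,
    }
}

# To submit, this dict would be passed to
# boto3.client("emr-serverless").start_job_run(applicationId=...,
#     executionRoleArn=..., jobDriver=job_driver)
print(job_driver["sparkSubmit"]["sparkSubmitParameters"])
```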

The following table and chart compare the cost reduction we observed in the testing environment described above. Based on us-east-1 pricing, we observed a cost savings of more than 26% when using serverless storage.

Shuffle
Metric | Serverless storage | Standard disks | Savings
Total Cost ($) | 24.28 | 33.1 | 26.65%
Average Cost ($) | 0.233 | 0.318 | 26.73%
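The savings percentages in the table follow directly from the raw costs; a quick check:

```python
# Recompute the savings percentages from the raw benchmark costs above.
def savings_pct(serverless: float, standard: float) -> float:
    """Percentage saved by serverless storage relative to standard disks."""
    return (standard - serverless) / standard * 100

total_savings = savings_pct(24.28, 33.1)     # total cost across all queries
average_savings = savings_pct(0.233, 0.318)  # average cost per query

print(f"total: {total_savings:.2f}%, average: {average_savings:.2f}%")
```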

Average cost comparison between standard disks and serverless storage

% relative savings (per query) of serverless storage compared to standard disk shuffle

In this testing, we observed that serverless storage in EMR Serverless reduces cost for approximately 80% of TPC-DS queries. For the queries where it provides benefits, it delivers an average cost saving of approximately 47%, with savings of up to 85%. Queries that regress typically have low shuffle intensity, maintain high parallelism throughout execution, or complete quickly enough that executor scale-down opportunities are minimal. The following figure shows the percentage cost difference for each of the TPC-DS queries when serverless storage was enabled, compared to the baseline configuration without serverless storage. Positive values indicate cost savings (higher is better), while negative values indicate cost regressions.


Percentage cost savings per TPC-DS query with serverless storage enabled

Runtime comparison

There are significant cost savings due to the increased elasticity from terminating executors earlier. However, job completion time may increase because the shuffle data is stored in serverless storage rather than locally on the executors. The additional read and write latency for shuffle data contributes to the longer runtime. The following table and chart show the runtime comparison we observed in our testing environment.

Shuffle
Metric | Serverless storage | Standard disks | Runtime change
Total Duration (sec) | 6770.63 | 4908.52 | -37.94%
Average Duration (sec) | 65.1 | 47.2 | -37.92%

Runtime comparison

Storing shuffle data externally and decoupling it from the compute gave EMR Serverless the flexibility to turn off unused resources dynamically, because the state information has been offloaded from the compute. However, these cost savings can be realized only when DRA is on. If DRA is turned off, Spark would keep these unused resources alive, adding to the total cost.
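To see how a roughly 38% longer runtime can still cost about 27% less, note that EMR Serverless bills for the resources actually held over time. Under the simplifying assumption that cost is proportional to average concurrent capacity multiplied by runtime, the tables above imply a sizable drop in average capacity:

```python
# Under the assumption that cost ~ average concurrent capacity x runtime,
# derive the implied capacity reduction from the benchmark tables above.
cost_ratio = 24.28 / 33.1          # serverless cost / standard cost   (~0.73)
runtime_ratio = 6770.63 / 4908.52  # serverless runtime / standard     (~1.38)

# Since capacity_ratio * runtime_ratio == cost_ratio:
capacity_ratio = cost_ratio / runtime_ratio
print(f"Implied average capacity: {capacity_ratio:.0%} of the baseline")
```

In other words, the benchmark jobs ran about 38% longer but held roughly half the average capacity, which is what releasing idle executors early buys.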

Query patterns that benefit from serverless storage

The cost savings from serverless storage depend heavily on how executor demand changes across the stages of a job. In this section, we examine common execution patterns and explain which query shapes are most likely to benefit from serverless storage for EMR Serverless and which query patterns may not benefit from shuffle externalization.

Inverted triangle pattern queries

To understand why externalizing the shuffle data can allow such significant cost savings, consider a simplified query. The following query calculates annual total sales from the TPC-DS dataset by joining the store_sales and date_dim tables, summing the sales amounts per year, and ordering the results.

SELECT d_year, SUM(ss_net_paid) AS total_sales
FROM store_sales
JOIN date_dim ON store_sales.ss_sold_date_sk = date_dim.d_date_sk
GROUP BY d_year
ORDER BY d_year;

This query exhibits high executor demand during the map phase and low executor demand during the reduce phase: it is an aggregation query with a high-cardinality input and a low-cardinality GROUP BY.

  • Stage 1 (High Executor Demand)

The read and join steps scan the entire store_sales and date_dim tables. This often involves billions of rows in large-scale TPC-DS datasets, so Spark will try to parallelize the scan across many executors to maximize read throughput and compute efficiency.

  • Stage 2 (Low Executor Demand)

The aggregation is on d_year, which typically has few unique values, such as only a handful of years in the data. This means that after the shuffle stage, the reduce phase combines the partial aggregates into a number of keys equal to the number of years (often < 10). Only a few Spark tasks are needed to finish the final aggregation, so most executors become idle.

With shuffle data stored on local disk, the compute resources associated with these idle executors would still be running in order to keep the shuffle data available. With shuffle data offloaded from the nodes running the executors, and with DRA enabled, the nodes with idle executors are released immediately.
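The mechanics here come from open-source Spark's dynamic allocation settings: without an external location for shuffle data, DRA can only release an executor once its shuffle output is no longer needed (shuffle tracking), whereas with shuffle offloaded, the ordinary idle timeout applies. A sketch of the relevant settings follows; the values are illustrative examples, not EMR Serverless defaults:

```python
# Illustrative open-source Spark DRA settings relevant to executor release.
# Values are examples only, not EMR Serverless defaults.
dra_conf = {
    "spark.dynamicAllocation.enabled": "true",
    # With shuffle data held on executors' local disks, shuffle tracking
    # keeps executors alive while their shuffle output may still be read:
    "spark.dynamicAllocation.shuffleTracking.enabled": "true",
    "spark.dynamicAllocation.shuffleTracking.timeout": "30min",
    # With shuffle data stored externally, an idle executor can be
    # released as soon as the ordinary idle timeout expires:
    "spark.dynamicAllocation.executorIdleTimeout": "60s",
}
for key, value in sorted(dra_conf.items()):
    print(f"{key}={value}")
```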

Because early stages process high-cardinality inputs and later stages collapse data into a small number of keys, these queries form an "inverted triangle" execution pattern: wide parallelism at the top and narrow parallelism at the bottom, as shown in the following image:

Inverted triangle pattern queries

Hourglass pattern queries

Depending on the complexity of the job, there can be multiple stages with varying demand on the number of executors needed per stage. Such jobs can benefit from the greater elasticity obtained by offloading shuffle data to external serverless storage. One such pattern is the hourglass pattern. The following figure shows a workload pattern where executor demand expands, contracts during shuffle-heavy stages, and expands again. Serverless storage for EMR Serverless decouples shuffle data from compute, enabling more efficient scale-down during narrow stages and helping improve cost optimization for elastic workloads.

Hourglass pattern in Spark stage execution

Hourglass pattern queries

To identify queries of this class, consider the following example. The query progresses through three stages:

  • Stage 1: The initial join and filter between store_sales and item produces a large, high-cardinality intermediate dataset, requiring high parallelism (many executors).
  • Stage 2: Aggregation groups by a small set of categories such as "Home" or "Electronics", resulting in a drastic drop in output partitions. This stage efficiently runs with just a few executors, as there is little data to parallelize.
  • Stage 3: The small result is joined (usually a broadcast join) back to a large dataset with a date dimension, again producing a large result that is well parallelized, causing Spark to ramp up executor usage for this stage.
WITH stage1_large_scan AS (
  -- Stage 1: Scan and large join generates a lot of parallelism and needs many executors
  SELECT ss_item_sk, ss_sold_date_sk, ss_net_paid, i_category
  FROM store_sales
  JOIN item ON store_sales.ss_item_sk = item.i_item_sk
  WHERE item.i_category IN ('Home', 'Electronics')
),
stage2_small_agg AS (
  -- Stage 2: Aggregate on a low-cardinality column (category), reducing to few groups, so few executors needed
  SELECT i_category, SUM(ss_net_paid) AS total_cat_sales
  FROM stage1_large_scan
  GROUP BY i_category
),
stage3_broadcast_filter AS (
  -- Stage 3: Join back to a high-cardinality dataset, pushing parallelism up again
  SELECT s.ss_item_sk, s.ss_net_paid, s.i_category, d.d_year
  FROM stage1_large_scan s
  JOIN date_dim d ON s.ss_sold_date_sk = d.d_date_sk
)

SELECT s3.d_year, s2.i_category, s2.total_cat_sales
FROM stage2_small_agg s2
JOIN stage3_broadcast_filter s3 ON s2.i_category = s3.i_category
ORDER BY s3.d_year, s2.i_category;

This pattern is common for reporting and dimensional analysis scenarios and is effective for demonstrating how Spark dynamically adjusts resource usage across job stages based on cardinality and parallelism needs. Such queries can also benefit from the elasticity enabled by external serverless storage.

Rectangle pattern queries

Not all queries benefit from externalizing the shuffle. Consider a query where the cardinality is high throughout, meaning both stages operate on a large number of partitions and keys. Typically, queries that group by high-cardinality columns (such as item or customer) cause most stages to require similar amounts of parallelism. The following figure illustrates a Spark workload where parallelism stays consistently high across stages. In this pattern, both Stage 1 and Stage 2 operate on a large number of partitions and keys, resulting in sustained executor demand throughout the job lifecycle.

High-cardinality execution pattern with sustained parallelism

Rectangle pattern queries

The following query is the same query that we used in the inverted triangle pattern earlier, with one change: we have replaced the date_dim table (low cardinality) with item (high cardinality).

SELECT i_item_id, SUM(ss_net_paid) AS total_sales
FROM store_sales
JOIN item ON store_sales.ss_item_sk = item.i_item_sk
GROUP BY i_item_id
ORDER BY i_item_id
LIMIT 100;

  • Stage 1: Reads the rows from store_sales and joins with item, spreading data across many partitions, similar to the original query's first stage.
  • Stage 2: The aggregation is by i_item_id, which typically has thousands to millions of distinct values in real datasets. This keeps parallelism high; many tasks handle non-overlapping keys, and shuffle outputs remain large.

There is no significant drop in cardinality: because neither stage is reduced to a small group set, most executors stay busy throughout the job's main stages, with little idle time even after the shuffle. This type of query results in a flatter executor utilization profile because each stage processes a similar amount of work, minimizing variation in resource usage. These rectangle pattern queries won't see the cost benefit from the elasticity obtained by offloading shuffle data. However, there may still be other benefits, such as reduction of job failures and performance bottlenecks from disk constraints, freedom from capacity planning and sizing, and no need to provision storage for intermediate data operations.

Conclusion

Serverless storage for Amazon EMR Serverless can deliver substantial cost savings for workloads with dynamic resource patterns, as seen in the 26% average cost savings we observed in our testing environment. By externalizing shuffle data, you gain the elasticity to release idle executors immediately, demonstrated by the savings reaching up to 85% in our testing environment on queries following inverted triangle and hourglass patterns when Dynamic Resource Allocation is enabled. Understanding your workload characteristics is key. While rectangle pattern queries may not see dramatic cost reductions, they can still benefit from improved reliability and elimination of capacity planning overhead.

To get started, analyze your job execution patterns, enable Dynamic Resource Allocation, and pilot serverless storage on shuffle-heavy workloads. Looking to reduce your Amazon EMR Serverless costs for Spark workloads? Explore serverless storage for EMR Serverless today.


About the authors

Sekar Srinivasan

Sekar has over 20 years of experience working with data. He is passionate about helping customers build scalable solutions, modernizing their architecture, and generating insights from their data. In his spare time he likes to work on non-profit projects, especially those focused on underprivileged children's education.

Praveen Mohan Prasad

Praveen is a data and AI Specialist with 10+ years of experience in distributed data systems and machine learning, specializing in information retrieval and vector search systems. He is an active open-source contributor and technical speaker in the ML search and agentic AI space.
