
Best practices for right-sizing Amazon OpenSearch Service domains


Amazon OpenSearch Service is a fully managed service for search, analytics, and observability workloads, helping you index, search, and analyze large datasets with ease. Ensuring your OpenSearch Service domain is right-sized, balancing performance, scalability, and cost, is essential to maximizing its value. An over-provisioned domain wastes resources, while an under-provisioned one risks performance bottlenecks like high latency or write rejections.

In this post, we guide you through the steps to determine whether your OpenSearch Service domain is right-sized, using AWS tools and best practices to optimize your configuration for workloads like log analytics, search, vector search, or synthetic data testing.

Why right-sizing your OpenSearch Service domain matters

Right-sizing your OpenSearch Service domain provides optimal performance, reliability, and cost-efficiency. An undersized domain leads to high CPU utilization, memory pressure, and query latency, while an oversized domain drives unnecessary spend and resource waste. By continuously matching domain resources to workload characteristics such as ingestion rate, query complexity, and data growth, you can maintain predictable performance without overpaying for unused capacity.

Beyond cost and performance, right-sizing supports architectural agility. It helps make sure your cluster scales smoothly during traffic spikes, meets SLA targets, and stays stable under changing workloads. Regularly tuning resources to match actual demand improves infrastructure efficiency and supports long-term operational resilience.

Key Amazon CloudWatch metrics

OpenSearch Service provides Amazon CloudWatch metrics that offer insight into various aspects of your domain's performance. These metrics fall into 16 different categories, including cluster metrics, EBS volume metrics, and instance metrics. To determine whether your OpenSearch Service domain is misconfigured, monitor the common symptoms that indicate resizing or optimization may be necessary. These symptoms are caused by imbalances in resource allocation, workload demands, or configuration settings. The following metrics, grouped by category, summarize the parameters to watch:

CPU Utilization Metrics

CPUUtilization: Average CPU utilization across all data nodes.

  • Optimal range: 60-80% for sustained workloads

MasterCPUUtilization (for dedicated master nodes): Average CPU utilization on the master nodes.

  • Optimal range: below 50% under normal conditions

Memory Utilization Metrics

JVMMemoryPressure: Percentage of heap memory used across data nodes.

Note: With the Garbage-First Garbage Collector (G1GC), the JVM may delay collections to optimize performance. Evaluate JVMMemoryPressure together with GC metrics (old generation utilization and GC pause time) to confirm true pressure trends.

MasterJVMMemoryPressure: Heap usage on dedicated master nodes.

Note: Occasional spikes are normal during cluster state updates; sustained high memory pressure warrants scaling or tuning.

Storage Metrics

StorageUtilization: Percentage of storage space used.

FreeStorageSpace: Available storage in MB.

  • Critical threshold: when free storage approaches the read-only threshold

Node-Level Search and Indexing Performance

(These latencies are not per-request latencies or rates, but node-level values based on the shards assigned to a node.)

SearchLatency: Average time for search requests.

  • Baseline establishment: monitor during normal operations

IndexingLatency: Average time for indexing operations.

  • Impact: can indicate CPU or I/O bottlenecks

SearchRate and IndexingRate: Requests per minute for search and indexing.

  • Usage: correlate with latency metrics to understand performance impact

Cluster Health Indicators

ClusterStatus.yellow and ClusterStatus.red:

  • Yellow status: some replica shards are unassigned
  • Red status: some primary shards are unassigned (risk of data loss)

Nodes

  • What it measures: number of nodes in the cluster
  • Usage: monitor node failures and recovery patterns

Signs of under-provisioning

Under-provisioned domains struggle to keep up with workload demands, leading to performance degradation and cluster instability. Look for sustained resource pressure and operational errors that signal the cluster is operating beyond its limits. For monitoring, you can set CloudWatch alarms to catch early signs of stress and prevent outages or degraded performance. The following are critical warning signs:

  • High CPU utilization on data nodes (>80%) sustained over time (such as more than 10 minutes)
  • High CPU utilization on master nodes (>60%) sustained over time (such as more than 10 minutes)
  • JVM memory pressure consistently high (>85%) on data and master nodes
  • Storage utilization running high (>85%)
  • Rising search latency with stable query patterns (increasing by 50% from baseline)
  • Frequent cluster status yellow/red events
  • Node failures under normal load conditions

When resources are constrained, the end-user experience suffers, with slower searches, failed indexing, and system errors.

Remediation recommendations

The following entries summarize common CloudWatch metric symptoms, their likely causes, and recommended solutions.

Symptom: FreeStorageSpace drops below 20%

Causes: Storage pressure occurs when data volume outgrows local storage because of high ingestion, long retention without cleanup, or unbalanced shards. Lack of tiering (such as UltraWarm) further worsens capacity issues.

Solution: Free up space by deleting unused indexes or automating cleanup with Index State Management (ISM), and use force merge on read-only indexes to reclaim storage (see the sketch after this entry). If pressure persists, scale vertically or horizontally, use UltraWarm or cold storage for older data, and adjust shard counts at rollover for better balance.
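As a minimal sketch of automating that cleanup with Python, the following creates an ISM policy that deletes indexes 30 days after creation and then force-merges a read-only index. The domain endpoint, credentials, and the logs-* index pattern are illustrative assumptions, and the snippet presumes the domain accepts basic authentication through fine-grained access control.

import requests
from requests.auth import HTTPBasicAuth

ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical domain endpoint
AUTH = HTTPBasicAuth("admin_user", "admin_password")       # assumes fine-grained access control

# ISM policy: delete matching indexes 30 days after creation (pattern is illustrative)
policy = {
    "policy": {
        "description": "Delete log indexes after 30 days",
        "default_state": "hot",
        "states": [
            {
                "name": "hot",
                "actions": [],
                "transitions": [{"state_name": "delete", "conditions": {"min_index_age": "30d"}}],
            },
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        "ism_template": [{"index_patterns": ["logs-*"], "priority": 100}],
    }
}
requests.put(f"{ENDPOINT}/_plugins/_ism/policies/delete-old-logs",
             json=policy, auth=AUTH).raise_for_status()

# Reclaim space on an index that no longer receives writes
requests.post(f"{ENDPOINT}/logs-2025-12/_forcemerge?max_num_segments=1",
              auth=AUTH).raise_for_status()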

Symptom: CPUUtilization and JVMMemoryPressure consistently >70%

Causes: High CPU or JVM pressure arises when instance sizes are too small or shard counts per node are excessive, leading to frequent GC pauses. An inefficient shard strategy, uneven distribution, and poorly optimized queries or mappings further spike memory usage under heavy workloads.

Solution: Address high CPU/JVM pressure by scaling vertically to larger instances (such as from r6g.large to r6g.xlarge) or adding nodes horizontally. Optimize shard counts relative to heap size, smooth out peak traffic, and use slow logs to pinpoint and tune resource-heavy queries.

Symptom: SearchLatency or IndexingLatency spikes >500 milliseconds

Causes: Latency spikes often stem from resource contention such as high CPU/JVM pressure or GC pauses. Inefficient shard sizing, over-sharding, and overly complex queries (deep aggregations, frequent cache evictions) add overhead and can push tasks into thread pool rejection.

Solution: Reduce query latency by optimizing queries with profiling (see the sketch after this entry), tuning shard sizes (10–50 GB each), and avoiding over-sharding. Improve parallelism by scaling the cluster, adding replicas for read capacity, increasing cache by moving to larger nodes, and setting appropriate query timeouts.
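To show what query profiling looks like in practice, here is a hedged Python sketch that runs a search with the profile option enabled and prints a per-shard time breakdown. The endpoint, credentials, index pattern, and field names are assumptions.

import requests
from requests.auth import HTTPBasicAuth

ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical domain endpoint
AUTH = HTTPBasicAuth("admin_user", "admin_password")

# Profile a representative aggregation to see where time is spent on each shard
query = {
    "profile": True,
    "size": 0,
    "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
    "aggs": {"by_status": {"terms": {"field": "status.keyword", "size": 10}}},
}
resp = requests.get(f"{ENDPOINT}/logs-*/_search", json=query, auth=AUTH, timeout=30)
resp.raise_for_status()

# Print the total query time per shard from the profile output
for shard in resp.json()["profile"]["shards"]:
    total_nanos = sum(q["time_in_nanos"] for q in shard["searches"][0]["query"])
    print(f'{shard["id"]}: {total_nanos / 1e6:.1f} ms')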

Symptom: ThreadpoolRejected metrics indicate queued requests

Causes: Thread pool rejections occur when high concurrent request volume overflows queues beyond capacity, especially on undersized nodes limited by vCPU-based thread counts. Sudden, unscaled traffic spikes further overwhelm the pools, causing tasks to be dropped or delayed.

Solution: Mitigate thread pool rejections by enforcing shard balance across nodes, scaling horizontally to boost thread capacity, and managing client load with retries and reduced concurrency (see the sketch after this entry). Monitor search queues, right-size instances for vCPUs, and cautiously tune thread pool settings to handle bursty workloads.
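On the client side, one illustrative way to manage load is to retry rejected bulk requests with exponential backoff, as in the sketch below. The endpoint and credentials are assumptions, HTTP 429 is treated as the retryable status, and a production client should also inspect per-item errors in the bulk response.

import time
import requests
from requests.auth import HTTPBasicAuth

ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical domain endpoint
AUTH = HTTPBasicAuth("admin_user", "admin_password")

def bulk_with_backoff(ndjson_payload: str, max_retries: int = 5) -> requests.Response:
    """Send a _bulk request, backing off exponentially when the cluster rejects it (HTTP 429)."""
    resp = None
    for attempt in range(max_retries):
        resp = requests.post(f"{ENDPOINT}/_bulk", data=ndjson_payload, auth=AUTH,
                             headers={"Content-Type": "application/x-ndjson"})
        if resp.status_code != 429:
            return resp
        time.sleep(2 ** attempt)  # wait 1, 2, 4, ... seconds before retrying
    return resp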

Symptom: ThroughputThrottle or IopsThrottle reach 1

Causes: I/O throttling arises when Amazon EBS or Amazon EC2 limits are exceeded, such as gp3's 125 MBps baseline, or when burst credits are depleted by sustained spikes. Mismatched volume types and heavy operations like bulk indexing without optimized storage further amplify throughput bottlenecks.

Solution: Address I/O throttling by upgrading to gp3 volumes with a higher baseline or provisioning extra IOPS, and consider I/O-optimized instances like the i3/i4 families while monitoring burst balance. For sustained workloads, scale nodes or schedule heavy operations during off-peak hours to avoid hitting throughput caps.
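If the domain is managed with the AWS SDK, a move to gp3 with extra provisioned throughput might look like the following boto3 sketch. The domain name and volume figures are illustrative, and the allowable IOPS and throughput depend on the instance type.

import boto3

client = boto3.client("opensearch", region_name="us-east-1")

# Switch the data nodes to gp3 and raise baseline throughput/IOPS
# (domain name and values are illustrative; check instance-type limits first)
client.update_domain_config(
    DomainName="my-logs-domain",
    EBSOptions={
        "EBSEnabled": True,
        "VolumeType": "gp3",
        "VolumeSize": 512,   # GiB per data node
        "Iops": 6000,        # above the gp3 baseline of 3,000 IOPS
        "Throughput": 250,   # MiB/s, above the gp3 baseline of 125 MiB/s
    },
)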

Signs of over-provisioning

Over-provisioned clusters show consistently low utilization across CPU, memory, and storage, suggesting that resources far exceed workload demands. Identifying these inefficiencies helps reduce unnecessary spend without impacting performance. You can use CloudWatch alarms to track cluster health and cost-efficiency metrics over 2–4 weeks to confirm sustained underutilization:

  • Low CPU utilization on data and master nodes (<40%) sustained over time
  • Low JVM memory pressure on data and master nodes (<50%)
  • Excessive free storage (>70% unused)
  • Underutilized instance types for workload patterns

Monitor cluster indexing and search latencies constantly while the cluster is being downsized; these latencies should not increase if the cluster is only shedding unused capacity. It is also advisable to remove nodes one at a time and keep watching latencies before proceeding with further downsizing. By right-sizing instances, reducing node counts, and adopting cost-efficient storage options, you can align resources to actual usage. Optimizing shard allocation further supports balanced performance at a lower cost.

Best practices for right-sizing

In this section, we discuss best practices for right-sizing.

Iterate and optimize

Right-sizing is an ongoing process, not a one-time exercise. As workloads evolve, continuously monitor CPU, JVM memory pressure, and storage utilization using CloudWatch to make sure they remain within healthy thresholds. Rising latency, queue buildup, or unassigned shards often signal capacity or configuration issues that require attention.

Regularly review slow logs, query latency, and ingestion trends to identify performance bottlenecks early. If search or indexing performance degrades, consider scaling, rebalancing shards, or adjusting retention policies. Periodic reviews of instance sizes and node count help align cost with demand, maintaining 200-millisecond latency targets while avoiding over-provisioning. Consistent iteration helps your OpenSearch Service domain remain performant and cost-efficient over time.
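As one illustrative way to surface slow queries for that review, the following sketch sets search slow log thresholds on an index pattern. The endpoint, credentials, index pattern, and threshold values are assumptions, and slow log publishing to CloudWatch Logs must also be enabled on the domain.

import requests
from requests.auth import HTTPBasicAuth

ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical domain endpoint
AUTH = HTTPBasicAuth("admin_user", "admin_password")

# Record queries slower than 500 ms at warn level and 200 ms at info level
settings = {
    "index.search.slowlog.threshold.query.warn": "500ms",
    "index.search.slowlog.threshold.query.info": "200ms",
}
requests.put(f"{ENDPOINT}/logs-*/_settings", json=settings, auth=AUTH).raise_for_status()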

Establish baselines

Monitor for 2–4 weeks after initial deployment and document peak usage patterns and seasonal variations. Record performance across different workload types. Set appropriate CloudWatch alarm thresholds based on your baselines.

Regular review process

Conduct weekly metric reviews during initial optimization and monthly assessments for stable workloads. Conduct quarterly right-sizing exercises for cost optimization.

Scaling strategies

Consider the following scaling strategies:

Vertical scaling (instance types) – Use larger instance types when performance constraints stem from CPU, memory, or JVM pressure, and overall data volume is within a single node's capacity. Choose memory-optimized instances (such as r8g, r7g, or r7i) for heavy aggregation or indexing workloads. Use compute-optimized instances (c8g, c7g, or c7i) for CPU-bound workloads such as query-heavy or log-processing environments. Vertical scaling is ideal for smaller clusters or testing environments where simplicity and cost-efficiency are priorities.

Horizontal scaling (node count) – Add more data nodes when storage, shard count, or query concurrency grows beyond what a single node can handle. Maintain an odd number of master-eligible nodes (typically three or five) and use dedicated master nodes for clusters with more than 10 data nodes. Deploy across three Availability Zones for high availability in production. Horizontal scaling is preferred for large, production-grade workloads requiring fault tolerance and sustained growth. Use _cat/allocation?v to verify shard distribution and node balance:

GET /_cat/allocation/node_name_1,node_name_2,node_name_3?v

Optimize storage configuration

Use the latest generation of Amazon EBS General Purpose (gp) volumes for improved performance and cost-efficiency compared to earlier versions. Monitor storage growth trends using the ClusterUsedSpace and FreeStorageSpace metrics. Keep data usage below 50% of total storage capacity to allow for growth and snapshots.

Choose storage tiers based on performance and access patterns; for example, enable UltraWarm or cold storage for large, infrequently accessed datasets. Move older or compliance-related data to cost-efficient tiers (for analytics or WORM workloads) only after making sure the data is immutable.

Use the _cat/indices?v API to monitor index sizes and refine retention or rollover policies accordingly:

GET /_cat/indices/index1,index2,index3?v

Analyze shard configuration

Shards directly affect performance and resource utilization, so an appropriate shard strategy should be used. Indexes with heavy ingestion and search activity should have a number of shards on the order of the number of data nodes, for better efficiency across all data nodes in the cluster. We recommend keeping shard sizes between 10–30 GB for search workloads and up to 50 GB for log analytics workloads, and limiting shard count to fewer than 20 shards per GB of JVM heap.

Run _cat/shards?v to confirm even shard distribution and that there are no unassigned shards. Evaluate over-sharding by checking for JVMMemoryPressure (>80%) or SearchLatency spikes (>200 milliseconds) caused by excessive shard coordination. Assess under-sharding if IndexingLatency (>200 milliseconds) or a low SearchRate indicates limited parallelism. Use _cat/allocation?v to identify unbalanced shard sizes or hot spots on nodes:

GET /_cat/allocation/node_name_1,node_name_2,node_name_3?v
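To keep new indexes aligned with that strategy, shard counts can be pinned with an index template, as in the sketch below. The template name, index pattern, and the choice of six primary shards (assuming a six-data-node cluster and roughly 10–50 GB per shard) are illustrative.

import requests
from requests.auth import HTTPBasicAuth

ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # hypothetical domain endpoint
AUTH = HTTPBasicAuth("admin_user", "admin_password")

# Pin shard and replica counts for new log indexes created under this pattern
template = {
    "index_patterns": ["logs-*"],
    "template": {
        "settings": {
            "index.number_of_shards": 6,     # roughly one primary per data node
            "index.number_of_replicas": 1,
        }
    },
}
requests.put(f"{ENDPOINT}/_index_template/logs-template",
             json=template, auth=AUTH).raise_for_status()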

Handling unexpected traffic spikes

Even well right-sized OpenSearch Service domains can face performance challenges during sudden workload surges, such as log bursts, search traffic peaks, or seasonal load patterns. To handle such unexpected spikes effectively, consider implementing the following best practices:

  • Enable Auto-Tune – Automatically adjust cluster settings based on current usage and traffic patterns
  • Distribute shards effectively – Avoid shard hotspots by using balanced shard allocation and index rollover policies
  • Pre-warm clusters for known events – For anticipated peak periods (end-of-month reports, marketing campaigns), temporarily scale up before the spike and scale down afterward (see the sketch after this list)
  • Monitor with CloudWatch alarms – Set proactive alarms for CPU, JVM memory, and thread pool rejections to catch early pressure signs
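A pre-warming step might look like the following boto3 sketch, which temporarily raises the data node count before a known event. The domain name, instance type, and counts are assumptions, and the same call can restore the original count after the peak.

import boto3

client = boto3.client("opensearch", region_name="us-east-1")

# Temporarily add data nodes ahead of an anticipated peak (values are illustrative);
# run the same call with the steady-state count afterward to scale back down.
client.update_domain_config(
    DomainName="my-logs-domain",
    ClusterConfig={
        "InstanceType": "r6g.xlarge.search",
        "InstanceCount": 9,  # scaled up from a hypothetical steady state of 6
        "ZoneAwarenessEnabled": True,
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 3},
    },
)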

Deploy CloudWatch alarms

CloudWatch alarms perform an action when a CloudWatch metric exceeds a specified value for a period of time, so you can take remediation action proactively.
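For example, a data-node CPU alarm could be created with boto3 as in the following sketch; the domain name, account ID, threshold, and SNS topic are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average data-node CPU stays above 80% for three consecutive 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName="opensearch-high-cpu",
    Namespace="AWS/ES",                       # OpenSearch Service metrics namespace
    MetricName="CPUUtilization",
    Dimensions=[
        {"Name": "DomainName", "Value": "my-logs-domain"},
        {"Name": "ClientId", "Value": "123456789012"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:opensearch-alerts"],
)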

Conclusion

Right-sizing is a continuous process of observing, analyzing, and optimizing. By using CloudWatch metrics, OpenSearch Dashboards, and best practices around shard sizing and workload profiling, you can make sure your domain is efficient, performant, and cost-effective. Right-sizing your OpenSearch Service domain helps provide optimal performance, cost-efficiency, and scalability. By monitoring key metrics, optimizing shards, and using AWS tools like CloudWatch, ISM, and Auto-Tune, you can maintain a high-performing cluster without over-provisioning.

For more information about right-sizing OpenSearch Service domains, refer to Sizing Amazon OpenSearch Service domains.


Nikhil Agarwal

Nikhil is a Sr. Technical Manager with Amazon Web Services. He is passionate about helping customers achieve operational excellence in their cloud journey and works actively on technical solutions. He is also passionate about AI/ML, generative AI, and analytics, and deep dives into customers' generative AI and Amazon OpenSearch Service-specific use cases. Outside of work, he enjoys traveling with family and exploring different gadgets.

Rick Balwani

Rick is an Enterprise Support Manager leading a team of Technical Account Managers (TAMs) dedicated to AWS independent software vendor (ISV) customer success. He partners with customers to help them use AWS services effectively while building innovative, cutting-edge solutions. With deep expertise in DevOps and systems engineering, Rick brings technical depth and strategic insight to help ISVs scale and optimize their AWS environments.

Arun Lakshmanan

Arun is a Search Specialist with Amazon OpenSearch Service based out of Chicago, IL. He works closely with customers on their OpenSearch journey across various use cases, including vector search, observability, and security analytics.
