This post was written by Eunice Aguilar and Francisco Rodera from REA Group.
Enterprises that need to share and access large amounts of data across multiple domains and services need to build a cloud infrastructure that scales as needs change. REA Group, a digital business that specializes in real estate property, solved this problem using Amazon Managed Streaming for Apache Kafka (Amazon MSK) and a data streaming platform called Hydro.
REA Group’s team of more than 3,000 people is guided by our purpose: to change the way the world experiences property. We help people with all aspects of their property experience, not just buying, selling, and renting, through the richest content, data and insights, valuation estimates, and home financing solutions. We deliver unparalleled value to our customers, Australia’s real estate agents, by providing access to the largest and most engaged audience of property seekers.
To achieve this, the different technical products within the company regularly need to move data across domains and services efficiently and reliably.
Within the Data Platform team, we have built a data streaming platform called Hydro to provide this capability across the whole organization. Hydro is powered by Amazon MSK and other tools with which teams can move, transform, and publish data at low latency using event-driven architectures. This kind of structure is foundational at REA for building microservices and timely data processing for real-time and batch use cases like time-sensitive outbound messaging, personalization, and machine learning (ML).
In this post, we share our approach to MSK cluster capacity planning.
The problem
Hydro manages a large-scale Amazon MSK infrastructure by providing configuration abstractions, allowing users to focus on delivering value to REA without the cognitive overhead of infrastructure management. As the use of Hydro grows within REA, it’s crucial to perform capacity planning to meet user demands while maintaining optimal performance and cost-efficiency.
Hydro uses provisioned MSK clusters in development and production environments. In each environment, Hydro manages a single MSK cluster that hosts multiple tenants with differing workload requirements. Proper capacity planning makes sure the clusters can handle high traffic and provide all users with the desired level of service.
Real-time streaming is a relatively new technology at REA. Many users aren’t yet familiar with Apache Kafka, and accurately assessing their workload requirements can be difficult. As the custodians of the Hydro platform, it’s our responsibility to find a way to perform capacity planning that proactively assesses the impact of user workloads on our clusters.
Goals
Capacity planning involves determining the appropriate size and configuration of the cluster based on current and projected workloads, as well as considering factors such as data replication, network bandwidth, and storage capacity.
Without proper capacity planning, Hydro clusters can become overwhelmed by high traffic and fail to provide users with the desired level of service. Therefore, it’s critical for us to invest time and resources into capacity planning to make sure Hydro clusters can deliver the performance and availability that modern applications require.
The capacity planning approach we follow for Hydro covers three main areas:
- The models used for the calculation of current and estimated future capacity needs, including the attributes used as variables in them
- The models used to assess the approximate expected capacity required for a new Hydro workload joining the platform
- The tooling available to operators and custodians to assess the historical and current capacity consumption of the platform and, based on it, the available headroom
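At its core, the headroom assessment in the third area is a comparison of current usage against precalculated maximums. The following minimal sketch illustrates the idea; the attribute names and limit values are our own placeholders for this post, not Hydro’s actual configuration:

```python
# Sketch: compute the remaining capacity headroom per attribute.
# The limits and usage figures below are illustrative placeholders,
# not Hydro's real configuration values.

CAPACITY_LIMITS = {
    "bytes_in_per_sec": 60_000_000,  # e.g. derived from performance testing
    "cpu_percent": 60.0,             # MSK best practices guideline
    "disk_percent": 85.0,            # MSK best practices guideline
}

def headroom(current_usage, limits=CAPACITY_LIMITS):
    """Return the percentage of each capacity limit still available."""
    return {
        attr: round(100.0 * (1 - current_usage[attr] / limit), 1)
        for attr, limit in limits.items()
    }

usage_now = {"bytes_in_per_sec": 24_000_000, "cpu_percent": 33.0, "disk_percent": 42.5}
print(headroom(usage_now))
# {'bytes_in_per_sec': 60.0, 'cpu_percent': 45.0, 'disk_percent': 50.0}
```

In practice, the same calculation is driven by CloudWatch metric data rather than hardcoded numbers, but the headroom definition is the same.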
The following diagram shows the relationship between current capacity usage and the precalculated maximum usage.

Although we don’t have this capability yet, the goal is to take this approach one step further in the future and predict the approximate resource depletion time, as shown in the following diagram.

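One simple way to approximate depletion time, sketched below under our own simplifying assumption of roughly linear growth, is to fit a trend line through recent usage samples and extrapolate it to the capacity limit:

```python
# Sketch: estimate when a linearly growing usage metric reaches its
# capacity limit, via a least-squares line through recent samples.
# Assumes roughly linear growth; real usage is noisier, so this is
# an approximation only.

def estimate_depletion(samples, limit):
    """samples: (timestamp, usage) pairs. Returns the extrapolated
    timestamp at which usage reaches `limit`, or None if usage is
    flat or shrinking on the current trend."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    slope_num = sum((t - mean_t) * (u - mean_u) for t, u in samples)
    slope_den = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = slope_num / slope_den
    if slope <= 0:
        return None  # no depletion on the current trend
    intercept = mean_u - slope * mean_t
    return (limit - intercept) / slope

# Disk usage growing 5 GB/day against a 100 GB limit, sampled over 4 days:
samples = [(1, 25.0), (2, 30.0), (3, 35.0), (4, 40.0)]
print(estimate_depletion(samples, 100.0))  # 16.0 (depleted around day 16)
```

A production implementation would likely use a more robust model than a straight line, which is exactly the predictive capability we discuss in the Future evolution section.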
To make sure our digital operations are resilient and efficient, we must maintain comprehensive observability of our current capacity usage. This detailed oversight allows us not only to understand the performance limits of our existing infrastructure, but also to identify potential bottlenecks before they impact our services and users.
By proactively setting and monitoring well-understood thresholds, we can receive timely alerts and take the necessary scaling actions. This approach makes sure our infrastructure can meet demand spikes without compromising on performance, ultimately supporting a seamless user experience and maintaining the integrity of our system.
Solution overview
The MSK clusters in Hydro are configured with the PER_TOPIC_PER_BROKER level of monitoring, which provides metrics at the broker and topic levels. These metrics help us determine the attributes of the cluster usage effectively.
However, it wouldn’t be practical to display an excessive number of metrics on our monitoring dashboards, because that would lead to less clarity and slower insights on the cluster. It’s more valuable to choose the most relevant metrics for capacity planning rather than displaying numerous metrics.
Cluster usage attributes
Based on the Amazon MSK best practices guidelines, we have identified several key attributes for assessing the health of the MSK cluster:
- In/out throughput
- CPU usage
- Disk space usage
- Memory usage
- Producer and consumer latency
- Producer and consumer throttling
For more information on right-sizing your clusters, see Best practices for right-sizing your Apache Kafka clusters to optimize performance and cost, Best practices for Standard brokers, Monitor CPU usage, Monitor disk space, and Monitor Apache Kafka memory.
The following table contains the detailed list of all the attributes we use for MSK cluster capacity planning in Hydro.
| Attribute Name | Attribute Type | Units | Comments |
|---|---|---|---|
| Bytes in | Throughput | Bytes per second | Depends on the aggregate Amazon EC2 network, Amazon EBS network, and Amazon EBS storage throughput |
| Bytes out | Throughput | Bytes per second | Depends on the aggregate Amazon EC2 network, Amazon EBS network, and Amazon EBS storage throughput |
| Consumer latency | Latency | Milliseconds | High or unacceptable latency values usually indicate user experience degradation before actual resource (for example, CPU and memory) depletion is reached |
| CPU usage | Capacity limits | % CPU user + CPU system | Should stay under 60% |
| Disk space usage | Persistent storage | Bytes | Should stay under 85% |
| Memory usage | Capacity limits | % memory in use | Should stay under 60% |
| Producer latency | Latency | Milliseconds | High or unacceptable sustained latency values usually indicate user experience degradation before actual capacity limits or resource (for example, CPU or memory) depletion are reached |
| Throttling | Capacity limits | Milliseconds, bytes, or messages | High or unacceptable sustained throttling values indicate capacity limits are being reached before actual resource (for example, CPU or memory) depletion |
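The percentage guidelines in the table translate directly into a simple health check. The sketch below hardcodes the three thresholds from the table; the attribute names are our own shorthand for illustration, not CloudWatch metric names:

```python
# Sketch: flag cluster usage attributes that are at or above the
# guideline thresholds from the table (percentage attributes only).

THRESHOLDS = {
    "cpu_percent": 60.0,     # CPU user + system should stay under 60%
    "disk_percent": 85.0,    # disk space usage should stay under 85%
    "memory_percent": 60.0,  # memory in use should stay under 60%
}

def breaches(usage):
    """Return the attributes whose current usage breaches its threshold."""
    return sorted(
        attr for attr, limit in THRESHOLDS.items()
        if usage.get(attr, 0.0) >= limit
    )

print(breaches({"cpu_percent": 72.0, "disk_percent": 40.0, "memory_percent": 55.0}))
# ['cpu_percent']
```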
By monitoring these attributes, we can quickly evaluate the performance of the clusters as we add more workloads to the platform. We then map these attributes to the relevant MSK metrics available.
Cluster capacity limits
During the initial capacity planning, our MSK clusters weren’t receiving enough traffic to give us a clear idea of their capacity limits. To address this, we used the AWS performance testing framework for Apache Kafka to evaluate the theoretical performance limits. We ran performance and capacity tests on test MSK clusters that had the same cluster configurations as our development and production clusters. By conducting these varied test scenarios, we gained a more comprehensive understanding of the cluster’s performance. The following figure shows an example of a test cluster’s performance metrics.

To perform the tests within a specific timeframe and budget, we focused on the test scenarios that could efficiently measure the cluster’s capacity. For instance, we ran tests that involved sending high-throughput traffic to the cluster and creating topics with many partitions.
After each test, we collected the metrics of the test cluster and extracted the maximum values of the key cluster usage attributes. We then consolidated the results and determined the most appropriate limits for each attribute. The following screenshot shows an example of the exported test cluster’s performance metrics.
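The consolidation step amounts to taking, for each attribute, the maximum value observed across all test runs. A minimal sketch, with invented figures purely for illustration:

```python
# Sketch: consolidate per-test metric exports into one set of
# observed maxima per attribute. The run data below is made up.

test_runs = [
    {"bytes_in_per_sec": 52e6, "producer_latency_ms": 18.0},
    {"bytes_in_per_sec": 61e6, "producer_latency_ms": 25.0},
    {"bytes_in_per_sec": 58e6, "producer_latency_ms": 22.0},
]

def consolidate(runs):
    """For each attribute, keep the maximum value seen in any run."""
    maxima = {}
    for run in runs:
        for attr, value in run.items():
            maxima[attr] = max(maxima.get(attr, value), value)
    return maxima

print(consolidate(test_runs))
# {'bytes_in_per_sec': 61000000.0, 'producer_latency_ms': 25.0}
```

The consolidated maxima then become the capacity limits that feed our dashboards and alarms.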
Capacity monitoring dashboards
As part of our platform management process, we conduct monthly operational reviews to maintain optimal performance. This involves analyzing an automated operational report that covers all the systems on the platform. During the review, we evaluate the service level objectives (SLOs) based on select service level indicators (SLIs) and assess the monitoring alerts triggered during the previous month. By doing so, we can identify any issues and take corrective actions.
To assist us in conducting the operational reviews and to give us an overview of the cluster’s usage, we developed a capacity monitoring dashboard, as shown in the following screenshot, for each environment. We built the dashboard as infrastructure as code (IaC) using the AWS Cloud Development Kit (AWS CDK). The dashboard is generated and managed automatically as a component of the platform infrastructure, along with the MSK cluster.

We define the maximum capacity limits of the MSK cluster in a configuration file, and the limits are automatically loaded into the capacity dashboard as annotations in the Amazon CloudWatch graph widgets. The capacity limit annotations are clearly visible and give us a view of the cluster’s capacity headroom based on usage.
We determined the capacity limits for throughput, latency, and throttling through the performance testing. Capacity limits for the other metrics, such as CPU, disk space, and memory, are based on the Amazon MSK best practices guidelines.
During the operational reviews, we proactively assess the capacity monitoring dashboards to determine whether more capacity needs to be added to the cluster. This approach allows us to identify and address potential performance issues before they have a significant impact on user workloads. It’s a preventative measure rather than a reactive response to performance degradation.
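The annotation mechanism works roughly as sketched below: a configured capacity limit becomes a horizontal annotation in the CloudWatch graph widget definition. The config key and metric shown are illustrative, not the actual Hydro configuration schema:

```python
# Sketch: turn a configured capacity limit into a CloudWatch graph
# widget with a horizontal annotation line. The config content is
# illustrative; Hydro's real schema differs.

import json

capacity_config = {"BytesInPerSec": 60_000_000}  # limit loaded from a config file

def widget_with_limit(metric_name, limit):
    """Build a CloudWatch dashboard graph widget that draws the
    capacity limit as a horizontal annotation."""
    return {
        "type": "metric",
        "properties": {
            "metrics": [["AWS/Kafka", metric_name]],
            "annotations": {
                "horizontal": [
                    {"label": f"{metric_name} capacity limit", "value": limit}
                ]
            },
        },
    }

widget = widget_with_limit("BytesInPerSec", capacity_config["BytesInPerSec"])
print(json.dumps(widget["properties"]["annotations"], indent=2))
```

In our setup, the equivalent definition is emitted by the AWS CDK as part of the platform stack rather than assembled by hand.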
Preemptive CloudWatch alarms
We have implemented preemptive CloudWatch alarms in addition to the capacity monitoring dashboards. These alarms are configured to alert us before a specific capacity metric reaches its limit, notifying us when the sustained value reaches 80% of the capacity limit. This method of monitoring enables us to take immediate action instead of waiting for our monthly review cadence.
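Deriving the alarm threshold from a capacity limit can be sketched as follows. The parameter keys mirror the boto3 CloudWatch `put_metric_alarm` API, though the alarm name, cluster name, and evaluation settings here are placeholders of our own:

```python
# Sketch: build preemptive CloudWatch alarm parameters that fire
# when a metric sustains 80% of its capacity limit. In practice the
# resulting dict would be passed to boto3's
# cloudwatch.put_metric_alarm(**params); names are placeholders.

ALERT_FRACTION = 0.8  # alert at 80% of the capacity limit

def preemptive_alarm_params(metric_name, capacity_limit, cluster_name):
    return {
        "AlarmName": f"{cluster_name}-{metric_name}-preemptive",
        "Namespace": "AWS/Kafka",
        "MetricName": metric_name,
        "Dimensions": [{"Name": "Cluster Name", "Value": cluster_name}],
        "Statistic": "Average",
        "Period": 300,            # 5-minute datapoints
        "EvaluationPeriods": 6,   # sustained for 30 minutes (illustrative)
        "Threshold": capacity_limit * ALERT_FRACTION,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    }

params = preemptive_alarm_params("BytesInPerSec", 60_000_000, "hydro-prod")
print(params["Threshold"])  # 48000000.0
```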
Value added by our capacity planning approach
As operators of the Hydro platform, our approach to capacity planning has given us a consistent way to assess how far we are from the theoretical capacity limits of all our clusters, regardless of their configuration. Our capacity monitoring dashboards are a key observability tool that we review regularly; they’re also useful while troubleshooting performance issues. They help us quickly tell whether capacity constraints could be a potential root cause of any ongoing issues. This means we can use our current capacity planning approach and tooling both proactively and reactively, depending on the situation and need.
Another benefit of this approach is that we calculate the theoretical maximum usage values that a given cluster with a specific configuration can withstand on a separate cluster, without impacting any actual users of the platform. We spin up short-lived MSK clusters through our AWS CDK based automation and run capacity tests on them. We do this regularly to assess the impact, if any, that changes made to the cluster’s configuration have on the known capacity limits. Following our current feedback loop, if these newly calculated limits differ from the previously known ones, they’re used to automatically update our capacity dashboards and alarms in CloudWatch.
Future evolution
Hydro is a platform that’s constantly improving with the introduction of new features. One of these features is the ability to conveniently create Kafka client applications. To meet the growing demand, it’s essential to stay ahead of capacity planning. Although the approach discussed here has served us well so far, it’s by no means the final stage, and there are capabilities we need to extend and areas we need to improve on.
Multi-cluster architecture
To support critical workloads, we’re considering a multi-cluster architecture using Amazon MSK, which would also affect our capacity planning. In the future, we plan to profile workloads based on metadata, cross-check them with capacity metrics, and place them in the appropriate MSK cluster. In addition to the existing provisioned MSK clusters, we’ll evaluate how the Amazon MSK Serverless cluster type can complement our platform architecture.
Usage trends
We have added CloudWatch anomaly detection graphs to our capacity monitoring dashboards to track any unusual trends. However, because the CloudWatch anomaly detection algorithm only evaluates up to 2 weeks of metric data, we’ll reassess its usefulness as we onboard more workloads. Besides identifying usage trends, we’ll explore options for implementing an algorithm with predictive capabilities to detect when MSK cluster resources degrade and deplete.
Conclusion
Initial capacity planning lays a solid foundation for future enhancements and provides a safe onboarding process for workloads. To achieve optimal performance of our platform, we must make sure our capacity planning strategy evolves in line with the platform’s growth. As a result, we maintain a close collaboration with AWS to continuously develop additional features that meet our business needs and stay in sync with the Amazon MSK roadmap. This makes sure we stay ahead of the curve and can deliver the best possible experience to our users.
We encourage all Amazon MSK users not to miss out on maximizing their cluster’s potential and to start planning their capacity. Implementing the strategies outlined in this post is a great first step and will lead to smoother operations and significant savings in the long run.
About the Authors
Eunice Aguilar is a Staff Data Engineer at REA. She has worked in software engineering across various industries throughout the years, most recently in property data. She’s also an advocate for women interested in transitioning into tech, as well as those already well-versed in it, whom she takes inspiration from.
Francisco Rodera is a Staff Systems Engineer at REA. He has extensive experience building and operating large-scale distributed systems. His interests are automation, observability, and applying SRE practices to business-critical services and platforms.
Khizer Naeem is a Technical Account Manager at AWS. He specializes in Efficient Compute and has a deep passion for Linux and open-source technologies, which he leverages to help enterprise customers modernize and optimize their cloud workloads.

