This is a guest post by Oleh Khoruzhenko, Senior Staff DevOps Engineer at Bazaarvoice, in partnership with AWS.
Bazaarvoice is an Austin-based company powering a world-leading reviews and ratings platform. Our system processes billions of consumer interactions through ratings, reviews, photos, and videos, helping brands and retailers build consumer confidence and drive sales by using authentic user-generated content (UGC) across the customer journey. The Bazaarvoice Trust Mark is the gold standard in authenticity.
Apache Kafka is one of the core components of our infrastructure, enabling real-time data streaming for the global review platform. Although Kafka's distributed architecture met our needs for high-throughput, fault-tolerant streaming, self-managing this complex system diverted significant engineering resources away from our core product development. Every component of our Kafka infrastructure required specialized expertise, from configuring low-level parameters to maintaining the complex distributed systems our customers rely on. The dynamic nature of the environment demanded continuous care and investment in automation. We found ourselves constantly managing upgrades, applying security patches, implementing fixes, and addressing scaling needs as our data volumes grew.
In this post, we show you the steps we took to migrate our workloads from self-hosted Kafka to Amazon Managed Streaming for Apache Kafka (Amazon MSK). We walk you through our migration process and highlight the improvements we achieved after this transition. We show how we minimized operational overhead, enhanced our security and compliance posture, automated key processes, and built a more resilient platform while maintaining the high performance our global customer base expects.
The need for modernization
As our platform grew to process billions of daily consumer interactions, we needed to find a way to scale our Kafka clusters efficiently while maintaining a small team to manage the infrastructure. The constraints of self-managed Kafka clusters manifested in several key areas:
- Scaling operations – Although scaling our self-hosted Kafka clusters wasn't inherently complex, it required careful planning and execution. Every time we needed to add new brokers to handle increased workload, our team faced a multi-step process involving capacity planning, infrastructure provisioning, and configuration updates.
- Configuration complexity – Kafka offers hundreds of configuration parameters. Although we didn't actively manage all of them, understanding their impact was important. Key settings like I/O threads, memory buffers, and retention policies needed ongoing attention as we scaled (a brief AdminClient sketch follows at the end of this section). Even minor adjustments could have significant downstream effects, requiring our team to maintain deep expertise in these parameters and their interactions to ensure optimal performance and stability.
- Infrastructure management and capacity planning – Self-hosting Kafka required us to manage multiple scaling dimensions, including compute, memory, network throughput, storage throughput, and storage volume. We needed to carefully plan capacity for all these components, often making complex trade-offs. Beyond capacity planning, we were responsible for real-time management of our Kafka infrastructure. This included promptly detecting and addressing component failures and performance issues. Our team needed to be highly responsive to alerts, often requiring immediate action to maintain system stability.
- Specialized expertise requirements – Operating Kafka at scale demanded deep technical expertise across multiple domains. The team needed to:
- Monitor and analyze hundreds of performance metrics
- Conduct complex root cause analysis for performance issues
- Manage ZooKeeper ensemble coordination
- Execute rolling updates for zero-downtime upgrades and security patches
These challenges were compounded during peak business periods, such as Black Friday and Cyber Monday, when maintaining optimal performance was critical for Bazaarvoice's retail customers.
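To illustrate the kind of configuration review this required, the following minimal sketch uses the Kafka AdminClient to read a few of the broker settings mentioned above; the bootstrap address and broker ID are placeholders rather than our actual environment.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class BrokerConfigAudit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; replace with your cluster's brokers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1.example.internal:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Read the full configuration of broker 1 (placeholder broker ID)
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                    .all().get().get(broker);

            // A few of the settings that needed ongoing attention while self-hosting
            List<String> watched = Arrays.asList(
                    "num.io.threads", "num.network.threads", "log.retention.hours");
            config.entries().stream()
                    .filter(entry -> watched.contains(entry.name()))
                    .forEach(entry -> System.out.println(entry.name() + " = " + entry.value()));
        }
    }
}
```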
Choosing Amazon MSK
After evaluating various options, we selected Amazon MSK as our modernization solution. The decision was driven by the service's ability to minimize operational overhead, provide high availability out of the box with its three Availability Zone architecture, and offer seamless integration with our existing AWS infrastructure.
Key capabilities that made Amazon MSK the clear choice:
- AWS integration – We already used AWS services for data processing and analytics. Amazon MSK connected directly with these services, removing the need to build and maintain custom integrations. This meant our existing data pipelines would continue working with minimal changes.
- Automated operations management – Amazon MSK automated our most time-consuming tasks. We no longer have to manually monitor instances and storage for failures or respond to these issues ourselves.
- Enterprise-grade reliability – The platform's architecture matched our reliability requirements out of the box. Multi-AZ distribution and built-in replication gave us the same fault tolerance we had carefully built into our self-hosted system, now backed by AWS service guarantees.
- Simplified upgrade process – Before Amazon MSK, version upgrades for our Kafka clusters required careful planning and execution. The process was complex, involving multiple steps and risks. Amazon MSK simplified our upgrade operations. We now use automated upgrades for dev and test workloads and maintain control over production environments. This shift reduced the need for extensive planning sessions and multiple engineers. As a result, we stay current with the latest Kafka versions and security patches, improving our system reliability and performance.
- Enhanced security controls – Our platform required ISO 27001 compliance, which typically involved months of documentation and security controls implementation. Amazon MSK came with this certification built in, removing the need for separate compliance work. Amazon MSK encrypted our data, managed network access, and integrated with our existing security tools.
With Amazon MSK selected as our target platform, we began planning the complex task of migrating our critical streaming infrastructure without disrupting the billions of consumer interactions flowing through our system.
Bazaarvoice's migration journey
Moving our complex Kafka infrastructure to Amazon MSK required careful planning and precise execution. Our platform processes data through two primary components: an Apache Kafka Streams pipeline that handles data processing and augmentation, and consumer applications that move this enriched data to downstream systems. With 40 TB of state across 250 internal topics, this migration demanded a methodical approach.
Planning phase
Working with AWS Solutions Architects proved essential for validating our migration strategy. Our platform's unique characteristics required special consideration:
- Multi-Region deployment across the US and EU
- Complex stateful applications with strict data consistency needs
- Vital business services requiring zero downtime
- Diverse consumer ecosystem with different migration requirements
Migration challenges
The biggest hurdle was migrating our stateful Kafka Streams applications. Our data processing runs as a directed acyclic graph (DAG) of applications across Regions, using static group membership to prevent disruptive rebalancing (a minimal configuration sketch follows the list below). It's important to note that Kafka Streams keeps its state in internal Kafka topics, so for applications to recover properly, this state must be replicated accurately. This characteristic of Kafka Streams added complexity to our migration process. Initially, we considered MirrorMaker2, the standard tool for Kafka migrations. However, two fundamental limitations made it challenging:
- Risk of losing state or incorrectly replicating state across our applications.
- Inability to run two instances of our applications concurrently, which meant we would have needed to shut down the main application and wait for it to recover its state from the MSK cluster. Given the size of our state, this recovery process exceeded our 30-minute SLA for downtime.
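For context, static group membership in Kafka Streams is driven by the consumer-level group.instance.id setting. The following is a minimal sketch of how such a configuration might look; the application ID, bootstrap servers, and timeout value are illustrative placeholders rather than our production values.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsMigrationConfig {
    // Builds a Kafka Streams configuration with static group membership enabled
    public static Properties build(String bootstrapServers, String instanceId) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "ugc-augmentation"); // placeholder application ID
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // Static membership (KIP-345): a stable instance ID per worker lets a restarted
        // instance rejoin without triggering a full, disruptive rebalance
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.GROUP_INSTANCE_ID_CONFIG), instanceId);
        // Give a restarted instance time to come back before its partitions are reassigned
        props.put(StreamsConfig.consumerPrefix(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG), "120000");
        return props;
    }
}
```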
Our solution
We decided to deploy a parallel stack of Kafka Streams applications reading data from and writing data to Amazon MSK. This approach gave us sufficient time for testing and verification, and enabled the applications to hydrate their state before we delivered the output to our data warehouse for analytics. We used MirrorMaker2 for input topic replication, while our solution offered several advantages:
- Simplified monitoring of the replication process
- Prevented consistency issues between state stores and internal topics
- Allowed for gradual, controlled migration of consumers
- Enabled thorough validation before cutover
- Required a coordinated transition plan for all consumers, because we couldn't transfer consumer offsets across clusters
Consumer migration strategy
Each consumer type required a carefully tailored approach:
- Standard consumers – For applications supporting the Kafka consumer group protocol, we implemented a four-step migration (a consumer configuration sketch follows this list). This approach risked some duplicate processing, but our applications were designed to handle this scenario. The steps were as follows:
- Configure consumers with auto.offset.reset: latest.
- Stop all DAG producers.
- Wait for existing consumers to process the remaining messages.
- Cut over consumer applications to Amazon MSK.
- Apache Kafka Connect sinks – Our sink connectors served two critical databases:
- A distributed search and analytics engine – Document versioning depended on Kafka record offsets, making direct migration impossible. To address this, we implemented a solution that involved building new search engine clusters from scratch.
- A document-oriented NoSQL database – This supported direct migration without requiring new database instances, simplifying the process considerably.
- Apache Spark and Flink applications – These presented unique challenges because of their internal checkpointing mechanisms:
- Offsets managed outside Kafka's consumer groups
- Checkpoints incompatible between source and target clusters
- Required full data reprocessing from the beginning
We scheduled these migrations during off-peak hours to minimize impact.
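To make the offset handling concrete, here is a minimal sketch of a standard consumer after cutover, reflecting steps 1 and 4 above; the MSK endpoint, group ID, and topic name are placeholders rather than our actual values.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CutoverConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Step 4: point the consumer at the MSK bootstrap brokers instead of the old cluster
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "msk-broker-1.example.internal:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "downstream-sink"); // placeholder group ID
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Step 1: with no committed offsets on the new cluster, start from the latest records,
        // because everything earlier was already processed against the old cluster
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("enriched-reviews")); // placeholder topic
            consumer.poll(Duration.ofSeconds(5)).forEach(record ->
                    System.out.printf("%s@%d: %s%n", record.topic(), record.offset(), record.value()));
        }
    }
}
```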
Technical benefits and improvements
Moving to Amazon MSK fundamentally changed how we manage our Kafka infrastructure. The transformation is best illustrated by comparing key operational tasks before and after the migration, summarized in the following table.
| Activity | Before: Self-Hosted Kafka | After: Amazon MSK |
| --- | --- | --- |
| Security patching | Required dedicated team time for Kafka and OS updates | Fully automated |
| Broker recovery | Needed manual monitoring and intervention | Fully automated |
| Client authentication | Complex password rotation procedures | AWS Identity and Access Management (IAM) |
| Version upgrades | Complex procedure requiring extensive planning | Fully automated |
The details of these tasks are as follows:
- Security patching – Previously, our team spent 8 hours monthly applying Kafka and operating system (OS) security patches across our broker fleet. Amazon MSK now handles these updates automatically, maintaining our security posture without engineering intervention.
- Broker recovery – Although our self-hosted Kafka had automatic recovery capabilities, every incident required careful monitoring and occasional manual intervention. With Amazon MSK, node failures and storage degradation issues such as Amazon Elastic Block Store (Amazon EBS) slowdowns are handled entirely by AWS and resolved within minutes without our involvement.
- Authentication management – Our self-hosted implementation required password rotations for SASL/SCRAM authentication, a process that took two engineers several days to coordinate. The direct integration between Amazon MSK and AWS Identity and Access Management (IAM) minimized this overhead while strengthening our security controls (a client configuration sketch follows this list).
- Version upgrades – Kafka version upgrades in our self-hosted environment required weeks of planning and testing, as well as weekend maintenance windows. Amazon MSK manages these upgrades automatically during off-peak hours, maintaining our SLAs without disruption.
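To show why the password rotation burden goes away, the following minimal sketch illustrates the client-side settings used with the aws-msk-iam-auth library, where credentials come from the standard AWS credential chain instead of stored secrets; the group ID and deserializers are placeholders, not our production configuration.

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MskIamConsumerFactory {
    // Creates a consumer that authenticates to Amazon MSK with IAM instead of SASL/SCRAM passwords
    public static KafkaConsumer<String, String> create(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group"); // placeholder group ID
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // IAM authentication: no stored secrets to rotate; the client signs requests
        // with credentials from the default AWS credential provider chain
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put("sasl.mechanism", "AWS_MSK_IAM");
        props.put("sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;");
        props.put("sasl.client.callback.handler.class",
                "software.amazon.msk.auth.iam.IAMClientCallbackHandler");
        return new KafkaConsumer<>(props);
    }
}
```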
These improvements proved especially valuable during high-traffic periods like Black Friday, when our team previously needed extensive operational readiness plans. Now, the built-in resiliency of Amazon MSK provides us with reliable Kafka clusters that serve as mission-critical infrastructure for our business. The migration also made it possible to break our monolithic clusters into smaller, dedicated MSK clusters. This improved our data isolation, provided better resource allocation, and enhanced performance predictability for high-priority workloads.
Lessons learned
Our migration to Amazon MSK revealed several key insights that can help other organizations modernize their Kafka infrastructure:
- Expert validation – Working with AWS Solutions Architects to validate our migration strategy caught several critical issues early. Although our team knew our applications well, external Kafka specialists identified potential problems with state management and consumer offset handling that we hadn't considered. This validation prevented costly missteps during the migration.
- Data verification – Comparing data across Kafka clusters proved challenging. We built tools to capture topic snapshots in Parquet format on Amazon Simple Storage Service (Amazon S3), enabling quick comparisons using Amazon Athena queries. This approach gave us confidence that data remained consistent throughout the migration.
- Start small – Beginning with our smallest data universe in QA helped us refine our process. Each subsequent migration went more smoothly as we applied lessons from earlier iterations. This gradual approach helped us maintain system stability while building team confidence.
- Detailed planning – We created specific migration plans with each team, considering their unique requirements and constraints. For example, our machine learning pipeline needed special handling because of strict offset management requirements. This granular planning prevented downstream disruptions.
- Performance optimization – We found that using Amazon MSK provisioned throughput offered clear cost advantages when storage throughput became a bottleneck. This feature made it possible to improve cluster performance without scaling instance sizes or adding brokers, providing a more efficient solution to our throughput challenges.
- Documentation – Maintaining detailed migration runbooks proved invaluable. When we encountered similar issues across different migrations, having documented solutions saved significant troubleshooting time.
Conclusion
In this post, we showed you how we modernized our Kafka infrastructure by migrating to Amazon MSK. We walked through our decision-making process, the challenges we faced, and the strategies we employed. Our journey transformed Kafka operations from a resource-intensive, self-managed infrastructure to a streamlined, managed service, improving operational efficiency, platform reliability, and team productivity. For enterprises managing self-hosted Kafka infrastructure, our experience demonstrates that successful transformation is achievable with proper planning and execution. As data streaming needs grow, modernizing infrastructure becomes a strategic imperative for maintaining competitive advantage.
For more information, visit the Amazon MSK product page, and explore the comprehensive Developer Guide to learn about the features available to help you build scalable and reliable streaming data applications on AWS.