Monday, July 14, 2025

Build multi-Region resilient Apache Kafka applications with identical topic names using Amazon MSK and Amazon MSK Replicator


Resilience has always been a top priority for customers running mission-critical Apache Kafka applications. Amazon Managed Streaming for Apache Kafka (Amazon MSK) is deployed across multiple Availability Zones and provides resilience within an AWS Region. However, mission-critical Kafka deployments require cross-Region resilience to minimize downtime during service impairment in a Region. With Amazon MSK Replicator, you can build multi-Region resilient streaming applications to provide business continuity, share data with partners, aggregate data from multiple clusters for analytics, and serve global clients with reduced latency. This post explains how to use MSK Replicator for cross-cluster data replication and details the failover and failback processes while keeping the same topic name across Regions.

MSK Replicator overview

Amazon MSK offers two cluster types: Provisioned and Serverless. Provisioned clusters support two broker types: Standard and Express. With the introduction of Amazon MSK Express brokers, you can now deploy MSK clusters that reduce recovery time by up to 90% while delivering consistent performance. Express brokers provide up to 3 times the throughput per broker and scale up to 20 times faster compared to Standard brokers running Kafka. MSK Replicator works with both broker types in Provisioned clusters as well as with Serverless clusters.

MSK Replicator supports an identical topic name configuration, enabling seamless topic name retention during both active-active and active-passive replication. This avoids the risk of infinite replication loops commonly associated with third-party or open source replication tools. When deploying an active-passive cluster architecture for regional resilience, where one cluster handles live traffic and the other acts as a standby, an identical topic configuration simplifies the failover process. Applications can transition to the standby cluster without reconfiguration because topic names remain consistent across the source and target clusters.

To set up an active-passive deployment, you must enable multi-VPC connectivity for the MSK cluster in the primary Region and deploy an MSK Replicator in the secondary Region. The replicator consumes data from the primary Region's MSK cluster and asynchronously replicates it to the secondary Region. You connect the clients initially to the primary cluster but fail them over to the secondary cluster in the case of primary Region impairment. When the primary Region recovers, you deploy a new MSK Replicator to replicate data back from the secondary cluster to the primary. You need to stop the client applications in the secondary Region and restart them in the primary Region.

Because replication with MSK Replicator is asynchronous, there is a possibility of duplicate data in the secondary cluster. During a failover, consumers might reprocess some messages from Kafka topics. To handle this, deduplication should occur on the consumer side, such as by using an idempotent downstream system like a database.
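The consumer-side deduplication mentioned here can be sketched in a few lines. This is a minimal illustration, not MSK or Kafka client code: the message IDs and the in-memory dict (standing in for an idempotent database upsert) are hypothetical.

```python
# Sketch: consumer-side deduplication after a failover replay.
# A real system would upsert by key into a database; here an
# in-memory dict stands in for that idempotent store.

def process_batch(messages, store):
    """Apply each message at most once, keyed by a unique message ID."""
    applied = 0
    for msg in messages:
        if msg["id"] in store:
            continue  # already processed before the failover; skip the replay
        store[msg["id"]] = msg["payload"]  # idempotent upsert
        applied += 1
    return applied

store = {}
first = [{"id": "m1", "payload": "v1"}, {"id": "m2", "payload": "v2"}]
# After failover, m2 is redelivered along with a new message m3.
replay = [{"id": "m2", "payload": "v2"}, {"id": "m3", "payload": "v3"}]

assert process_batch(first, store) == 2
assert process_batch(replay, store) == 1  # only m3 is new; m2 is dropped
assert sorted(store) == ["m1", "m2", "m3"]
```

The same idea applies to any downstream sink where a write keyed by a unique ID is naturally idempotent.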

In the following sections, we demonstrate how to deploy MSK Replicator in an active-passive architecture with identical topic names. We provide a step-by-step guide for failing over to the secondary Region during a primary Region impairment and failing back when the primary Region recovers. For an active-active setup, refer to Create an active-active setup using MSK Replicator.

Solution overview

In this setup, we deploy a primary MSK Provisioned cluster with Express brokers in the us-east-1 Region. To provide cross-Region resilience for Amazon MSK, we establish a secondary MSK cluster with Express brokers in the us-east-2 Region and replicate topics from the primary MSK cluster to the secondary cluster using MSK Replicator. This configuration provides high resilience within each Region by using Express brokers, and cross-Region resilience is achieved through an active-passive architecture, with replication managed by MSK Replicator.

The following diagram illustrates the solution architecture.

The primary Region MSK cluster handles client requests. In the event of a failure to communicate with the MSK cluster due to a primary Region impairment, you need to fail over the clients to the secondary MSK cluster. The producer writes to the customer topic in the primary MSK cluster, and the consumer with the group ID msk-consumer reads from the same topic. As part of the active-passive setup, we configure MSK Replicator to use identical topic names, making sure that the customer topic remains consistent across both clusters without requiring changes from the clients. The entire setup is deployed within a single AWS account.

In the following sections, we describe how to set up a multi-Region resilient MSK cluster using MSK Replicator and also show the failover and failback strategy.

Provision an MSK cluster using AWS CloudFormation

We provide AWS CloudFormation templates to provision the required resources in each Region:

This will create the virtual private cloud (VPC), subnets, and the MSK Provisioned cluster with Express brokers within the VPC, configured with AWS Identity and Access Management (IAM) authentication, in each Region. It will also create a Kafka client Amazon Elastic Compute Cloud (Amazon EC2) instance, where we can use the Kafka command line to create and view a Kafka topic and produce and consume messages to and from the topic.

Configure multi-VPC connectivity in the primary MSK cluster

After the clusters are deployed, you need to enable multi-VPC connectivity on the primary MSK cluster deployed in us-east-1. This allows MSK Replicator to connect to the primary MSK cluster using multi-VPC connectivity (powered by AWS PrivateLink). Multi-VPC connectivity is only required for cross-Region replication. For same-Region replication, MSK Replicator uses an IAM policy to connect to the primary MSK cluster.

MSK Replicator uses IAM authentication exclusively to connect to both the primary and secondary MSK clusters. Therefore, although other Kafka clients can continue to use SASL/SCRAM or mTLS authentication, IAM authentication needs to be enabled for MSK Replicator to work.

To enable multi-VPC connectivity, complete the following steps:

  1. On the Amazon MSK console, navigate to the MSK cluster.
  2. On the Properties tab, under Network settings, choose Turn on multi-VPC connectivity on the Edit dropdown menu.

  3. For Authentication type, select IAM role-based authentication.
  4. Choose Turn on selection.

Enabling multi-VPC connectivity is a one-time setup, and it can take approximately 30–45 minutes depending on the number of brokers. After it is enabled, you need to provide an MSK cluster resource policy to allow MSK Replicator to communicate with the primary cluster.

  5. Under Security settings, choose Edit cluster policy.
  6. Select Include Kafka service principal.

Now that the cluster is enabled to receive requests from MSK Replicator using PrivateLink, we need to set up the replicator.

Create an MSK Replicator

Complete the following steps to create an MSK Replicator:

  1. In the secondary Region (us-east-2), open the Amazon MSK console.
  2. Choose Replicators in the navigation pane.
  3. Choose Create replicator.
  4. Enter a name and optional description.

  5. In the Source cluster section, provide the following information:
    1. For Cluster region, choose us-east-1.
    2. For MSK cluster, enter the Amazon Resource Name (ARN) of the primary MSK cluster.

For a cross-Region setup, the primary cluster will appear disabled if multi-VPC connectivity is not enabled and the cluster resource policy is not configured on the primary MSK cluster. After you choose the primary cluster, it automatically selects the subnets associated with the primary cluster. Security groups aren't required because the primary cluster's access is governed by the cluster resource policy.

Next, you select the target cluster. The target cluster Region defaults to the Region where the MSK Replicator is created. In this case, it's us-east-2.

  6. In the Target cluster section, provide the following information:
    1. For MSK cluster, enter the ARN of the secondary MSK cluster. This will automatically select the cluster subnets and the security group associated with the secondary cluster.
    2. For Security groups, choose any additional security groups.

Make sure that the security groups have outbound rules allowing traffic to your secondary cluster's security groups. Also make sure that your secondary cluster's security groups have inbound rules accepting traffic from the MSK Replicator security groups provided here.

Now let's provide the MSK Replicator settings.

  7. In the Replicator settings section, enter the following information:
    1. For Topics to replicate, we keep the default value, which replicates all topics from the primary to the secondary cluster.
    2. For Replication starting position, we choose Earliest, so that we can get all the events from the start of the source topics.
    3. For Copy settings, select Keep the same topic names to configure the topic name in the secondary cluster to be identical to that of the primary cluster.

This makes sure that MSK clients don't need to add a prefix to the topic names.
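For contrast, without the identical-name setting, a replicated topic is created on the target cluster with a source-cluster alias prefix, so clients would have to remap topic names after failover. The following sketch models the two naming modes; the dot-separated `<alias>.<topic>` format is an assumption here, so check the MSK Replicator documentation for the exact default.

```python
def replicated_topic_name(topic, source_alias, keep_same_names):
    """Target-cluster topic name under the two MSK Replicator copy settings:
    identical names, or a source-cluster-alias prefix (assumed format)."""
    return topic if keep_same_names else f"{source_alias}.{topic}"

# With "Keep the same topic names", clients need no reconfiguration.
assert replicated_topic_name("customer", "primary", keep_same_names=True) == "customer"
# With the default prefixing, the same topic gets a different name downstream.
assert replicated_topic_name("customer", "primary", keep_same_names=False) == "primary.customer"
```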

  8. For this example, we keep the Consumer group replication setting as default and set Target compression type as None.

MSK Replicator will also automatically create the required IAM policies.

  9. Choose Create to create the replicator.

The process takes around 15–20 minutes to deploy the replicator. After the MSK Replicator is running, this will be reflected in its status.

Configure the MSK client for the primary cluster

Complete the following steps to configure the MSK client:

  1. On the Amazon EC2 console, navigate to the EC2 instance of the primary Region (us-east-1) and connect to the EC2 instance dr-test-primary-KafkaClientInstance1 using Session Manager, a capability of AWS Systems Manager.

After you have logged in, you need to configure the primary MSK cluster bootstrap address to create a topic and publish data to the cluster. You can get the bootstrap address for IAM authentication on the Amazon MSK console, under View client information on the cluster details page.

  2. Configure the bootstrap address with the following code:
sudo su - ec2-user

export BS_PRIMARY=<>

  3. Configure the client configuration for IAM authentication to communicate with the MSK cluster:
echo -n "security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
" > /home/ec2-user/kafka/config/client_iam.properties

Create a topic and produce and consume messages to the topic

Complete the following steps to create a topic and then produce and consume messages to it:

  1. Create a customer topic:
/home/ec2-user/kafka/bin/kafka-topics.sh --bootstrap-server=$BS_PRIMARY \
--create --replication-factor 3 --partitions 3 \
--topic customer \
--command-config=/home/ec2-user/kafka/config/client_iam.properties

  2. Create a console producer to write to the topic:
/home/ec2-user/kafka/bin/kafka-console-producer.sh \
--bootstrap-server=$BS_PRIMARY --topic customer \
--producer.config=/home/ec2-user/kafka/config/client_iam.properties

  3. Produce the following sample text to the topic:
This is a customer topic
This is the 2nd message to the topic.

  4. Press Ctrl+C to exit the console prompt.
  5. Create a consumer with group.id msk-consumer to read all the messages from the beginning of the customer topic:
/home/ec2-user/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server=$BS_PRIMARY --topic customer --from-beginning \
--consumer.config=/home/ec2-user/kafka/config/client_iam.properties \
--consumer-property group.id=msk-consumer

This will consume both of the sample messages from the topic.

  6. Press Ctrl+C to exit the console prompt.

Configure the MSK client for the secondary MSK cluster

Go to the EC2 instance of the secondary Region (us-east-2) and follow the previously mentioned steps to configure an MSK client. The only difference is that you must use the bootstrap address of the secondary MSK cluster as the environment variable. Set the variable $BS_SECONDARY to the secondary Region MSK cluster bootstrap address.

Verify replication

After the client is configured to communicate with the secondary MSK cluster using IAM authentication, list the topics in the cluster. Because the MSK Replicator is now running, the customer topic has been replicated. To verify it, let's see the list of topics in the cluster:

/home/ec2-user/kafka/bin/kafka-topics.sh --bootstrap-server=$BS_SECONDARY \
--list --command-config=/home/ec2-user/kafka/config/client_iam.properties

The topic name is customer, without any prefix.

By default, MSK Replicator replicates the details of all the consumer groups. Because you used the default configuration, you can verify with the following command that the consumer group ID msk-consumer is also replicated to the secondary cluster:

/home/ec2-user/kafka/bin/kafka-consumer-groups.sh --bootstrap-server=$BS_SECONDARY \
--list --command-config=/home/ec2-user/kafka/config/client_iam.properties

Now that we have verified that the topic is replicated, let's look at the key metrics to monitor.

Monitor replication

Monitoring MSK Replicator is crucial to make sure that data replication is keeping pace. This reduces the risk of data loss if an unplanned failure occurs. Some important MSK Replicator metrics to monitor are ReplicationLatency, MessageLag, and ReplicatorThroughput. For a detailed list, see Monitor replication.
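To act on a metric such as ReplicationLatency, you typically alert only when it stays elevated across several consecutive datapoints, the way a CloudWatch alarm with an evaluation period does. The following is a hypothetical sketch of that logic; the threshold and window values are illustrative, not service defaults.

```python
def latency_alarm(datapoints_ms, threshold_ms, evaluation_periods):
    """Return True if the last `evaluation_periods` datapoints all breach
    the threshold, mirroring a CloudWatch alarm on ReplicationLatency."""
    if len(datapoints_ms) < evaluation_periods:
        return False  # not enough data to evaluate yet
    return all(dp > threshold_ms for dp in datapoints_ms[-evaluation_periods:])

# Healthy replication: latency hovers well under the threshold.
assert latency_alarm([900, 1100, 1000], threshold_ms=5000, evaluation_periods=3) is False
# Sustained spike: three consecutive breaches trigger the alarm.
assert latency_alarm([1000, 7000, 8000, 9000], threshold_ms=5000, evaluation_periods=3) is True
```

Requiring consecutive breaches avoids paging on a single transient spike while still catching the sustained rise that, as discussed later, can signal a regional impairment.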

To understand how many bytes are processed by MSK Replicator, monitor the ReplicatorBytesInPerSec metric. This metric indicates the average number of bytes processed by the replicator per second. Data processed by MSK Replicator consists of all data MSK Replicator receives, including both the data replicated to the target cluster and the data filtered out by MSK Replicator. This metric is applicable when you use Keep the same topic names in the MSK Replicator copy settings. During a failback scenario, MSK Replicator starts reading from the earliest offset and replicates records from the secondary back to the primary. Depending on the retention settings, some of that data might already exist in the primary cluster. To prevent duplicates, MSK Replicator processes the data but automatically filters out the records that already exist.

Fail over clients to the secondary MSK cluster

In the case of an unexpected event in the primary Region in which clients can't connect to the primary MSK cluster, or the clients are receiving unexpected produce and consume errors, this could be a sign that the primary MSK cluster is impacted. You might notice a sudden spike in replication latency. If the latency continues to rise, it could indicate a regional impairment in Amazon MSK. To verify this, you can check the AWS Health Dashboard, though there is a chance that status updates may be delayed. If you identify signs of a regional impairment in Amazon MSK, you should prepare to fail over the clients to the secondary Region.

For critical workloads, we recommend not taking a dependency on control plane actions for failover. To mitigate this risk, you can implement a pilot light deployment, where essential components of the stack are kept running in a secondary Region and scaled up when the primary Region is impaired. Alternatively, for a faster and smoother failover with minimal downtime, a hot standby approach is recommended. This involves pre-deploying the entire stack in the secondary Region so that, in a disaster recovery scenario, the pre-deployed clients can be activated quickly.

Failover process

To perform the failover, you first need to stop the clients pointed at the primary MSK cluster. However, for the purposes of this demo, we're using console producers and consumers, so our clients are already stopped.

In a real failover scenario, using primary Region clients to communicate with the secondary Region MSK cluster is not recommended, because it breaches fault isolation boundaries and leads to increased latency. To simulate the failover using the preceding setup, let's start a producer and consumer in the secondary Region (us-east-2). For this, run a console producer in the EC2 instance (dr-test-secondary-KafkaClientInstance1) of the secondary Region.

The following diagram illustrates this setup.

Complete the following steps to perform a failover:

  1. Create a console producer using the following code:
/home/ec2-user/kafka/bin/kafka-console-producer.sh \
--bootstrap-server=$BS_SECONDARY --topic customer \
--producer.config=/home/ec2-user/kafka/config/client_iam.properties

  2. Produce the following sample text to the topic:
This is the 3rd message to the topic.
This is the 4th message to the topic.

Now, let's create a console consumer. It's important to make sure that the consumer group ID is exactly the same as that of the consumer attached to the primary MSK cluster. For this, we use the group.id msk-consumer to read the messages from the customer topic. This simulates bringing up the same consumer that was attached to the primary cluster.

  3. Create a console consumer with the following code:
/home/ec2-user/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server=$BS_SECONDARY --topic customer --from-beginning \
--consumer.config=/home/ec2-user/kafka/config/client_iam.properties \
--consumer-property group.id=msk-consumer

Although the consumer is configured to read all the data from the earliest offset, it only consumes the last two messages produced by the console producer. This is because MSK Replicator has replicated the consumer group details, including the offsets read by the consumer with the consumer group ID msk-consumer. The console consumer with the same group.id mimics the behavior of the consumer failing over to the secondary MSK cluster.
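The offset-resume behavior above can be sketched with a toy model: because the committed offsets for msk-consumer were replicated along with the group, the failed-over consumer resumes where the group left off rather than from the earliest offset. This is an illustrative model, not Kafka client code.

```python
def resume_position(topic_log, committed_offset):
    """Return the messages a consumer group sees when it reattaches:
    everything at or after its committed offset, not the whole log."""
    return topic_log[committed_offset:]

log = [
    "This is a customer topic",
    "This is the 2nd message to the topic.",
    "This is the 3rd message to the topic.",
    "This is the 4th message to the topic.",
]
# msk-consumer had committed offset 2 on the primary before failover;
# the replicated group metadata carries that offset to the secondary.
assert resume_position(log, committed_offset=2) == [
    "This is the 3rd message to the topic.",
    "This is the 4th message to the topic.",
]
```

A consumer without a committed offset (or a new group.id) would instead start from the beginning, which is exactly what the later failback verification step exploits.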

Fail back clients to the primary MSK cluster

Failing back clients to the primary MSK cluster is the common pattern in an active-passive scenario once the service in the primary Region has recovered. Before we fail back clients to the primary MSK cluster, it's important to sync the primary MSK cluster with the secondary MSK cluster. For this, we need to deploy another MSK Replicator in the primary Region, configured to read from the earliest offset of the secondary MSK cluster and write to the primary cluster with the same topic names. The MSK Replicator will copy the data from the secondary MSK cluster to the primary MSK cluster. Although this MSK Replicator is configured to start from the earliest offset, it won't duplicate the data already present in the primary MSK cluster. It will automatically filter out the existing messages and will only write back the new data produced in the secondary MSK cluster while the primary MSK cluster was down. The replication step from secondary to primary wouldn't be required if you don't have a business requirement to keep the data identical across both clusters.
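Conceptually, the failback replicator behaves like the following sketch: it reads the secondary log from the earliest offset but writes back only the records the primary doesn't already hold. This is a simplified model of MSK Replicator's duplicate filtering, keyed here by message content for illustration; the service's actual filtering mechanism is internal.

```python
def replicate_back(secondary_log, primary_log):
    """Copy to the primary only the records it doesn't already have,
    preserving order -- a simplified model of failback filtering."""
    existing = set(primary_log)
    written = [rec for rec in secondary_log if rec not in existing]
    return primary_log + written

primary = ["msg-1", "msg-2"]                      # data present before the impairment
secondary = ["msg-1", "msg-2", "msg-3", "msg-4"]  # replica plus failover-era writes

# Only the two messages produced during the failover are written back.
assert replicate_back(secondary, primary) == ["msg-1", "msg-2", "msg-3", "msg-4"]
```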

After the MSK Replicator is up and running, monitor its MessageLag metric. This metric indicates how many messages are yet to be replicated from the secondary MSK cluster to the primary MSK cluster, and it should come down close to 0.

Now stop the producers writing to the secondary MSK cluster and restart them against the primary MSK cluster. Allow the consumers to keep reading from the secondary MSK cluster until their MaxOffsetLag metric reaches 0, which makes sure the consumers have processed all the messages from the secondary MSK cluster. The MessageLag metric should be 0 by this time, because no producer is producing records in the secondary cluster and MSK Replicator has replicated all messages from the secondary cluster to the primary cluster. At this point, start the consumer with the same group.id in the primary Region.

You can then delete the MSK Replicator created to copy messages from the secondary to the primary cluster. Make sure that the previously existing MSK Replicator is in RUNNING status and successfully replicating messages from the primary to the secondary. You can confirm this by looking at the ReplicatorThroughput metric, which should be greater than 0.
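The cutover checks above condense into a single gate: clients move back to the primary Region only when the failback replicator has caught up, the consumers have drained the secondary cluster, and the original primary-to-secondary replicator is healthy again. A hypothetical helper, using the metric names and thresholds stated in the text:

```python
def safe_to_fail_back(message_lag, max_offset_lag, primary_replicator_throughput):
    """True only when every precondition for restarting clients
    in the primary Region holds."""
    return (
        message_lag == 0                       # secondary fully copied back to primary
        and max_offset_lag == 0                # consumers drained the secondary cluster
        and primary_replicator_throughput > 0  # primary->secondary replication healthy
    )

assert safe_to_fail_back(0, 0, 1500) is True
assert safe_to_fail_back(42, 0, 1500) is False  # still copying back to the primary
assert safe_to_fail_back(0, 7, 1500) is False   # consumers not yet caught up
```

In practice you would feed these values from CloudWatch and from the consumers' own lag metrics rather than hardcoding them.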

Failback process

To simulate a failback, you first need to enable multi-VPC connectivity on the secondary MSK cluster (us-east-2) and add a cluster policy for the Kafka service principal, as we did before.

Deploy the MSK Replicator in the primary Region (us-east-1) with the source MSK cluster pointed to us-east-2 and the target cluster pointed to us-east-1. Configure Replication starting position as Earliest and Copy settings as Keep the same topic names.

The following diagram illustrates this setup.

After the MSK Replicator is in RUNNING status, let's verify that there are no duplicates while replicating the data from the secondary to the primary MSK cluster.

Run a console consumer without the group.id in the EC2 instance (dr-test-primary-KafkaClientInstance1) of the primary Region (us-east-1):

/home/ec2-user/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server=$BS_PRIMARY --topic customer --from-beginning \
--consumer.config=/home/ec2-user/kafka/config/client_iam.properties

This should show the four messages without any duplicates. Although the consumer specifies reading from the earliest offset, MSK Replicator makes sure that duplicate data isn't replicated back to the primary cluster from the secondary cluster.

This is a customer topic
This is the 2nd message to the topic.
This is the 3rd message to the topic.
This is the 4th message to the topic.

You can now point the clients to start producing to and consuming from the primary MSK cluster.

Clean up

At this point, you can tear down the MSK Replicator deployed in the primary Region.

Conclusion

This post explored how to enhance Kafka resilience by setting up a secondary MSK cluster in another Region and synchronizing it with the primary cluster using MSK Replicator. We demonstrated how to implement an active-passive disaster recovery strategy while maintaining consistent topic names across both clusters. We provided a step-by-step guide for configuring replication with identical topic names and detailed the processes for failover and failback. Additionally, we highlighted key metrics to monitor and outlined actions to provide efficient and continuous data replication.

For more information, refer to What is Amazon MSK Replicator? For a hands-on experience, try the Amazon MSK Replicator Workshop. We encourage you to try out this feature and share your feedback with us.


About the author

Subham Rakshit is a Senior Streaming Solutions Architect for Analytics at AWS based in the UK. He works with customers to design and build streaming architectures so they can get value from analyzing their streaming data. His two little daughters keep him occupied most of the time outside of work, and he loves solving jigsaw puzzles with them. Connect with him on LinkedIn.
