This is the sixth in a series of blog posts sharing examples of ways to use Mirror Maker 2 with IBM Event Streams.
- Using Mirror Maker 2 to aggregate events from multiple regions
- Using Mirror Maker 2 to broadcast events to multiple regions
- Using Mirror Maker 2 to share topics across multiple regions
- Using Mirror Maker 2 to create a failover cluster
- Using Mirror Maker 2 to restore events from a backup cluster
- Using Mirror Maker 2 to migrate to a different region
Mirror Maker 2 is a powerful and flexible tool for moving Kafka events between Kafka clusters.
For this sixth post, I’ll look at using Mirror Maker to migrate your Kafka cluster to a new region.
I’ve broken this down into multiple stages. For each stage, I’ll explain the intent and share a demo script I’ve created to let you try this for yourself.
Initial setup
Overview
The starting point for this scenario is an existing, established Kafka cluster, being used by multiple applications:
Three Kubernetes namespaces (“north-america”, “south-america”, “europe”) represent three different regions.
The “North America region” represents the current location of the Kafka cluster.
The “Europe region” represents a new environment that we will be migrating the Kafka cluster to.
Applications run in the “South America region” and produce to and consume from topics in the Kafka cluster. This will continue – during and after the migration process.
As with my previous posts, the producer application is regularly producing randomly generated events, themed around a fictional clothing retailer, Loosehanger Jeans.
To create the demo for yourself
There is an Ansible playbook here which sets up this initial scenario:
github.com/dalelane/eventstreams-mirrormaker2-demos/blob/master/07-migration/initial-setup.yaml
An example of how to run it can be found in the script at: setup-07-migration.sh
This script will also display the URL and username/password for the Event Streams web UI, to make it easier to log in and see the events.
Once you’ve created the demo, you can run the consumer-southamerica.sh script to see the events being received by the consumer application in the “South America region”.
If you leave this running for a while before continuing, it will give a more realistic demo of migrating an established cluster with a lot of events already on the topics.
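If it helps to see the sequence in one place, this is roughly what the setup amounts to. The paths and commands below are a sketch, not the exact contents of the repository scripts – setup-07-migration.sh is the definitive version:

```sh
# clone the demo repository (path assumptions: the playbook and script
# locations below may differ slightly from the repository layout)
git clone https://github.com/dalelane/eventstreams-mirrormaker2-demos.git
cd eventstreams-mirrormaker2-demos

# create the namespaces, the "North America region" Kafka cluster,
# and the producer and consumer applications
ansible-playbook 07-migration/initial-setup.yaml

# watch events being received by the consumer in the "South America region"
./consumer-southamerica.sh
```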
Preparing the new cluster
Overview
In this stage, a new Kafka cluster is created in the “Europe region” and we start to migrate the topics over to it.
For now, applications will continue to use the Kafka cluster in the “North America region” while we prepare the new cluster.
To create the demo for yourself
There is an Ansible playbook here which sets up this stage:
github.com/dalelane/eventstreams-mirrormaker2-demos/blob/master/07-migration/migrate-topics.yaml
An example of how to run it can be found in the script at: setup-07-migrate-topics.sh
This script will also display the URL and username/password for the Event Streams web UIs in both “regions”. (Notice that we’re using the same usernames/passwords in both “regions” – illustrating how you would likely want to migrate things like credentials and truststores, to minimise the disruption that a migration causes.)
You can use the username/password to log in to both the “North America region” and “Europe region” clusters. Use the web UIs to monitor when Mirror Maker has caught up with the backlog of events in the existing cluster.
How long this will take will depend on how long you left the applications running before starting this stage. Once the topic on the new cluster has mostly caught up, you can continue to the next stage.
If you log in to the Event Streams web UI for the existing cluster in the “North America region”, you will see information about the consumer application listed there.
If you log in to the Event Streams web UI for the new migration cluster in the “Europe region”, you will see the consumer application listed there as well (although without a client ID and with an “Empty” state).
This isn’t a separate application. There is no consumer running connected to the “Europe region” yet.
For this scenario, Mirror Maker is backing up the state of consumer applications as well as topics – this is a mirrored record of the same application consuming from the “North America region”.
However, you will see that there is a small lag, and that the state of the consumer application in the “Europe region” is never quite current. We will give it a chance to catch up completely in the next step.
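If you prefer the command line to the web UI, you can also watch the mirrored consumer group catching up on the new cluster with the standard Kafka tools. This is only a sketch: the bootstrap address, properties file and consumer group name below are placeholders, not the values the demo actually uses:

```sh
# describe the mirrored consumer group on the "Europe region" cluster;
# the LAG column should shrink as Mirror Maker works through the backlog
kafka-consumer-groups.sh \
  --bootstrap-server <europe-bootstrap-address>:443 \
  --command-config europe-credentials.properties \
  --describe \
  --group <consumer-group-id>
```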
How the demo is configured
The Mirror Maker config can be found here: mm2.yaml. The spec is commented, so that is the main file to read if you want to see how to configure Mirror Maker to perform a migration.
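To give a feel for the shape of that configuration, here is a minimal, illustrative sketch of a KafkaMirrorMaker2 spec for a migration like this. The apiVersion, names, bootstrap addresses and topic pattern are all assumptions, and TLS/authentication is omitted – the repository’s mm2.yaml is the authoritative, fully-commented version:

```yaml
apiVersion: eventstreams.ibm.com/v1beta2   # assumption: check the apiVersion used in mm2.yaml
kind: KafkaMirrorMaker2
metadata:
  name: mm2-migration
spec:
  connectCluster: "europe"                 # run the connectors alongside the target cluster
  clusters:
    - alias: "north-america"
      bootstrapServers: <north-america-bootstrap-address>:443
    - alias: "europe"
      bootstrapServers: <europe-bootstrap-address>:443
  mirrors:
    - sourceCluster: "north-america"
      targetCluster: "europe"
      topicsPattern: ".*"                  # placeholder: mirror the application topics
      groupsPattern: ".*"                  # mirror consumer group state as well
      sourceConnector:
        config:
          # keep the original topic names on the new cluster, instead of
          # prefixing them with the source cluster alias
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
      checkpointConnector:
        config:
          replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
          # translate and sync committed consumer offsets to the new cluster,
          # so consumers can resume from where they left off after migrating
          sync.group.offsets.enabled: "true"
```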
Pausing the producer applications
Overview
The goal for this stage is to pause the producer application, so that we can be sure that Mirror Maker has migrated absolutely every event from the existing cluster before we continue.
To create the demo for yourself
There is an Ansible playbook here which does this for you:
github.com/dalelane/eventstreams-mirrormaker2-demos/blob/master/07-migration/pause-producers.yaml
An example of how to run it can be found in the script at: setup-07-pause-producers.sh
All it is doing is setting the replicas for the producer application to 0, temporarily scaling down the producer while we complete the migration.
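In other words, it is doing the equivalent of something like this (the deployment name and namespace here are placeholders for whatever the demo actually uses):

```sh
# temporarily stop the producer by scaling its deployment down to zero replicas
kubectl scale deployment/<producer-deployment> --replicas=0 -n south-america
```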
The consumer application can be left running, giving it the opportunity to finish processing every last event from the “North America region”, and for Mirror Maker to sync the committed offsets across to the “Europe region”.
You can use the Event Streams web UI to monitor the state of the consumer application in the “Europe region”. Compare it with the state of the lag in the previous stage. Wait for the lag for all consumers for all partitions to reduce to 1 before continuing to the next stage.
Migrating the consumer applications
Overview
The consumer application has consumed all of the events in the “North America region”, and these committed offsets have all been safely mirrored to the new migrated cluster in the “Europe region”.
It is now time to update the consumer application so that it starts consuming from topics in the “Europe region”.
To create the demo for yourself
There is an Ansible playbook here which does this for you:
github.com/dalelane/eventstreams-mirrormaker2-demos/blob/master/07-migration/migrate-consumers.yaml
An example of how to run it can be found in the script at: setup-07-migrate-consumers.sh
This is updating the bootstrap servers address used by the consumer so that it starts consuming from the cluster in the “Europe region”.
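Conceptually, the change is as small as pointing the consumer at a different bootstrap address, something along these lines (the deployment name, environment variable and address are assumptions about how the demo application is configured):

```sh
# update the consumer's bootstrap address to the new "Europe region" cluster;
# the deployment rolls, and the consumer resumes using its mirrored offsets
kubectl set env deployment/<consumer-deployment> \
  BOOTSTRAP_SERVERS=<europe-bootstrap-address>:443 \
  -n south-america
```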
Once you’ve done this, you can run the consumer-southamerica.sh script to see the output from the consumer application running in the “South America region”.
You should see that it doesn’t re-consume any of the events that it had already processed from the “North America region”, and is idle – waiting for new events to be produced.
You can also use the Event Streams web UI for the “Europe region” to verify that the consumer is now active.
Resuming the producer applications
Overview
The producer application, paused briefly to avoid interfering with the migration, can now safely be resumed. As part of resuming it, it is migrated so that it produces directly to the new cluster in the “Europe region”.
To create the demo for yourself
There is an Ansible playbook here which does this for you:
github.com/dalelane/eventstreams-mirrormaker2-demos/blob/master/07-migration/migrate-producers.yaml
An example of how to run it can be found in the script at: setup-07-migrate-producers.sh
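Conceptually, this stage combines the two previous ideas: re-point the producer at the new cluster, then scale it back up. Again, the names and address below are placeholders rather than the values the playbook uses:

```sh
# point the producer at the "Europe region" cluster...
kubectl set env deployment/<producer-deployment> \
  BOOTSTRAP_SERVERS=<europe-bootstrap-address>:443 \
  -n south-america

# ...and resume it by scaling it back up
kubectl scale deployment/<producer-deployment> --replicas=1 -n south-america
```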
Once you’ve done this, you can run the consumer-southamerica.sh script to see the events being received again by the consumer application running in the “South America region”.
Clean up
The migration is now complete. There is an additional clean-up stage to delete the resources that were used by Mirror Maker 2 to perform the migration.
Mirror Maker stores config and state information on Kafka topics, so these can be safely deleted now that the migration is complete.
Additionally, the credentials that were created for use by Mirror Maker should now be deleted, as they are no longer required.
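Roughly speaking, the clean-up amounts to deleting the Mirror Maker 2 instance (which removes its Connect workers) and the credentials it used. The resource kinds, names and namespaces below are assumptions – the cleanup.yaml playbook is the definitive list of what gets removed:

```sh
# delete the Mirror Maker 2 instance that performed the migration
kubectl delete kafkamirrormaker2 <mm2-name> -n <mm2-namespace>

# delete the credentials that were created for Mirror Maker
kubectl delete kafkauser <mm2-user> -n north-america
kubectl delete kafkauser <mm2-user> -n europe

# the Kafka topics that Mirror Maker used for config and state can also be
# deleted; their names depend on how the Mirror Maker 2 instance was configured
```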
To create the demo for yourself
There is an Ansible playbook here which does this clean-up step for you:
github.com/dalelane/eventstreams-mirrormaker2-demos/blob/master/07-migration/cleanup.yaml
An example of how to run it can be found in the script at: setup-07-cleanup.sh
Summary
Many of the Mirror Maker 2 use cases I’ve demonstrated in this series of posts have shown Mirror Maker being used as a continuous background mirroring process. However, in this post, I’ve shown that it can also be very useful for a one-off data migration.
Tags: apachekafka, ibmeventstreams, kafka