How to scale IBM MQ clusters and client applications in OpenShift

Overview

You’re running a cluster of IBM MQ queue managers in Red Hat OpenShift, together with a large number of client applications putting messages to them and getting messages from them. This workload will vary over time, so you need flexibility in how you scale all of it.

This tutorial will show how you can easily scale the number of instances of your client applications up and down, without having to reconfigure their connection details and without needing to manually distribute or load balance them.

It will also show how to quickly and easily grow the queue manager cluster, adding a new queue manager to the cluster without complex new custom configuration.

Background

The IBM MQ feature demonstrated in this tutorial is Uniform Clusters. Dave Ware has a great introduction and demo of Uniform Clusters, so if you’re looking for background about how the feature works, I’d highly recommend it.

This tutorial is heavily inspired by that demo (thanks, Dave!), but my focus here is mainly on how to apply the techniques that Dave showed in OpenShift.

Demo

Pre-requisites

The best way to understand how this all works is to set it up for yourself. If you want to create the demo, you can simply follow the instructions below. But if you would rather just see it in action, I’ll also include a description of how the setup works and screen-recording videos of the demo running.

You will need:

Creating an IBM MQ Uniform Cluster in OpenShift

./01-setup.sh

You can see this between 00:00 and 00:37 in the demo video

The 01-setup.sh demo script creates a uniform cluster of three queue managers. It looks like this:

diagram

There are a few elements to this.

A ConfigMap with a config.ini file is used by all three queue managers. This identifies which of the queue managers will maintain the full repository of information about the cluster. In this demo config, QM1 and QM2 are the full repositories.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mq-uniform-cluster-ini-cm
data:
  config.ini: |-
    AutoCluster:
      Repository2Conname=uniform-cluster-qm1-ibm-mq.uniform-cluster.svc(1414)
      Repository2Name=QM1
      Repository1Conname=uniform-cluster-qm2-ibm-mq.uniform-cluster.svc(1414)
      Repository1Name=QM2
      ClusterName=DEMOCLUSTER
      Type=Uniform

Another ConfigMap holds the MQSC commands that all three queue managers use to define the channel they need in order to be members of the cluster. In this demo config, that MQSC file is called common_config.mqsc.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mq-uniform-cluster-mqsc-cm
data:
  common_config.mqsc: |-
    define channel('+AUTOCL+_+QMNAME+') chltype(clusrcvr) trptype(tcp) conname(+CONNAME+) cluster('+AUTOCL+') replace

Each queue manager then has its own ConfigMap with an additional MQSC file defining the addresses for the channels it will use to join the cluster. For example, the config for QM3 in this demo defines the cluster sender channels to the two full repositories, and the cluster receiver channel that QM3 uses to receive connections from other cluster members.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mq-uniform-cluster-qm3-mqsc-cm
data:
  qm3-config.mqsc: |-
    alter chl(DEMOCLUSTER_QM1) chltype(CLUSSDR) conname('uniform-cluster-qm1-ibm-mq.uniform-cluster.svc(1414)')
    alter chl(DEMOCLUSTER_QM2) chltype(CLUSSDR) conname('uniform-cluster-qm2-ibm-mq.uniform-cluster.svc(1414)')
    alter chl(DEMOCLUSTER_QM3) chltype(CLUSRCVR) conname('uniform-cluster-qm3-ibm-mq.uniform-cluster.svc(1414)')

The QueueManager specifications for each of the queue managers just need to point at these ConfigMaps. For example, the config for QM3 in this demo starts with this:

apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: uniform-cluster-qm3
spec:
  queueManager:
    name: QM3
    ini:
      - configMap:
          items:
          - config.ini
          name: mq-uniform-cluster-ini-cm
    mqsc:
      - configMap:
          items:
          - common_config.mqsc
          name: mq-uniform-cluster-mqsc-cm
      - configMap:
          name: mq-uniform-cluster-qm3-mqsc-cm
          items:
            - qm3-config.mqsc

Creating all of this results in a uniform cluster running in OpenShift with three queue managers, as shown in the diagram above.
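
If you want to check for yourself that the cluster has formed, you can run an MQSC DISPLAY CLUSQMGR command inside one of the queue manager pods. This is just a sketch, assuming the uniform-cluster namespace used in this demo and the MQ Operator’s usual <queuemanager-name>-ibm-mq-0 pod naming:

# List the QueueManager resources created by the setup script
oc get queuemanagers -n uniform-cluster

# Ask QM1 which queue managers it can see in DEMOCLUSTER
oc exec -n uniform-cluster uniform-cluster-qm1-ibm-mq-0 -- \
  /bin/bash -c "echo 'DISPLAY CLUSQMGR(*) QMTYPE STATUS' | runmqsc QM1"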

Load balancing client applications

Building JMS client apps to run in OpenShift

./02-build-apps.sh

You can see this between 00:39 and 00:59 in the demo video

A Deployment is created that runs a simple JMS application that puts messages to the APPQ1 queue in the uniform cluster.

A second Deployment is created to run a JMS application that gets messages from the APPQ1 queue in the uniform cluster.

To start with, the config in this demo runs twenty-one instances of the getter application.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-getter
  labels:
    app: jms-getter
spec:
  replicas: 21
  selector:
    matchLabels:
      app: jms-getter
  template:
    metadata:
      labels:
        app: jms-getter
    spec:
      volumes:
        - name: ccdtfile
          configMap:
            name: ccdt
      containers:
        # (container name, image, and other fields are omitted from this excerpt)
        - volumeMounts:
            - name: ccdtfile
              mountPath: /opt/app/config

This will create twenty-one Kubernetes pods, each one running a separate instance of the JMS application. Each instance has a unique client ID, but they all set their application name to the same value, so that IBM MQ recognises them as instances of the same application. You can see this in the Java source for the JMS application.

cf.setAppName("test-getter");

All of the application instances also share a common set of IBM MQ connection information, which is defined in a ConfigMap.

In this demo config, the ConfigMap is called “ccdt”, and contains connection information for:

  • each queue manager as a member of the cluster (the address of the queue manager, with a queueManager name of DEMOCLUSTER)
  • each queue manager itself (the address of the queue manager, with its own queueManager name)

kind: ConfigMap
apiVersion: v1
metadata:
  name: ccdt
data:
  ibm-mq-ccdt.json: |-
    {
        "channel": [
          {
            "name": "DEF.SVRCONN",
            "clientConnection": {
              "connection": [
                {
                  "host": "uniform-cluster-qm1-ibm-mq.uniform-cluster",
                  "port": 1414
                }
              ],
              "queueManager": "DEMOCLUSTER"
            },
            "transmissionSecurity": {
              "cipherSpecification": "ANY_TLS12_OR_HIGHER"
            },
            "type": "clientConnection"
          },
          {
            "name": "DEF.SVRCONN",
            "clientConnection": {
              "connection": [
                {
                  "host": "uniform-cluster-qm2-ibm-mq.uniform-cluster",
                  "port": 1414
                }
              ],
              "queueManager": "DEMOCLUSTER"
            },
            "transmissionSecurity": {
              "cipherSpecification": "ANY_TLS12_OR_HIGHER"
            },
            "type": "clientConnection"
          },
          {
            "name": "DEF.SVRCONN",
            "clientConnection": {
              "connection": [
                {
                  "host": "uniform-cluster-qm3-ibm-mq.uniform-cluster",
                  "port": 1414
                }
              ],
              "queueManager": "DEMOCLUSTER"
            },
            "transmissionSecurity": {
              "cipherSpecification": "ANY_TLS12_OR_HIGHER"
            },
            "type": "clientConnection"
          },
          {
            "name": "DEF.SVRCONN",
            "clientConnection": {
              "connection": [
                {
                  "host": "uniform-cluster-qm1-ibm-mq.uniform-cluster",
                  "port": 1414
                }
              ],
              "queueManager": "QM1"
            },
            "transmissionSecurity": {
              "cipherSpecification": "ANY_TLS12_OR_HIGHER"
            },
            "type": "clientConnection"
          },
          {
            "name": "DEF.SVRCONN",
            "clientConnection": {
              "connection": [
                {
                  "host": "uniform-cluster-qm2-ibm-mq.uniform-cluster",
                  "port": 1414
                }
              ],
              "queueManager": "QM2"
            },
            "transmissionSecurity": {
              "cipherSpecification": "ANY_TLS12_OR_HIGHER"
            },
            "type": "clientConnection"
          },
          {
            "name": "DEF.SVRCONN",
            "clientConnection": {
              "connection": [
                {
                  "host": "uniform-cluster-qm3-ibm-mq.uniform-cluster",
                  "port": 1414
                }
              ],
              "queueManager": "QM3"
            },
            "transmissionSecurity": {
              "cipherSpecification": "ANY_TLS12_OR_HIGHER"
            },
            "type": "clientConnection"
          }
        ]
      }

The volume mount of the ccdt ConfigMap in the Deployment makes the CCDT JSON available as a file inside each Kubernetes pod, so it is easy for the JMS application to read the MQ connection config from the ConfigMap.

public static final String CCDT_LOCATION = "/opt/app/config/ibm-mq-ccdt.json";

File ccdtfile = new File(Config.CCDT_LOCATION);
MQConnectionFactory cf = new MQConnectionFactory();
cf.setCCDTURL(ccdtfile.toURI().toURL());
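
If you want to confirm that the CCDT file really is in the application pods, you can read it back with oc exec. A quick check, assuming the /opt/app/config mount path shown in the Deployment excerpt above:

# Print the mounted CCDT JSON from one of the running getter pods
oc exec deploy/test-app-getter -- cat /opt/app/config/ibm-mq-ccdt.json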

Configuring JMS client apps in OpenShift

./demo-scripts/deploy-test-apps-start-with-qm1.sh

You can see this between 01:11 and 01:58 in the demo video

The deploy-test-apps-start-with-qm1.sh script shows how you can choose to configure client apps to initially connect to the first queue manager in the CCDT list.

If you choose to do this, as you can see from the demo video (at approx 1min 32secs), all twenty-one instances of the application initially connect to QM1, as it is the first queue manager defined in the ccdt ConfigMap.

You can see this for yourself in the output from the current-app-status.sh script shown in the top-right of the video. If you look at the source for the script you can see that it is just displaying output from DIS APSTATUS.
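
If you’d rather not use the demo script, you can run the same check directly against one of the queue managers. A sketch of the equivalent command, again assuming the operator’s default pod naming (‘test-getter’ is the application name set by cf.setAppName earlier):

# Show where the instances of the 'test-getter' application are connected
oc exec -n uniform-cluster uniform-cluster-qm1-ibm-mq-0 -- \
  /bin/bash -c "echo \"DIS APSTATUS('test-getter')\" | runmqsc QM1"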

The result looks like this:

diagram

Without any manual intervention or reconfiguration, this is quickly rebalanced. Some of the client applications are automatically instructed to reconnect to QM2 and QM3, so that the client applications are evenly distributed amongst the three queue managers.

As you can see from the demo video (at approx 1min 52secs) the application instances are rebalanced so that there are seven instances connected to each queue manager.

You can see this for yourself in the DIS APSTATUS output from the current-app-status.sh script shown in the top-right of the screen.

The result is that the application instances are connected like this:

diagram

./demo-scripts/deploy-test-apps-randomly-distributed.sh

You can see this between 03:28 and 04:03 in the demo video

Alternatively, you can configure client apps to pick a queue manager in the cluster at random for their initial connection. The deploy-test-apps-randomly-distributed.sh script is an example of how to do this.

This adds connectionManagement options to the CCDT file, specifying an affinity of none and giving each queue manager an equal client weight.

The options look like this:

"connectionManagement":
{
    "clientWeight": 1,
    "affinity": "none"
},

The demo config for the CCDT file shows how these options need to be added to each queue manager entry.

If you choose to do this, as you can see from the demo video (at approx 3min 44secs), all twenty-one instances of the application choose a queue manager at random to connect to.

The result in the demo video looked like this:

diagram

Note that because each client makes a random selection of queue manager, your distribution will be slightly different, but it should still be roughly even.

In the same way as when the client apps all make their initial connection to QM1, the applications are quickly rebalanced, without any manual intervention or reconfiguration, so that they are again evenly distributed amongst the three queue managers.

As you can see from the demo video (at approx 4min 1sec), the application instances are again rebalanced so that there are seven instances connected to each queue manager.

You can see this for yourself in the DIS APSTATUS output from the current-app-status.sh script shown in the top-right of the screen.

The result is that the application instances are connected like this:

diagram

Load balancing when client apps are scaled up

./demo-scripts/scale-up-getter.sh

You can see this between 04:52 and 05:37 in the demo video

The scale-up-getter.sh script scales up the getter application from twenty-one instances to thirty instances.

You can see the list of the application instance pods in the output from the show-getter-apps.sh script in the bottom-right of the video (source).
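
If you want to list the pods yourself, the equivalent is a single oc get, using the app=jms-getter label from the Deployment shown earlier:

# List the getter application pods by their Deployment label
oc get pods -l app=jms-getter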

As you can see from the source for the demo script, this is simply done by scaling up the number of replicas for the application Deployment.

oc scale deploy test-app-getter --replicas=30

As you can see from the demo video (at approx 5min 4secs), the new instances of the application all initially connect to QM1, as before. (To make this part of the demo clearer, the CCDT file was configured so that new application instances initially connect to QM1, as it is the first queue manager in the list. As described above, you could instead configure the new instances to choose a queue manager at random.)

The result is that the application instances are connected like this:

diagram

As with the initial deployment, without any manual intervention or reconfiguration, the application is again quickly rebalanced. Some of the new application instances are automatically instructed to reconnect to QM2 and QM3, so that the client applications are again evenly distributed amongst the three queue managers.

As you can see from the demo video (at approx 5min 32secs) the application instances are rebalanced so that there are now ten instances connected to each queue manager.

You can see this for yourself in the DIS APSTATUS output from the current-app-status.sh script shown in the top-right of the video.

The result is that the application instances are connected like this:

diagram

Load balancing when client apps are scaled down

./demo-scripts/scale-getter-one-per-qmgr.sh

You can see this between 05:38 and 06:10 in the demo video

The scale-getter-one-per-qmgr.sh script scales down the getter application to just three instances.

As you can see from the source for the demo script, this is simply done by setting the number of replicas for the application Deployment.

oc scale deploy test-app-getter --replicas=3

As you can see from the demo video (at approx 5min 57secs), because Kubernetes isn’t aware of which queue manager each application instance is connected to, it happened to remove all of the instances connected to QM2, leaving two instances connected to QM1 and one connected to QM3.

The result is that the application instances were initially connected like this:

diagram

As before, without any manual intervention or reconfiguration, the application is quickly rebalanced. One of the application instances connected to QM1 was automatically instructed to reconnect to QM2, so that the client applications are again evenly distributed amongst the three queue managers.

As you can see from the demo video (at approx 6min 2secs) the application instances are rebalanced so that there is now one instance connected to each queue manager.

The result is that the application instances are connected like this:

diagram

Load balancing if there aren’t enough client apps

./demo-scripts/scale-getter-one-only.sh

You can see this between 06:13 and 06:38 in the demo video

The scale-getter-one-only.sh script scales the client application down so that only a single instance is running.

As you can see from the demo video (at approx 6min 22secs), the remaining instance of the application was connected to QM1. This happened by chance, and the application instance could have been connected to any of the queue managers.

However, in this case, it was connected like this:

diagram

The problem with this is that there are now no applications consuming the messages from the queues on QM2 and QM3.

As you can see in the demo video (at approx 6min 25secs) the number of messages on the queues on QM2 and QM3 continued to increase as a result.

You can see this in the output from the queue-depth.sh script, shown in the middle of the right side of the screen. If you look at the source for the script you can see that it is just displaying output from DIS QLOCAL(APPQ1) CURDEPTH.
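
You can run the same check yourself against any of the queue managers. For example, to display the depth of APPQ1 on QM2 (a sketch, assuming the operator’s default pod naming):

# Display the current depth of APPQ1 on QM2
oc exec -n uniform-cluster uniform-cluster-qm2-ibm-mq-0 -- \
  /bin/bash -c "echo 'DIS QLOCAL(APPQ1) CURDEPTH' | runmqsc QM2"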

This highlights why it is important to set the replicas value for the Deployment of applications consuming from the cluster queue to at least the number of queue managers in your uniform cluster.
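
One way to keep the two in step is to drive the replica count from the number of QueueManager resources in the namespace. This is only a rough sketch, and it assumes that every QueueManager in the uniform-cluster namespace is a member of the uniform cluster:

# Count the QueueManager resources and scale the consumer Deployment to match
QMGRS=$(oc get queuemanagers -n uniform-cluster -o name | wc -l | tr -d ' ')
oc scale deploy test-app-getter --replicas="$QMGRS"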

Scaling the application back up

./demo-scripts/scale-up-getter.sh

You can see this by 07:22 in the demo video

Before proceeding to the next step in the demo, the number of instances of the client application is returned to thirty, so that there are again ten instances of the application connected to each of the three queue managers.

diagram

Scaling the queue manager cluster

./03-setup-additional-qmgr.sh

You can see this between 07:32 and 09:49 in the demo video

The 03-setup-additional-qmgr.sh script adds a fourth queue manager to the uniform cluster.

diagram

This does a few things.

It creates a new ConfigMap with an MQSC file that defines the addresses for the channels the new queue manager will use to join the cluster. The config for this demo defines cluster sender channels to both of the existing full repositories in the cluster, and the cluster receiver channel for the new queue manager.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mq-uniform-cluster-qm4-mqsc-cm
data:
  qm4-config.mqsc: |-
    alter chl(DEMOCLUSTER_QM1) chltype(CLUSSDR) conname('uniform-cluster-qm1-ibm-mq.uniform-cluster.svc(1414)')
    alter chl(DEMOCLUSTER_QM2) chltype(CLUSSDR) conname('uniform-cluster-qm2-ibm-mq.uniform-cluster.svc(1414)')
    alter chl(DEMOCLUSTER_QM4) chltype(CLUSRCVR) conname('uniform-cluster-qm4-ibm-mq.uniform-cluster.svc(1414)')

The new QueueManager spec uses this ConfigMap, together with the shared ConfigMaps created for the original three queue managers. You can see this in the config for QM4 in this demo, which starts by pointing at the three ConfigMaps.

apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: uniform-cluster-qm4
spec:
  queueManager:
    name: QM4
    ini:
      - configMap:
          items:
          - config.ini
          name: mq-uniform-cluster-ini-cm
    mqsc:
      - configMap:
          items:
          - common_config.mqsc
          name: mq-uniform-cluster-mqsc-cm
      - configMap:
          name: mq-uniform-cluster-qm4-mqsc-cm
          items:
            - qm4-config.mqsc

As shown in the diagram above, the client applications initially remain connected to the existing three queue managers. The 03-setup-additional-qmgr.sh script next updates the client applications to reflect the enlarged uniform cluster.

First, the “ccdt” ConfigMap is updated to add the connection addresses for the new queue manager. As before, two entries are added for it. The first is for the queue manager as a member of the cluster, with queueManager set to DEMOCLUSTER.

{
    "name": "DEF.SVRCONN",
    "clientConnection": {
        "connection": [
            {
                "host": "uniform-cluster-qm4-ibm-mq.uniform-cluster",
                "port": 1414
            }
        ],
        "queueManager": "DEMOCLUSTER"
    },
    "connectionManagement":
    {
      "clientWeight": 1,
      "affinity": "none"
    },
    "transmissionSecurity": {
        "cipherSpecification": "ANY_TLS12_OR_HIGHER"
    },
    "type": "clientConnection"
},

The second new entry is for the queue manager itself, with its own queueManager name.

{
    "name": "DEF.SVRCONN",
    "clientConnection": {
        "connection": [
            {
                "host": "uniform-cluster-qm4-ibm-mq.uniform-cluster",
                "port": 1414
            }
        ],
        "queueManager": "QM4"
    },
    "connectionManagement":
    {
      "clientWeight": 1,
      "affinity": "none"
    },
    "transmissionSecurity": {
        "cipherSpecification": "ANY_TLS12_OR_HIGHER"
    },
    "type": "clientConnection"
},

Client applications don’t need to be restarted to pick up the updated CCDT ConfigMap. Modifying the ConfigMap causes the CCDT file in the JMS application pods to update, and the IBM MQ client library detects and responds to changes in CCDT files.
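
You can watch this happen by re-reading the mounted file from one of the application pods. Kubernetes can take a minute or so to push the ConfigMap change into running pods, after which the file will contain the two new QM4 entries (again assuming the /opt/app/config mount path from the Deployment):

# Count the QM4 connection entries in the mounted CCDT (expect 2 once updated)
oc exec deploy/test-app-getter -- \
  grep -c 'uniform-cluster-qm4-ibm-mq' /opt/app/config/ibm-mq-ccdt.json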

When the CCDT ConfigMap is updated, the applications are quickly rebalanced across the four queue managers.

You can see this by 09:39 in the demo video

You can see this for yourself in the output from the current-app-status.sh script shown in the top-right of the video. As before, the script is displaying output from DIS APSTATUS.

The resulting connections look like this:

diagram
