How to scale IBM MQ clusters and client applications in OpenShift

July 19th, 2022

Overview

You’re running a cluster of IBM MQ queue managers in Red Hat OpenShift, together with a large number of client applications putting messages to them and getting messages from them. This workload will vary over time, so you need flexibility in how you scale all of this.

This tutorial will show how you can easily scale the number of instances of your client applications up and down, without having to reconfigure their connection details and without needing to manually distribute or load balance them.

And it will show how to quickly and easily grow the queue manager cluster, adding a new queue manager without complex new custom configuration.

Background

The IBM MQ feature demonstrated in this tutorial is Uniform Clusters. Dave Ware has a great introduction and demo of Uniform Clusters, so if you’re looking for background about how the feature works, I’d highly recommend it.

This tutorial is heavily inspired by that demo (thanks, Dave!), but my focus here is mainly on how to apply the techniques that Dave showed in OpenShift.
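
To give a flavour of the client-side piece: a uniform cluster balances application instances across queue managers when the clients connect using a CCDT that lists all of the queue managers, and identify themselves with an application name. Here is a minimal JMS sketch of that idea (the CCDT path, queue manager group name, application name, and credentials are all placeholder values, not taken from the tutorial):

    import javax.jms.Connection;

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class ScaledClient {
        public static void main(String[] args) throws Exception {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();

            // CCDT listing every queue manager in the uniform cluster
            cf.setStringProperty(WMQConstants.WMQ_CCDTURL, "file:///config/ccdt.json");
            // connect to any member of the queue manager group in the CCDT
            cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "*ANYQM");
            // the name the uniform cluster uses to balance instances of this app
            cf.setStringProperty(WMQConstants.WMQ_APPLICATIONNAME, "my-client-app");
            // allow the cluster to move this connection to another queue manager
            cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS,
                              WMQConstants.WMQ_CLIENT_RECONNECT);

            Connection connection = cf.createConnection("appuser", "passw0rd");
            connection.start();
            // ... put and get messages as normal; rebalancing is transparent ...
        }
    }

Because every instance uses the same CCDT and application name, you can add or remove instances freely and the cluster spreads them across the queue managers for you.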

Read the rest of this entry »

How to transcribe and analyse a phone call in real-time

July 16th, 2022

In this post, I want to share an example of how to stream phone call audio through IBM Watson Speech to Text and IBM Watson Natural Language Understanding services, and show some ideas of what you could use this for.

Let’s start with a demo

That’s what I want to show you how to build.

At a high level, this is what you will have seen in that video:

1. Faith made a phone call to a phone number managed by Twilio.

2. Twilio routed the phone call to me, and I answered the call.

We then started talking to each other. And while we were doing this:

3. Twilio streamed a copy of the audio from the phone call to a demo Node.js app.

4. The Node.js app sent the audio to the Watson Speech to Text service for transcribing.

5. Watson Speech to Text asynchronously sent transcriptions back to the Node.js app as soon as they were available.

6. The app then submitted the transcription text to Watson Natural Language Understanding for analysis (see the sketch after this list).

7. The transcriptions and analyses were all displayed on the demo web page.
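
The demo app itself is written in Node.js, but to give a flavour of step 6, this is roughly what the same analysis call looks like using the Watson Java SDK (the API key, service URL, and sample text here are placeholders, not the demo’s values):

    import com.ibm.cloud.sdk.core.security.IamAuthenticator;
    import com.ibm.watson.natural_language_understanding.v1.NaturalLanguageUnderstanding;
    import com.ibm.watson.natural_language_understanding.v1.model.AnalysisResults;
    import com.ibm.watson.natural_language_understanding.v1.model.AnalyzeOptions;
    import com.ibm.watson.natural_language_understanding.v1.model.EntitiesOptions;
    import com.ibm.watson.natural_language_understanding.v1.model.Features;
    import com.ibm.watson.natural_language_understanding.v1.model.SentimentOptions;

    public class TranscriptAnalyzer {
        public static void main(String[] args) {
            // placeholder API key and region URL - use your own service credentials
            NaturalLanguageUnderstanding nlu = new NaturalLanguageUnderstanding(
                    "2022-04-07", new IamAuthenticator("YOUR_NLU_APIKEY"));
            nlu.setServiceUrl("https://api.us-south.natural-language-understanding.watson.cloud.ibm.com");

            // ask for sentiment and entities for a snippet of call transcription
            Features features = new Features.Builder()
                    .sentiment(new SentimentOptions.Builder().build())
                    .entities(new EntitiesOptions.Builder().limit(5).build())
                    .build();
            AnalyzeOptions options = new AnalyzeOptions.Builder()
                    .text("I'd like to change my delivery to Thursday please")
                    .features(features)
                    .build();

            AnalysisResults analysis = nlu.analyze(options).execute().getResult();
            System.out.println(analysis);
        }
    }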

Read the rest of this entry »

How to use MQ Streaming Queues and Kafka Connect to make an auditable copy of IBM MQ messages

July 10th, 2022

Scenario

You have an IBM MQ queue manager. An application is putting messages to a command queue. Another application gets these messages from the queue and takes actions in response.

diagram
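
For example, the application putting the command messages could be as simple as this JMS sketch (the connection details and credentials here are placeholders; the COMMANDS queue name is the one used throughout this post):

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Session;

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class Putter {
        public static void main(String[] args) throws Exception {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();
            cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, "queuemanager.hostname");
            cf.setIntProperty(WMQConstants.WMQ_PORT, 1414);
            cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "DEV.APP.SVRCONN");
            cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "MYQMGR");
            cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);

            Connection connection = cf.createConnection("app", "app-password");
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // put a command message to the COMMANDS queue
            MessageProducer producer = session.createProducer(session.createQueue("queue:///COMMANDS"));
            producer.send(session.createTextMessage("example command payload"));
            connection.close();
        }
    }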

Objective

You want multiple separate audit applications to be able to review the commands that go through the command queue.

They should be able to replay a history of these command messages as many times as they want.

This must not impact the application that is currently getting the messages from the queue.

diagram

Solution

You can use Streaming Queues to put a copy of every message put to the command queue onto a separate copy queue.

This copy queue can then feed a connector that produces every message to a Kafka topic, which the audit applications can consume as many times as they need.

diagram

Details

The final solution works like this:

diagram

  1. A JMS application called Putter puts messages onto an IBM MQ queue called COMMANDS
  2. For the purposes of this demo, a development instance of LDAP is used to authenticate access to IBM MQ
  3. A JMS application called Getter gets messages from the COMMANDS queue
  4. Copies of every message put to the COMMANDS queue will be made to the COMMANDS.COPY queue
  5. A Connector will get every message from the COMMANDS.COPY queue
  6. The Connector transforms each JMS message into a string, and produces it to the MQ.COMMANDS Kafka topic
  7. A Java application called Audit can replay the history of all messages on the Kafka topic (sketched below)
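
To illustrate step 7: because the history is retained on the Kafka topic, the Audit application can assign itself the topic’s partitions and rewind to the beginning each time it runs. A minimal Java sketch (the bootstrap address and the single-partition assumption are placeholders; the MQ.COMMANDS topic name is from this demo):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class Audit {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");  // placeholder address
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // assign rather than subscribe, so the app controls its own offsets
                TopicPartition partition = new TopicPartition("MQ.COMMANDS", 0);
                consumer.assign(List.of(partition));
                // rewind to the start of the topic to replay the full history
                consumer.seekToBeginning(List.of(partition));

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.offset() + " : " + record.value());
                    }
                }
            }
        }
    }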

Read the rest of this entry »

Connecting App Connect Enterprise to Event Streams

June 19th, 2022

Configuring IBM App Connect Enterprise to produce or consume messages from Kafka topics in IBM Event Streams involves a lot of settings that all need to be right. In this post, I’ll share the steps I use to avoid missing any of the required values.

To illustrate this, I’ll create a simple App Connect flow that implements a REST API, where any data I POST to the REST API is sent to a Kafka topic.

The key to getting this to work correctly first time is to make sure that values are accurately copied from Event Streams to App Connect.

To help with this, I use a grid like the one below.

The instructions in this post start with Event Streams, and explain how to populate the grid with the information you need.

Then the instructions will switch to App Connect, and explain how to use the values in the grid to set up your App Connect flow.

For each row: what this is, followed by the values you will see in my screenshots (the grid’s "Your value" column is yours to fill in as you go):

  A. Topic name: THIS.IS.MY.TOPIC
  B. Bootstrap address (one of):
       kafkadev-kafka-bootstrap-demo.itzroks-120000f8p4-f9nd74-6ccd7f378ae819553d37d5f2ee142bd6-0000.eu-gb.containers.appdomain.cloud:443
       kafkadev-kafka-bootstrap.demo.svc:9093
       kafkadev-kafka-bootstrap.demo.svc:9092
  C. SASL mechanism: SCRAM-SHA-512
  D. SASL config: org.apache.kafka.common.security.scram.ScramLoginModule required;
  E. Security protocol (one of): SASL_SSL, SASL_PLAINTEXT, SSL, PLAINTEXT
  F. Certificate: es-cert.jks
  G. Certificate password: wo05RndLJQgI
  H. Username: app-connect-enterprise
  I. Password: AIYJjrM2bSic
  J. Policy project name: demo-policies
  K. Policy name: demo-eventstreams-policy
  L. Security identity name: kafka-credentials
  M. Truststore identity name: kafka-truststore
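
To make the grid concrete: rows A to I are standard Kafka client settings, so this is how one consistent combination of my screenshot values (the 9093 bootstrap address with SASL_SSL) would look in a plain Java Kafka client. Rows J to M are App Connect-specific names with no client-side equivalent; substitute your own values from your grid:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class GridValuesExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafkadev-kafka-bootstrap.demo.svc:9093");   // B
            props.put("security.protocol", "SASL_SSL");                                 // E
            props.put("sasl.mechanism", "SCRAM-SHA-512");                               // C
            props.put("sasl.jaas.config",                                               // D + H + I
                    "org.apache.kafka.common.security.scram.ScramLoginModule required " +
                    "username=\"app-connect-enterprise\" password=\"AIYJjrM2bSic\";");
            props.put("ssl.truststore.location", "es-cert.jks");                        // F
            props.put("ssl.truststore.password", "wo05RndLJQgI");                       // G
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("THIS.IS.MY.TOPIC", "hello"));       // A
            }
        }
    }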

Read the rest of this entry »

Event Endpoint Management “demo in a box”

December 17th, 2021

In this post, I’ll share how you can get your own Event Endpoint Management demo instance with just seven minutes of work.

(Seven minutes of hands-on-keyboard time… there is a lot of waiting-for-stuff-to-run time, but it doesn’t sound so impressive if I include waiting time!)


Read the rest of this entry »

Taking your first step towards an event-driven architecture

December 3rd, 2021

In this post, I want to suggest some approaches for introducing event-driven architecture patterns into your existing application environment. I’ll demonstrate how you can incrementally adopt Apache Kafka without needing to immediately build new applications or rebuild your existing applications, and show how this can be delivered in Red Hat OpenShift.

Read the rest of this entry »

Describing Kafka security in AsyncAPI

November 17th, 2021

As part of AsyncAPI Conference this week, I ran a session on how to describe Kafka security in AsyncAPI.

The aim of the session was to quickly show how to describe the security configuration of a Kafka cluster in an AsyncAPI document.

And, in reverse, it showed how, if you’ve been given an AsyncAPI document, you can use the details in the spec to configure a Kafka client or application to connect to the cluster.

The recording and the slides I used are below.

youtu.be/CeGOLijUuQc

slides on Slideshare

Processing Apache Avro-serialized Kafka messages with IBM App Connect Enterprise

October 25th, 2021

IBM App Connect Enterprise (ACE) is a broker for developing and hosting high-throughput, high-scale integrations between a large number of applications and systems, including Apache Kafka.

In this post, I’ll describe how to use App Connect Enterprise to process Kafka messages that were serialized to a stream of bytes using Apache Avro schemas.

screenshot

Background

Best practice when using Apache Kafka is to define Apache Avro schemas that describe the structure of your Kafka messages.

(For more detail about this, see my last post, From bytes to objects: describing Kafka events, or the intro to Avro that I wrote a couple of years ago.)

In this post, I’m assuming that you have embraced Avro, and you have Kafka topics with messages that were serialized using Avro schemas.

Perhaps you used a Java producer with an Avro SerDe that handled the serialization automatically for you.

Or your messages are coming from a Kafka Connect source connector, with an Avro converter that is handling the serialization for you.

Or you are doing the serialization yourself, such as if you’re producing Avro-serialized messages from a Python app.

Now you want to use IBM App Connect Enterprise to develop and host integrations for processing those Kafka messages. But you need App Connect to know how to:

  • retrieve the Avro schemas it needs
  • use the schemas to turn the binary stream of bytes on your Kafka topics into structured objects that are easy for ACE to manipulate and process
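
App Connect does this for you once it is configured, but conceptually that second step is the same as this minimal sketch using the Avro Java library directly (the Command schema is a made-up example, and messages serialized via a schema registry may carry extra wire-format bytes before the Avro payload):

    import java.io.IOException;

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryDecoder;
    import org.apache.avro.io.DecoderFactory;

    public class AvroDecoder {
        // made-up example schema - yours would be retrieved from wherever your schemas live
        private static final Schema SCHEMA = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Command\",\"fields\":[" +
                "{\"name\":\"id\",\"type\":\"string\"}," +
                "{\"name\":\"quantity\",\"type\":\"int\"}]}");

        // turn the raw bytes from a Kafka message into a structured object
        public static GenericRecord decode(byte[] messageBytes) throws IOException {
            GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(SCHEMA);
            BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(messageBytes, null);
            return reader.read(null, decoder);
        }
    }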

Read the rest of this entry »