In this post, I want to suggest some approaches for introducing event-driven architecture patterns into your existing application environment. I’ll demonstrate how you can incrementally adopt Apache Kafka without needing to immediately build new applications or rebuild your existing applications, and show how this can be delivered in Red Hat OpenShift.
Most of the post will be demos, so if you just want to see those you can scroll down. But first, I’d like to set a little context.
Data-centric “vs” event-centric
Data-centric and event-centric are sometimes presented as opposing ideologies.
By “data-centric”, I mean building applications around your data – with the implicit goal of collecting your data into locations where it can be processed. Conceptually we think of the data as being at rest, with everything happening around it. We prioritise collecting and storing a valid copy of the data, and design approaches, such as transactions, with that priority in mind.
By “event-centric”, I mean thinking of applications and data differently – with data being something that is processed in flight, rather than collected and processed at rest. We prioritise responding to events, rather than preserving data, and design approaches with that in mind.
But I don’t think this should be an all-or-nothing thing. I don’t really think they are opposing ideologies, and I don’t think it’s sensible to choose one or the other.
In practice, I think you often want to achieve a blend of both – taking advantage of the way that event-centric approaches let systems respond to events in the moment and enable truly responsive applications.
Introducing events in your existing architectures
It can be daunting to do this in an existing application environment.
Fig. 1 – an existing data-centric application environment
If you have a data-centric application environment (like Fig. 1 above), how can you take small, incremental steps to introduce event-centric thinking?
How can you start to move towards this blend (like Fig. 2 below) in a way that isn’t too disruptive or expensive?
Fig. 2 – a blend of data-centric and event-centric interactions
How can you take steps towards enabling an event-centric architecture, allowing your applications and microservices to interact in both data-centric and event-centric ways where appropriate?
My goal for this post is to give you a few ideas for how you can easily and cheaply introduce, and start adopting, an event backbone in your existing application environment.
Introducing the event backbone
Discussions about the role of an event backbone in your infrastructure often mistakenly assume that you are starting with a blank page, and imply that the way to adopt event-driven architectures is to start implementing new microservices that can publish and subscribe to events.
There are absolutely benefits to doing this, but in practice ripping-and-replacing your existing microservices (or even making major changes to them to work in event-centric ways) is a risky and expensive way to start.
It’s a good end-goal to have for future microservices development. But it’s not the best place to start.
You can start to establish an event backbone in your architecture in more incremental ways.
Emit events from your existing enterprise systems
Many of your existing applications, systems and microservices will likely be processing things that can logically be thought of as events.
It is often cheap and easy to configure these systems to start contributing those events to an event backbone, without disrupting anything those systems are already doing.
For the first demo, I’m using GitHub to represent some existing critical enterprise system in my business. It’s a system we’re already using, and that is already recording when things happen, in a way that might make sense to think of as events.
demo: Emitting events from your existing enterprise systems
An increasing number of enterprise systems support webhooks – that is, you can configure them to make HTTP calls when something happens.
HTTP calls are a simple way to start getting events onto a new event backbone. An HTTP POST is all you need to put an event on a Kafka topic.
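The demos need no new code for this – it’s all configuration – but to make the moving parts visible, here is a minimal sketch of the kind of HTTP-to-Kafka bridging involved. (In practice you’d use something ready-made, such as the Strimzi Kafka Bridge, which exposes an HTTP interface to a Kafka cluster.) The endpoint path, port, topic name and bootstrap address below are all illustrative assumptions:

```java
import com.sun.net.httpserver.HttpServer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

// Minimal sketch: accept webhook POSTs and produce each payload to a Kafka topic.
public class WebhookToKafka {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-kafka-bootstrap:9092"); // assumed address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/webhooks/github", exchange -> {
            // read the webhook payload and forward it, unmodified, to the topic
            String payload = new String(exchange.getRequestBody().readAllBytes(),
                                        StandardCharsets.UTF_8);
            producer.send(new ProducerRecord<>("GITHUB.EVENTS", payload)); // assumed topic
            exchange.sendResponseHeaders(204, -1); // acknowledge with no response body
            exchange.close();
        });
        server.start();
    }
}
```

The point is how little is involved: the webhook payload arrives as an HTTP POST and lands on a topic, where any number of consumers can pick it up.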
The events that this system contributes to the backbone could enable new innovations – new microservices that your developers think up to make new uses of these events.
Or, in time, you might decide to replace some of the places where your existing microservices make synchronous calls to fetch data from the back-end enterprise system – consuming events instead could make those microservices more responsive, able to process events in the moment.
Identifying existing interesting sources of events in your environment and contributing them to a new event backbone through simple HTTP configuration is a great first step to adopting event-driven approaches.
Demo details
Make existing APIs available as events
There are other ways to make the APIs that you’re currently using available through an event backbone.
For the second demo, I’m using Twitter and Alpha Vantage to represent existing, external, third-party APIs that microservices in my business are already using. They represent APIs that I’m already making a lot of use of to retrieve changing data, but that don’t support (or allow me to add) something like webhooks.
demo: Making existing APIs available as events
Making existing APIs available through the event backbone can again enable new innovations – giving developers a new way to interact with existing data can spark new ideas for new microservices.
For example, new capabilities the event backbone brings – such as the event log, which supports replaying historical events – might get your developers thinking of new things they could create that wouldn’t have been possible or practical with the API alone.
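To illustrate what replay looks like, here is a sketch of a consumer that re-reads every event still retained on a topic, not just new ones – something a request/response API has no equivalent for. The topic name and bootstrap address are assumptions:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

// Minimal sketch: replay every event still retained on the topic, oldest first.
public class ReplayHistory {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-kafka-bootstrap:9092"); // assumed address
        props.put("group.id", "replay-demo");
        props.put("auto.offset.reset", "earliest"); // start from the oldest retained event
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("STOCK.PRICES")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.timestamp() + " : " + record.value());
                }
            }
        }
    }
}
```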
There are also infrastructure benefits to doing this. Consider scenarios where the existing API is expensive to invoke (either in terms of compute resource if it’s an API in my own enterprise, or in financial terms where the API is provided by a third-party) and is being used by a large number of my microservices. In this sort of scenario, you can migrate across so that a single connector becomes responsible for invoking that expensive API, making the responses available on the event backbone where they can be cheaply used by a huge number of separate clients and microservices.
There are a huge number of connectors available to enable this pattern – in the demo video I showed connectors available from both IBM and Confluent. If you search for open source options there are even more available. Typically any system you might want to enable in this way will already have a connector available.
But for rare cases where there isn’t one already, the stock prices connector in the demo is a good example of how simple it is to create a new one – the Kafka Connect framework does most of the complicated event backbone side of things, so the only new thing you need to contribute is how to invoke the API.
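To give a feel for how little that is, here is a sketch of the task half of a hypothetical source connector (not the actual demo connector, and the config keys are illustrative). A complete connector also needs a small SourceConnector class to describe its configuration and create tasks, but the interesting logic all lives here – Kafka Connect takes care of delivery, offsets, retries and scaling:

```java
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Collections;
import java.util.List;
import java.util.Map;

// Minimal sketch: the only logic we supply is how to invoke the API.
public class PricesSourceTask extends SourceTask {

    private HttpClient client;
    private String url;
    private String topic;

    @Override
    public void start(Map<String, String> config) {
        client = HttpClient.newHttpClient();
        url = config.get("api.url");  // assumed config keys, provided
        topic = config.get("topic");  //  in the connector configuration
    }

    @Override
    public List<SourceRecord> poll() throws InterruptedException {
        Thread.sleep(60_000); // invoke the API once a minute
        try {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
            String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
            return List.of(new SourceRecord(
                    Collections.emptyMap(), Collections.emptyMap(), // no offsets to track
                    topic, Schema.STRING_SCHEMA, body));
        } catch (IOException e) {
            return List.of(); // skip this interval on failure, try again next time
        }
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "1.0"; }
}
```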
Demo details
- example Kafka Connect cluster
- Twitter demo
- Stock prices demo
- IBM Event Streams Connectors catalog
- Confluent Connectors Hub
- advice on writing a Connector
Event-enabling your databases
It’s not just APIs and applications that can become effective event sources. Databases are likely another fundamental part of your existing data-centric architectures, so event-enabling them is an effective step in introducing an event backbone.
For the next demo, I’m using a tiny PostgreSQL database to represent a critical back-end database in my enterprise.
demo: Event-enabling your databases
Change data capture connectors can be easily attached to most commonly-used database types, emitting events to the event backbone any time that new data is created/updated/deleted in the database.
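Connectors like Debezium publish each insert, update and delete as a structured change event, so downstream microservices can react to database activity without polling the database. Here is a minimal sketch of a consumer of those events, assuming a Debezium-style envelope with the JSON converter configured without schemas (so the op/after fields appear at the top level); the topic name and bootstrap address are illustrative:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

// Minimal sketch: react to database changes without touching the database
// or the applications that write to it.
public class DatabaseChangeListener {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-kafka-bootstrap:9092"); // assumed address
        props.put("group.id", "cdc-demo");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("DB.STORE.ORDERS")); // assumed CDC topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    JsonNode event = mapper.readTree(record.value());
                    String op = event.path("op").asText(); // "c"reate, "u"pdate, "d"elete
                    System.out.println(op + " : " + event.path("after"));
                }
            }
        }
    }
}
```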
Again, making your existing critical data available in a new form is a great way to enable new innovations and an effective (but small and simple!) step towards adopting event-centric architectures.
Demo details
- example database definition used in demo
- initial data used to seed demo database
- example Kafka Connect cluster
- example topic
- example secret with database credentials
- example secret with database certificate
- example connector
- sample app used to display events
Enabling notifications
Getting events onto the event backbone is only one part of the story. Adoption also means finding simple ways to process events, store them, and build systems that can respond to them.
For the last demo, I’m using Slack as an example of easily pushing events from the backbone to an application as notifications.
If you look at the effort involved across all of these demos, you can hopefully see how simple it is to get events onto the event backbone and then push them out as notifications.
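As with the other demos, this is done purely through configuration, but as a sketch of what that last hop amounts to, here is a minimal consumer that forwards each event to a Slack incoming webhook. The topic name, bootstrap address and webhook URL are illustrative assumptions:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Properties;

// Minimal sketch: forward each event from a topic to a Slack channel
// using an incoming-webhook URL.
public class SlackNotifier {
    public static void main(String[] args) throws Exception {
        String webhook = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"; // assumed
        HttpClient http = HttpClient.newHttpClient();

        Properties props = new Properties();
        props.put("bootstrap.servers", "my-kafka-bootstrap:9092"); // assumed address
        props.put("group.id", "slack-notifier");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("GITHUB.EVENTS")); // assumed topic name
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Slack incoming webhooks accept a JSON payload with a "text" field
                    // (crude quote-escaping, good enough for a sketch)
                    String json = "{\"text\": \"New event: " +
                            record.value().replace("\"", "'") + "\"}";
                    http.send(HttpRequest.newBuilder(URI.create(webhook))
                                    .header("Content-Type", "application/json")
                                    .POST(HttpRequest.BodyPublishers.ofString(json))
                                    .build(),
                            HttpResponse.BodyHandlers.discarding());
                }
            }
        }
    }
}
```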
Demo details
Summary
If you currently have a very data-centric application environment, I hope this post has inspired you to consider how to introduce event-centric elements.
I’ve tried to show how you can get started purely through simple configuration, and without any new coding. And I’ve concentrated on simple ways to get started without disrupting existing architectures or expensive and risky rip-and-replace efforts.