{"id":5281,"date":"2024-09-12T10:55:16","date_gmt":"2024-09-12T10:55:16","guid":{"rendered":"https:\/\/dalelane.co.uk\/blog\/?p=5281"},"modified":"2024-09-12T10:55:17","modified_gmt":"2024-09-12T10:55:17","slug":"using-quotas-with-event-endpoint-management","status":"publish","type":"post","link":"https:\/\/dalelane.co.uk\/blog\/?p=5281","title":{"rendered":"Using quotas with Event Endpoint Management"},"content":{"rendered":"<p><strong>In this post, we share examples of using quotas with <a href=\"https:\/\/www.ibm.com\/products\/event-automation\/event-endpoint-management\">IBM Event Endpoint Management<\/a>, give you some pointers to help you try them for yourself, and most importantly get you thinking about where this might be useful for your own catalog.<\/strong><\/p>\n<p><a href=\"https:\/\/ibm.github.io\/event-automation\/eem\/\">Event Endpoint Management<\/a> makes it easy for you to share your Kafka topics. Put some of your Kafka topics in the catalog, and allow colleagues and partners to discover the topics, so they can use the self-service catalog page to get started with them immediately.<\/p>\n<p>Increasing reuse of your streams of events makes it possible for your business to unlock even more value from them. The more widely you share, the more you enable innovative new uses that you might not even have thought of.<\/p>\n<p>But before you invite colleagues and partners to start using your topics, you want to make sure that you&#8217;re ready. Event Endpoint Management offers a range of tools to make sure that you remain in control. 
Quotas are just one of these, and we dig into what they offer in this post.<\/p>\n<ul>\n<li><a href=\"#section-quotas-for-producers\"><strong>Quotas &#8211; for producers<\/strong><\/a> &#8211; we&#8217;ll start with an intro for what quotas are for<\/li>\n<li><a href=\"#section-demo-producers\"><strong>Seeing quotas in action &#8211; for producers<\/strong><\/a> &#8211; we&#8217;ll take you through a demo (that you can try for yourself)<\/li>\n<li><a href=\"#section-quotas-for-consumers\"><strong>Quotas &#8211; for consumers<\/strong><\/a> &#8211; we&#8217;ll quickly explain how this helps with Kafka consumers as well<\/li>\n<li><a href=\"#section-demo-consumers\"><strong>Seeing quotas in action &#8211; for consumers<\/strong><\/a> &#8211; we&#8217;ll go through a demo of this, too<\/li>\n<li><a href=\"#section-multiple-partitions\"><strong>Multiple partitions<\/strong><\/a> &#8211; the impact is a little subtle if you have multiple partitions<\/li>\n<li><a href=\"#section-how-it-works\"><strong>How it works<\/strong><\/a> &#8211; once you&#8217;ve seen it in action, we&#8217;ll peek under the hood to explain how the Event Gateway does this<\/li>\n<li><a href=\"#section-quotas-kafka\"><strong>Adding quotas to the back-end<\/strong><\/a> &#8211; We&#8217;ll show you an example of where else you can add quotas as part of an overall solution<\/li>\n<li><a href=\"#section-other-controls\"><strong>Other controls<\/strong><\/a> &#8211; we&#8217;ll finish with a pointer to some other controls available to you, that complement what quotas can do<\/li>\n<\/ul>\n<p><em>Co-authored with <a href=\"https:\/\/github.com\/chrispatmore\">Chris Patmore<\/a><\/em><\/p>\n<p><!--more--><\/p>\n<hr \/>\n<h2 id=\"section-quotas-for-producers\">Quotas &#8211; for producers<\/h2>\n<p>Your Kafka cluster&#8217;s disk and network bandwidth aren&#8217;t unlimited resources.<\/p>\n<p>Quotas are a useful tool when you start widely sharing your Kafka topics. 
Quotas are a way to set an upper limit on how fast each application that uses your Kafka cluster is allowed to produce data to it.<\/p>\n<p>We&#8217;re starting by looking at <strong>sharing a topic to allow other teams to produce<\/strong> messages to it. (<em>Starting with this means we&#8217;ll have some messages on the topic to consume from in a moment!<\/em>)<\/p>\n<p>Our aim is to enable something like this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-1.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>Before we start adding quotas, we ran one producer, to see how fast it could put messages on our topic.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-2.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>We ran a simple test app to produce 5,000,000 messages (each message containing randomly generated data).<\/p>\n<p>Our app produced over 95,000 messages per second.<\/p>\n<blockquote><p>(<em>This is only a small development cluster that we&#8217;re sharing with other people, so this is by no means a scientific performance test. Don&#8217;t read too much into the absolute numbers, we&#8217;ll just use this to give us an idea of the relative impact of adding controls on a busy cluster.<\/em>)<\/p><\/blockquote>\n<p>Maybe you&#8217;d be concerned about lots of applications all producing that much data, that quickly, at a sustained rate.<\/p>\n<p>Quotas let you control this. You can specify an upper limit of how fast each producer is allowed to produce messages to your topic. 
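<\/p>\n<p>As a rough back-of-the-envelope sketch (the 95,000 messages per second figure is the unthrottled rate we observed above; the 25,000 messages per second quota here is a hypothetical example):<\/p>

```python
# Back-of-the-envelope: worst-case produce load on the cluster,
# with and without a per-application quota.
# (95,000 msg/sec is the unthrottled rate observed in this post;
# the 25,000 msg/sec quota is a hypothetical example value.)
unthrottled_rate = 95_000   # msg/sec one producer managed on its own
quota = 25_000              # msg/sec allowed per producer by the option
producers = 5

print("without quota:", unthrottled_rate * producers, "msg/sec")
print("with quota:   ", quota * producers, "msg/sec")
```

<p>The quota turns an open-ended worst case into a predictable ceiling, however many producers you approve.<\/p>\n<p>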
This makes sure that no individual application is allowed to flood your topic, or use a disproportionate amount of your cluster bandwidth.<\/p>\n<p>With five apps producing to your topic, a quota means that each application will have a limit applied to it.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-3.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<h3 id=\"section-demo-producers\">Trying this for yourself<\/h3>\n<p>Let&#8217;s take a step back and walk through how you can set this up and see it in action.<\/p>\n<h4>Step 1: You need a Kafka cluster and an instance of IBM Event Endpoint Management.<\/h4>\n<p>You can use the <a href=\"https:\/\/github.com\/IBM\/event-automation-demo\">Event Automation demo<\/a> for this. It has an Ansible playbook that you can point at a Red Hat OpenShift cluster. It sets up a small development Event Streams (Kafka) cluster, and an Event Endpoint Management catalog and Event Gateway. 
They&#8217;re all set up, connected, and ready to use.<\/p>\n<h4>Step 2: You need a Kafka topic.<\/h4>\n<p>You can apply <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/topic.yaml\">this <code>KafkaTopic<\/code> spec<\/a> to create a topic called &#8220;quotatest&#8221;.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">oc apply -f topic.yaml<\/pre>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/test-topic.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<h4>Step 3: Add the topic to Event Endpoint Management.<\/h4>\n<p>We&#8217;re going to assume you&#8217;ve done this before, but if not there are <a href=\"https:\/\/ibm.github.io\/event-automation\/eem\/describe\/adding-topics\/\">instructions to walk you through it<\/a>.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-topic-produce.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>We didn&#8217;t document the topic in any detail, as we&#8217;re just going to be using it to illustrate this post.<\/p>\n<h4>Step 4: Add options with quota controls.<\/h4>\n<p>To publish the topic in the Event Endpoint Management catalog you need to create an option. 
Again, if you&#8217;re new to this, there are <a href=\"https:\/\/ibm.github.io\/event-automation\/eem\/describe\/managing-options\/\">instructions available that you can follow<\/a>.<\/p>\n<p>The important thing is that when you&#8217;re defining the option, you need to add a <strong>Quota enforcement<\/strong> control.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-controls-publish.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>Quota controls can be defined in megabytes per second, or in messages per second, or a combination of both.<\/p>\n<p>As every (random data) message your test application produces is the same size, it won&#8217;t make much difference which you choose. We went with messages-per-second as it is a little easier to understand.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-quota-produce.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>To illustrate this post, we created a range of different options &#8211; each with a quota control defined slightly differently so we could compare.<\/p>\n<p>We gave each option a different topic alias, so the topic names would make it easy for us to remember what quota we had given to each.<\/p>\n<p>You don&#8217;t need to add as many options as we did. We recommend adding at least <strong>two<\/strong>:<\/p>\n<ul>\n<li>one with a quota control<\/li>\n<li>one without any quota control so you have something to compare with<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-produce-options.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<blockquote><p>(<em>You might notice in the screenshot that we also added Approval controls. 
That is because we were doing this in a shared Catalog and we didn&#8217;t want our colleagues to play around with this topic while we were running our app. This meant no-one else could use this topic without us approving it first. You probably won&#8217;t need to do this.<\/em>)<\/p><\/blockquote>\n<h4>Step 5: Create credentials to access the topic through the Event Gateway.<\/h4>\n<p>Now that the topic is in the Catalog, you can start generating access credentials for your test applications to use.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-produce-credentials.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>You will need to create credentials for each option.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-password.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<h4>Step 6: Configure a producer application.<\/h4>\n<p>There is a simple test application included with IBM Event Streams, so we used that.<\/p>\n<p>It generates messages containing random data, and records how long it took to produce them.<\/p>\n<p>To set this up, we needed to:<\/p>\n<ul>\n<li>create a <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml#L1-L8\">Kubernetes Secret with the CA certificate<\/a> for the Event Gateway<\/li>\n<li>create a <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml#L85-L100\">Kubernetes ConfigMap with the configuration<\/a> for our application<\/li>\n<\/ul>\n<p>You can see how we did this in <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml\"><code>configs.yaml<\/code><\/a>.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; 
white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f configs.yaml<\/strong>\nsecret\/eem-ca-cert created\nconfigmap\/quotas-produce-unlimited created\nconfigmap\/quotas-produce-25000 created\nconfigmap\/quotas-produce-50000 created\nconfigmap\/quotas-produce-75000 created<\/pre>\n<p>You can copy our config &#8211; just change the <code>bootstrap.servers<\/code> property to match your Event Gateway and put the username and password you created in the Catalog in the <code>sasl.jaas.config<\/code> property.<\/p>\n<h4>Step 7: Run a producer application &#8211; with no quota.<\/h4>\n<p>We defined an application as a Kubernetes Job &#8211; so it would start up, produce 5 million messages, and then stop.<\/p>\n<p>You can see the configuration for this in <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-unlimited.yaml\"><code>produce-unlimited.yaml<\/code><\/a>.<\/p>\n<p>The bits that you might want to modify are the topic name (<a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-unlimited.yaml#L18-L19\"><code>--topic<\/code><\/a>), the number of messages to produce (<a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-unlimited.yaml#L20-L21\"><code>--num-records<\/code><\/a>), and the size of each message (<a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-unlimited.yaml#L22-L23\"><code>--record-size<\/code><\/a>). 
(<em>It doesn&#8217;t matter what you pick, as long as you are consistent.<\/em>)<\/p>\n<p>To run the application, apply the job spec.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">oc apply -f produce-unlimited.yaml<\/pre>\n<p>To see the output, you can tail the log.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">oc logs -f --selector app=producer-unlimited<\/pre>\n<p>The application incrementally outputs information as it goes, so you can see how it progresses.<\/p>\n<p>For our purposes, we just ignored everything except the final line with the details of how quickly it produced the overall 5,000,000 messages.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">324846 records sent, 64969.2 records\/sec (7.93 MB\/sec), 1325.4 ms avg latency, 2179.0 ms max latency.\n480187 records sent, 96037.4 records\/sec (11.72 MB\/sec), 3121.5 ms avg latency, 3849.0 ms max latency.\n467674 records sent, 93516.1 records\/sec (11.42 MB\/sec), 3779.6 ms avg latency, 3897.0 ms max latency.\n594235 records sent, 118847.0 records\/sec (14.51 MB\/sec), 3266.6 ms avg latency, 3951.0 ms max latency.\n477995 records sent, 95599.0 records\/sec (11.67 MB\/sec), 3235.2 ms avg latency, 3936.0 ms max latency.\n455858 records sent, 91116.9 records\/sec (11.12 MB\/sec), 4118.0 ms avg latency, 4391.0 ms max latency.\n484633 records sent, 96887.8 records\/sec (11.83 MB\/sec), 3567.3 ms avg latency, 3796.0 ms max latency.\n452487 records sent, 90479.3 records\/sec (11.04 MB\/sec), 4172.5 ms avg latency, 4688.0 ms max latency.\n523942 records sent, 104767.4 records\/sec (12.79 MB\/sec), 3104.9 ms avg 
latency, 3720.0 ms max latency.\n505591 records sent, 101098.0 records\/sec (12.34 MB\/sec), 3594.4 ms avg latency, 3750.0 ms max latency.\n<strong>5000000 records sent, 95472.685265 records\/sec (11.65 MB\/sec), 3387.08 ms avg latency, 4688.00 ms max latency, 3559 ms 50th, 4353 ms 95th, 4596 ms 99th, 4686 ms 99.9th.<\/strong><\/pre>\n<blockquote><p>(<em>Again, it&#8217;s important to be clear that we were not doing scientific performance testing here &#8211; as this is a small dev cluster that is used by multiple people. To prove this point, we ran the same application again, and got a slightly different result:<\/em>)<\/p><\/blockquote>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % oc delete -f produce-unlimited.yaml\njob.batch \"producer-unlimited\" deleted\n\ndalelane@dales-mbp eem-quotas % <strong>oc apply -f produce-unlimited.yaml<\/strong>\njob.batch\/producer-unlimited created\n\ndalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=producer-unlimited<\/strong>\n423800 records sent, 84760.0 records\/sec (10.35 MB\/sec), 932.4 ms avg latency, 1815.0 ms max latency.\n588410 records sent, 117682.0 records\/sec (14.37 MB\/sec), 2655.5 ms avg latency, 3121.0 ms max latency.\n578210 records sent, 115503.4 records\/sec (14.10 MB\/sec), 2958.5 ms avg latency, 3159.0 ms max latency.\n441696 records sent, 88145.3 records\/sec (10.76 MB\/sec), 3790.4 ms avg latency, 4193.0 ms max latency.\n485335 records sent, 97067.0 records\/sec (11.85 MB\/sec), 3639.6 ms avg latency, 3977.0 ms max latency.\n407379 records sent, 81459.5 records\/sec (9.94 MB\/sec), 4074.2 ms avg latency, 4318.0 ms max latency.\n506870 records sent, 101374.0 records\/sec (12.37 MB\/sec), 3919.6 ms avg latency, 4410.0 ms max latency.\n558956 records sent, 111791.2 records\/sec (13.65 MB\/sec), 3061.4 ms avg latency, 3315.0 ms max 
latency.\n505741 records sent, 101128.0 records\/sec (12.34 MB\/sec), 3394.8 ms avg latency, 3664.0 ms max latency.\n461381 records sent, 92257.7 records\/sec (11.26 MB\/sec), 3680.8 ms avg latency, 3936.0 ms max latency.\n<strong>5000000 records sent, 99064.828023 records\/sec (12.09 MB\/sec), 3209.45 ms avg latency, 4410.00 ms max latency, 3308 ms 50th, 4253 ms 95th, 4351 ms 99th, 4397 ms 99.9th.<\/strong><\/pre>\n<p>For our purposes, that is fine. The point is that running this let us see the sort of rate that our application could produce at.<\/p>\n<h4>Step 8: Run a producer application &#8211; with a quota.<\/h4>\n<p>You can see in our <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml\"><code>configs.yaml<\/code><\/a> that we created separate ConfigMaps, each with the username and password we had created for a different option.<\/p>\n<p>And we have variations of our producer job for each quota option. If you compare them, you&#8217;ll see that the main difference is to specify the correct topic alias and the correct credentials.<\/p>\n<p>You can see the output that we got from running these here:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/produce-25000.txt\">output<\/a> from running <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-25000.yaml\"><code>produce-25000.yaml<\/code><\/a><\/li>\n<li><a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/produce-50000.txt\">output<\/a> from running <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-50000.yaml\"><code>produce-50000.yaml<\/code><\/a><\/li>\n<li><a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/produce-75000.txt\">output<\/a> from running <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/produce-75000.yaml\"><code>produce-75000.yaml<\/code><\/a><\/li>\n<\/ul>\n<p>The results 
are what you would expect.<\/p>\n<p>With a <strong>quota of 25,000 messages per second<\/strong>:<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f produce-25000.yaml<\/strong>\njob.batch\/producer-25000 created\n\ndalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=producer-25000<\/strong>\n...\n5000000 records sent, <strong>24893.827824 records\/sec<\/strong> (3.04 MB\/sec), 13505.16 ms avg latency, 14421.00 ms max latency, 14100 ms 50th, 14137 ms 95th, 14159 ms 99th, 14342 ms 99.9th.<\/pre>\n<p>With a <strong>quota of 50,000 messages per second<\/strong>:<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f produce-50000.yaml<\/strong>\njob.batch\/producer-50000 created\n\ndalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=producer-50000<\/strong>\n...\n5000000 records sent, <strong>49593.334656 records\/sec<\/strong> (6.05 MB\/sec), 6685.01 ms avg latency, 7708.00 ms max latency, 7053 ms 50th, 7162 ms 95th, 7571 ms 99th, 7665 ms 99.9th.<\/pre>\n<p>With a <strong>quota of 75,000 messages per second<\/strong>:<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f produce-75000.yaml<\/strong>\njob.batch\/producer-75000 created\n\ndalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=producer-75000<\/strong>\n...\n5000000 records sent, <strong>74056.519936 records\/sec<\/strong> (9.04 MB\/sec), 4394.07 ms avg latency, 5149.00 ms max latency, 4702 ms 50th, 4836 ms 95th, 5025 ms 99th, 5140 ms 
99.9th.<\/pre>\n<p>The same application each time.<\/p>\n<p>The same code was attempting to produce the same number of equally sized random messages, and these were all going to the same topic.<\/p>\n<p>But it ran at different speeds each time.<\/p>\n<p>You don&#8217;t need to rely on the different application developers using your topics from the Catalog to be well-behaved. You don&#8217;t have to ask them to change how their application behaves. Quota controls applied by the Event Gateway can limit how quickly the applications can each produce data to the topic &#8211; controlling the impact of each application using topics from Event Endpoint Management.<\/p>\n<h2 id=\"section-quotas-for-consumers\">Quotas &#8211; for consumers<\/h2>\n<p>Let&#8217;s see the same for <strong>sharing a topic for other teams to consume<\/strong> messages from.<\/p>\n<p>(<em>Now that we had repeatedly produced 5 million messages to the test topic, we had plenty of messages for applications to consume!<\/em>)<\/p>\n<p>The principle is similar to before.<\/p>\n<p>Quotas let you set an upper limit on how fast each consuming application is allowed to consume data from your topic, so you can stay in control of the impact of each application on your cluster.<\/p>\n<p>The aim this time is to enable something like this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-4.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>Setting a limit on how fast each application is able to consume from the Kafka cluster is a useful part in controlling the bandwidth used for the topics that you share.<\/p>\n<p>We ran a similar application to before, but this time we configured it to consume 5,000,000 messages and report how quickly it was able to do that.<\/p>\n<h4>Step 8: Add the topic to Event Endpoint Management (for consuming this time).<\/h4>\n<p><img decoding=\"async\" 
src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-topic-consume.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<h4>Step 9: Add options with quota controls.<\/h4>\n<p>As before, the important thing is to include the <strong>Quota enforcement<\/strong> control.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-controls-consume.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>As with produce, you can define this in megabytes per second, or in messages per second, or a combination of both.<\/p>\n<p>We went with messages per second, to mirror what we did with the producers.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-quota-consume.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>We created a range of different options &#8211; each with a quota control defined slightly differently so we could compare.<\/p>\n<p>We gave each option a different topic alias, so the topic names would make it easy for us to remember what quota we had given to each.<\/p>\n<p>You don&#8217;t need to add as many options as we did. 
We recommend adding at least <strong>two<\/strong>:<\/p>\n<ul>\n<li>one with a quota control<\/li>\n<li>one without any quota control so you have something to compare with<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-consume-options.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<h4>Step 10: Create credentials to access the topic through the Event Gateway.<\/h4>\n<p>Time to go to the Catalog and generate access credentials for each topic option.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-consume-credentials.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<h4>Step 11: Configure a consumer application.<\/h4>\n<p>As before, you can use the simple test application included with IBM Event Streams.<\/p>\n<p>It consumes a predefined number of messages from the topic, and then outputs how long it took to consume them.<\/p>\n<p>To set this up, we needed to:<\/p>\n<ul>\n<li>create a <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml#L1-L8\">Kubernetes Secret with the CA certificate<\/a> for the Event Gateway<\/li>\n<li>create a <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml#L10-L23\">Kubernetes ConfigMap with the configuration<\/a> for our application<\/li>\n<\/ul>\n<p>You can see how we did this in <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml\"><code>configs.yaml<\/code><\/a>.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f configs.yaml<\/strong>\nsecret\/eem-ca-cert unchanged\nconfigmap\/quotas-produce-unlimited 
unchanged\nconfigmap\/quotas-produce-25000 unchanged\nconfigmap\/quotas-produce-50000 unchanged\nconfigmap\/quotas-produce-75000 unchanged\nconfigmap\/quotas-consume-unlimited created\nconfigmap\/quotas-consume-25000 created\nconfigmap\/quotas-consume-50000 created\nconfigmap\/quotas-consume-75000 created<\/pre>\n<p>You can copy our config &#8211; just change the <code>bootstrap.servers<\/code> property to match your Event Gateway and put the username and password you created in the Catalog in the <code>sasl.jaas.config<\/code> property.<\/p>\n<h4>Step 12: Run a consumer application &#8211; with no quota.<\/h4>\n<p>We defined an application as a Kubernetes Job &#8211; so it would start up, consume 5 million messages, and then stop.<\/p>\n<p>You can see the configuration for this in <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/consume-unlimited.yaml\"><code>consume-unlimited.yaml<\/code><\/a>.<\/p>\n<p>The bits that you might want to modify are the topic name (<a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/consume-unlimited.yaml#L18-L19\"><code>--topic<\/code><\/a>) and the number of messages to consume (<a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/consume-unlimited.yaml#L20-L21\"><code>--messages<\/code><\/a>). 
It doesn&#8217;t matter what you pick, as long as you are consistent.<\/p>\n<p>To run the application, apply the job spec.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">oc apply -f consume-unlimited.yaml<\/pre>\n<p>To see the output, you can tail the log.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=consumer-unlimited<\/strong>\nstart.time,              end.time,                data.consumed.in.MB, MB.sec,  data.consumed.in.nMsg, nMsg.sec,    rebalance.time.ms, fetch.time.ms, fetch.MB.sec, <strong>fetch.nMsg.sec<\/strong>\n2024-09-06 16:16:51:674, 2024-09-06 16:17:15:679, 610.3760,            25.6374, 5000200,               210021.8414, 3759,              20049,         30.4442,      <strong>249398.9725<\/strong><\/pre>\n<p>You can see the output from when we ran this a couple of times at <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/consume-unlimited.txt\"><code>consume-unlimited.txt<\/code><\/a>.<\/p>\n<p>The interesting value is <code>fetch.nMsg.sec<\/code> (on the far right), which reported that the application fetched approximately <strong>250,000 messages per second<\/strong>.<\/p>\n<h4>Step 13: Run a consumer application &#8211; with a quota.<\/h4>\n<p>You can see in our <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/configs.yaml\"><code>configs.yaml<\/code><\/a> that we created separate ConfigMaps, each with the username and password we had created for a different option.<\/p>\n<p>And we have variations of our consumer job for each quota option. 
If you compare them, you&#8217;ll see that the main difference is to specify the correct topic alias and the correct credentials.<\/p>\n<p>You can see the full output that we got from running these here:<\/p>\n<ul>\n<li><a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/consume-25000.yaml\"><code>consume-25000.yaml<\/code><\/a> &#8211; fetched approximately <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/consume-25000.txt\">25,200 messages per second<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/consume-50000.yaml\"><code>consume-50000.yaml<\/code><\/a> &#8211; fetched approximately <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/consume-50000.txt\">50,100 messages per second<\/a><\/li>\n<li><a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/consume-75000.yaml\"><code>consume-75000.yaml<\/code><\/a> &#8211; fetched approximately <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/output\/consume-75000.txt\">75,000 messages per second<\/a><\/li>\n<\/ul>\n<p>The same code consuming the same messages from the same topic, but running at very different speeds each time.<\/p>\n<p>You don&#8217;t need to depend on application developers who find your topics in the Catalog to write their applications in a way that will share the cluster evenly.  
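<\/p>\n<p>A quick sanity check on those numbers: at a fixed per-connection quota, fetching a fixed number of messages has a predictable minimum duration. A small sketch, using the 5,000,000 messages and the quota values from this post:<\/p>

```python
# Minimum time to fetch a fixed number of messages through a
# per-connection quota (single partition, so a single connection).
messages = 5_000_000

for quota in (25_000, 50_000, 75_000):
    min_seconds = messages / quota
    print(f"{quota} msg/sec quota -> at least {min_seconds:.0f} seconds")
```

<p>The fetch times reported in the linked outputs are close to these minimums &#8211; a good sign that it was the gateway setting the pace, rather than the application or the cluster.<\/p>\n<p>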
Quota controls applied by the Event Gateway can control how quickly applications are able to fetch messages from the topic.<\/p>\n<h2 id=\"section-multiple-partitions\">Working with multiple partitions<\/h2>\n<p>All of the values we&#8217;ve shown so far have been using a <strong>topic with one partition<\/strong>.<\/p>\n<p>For example, consuming from a Kafka topic using an option with a 50,000 messages per second quota control gives us results like this:<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f consume-50000.yaml<\/strong>\njob.batch\/consumer-50000 created\n\ndalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=consumer-50000<\/strong>\nstart.time,              end.time,                data.consumed.in.MB, MB.sec,  data.consumed.in.nMsg, nMsg.sec,    rebalance.time.ms, fetch.time.ms, fetch.MB.sec, <strong>fetch.nMsg.sec<\/strong>\n2024-09-09 14:15:16:400, 2024-09-09 14:17:00:349, 610.3516,            5.8716,  5000000,               48100.5108,  4036,              99913,         6.1088,       <strong>50043.5379<\/strong><\/pre>\n<p>The <code>fetch.nMsg.sec<\/code> value (on the far right) is approximately 50,000 messages per second, which is what you would expect with a quota of 50,000 messages per second.<\/p>\n<p>But if you repeat this with a <strong>topic that has two partitions<\/strong>&#8230;<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f consume-50000.yaml<\/strong>\njob.batch\/consumer-50000 created\n\ndalelane@dales-mbp eem-quotas % <strong>oc logs -f --selector app=consumer-50000<\/strong>\nstart.time,              end.time,                data.consumed.in.MB, 
MB.sec,  data.consumed.in.nMsg, nMsg.sec,    rebalance.time.ms, fetch.time.ms, fetch.MB.sec, <strong>fetch.nMsg.sec<\/strong>\n2024-09-09 14:23:11:727, 2024-09-09 14:24:05:793, 610.3962,            11.2898, 5000366,               92486.3315,  3860,              50206,         12.1578,      <strong>99596.9804<\/strong><\/pre>\n<p>The application fetches messages twice as quickly as it did before. It was able to fetch approximately 99,600 messages per second.<\/p>\n<p>This is because when an application uses a topic with two partitions, what happens is more like this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-7.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>The two topic partitions will each be hosted on different Kafka brokers, and the application will consume from each broker in parallel. This is why Kafka has topic partitions &#8211; to enable parallel processing!<\/p>\n<p>These two connections that the application will make to the Event Gateway will each, independently, have the quota enforcement control applied.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-8.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>The consumer application makes two connections to the Event Gateway, one for each partition. Each one is subjected to the 50,000 messages per second quota control.<\/p>\n<p>That is why cumulatively the application was able to fetch nearly 100,000 messages per second.<\/p>\n<p>This would be equally true if we had run two separate consumers as part of a consumer group, such as this:<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-9.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>Each member of the consumer group would separately have the quota control applied to its connection. 
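<\/p>\n<p>The arithmetic is worth spelling out. This is an illustrative calculation (not anything the Event Gateway computes for you): because each connection is throttled independently, an application&#8217;s cumulative ceiling is the per-connection quota multiplied by the number of connections it makes.<\/p>

```python
# Illustrative arithmetic: the Event Gateway throttles each connection
# independently, so an application's cumulative ceiling is the per-connection
# quota multiplied by how many connections it makes (one per partition, or
# one per consumer-group member).

def cumulative_ceiling(per_connection_quota: int, connections: int) -> int:
    """Approximate messages/sec achievable across all throttled connections."""
    return per_connection_quota * connections

# One partition: the quota is the ceiling
assert cumulative_ceiling(50_000, 1) == 50_000
# Two partitions: two independently throttled connections
assert cumulative_ceiling(50_000, 2) == 100_000  # we observed ~99,600 msg/sec
```

<p>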
It is perhaps less surprising when this happens, compared to when you are running a single application that is implicitly making multiple connections.<\/p>\n<p>The key thing to remember is that <strong>the Event Gateway will apply a quota control to every connection<\/strong> that is made. If an application makes multiple connections, such as to produce or consume to multiple topic partitions, the quota is applied to each connection independently.<\/p>\n<h2 id=\"section-how-it-works\">How the Event Gateway does all of this<\/h2>\n<p>Because the Event Gateway can front multiple Kafka clusters and expose multiple options over a single Kafka topic, it allows much finer-grained control over what a client can do.<\/p>\n<p>For example, a single topic could be exposed over the Event Gateway as an open-to-anyone-to-consume option but with its rate tightly controlled. Simultaneously, the topic could be exposed to a more select number of clients with a more generous quota (or even have no limit at all) by applying an Approval control.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-alternative-options.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>This is because the quotas are applied per-connection per-option. Each unique connection to a particular option is tracked and controlled separately to the others.<\/p>\n<p>The Event Gateway is able to enforce quotas on Kafka clients like this through its deep understanding of the Kafka protocol. It uses this knowledge to limit client applications by using a mechanism which is already baked into most popular Kafka client libraries.<\/p>\n<p>Kafka clients make requests and process responses using the Kafka protocol. Responses from Kafka can define a throttle time, which is a request for the client to pause before making its next request. 
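<\/p>\n<p>As a rough sketch of the idea (an illustration of the mechanism, not the Event Gateway&#8217;s actual implementation), the throttle time is simply the pause needed to bring a connection&#8217;s average rate back down to its quota:<\/p>

```python
# Illustrative model of a throttle-time calculation (not the Event Gateway's
# actual implementation): if a connection has consumed more than its quota
# allows for the elapsed time, ask it to pause for the difference.

def throttle_time_ms(bytes_seen: int, elapsed_s: float, quota_bytes_per_s: int) -> int:
    """Pause (ms) needed for the connection's average rate to meet the quota."""
    expected_s = bytes_seen / quota_bytes_per_s  # time this traffic 'should' take
    return max(0, round((expected_s - elapsed_s) * 1000))

# 30 MB in 1 second against a 25 MB/s quota: should have taken 1.2s, so pause 200ms
assert throttle_time_ms(30_000_000, 1.0, 25_000_000) == 200
# Under quota: no pause requested
assert throttle_time_ms(20_000_000, 1.0, 25_000_000) == 0
```

<p>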
It is expected that clients should honor these requests, and not make any further requests until that throttle time has expired. This then effectively limits the rate at which they are able to make requests and thus consume or produce data.<\/p>\n<p>The Event Gateway utilizes this mechanism to bring client rates down under their quota. By looking at how many messages or how much data has been produced or consumed by a client connection for a particular option, the Event Gateway can make decisions on what should be done. If it observes that a connection has exceeded the configured quota, it uses the throttle time value in the response metadata to tell the client to slow down so that its rate goes back below the quota.<\/p>\n<p>Because the Event Gateway uses this standard client mechanism, client applications do not need to be changed to make use of quotas. Most Kafka clients will behave appropriately and wait for the throttle time. This benefits the client and the server as there is less network traffic and the client doesn&#8217;t have to deal with the server ignoring it (such as handling connection timeouts or other such errors associated with being throttled by the server).<\/p>\n<p>It is important to note, however, that if a client ignores the Event Gateway&#8217;s instructions to slow down, it will still find itself unable to produce or consume more data. The Event Gateway will ignore clients that misbehave to ensure the quota is not exceeded. This is a necessary protection against malicious clients.<\/p>\n<h2 id=\"section-quotas-kafka\">Adding quotas to the back-end<\/h2>\n<p>Quotas applied at the Event Gateway are an effective way to stop individual applications using a disproportionate amount of your disk or network bandwidth.<\/p>\n<p>But what about the cumulative effect of a large number of applications all using your topic? 
Each individual application might be keeping within its quota, but if there are enough of them, the collective impact might be more than you would like.<\/p>\n<p>For example, perhaps you have a quota being enforced by the Event Gateway that keeps each consumer application to 25 MB per second.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-5.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>We created this using a new catalog option, with a new quota enforcement control.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-consume-25mbs.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>We used the catalog to create a new set of credentials for this new option.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/screenshots\/eem-consume-25mbs-credentials.png?raw=true\" style=\"border: thin black solid; max-width: 600px; width: 100%;\"\/><\/p>\n<p>We used these credentials to run five instances of the consumer application at once.<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f consume-25mbs-five-instances.yaml<\/strong>\njob.batch\/consumer-25mbs created\n\ndalelane@dales-mbp eem-quotas % <strong>oc get pods -oname --selector app=consumer-25mbs | xargs -I {} oc logs {}<\/strong>\nstart.time,              end.time,                data.consumed.in.MB, <strong>MB.sec<\/strong>,  data.consumed.in.nMsg, nMsg.sec,    rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec\n2024-09-07 19:34:36:552, 2024-09-07 19:35:00:349, 610.3619,            <strong>25.6487<\/strong>, 5000085,               210114.0900, 3906,              
19891,         30.6853,      251374.2396\n2024-09-07 19:34:36:588, 2024-09-07 19:35:00:464, 610.3619,            <strong>25.5638<\/strong>, 5000085,               209418.8725, 3825,              20051,         30.4405,      249368.3607\n2024-09-07 19:34:36:469, 2024-09-07 19:35:00:941, 610.3619,            <strong>24.9412<\/strong>, 5000085,               204318.6090, 3897,              20575,         29.6652,      243017.4970\n2024-09-07 19:34:36:539, 2024-09-07 19:35:00:580, 610.3619,            <strong>25.3884<\/strong>, 5000085,               207981.5731, 3916,              20125,         30.3285,      248451.4286\n2024-09-07 19:34:36:602, 2024-09-07 19:35:00:121, 610.3619,            <strong>25.9519<\/strong>, 5000085,               212597.6870, 3866,              19653,         31.0569,      254418.4094<\/pre>\n<p>The interesting values are in the <code>MB.sec<\/code> column. Each of our five consumer applications consumed messages from the Kafka topic at approximately 25 MB per second.<\/p>\n<p>Cumulatively, these applications are consuming approximately 125 MB per second from the Kafka cluster through the Event Gateway.<\/p>\n<p>What if this is higher than you wanted? 
Perhaps you want to limit the cumulative impact to your Kafka cluster (of sharing your topic in the Catalog) to something such as 75 MB per second.<\/p>\n<p>Adding a quota to the connection between the back-end Kafka cluster and the Event Gateway is a way to control that.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/diagrams\/diagram-6.png?raw=true\" style=\"max-width: 600px; width: 100%;\"\/><\/p>\n<p>With quotas defined for both sides of the Event Gateway, you can:<\/p>\n<ul>\n<li>protect against any individual application using a disproportionate level of cluster resources so that all applications using topics from the Catalog get their fair share<\/li>\n<li>protect against a high number of applications resulting in a negative combined impact<\/li>\n<\/ul>\n<p>To apply the quota, we modified the Kafka credentials used to add the topic to the Event Endpoint Management catalog, from <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/kafka-connection-unlimited.yaml\">unlimited<\/a> to this <a href=\"https:\/\/github.com\/dalelane\/eem-quotas-demo\/blob\/master\/kafka-connection-limited.yaml\">limited (quota)<\/a> version.<\/p>\n<p>The difference between them is:<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">spec:\n  quotas:\n    consumerByteRate: 75000000\n    producerByteRate: 75000000<\/pre>\n<p>With the quota applied, we re-ran the five consumer applications:<\/p>\n<pre style=\"border: thin #AA0000 solid; color: #770000; background-color: #ffffc0; padding: 1em; overflow-y: scroll; overflow-x: scroll; font-size: 0.8em; white-space: pre;\">dalelane@dales-mbp eem-quotas % <strong>oc apply -f consume-25mbs-five-instances.yaml<\/strong>\njob.batch\/consumer-25mbs created\n\ndalelane@dales-mbp eem-quotas % <strong>oc get pods -oname --selector 
app=consumer-25mbs | xargs -I {} oc logs {}<\/strong>\nstart.time,              end.time,                data.consumed.in.MB, <strong>MB.sec<\/strong>,  data.consumed.in.nMsg, nMsg.sec,    rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec\n2024-09-07 20:19:35:811, 2024-09-07 20:20:15:012, 610.3619,            <strong>15.5701<\/strong>, 5000085,               127549.9350, 3841,              35360,         17.2614,      141405.1188\n2024-09-07 20:19:35:889, 2024-09-07 20:20:15:102, 610.3619,            <strong>15.5653<\/strong>, 5000085,               127510.9020, 3977,              35236,         17.3221,      141902.7415\n2024-09-07 20:19:36:132, 2024-09-07 20:20:15:497, 610.3619,            <strong>15.5052<\/strong>, 5000085,               127018.5444, 3776,              35589,         17.1503,      140495.2373\n2024-09-07 20:19:36:194, 2024-09-07 20:20:15:276, 610.3619,            <strong>15.6175<\/strong>, 5000085,               127938.3092, 3776,              35306,         17.2878,      141621.3958\n2024-09-07 20:19:36:265, 2024-09-07 20:20:14:508, 610.3619,            <strong>15.9601<\/strong>, 5000085,               130745.1037, 3832,              34411,         17.7374,      145304.8444<\/pre>\n<p>The interesting values are again in the <code>MB.sec<\/code> column. 
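<\/p>\n<p>Before reading them, it&#8217;s worth sketching what we&#8217;d expect to see. This is an illustrative back-of-the-envelope model (not the gateway&#8217;s actual algorithm): with a shared 75 MB per second quota on the back-end connection, five consumers that are individually allowed 25 MB per second each end up bound by their fair share of the back-end limit instead.<\/p>

```python
# Illustrative expectation: with a shared back-end quota, each application's
# effective rate is the smaller of its own quota and its fair share of the
# back-end limit.

def expected_per_app_mbs(per_app_quota: float, backend_quota: float, apps: int) -> float:
    """Per-application rate (MB/s) when a shared back-end quota is also in force."""
    return min(per_app_quota, backend_quota / apps)

assert expected_per_app_mbs(25.0, 75.0, 5) == 15.0  # close to the rates observed above
```

<p>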
Each of our five consumer applications this time consumed messages from the Kafka topic at a little over 15 MB per second.<\/p>\n<p>While individually each application is not permitted to exceed 25 MB per second, cumulatively these applications are now limited to approximately 75 MB per second.<\/p>\n<h2 id=\"section-other-controls\">Other controls<\/h2>\n<p>Quotas are just one tool available to you, and they complement a range of options that Event Endpoint Management provides to enable you to remain in control when you share your Kafka topics.<\/p>\n<p>Our colleague Adam has written an <a href=\"https:\/\/community.ibm.com\/community\/user\/integration\/blogs\/adam-pilkington\/2024\/09\/11\/internet-facing-event-gateways\">overview of these different options<\/a>, which puts our deep dive here into quotas in a broader perspective.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A deep-dive into how you can apply quotas with IBM Event Endpoint Management<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4],"tags":[593,584],"class_list":["post-5281","post","type-post","status-publish","format-standard","hentry","category-ibm","tag-apachekafka","tag-kafka"],"_links":{"self":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/5281","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5281"}],"version-history":[{"count":0,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=\/wp\/v2\/posts\/5281\
/revisions"}],"wp:attachment":[{"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5281"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5281"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dalelane.co.uk\/blog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5281"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}