
Spring for Apache Kafka 101

Written by Viktor Gamov

Extensive out-of-the-box functionality, a large user community, and up-to-date, cloud-native features make Spring and its libraries a strong option for anchoring a microservices architecture based on Apache Kafka® and Confluent Cloud. Spring takes care of boilerplate system responsibilities—letting you focus on your business logic—while serverless Apache Kafka on Confluent Cloud provides your transport and storage layers.

Intro to Spring and Kafka

For an entire course on using Spring and its libraries with the Apache Kafka ecosystem, make sure to visit Confluent Developer.

Benefits of Spring

Spring’s opinionated approach can significantly reduce your development time and make it easier to collaborate with other developers on your team. Moreover, its support for multiple profiles means you can provide different configuration parameters for each environment (for example, development vs. QA). Adding the Spring for Apache Kafka project to your Spring implementation provides Kafka-specific capabilities, and its features are modeled on common Spring patterns, so you are likely to find them familiar.
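
As a brief illustration, here is a minimal sketch of how profiles can separate environment-specific settings. The configuration class, bean names, and broker addresses are hypothetical placeholders, not part of the original exercise:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Hypothetical configuration: only the bean whose profile is active is created,
// e.g. when the app is started with -Dspring.profiles.active=dev.
@Configuration
public class KafkaEnvironmentConfig {

    @Bean("bootstrapServers")
    @Profile("dev")
    public String devBootstrapServers() {
        return "localhost:9092"; // local broker for development
    }

    @Bean("bootstrapServers")
    @Profile("qa")
    public String qaBootstrapServers() {
        return "pkc-12345.us-west-2.aws.confluent.cloud:9092"; // placeholder QA endpoint
    }
}
```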

Try It Out for Yourself
To get started with your own Spring Boot and Confluent Cloud project using Spring Initializr, the Confluent Cloud Console, and Java code, you can follow the exercise on Confluent Developer. Be sure to use the promo code SPRING101 for $101 of free Confluent Cloud usage.

Send messages to Confluent Cloud with KafkaTemplate

The KafkaTemplate class in the Spring for Apache Kafka project was designed to be similar to Spring’s JmsTemplate. It’s relatively simple to use, particularly if you are already familiar with the Kafka producer that it wraps. To proceed, you create a ProducerFactory bean, providing a configuration map, then use that factory to make a KafkaTemplate, which provides you with numerous convenience methods. (Note that Kafka producers send asynchronously by default, so KafkaTemplate’s send methods return a future that you can block on when you need synchronous behavior.)
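
As a rough sketch of that wiring (assuming String keys and values, and a placeholder bootstrap server), the configuration might look like this:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class ProducerConfiguration {

    // Configuration map for the underlying Kafka producer; the bootstrap server is a placeholder.
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    // KafkaTemplate wraps the producer and exposes convenience send methods.
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}
```

With these beans in place, a call such as kafkaTemplate.send("my-topic", "key", "value") returns a future, which you can block on (for example, with get()) when a synchronous send is required.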

Try It Out for Yourself
In the exercise available on Confluent Developer, you can use the project that you set up earlier to try your hand at producing from Spring Boot to Confluent Cloud.

Receive messages with KafkaListener

In Spring, the components that communicate with messaging systems have access to message-driven POJO functionality, in which regular POJOs can serve as asynchronous message listeners. Consuming Kafka messages in Spring is thus accomplished by simply annotating a bean method with @KafkaListener, which causes the framework to instantiate a MessageListenerContainer that takes care of parallelization, configuration, retries, offsets, and the other plumbing your Kafka application needs. Offloading this work allows you to focus on your primary logic.
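
A minimal sketch of such a listener might look like the following, where the topic name and group ID are placeholders:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Spring builds and manages a MessageListenerContainer behind this annotation.
@Component
public class Consumer {

    @KafkaListener(topics = "my-topic", groupId = "spring-101")
    public void consume(String message) {
        System.out.println("received: " + message);
    }
}
```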

Try It Out for Yourself
Learn to annotate a Consumer class with the @KafkaListener annotation and specify the topics that you’d like to subscribe to. Then check for the existence of your consumer on Confluent Cloud.

Create Kafka topics with TopicBuilder

Manual topic creation is straightforward in Confluent Cloud, but you may wish to create topics programmatically, for example, if you are working with regular expressions in topic names or creating a large number of topics. You can accomplish this in Spring with the TopicBuilder class in conjunction with the KafkaAdmin class, which wraps Kafka’s Admin API. TopicBuilder’s methods let you set standard parameters such as the number of partitions and the number of replicas, as well as topic-level configurations such as the compression type used in the topic.
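
As a sketch (with a placeholder topic name and sizing), declaring a NewTopic bean built with TopicBuilder lets KafkaAdmin create the topic on application startup; note that Spring Boot auto-configures a KafkaAdmin bean for you:

```java
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfiguration {

    // KafkaAdmin picks up NewTopic beans and creates any topics that don't yet exist.
    @Bean
    public NewTopic myTopic() {
        return TopicBuilder.name("my-topic")
                .partitions(6)
                .replicas(3)
                .config("compression.type", "zstd") // a topic-level configuration
                .build();
    }
}
```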

Try It Out for Yourself
Learn to establish a topic with the TopicBuilder class in your Spring code, then verify it on Confluent Cloud.

Process messages with Kafka Streams

Adding the @EnableKafkaStreams annotation, a few configuration parameters, and a StreamsBuilder to your Spring code gives you access to the Kafka Streams APIs. Spring’s wrapper over Kafka Streams is quite thin, and Spring handles the lifecycle for you, which lets you focus primarily on business logic (as you have probably come to expect by now from the other Spring features). Additionally, Spring for Apache Kafka wraps JMX metrics from Kafka Streams and makes them available through the Micrometer framework.
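
Here is a minimal sketch, assuming Spring Boot’s auto-configuration supplies the required Kafka Streams settings (such as the application ID and bootstrap servers); the topic names and the transformation are placeholders:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafkaStreams;

@Configuration
@EnableKafkaStreams // Spring creates and manages the KafkaStreams lifecycle
public class StreamsConfiguration {

    // Spring injects the auto-configured StreamsBuilder into this bean method.
    @Bean
    public KStream<String, String> upperCaseStream(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("input-topic");
        stream.mapValues(value -> value.toUpperCase()) // placeholder business logic
              .to("output-topic");
        return stream;
    }
}
```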

Try It Out for Yourself
Get hands-on by configuring a StreamsBuilder and adding a KStream to it, then adding a KTable for processing your data. Next, convert your KTable back to a KStream and set up a new topic with TopicBuilder. Then view your data on Confluent Cloud, modify your data for the console, and finally create a REST service for sharing your data.

Establish Confluent Cloud Schema Registry

When you create a Confluent Cloud cluster, you have the option to create a Schema Registry. This establishes it on Confluent Cloud, but to connect to it from your Spring Boot application, you’ll need its URL as well as some credentials. Your options for formats are Avro, JSON Schema, and Protobuf, and you can use multiple schemas in one application (Confluent provides SerDes for all three).
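
For illustration, the Schema Registry-related producer settings might look like the following sketch, where the URL, API key, and API secret are placeholders for your own credentials:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;

// Hypothetical helper assembling producer properties for Avro with
// Confluent Cloud Schema Registry; all credential values are placeholders.
public final class SchemaRegistryProps {

    public static Map<String, Object> avroProducerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "https://psrc-12345.us-west-2.aws.confluent.cloud");
        props.put("basic.auth.credentials.source", "USER_INFO");
        props.put("basic.auth.user.info", "SR_API_KEY:SR_API_SECRET");
        return props;
    }
}
```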

Try It Out for Yourself
Learn how to use a Gradle Avro plugin to generate regular POJOs from Avro schemas. Then produce them to Confluent Cloud and consume them in your application code.

Conclusion

Leveraging Spring can enable you to quickly start developing sophisticated Apache Kafka-based systems. Although Spring is opinionated by default (which, as mentioned above, saves time and simplifies communication among developers), keep in mind that it does provide the ability to extend or customize certain options, should you need something beyond the out-of-the-box defaults.

To learn more about combining Spring and Apache Kafka, get started with the full course on Confluent Developer.

Viktor Gamov is a developer advocate at Confluent, the company that makes an event streaming platform based on Apache Kafka. Back in his consultancy days, Viktor developed comprehensive expertise in building enterprise application architectures using open source technologies. He enjoys helping architects and developers design and develop low-latency, scalable, and highly available distributed systems. He is a professional conference speaker on distributed systems, streaming data, JVM, and DevOps, and he regularly speaks at events like JavaOne, Devoxx, OSCON, and QCon. He co-authored O’Reilly’s Enterprise Web Development and writes on the Confluent blog.
