
An Introduction to Apache Kafka Security: Securing Real-Time Data Streams

Written by Amani Newton

The largest companies in the world use Apache Kafka® for their real-time streaming data pipelines and applications. Kafka is the basis for the real-time fraud text alerts from your bank and the network-connected medical devices used in your local hospital. Securing customer or patient data as it flows through the Kafka system is crucial. However, out of the box, Kafka has relatively little security enabled. This blog post previews the free Confluent Developer course that teaches the basics of securing your Apache Kafka-based system.

Introduction to Kafka security

Before you can get started securing your Kafka system, you need a basic familiarity with authentication, authorization, encryption, and audit logs in Kafka.

When you’re ready to plan your overall security strategy, consider these factors first to help guide you to the right solutions and implementations:

  • Know your corporate security policy
  • Identify any industry or regulatory requirements that govern your data processing capabilities
  • Consider the environment in which you plan to deploy your solution
  • Understand there are additional performance costs associated with added security; the more secure your cluster, the more resources you need, in particular heightened CPU usage

Throughout the system, all data should be encrypted so that it can’t be read in transit or at rest. Additionally, all operations should be recorded in an audit log so that there is an audit trail in the case of a security breach or misconfigured cluster.

Hands-on: Create a secure connection to your Kafka cluster

To get started quickly, we use Confluent Cloud to run Kafka. This module shows you how to sign up for a free Confluent Cloud account, create a new cluster and topic, and produce and consume some messages, all over a secure, encrypted connection.
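
As a preview of what those steps produce, here is a minimal Java producer that connects to a Confluent Cloud cluster over an encrypted SASL_SSL connection. The bootstrap address, topic name, and API key placeholders are assumptions you would replace with the values shown for your own cluster.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SecureCloudProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder endpoint and credentials -- copy the real values from the
        // Confluent Cloud console for your cluster and API key.
        props.put("bootstrap.servers", "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092");
        props.put("security.protocol", "SASL_SSL");   // TLS-encrypted, authenticated connection
        props.put("sasl.mechanism", "PLAIN");         // Confluent Cloud API keys use SASL/PLAIN
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"<API_KEY>\" password=\"<API_SECRET>\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "orders" is a placeholder topic created in the hands-on exercise.
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
        }
    }
}
```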

Kafka authentication basics

In a Kafka-based system, many different interactions begin with participants authenticating the components with which they are communicating. For example, when a connection is established between a client (a user, application, or service) and a broker, each side of the connection will usually wish to verify the other. Internally in Kafka, a client’s identity is represented using a KafkaPrincipal object, or principal. For example, if you connect to Kafka and authenticate with a username and password, the principal associated with the connection will represent your username.
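
To make the principal concrete, the short sketch below constructs the KafkaPrincipal a broker would associate with a client that authenticated as "alice"; the KafkaPrincipal class and its USER_TYPE constant are part of the Kafka clients library.

```java
import org.apache.kafka.common.security.auth.KafkaPrincipal;

public class PrincipalExample {
    public static void main(String[] args) {
        // The broker represents an authenticated client as a principal of a
        // given type and name; password-authenticated users get the "User" type.
        KafkaPrincipal principal = new KafkaPrincipal(KafkaPrincipal.USER_TYPE, "alice");
        System.out.println(principal); // prints "User:alice"
    }
}
```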

The same holds true when two brokers connect—each may verify the other. Another authentication scenario is a broker accessing ZooKeeper, whereby the broker may be required to authenticate before being allowed to access sensitive cluster metadata.

Your principal is used to assign your authorizations in the target system, as you will learn in the Authorization module, and it is also used to log details of any permissible operation you perform—as you will learn in the Audit Logs module.

Authorization

Authorization determines what an entity can do once it has been authenticated by the Kafka system. Once a broker has authenticated a client’s identity, it determines the actions that the client is allowed to execute, whether that is creating a topic, producing a message, or consuming one.

Kafka uses access control lists (ACLs) to specify which users are allowed to perform which operations on specific resources or groups of resources. Recall that each connection is assigned a principal when it is first opened; that principal is the identity against which ACLs are checked. Each ACL contains a principal, a permission type, an operation, a resource type (e.g., cluster, topic, or group), and a resource name. You can use the kafka-acls command-line tool to create ACLs.
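
ACLs can also be created programmatically through Kafka’s Java Admin client. Here is a minimal sketch, assuming a broker reachable at localhost:9092 and a topic named orders (both placeholders).

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class CreateAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address

        try (Admin admin = Admin.create(props)) {
            // Allow the principal User:alice to write to the topic "orders" from any host.
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "orders", PatternType.LITERAL),
                new AccessControlEntry("User:alice", "*",
                    AclOperation.WRITE, AclPermissionType.ALLOW));

            admin.createAcls(List.of(binding)).all().get();
        }
    }
}
```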

If you work in a large organization or have a large cluster topology, you might find it inefficient to specify ACLs for each individual user principal. Instead, you might wish to assign users to groups or differentiate them based on roles. You can accomplish this with Kafka, but it requires several pieces: an external system, such as an LDAP store, that associates individuals with roles and/or groups; ACLs applied to resources based not only on users but also on roles and groups; and a custom authorizer that can call your external system to find the roles and groups for a given principal.
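
The custom authorizer itself is beyond the scope of this post, but the external lookup it relies on can be as simple as an LDAP query. The following is a hypothetical helper, not part of Kafka: the class name, LDAP URL, and directory layout are all assumptions, and an authorizer would call something like it to resolve group principals for a user before checking group-based ACLs.

```java
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

// Hypothetical helper: resolves the LDAP groups a user belongs to, so that a
// custom authorizer could match them against ACLs written for "Group:..." principals.
public class LdapGroupResolver {
    private final String ldapUrl;   // e.g. "ldap://ldap.example.com:389" (assumed)
    private final String groupBase; // e.g. "ou=groups,dc=example,dc=com" (assumed)

    public LdapGroupResolver(String ldapUrl, String groupBase) {
        this.ldapUrl = ldapUrl;
        this.groupBase = groupBase;
    }

    public List<String> groupsFor(String username) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, ldapUrl);

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);

            // Find every group entry whose "member" attribute points at this user
            // (the user DN layout here is an assumption about the directory).
            NamingEnumeration<SearchResult> results = ctx.search(
                groupBase,
                "(member=uid={0},ou=users,dc=example,dc=com)",
                new Object[] { username },
                controls);

            List<String> groups = new ArrayList<>();
            while (results.hasMore()) {
                groups.add(results.next().getNameInNamespace());
            }
            return groups;
        } finally {
            ctx.close();
        }
    }
}
```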

Encryption

This module teaches you how to encrypt data in transit, set up certificates, encrypt your data at rest, and set strict filesystem permissions. You may wish to go even further and encrypt your data from end to end, which you also learn how to do in this module. (The only scenario in which you might be able to skip encryption is if your Kafka system resides entirely in a secure, isolated network and you don’t have to answer to any authorities or auditors.)

Encryption uses mathematical techniques to scramble data so that it is unreadable by those who don’t have the right key, and it also protects the data’s integrity so that you can determine if it was tampered with during its journey. The simplest encryption setup consists of encrypted traffic between clients and the cluster, which is important if clients access the cluster through an unsecured network such as the public internet.
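
TLS protects data on the wire, but the brokers still handle plaintext, which is where end-to-end encryption comes in. As a rough sketch of the idea rather than the course’s exact approach, a client can encrypt the payload itself before producing it, so that only consumers holding the key can read the data; the key handling below is deliberately simplified, and a real deployment would fetch keys from a key management service.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class PayloadEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // For illustration only: generate a throwaway AES key. In practice the key
        // would come from a key management system shared with authorized consumers.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // AES-GCM needs a unique IV for every message.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(
            "patient-record-123".getBytes(StandardCharsets.UTF_8));

        // A producer would send the IV plus the ciphertext as the record value;
        // brokers store only unreadable bytes, and consumers with the key decrypt them.
        System.out.println("Encrypted payload is " + ciphertext.length + " bytes");
    }
}
```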

Hands-on: Set up encryption

Each “Hands-on” module is an exercise that walks you through putting the concepts covered so far into practice. In this module, you follow along step by step as you add an SSL listener to your brokers, create a CA, create broker keystores, import the CA into your broker keystore, and configure the SSL properties.

Hands-on: Require encryption for broker traffic

After enabling SSL on your Kafka brokers, you can take it further by creating the Kafka client truststore and importing the CA, configuring the Kafka client to encrypt data in transit using SSL, and requiring SSL for client-broker traffic.
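
For reference, the client-side result of that exercise comes down to a handful of configuration properties. Below is a minimal sketch of a consumer configured for SSL; the broker address, truststore path, password, and topic name are placeholders for the values you created in the exercise.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SslConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1.example.com:9093"); // placeholder SSL listener
        props.put("security.protocol", "SSL");
        // Truststore built in the hands-on exercise; it holds the CA certificate
        // so the client can verify the broker's identity.
        props.put("ssl.truststore.location", "/etc/kafka/secrets/kafka.client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        props.put("group.id", "secure-consumer-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // placeholder topic
            consumer.poll(Duration.ofSeconds(5)).forEach(record ->
                System.out.println(record.key() + " -> " + record.value()));
        }
    }
}
```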

Securing ZooKeeper

If you choose to run Kafka with ZooKeeper, you also need to consider how to secure ZooKeeper, as it stores a lot of important cluster configuration and security information, including encrypted versions of user passwords if you are using Kafka’s SASL/SCRAM provider.

There are two ways to secure ZooKeeper: SSL and SASL. Whichever you use, you need to update your broker configuration so that secure ACLs are set on the metadata the brokers store in ZooKeeper. These ACLs allow the metadata to be read by everyone but changed only by the brokers. Sensitive metadata, such as SCRAM credentials, is an exception: by default it can be read only by the brokers.

Audit logs

In this module, you’ll learn procedures for protecting your system against targeted attacks. For example, a rogue client may spawn fake messages or you may experience a DDoS-style attack on broker resources. If one of these happened, how would you know you had been targeted, and how could you identify the perpetrators as well as the sequence of events? Furthermore, is there a way that you can prevent future attacks?

Audit logs help because they provide records of everything that has happened on a system. Specifically, they provide:

  • Insight: They provide insight into situations such as whether a particular group of users successfully authenticated and gained access to the correct broker resources after a new ACL was added
  • Security: They enhance security by letting you identify anomalies and unauthorized operations in the historical record so that you can take action as quickly as possible
  • Impact: They let you see who, as well as which services, have been impacted by unusual activities so that you can communicate with stakeholders as the situation progresses
  • Compliance: They enable you to generate audit reports according to internal policies and external regulations, and also provide an official record in the event of a security breach

Apache Kafka doesn’t provide audit logging out of the box. It does, however, support comprehensive audit logging with Log4j, which can be configured to deliver events from separate logger instances to separate destinations (typically files). By configuring separate appenders for the log4j.properties file on each broker, you can capture detailed information for auditing, monitoring, and debugging purposes.

Security recommendations

The final module for this course reviews some general recommendations as well as a security checklist. The checklist won’t cover all use cases but it should serve as a useful outline.

Next steps

Learn more about Kafka Security by taking the full course on Confluent Developer.

Start the Course


Amani Newton is a technical writer and content designer partnered with Confluent. Her past and present clients include Google, Meta, and eBay.
