Running Kafka in Kubernetes With KRaft Mode and SASL Authentication

Learn how to launch an Apache Kafka cluster with the Apache Kafka Raft (KRaft) consensus protocol and SASL/PLAIN authentication.

PLAIN versus PLAINTEXT: Do not confuse the SASL mechanism PLAIN with PLAINTEXT, the option for no TLS/SSL encryption. Configuration parameters such as sasl.enabled.mechanisms or sasl.mechanism.inter.broker.protocol may be configured to use the SASL mechanism PLAIN, whereas security.inter.broker.protocol or listeners may be configured for the option with no TLS/SSL encryption, SASL_PLAINTEXT.
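
To make the distinction concrete, here is roughly how the two kinds of settings sit side by side in a broker's server.properties. The values are illustrative only; the deployment below sets their equivalents through environment variables.
Properties
 
# SASL mechanism: how clients and brokers authenticate
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

# Security protocol: SASL authentication without TLS/SSL encryption
security.inter.broker.protocol=SASL_PLAINTEXT
listeners=SASL_PLAINTEXT://0.0.0.0:9092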

Prerequisites

An understanding of Apache Kafka, Kubernetes, and Minikube.

The following steps were initially taken on a MacBook Pro with 32 GB of memory running macOS Ventura v13.4.

Make sure to have the following applications installed:

  • Docker v23.0.5
  • Minikube v1.29.0 (running K8s v1.26.1 internally)
It's possible the steps below will work with different versions of the above tools, but if you run into unexpected issues, you'll want to ensure you have identical versions. Minikube was chosen for this exercise due to its focus on local development.
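
If you are starting Minikube from scratch, give the cluster enough resources to run three Kafka brokers. The values below are a reasonable starting point, not a requirement taken from the repository:
Shell
 
minikube start --memory 8192 --cpus 4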

Deployment Components

We need to configure brokers and clients to use SASL authentication. Refer to the Kafka Broker and Controller Configurations for Confluent Platform page for a detailed explanation of the configurations used here.

The deployment we will create will have the following components:


  • Namespace: kafka. This is the namespace within which all components will be scoped.
  • Service Account: kafka. Service accounts are used to control permissions and access to resources within the cluster.
  • Headless Service: kafka-headless. It exposes port 9092 for SASL_PLAINTEXT communication.
  • StatefulSet: kafka. It manages the Kafka pods and ensures they have stable hostnames and storage.
The source code for this deployment can be found in this GitHub repository.
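
As a point of reference, the namespace and service account are the simplest of these manifests; a minimal sketch looks like the following (the manifests in the GitHub repository are the source of truth and may differ in detail):
YAML
 
apiVersion: v1
kind: Namespace
metadata:
  name: kafka
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kafka
  namespace: kafka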

Broker

Enable the SASL/PLAIN mechanism in the server.properties file of every broker:
YAML
 
# List of enabled mechanisms, can be more than one 
- name: KAFKA_SASL_ENABLED_MECHANISMS
  value: PLAIN

# Specify one of the SASL mechanisms 
- name: KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL
  value: PLAIN
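
These environment variables follow the convention of the Confluent Kafka container images (assuming the StatefulSet uses that image, as the configuration style suggests): every KAFKA_-prefixed variable is translated into the matching server.properties entry, so the two variables above end up as:
Properties
 
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN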


Tell the Kafka brokers on which ports to listen for client and inter-broker SASL connections by configuring listeners and advertised.listeners:
YAML
 
- command:
    ...
    export KAFKA_ADVERTISED_LISTENERS=SASL://${POD_NAME}.kafka-headless.kafka.svc.cluster.local:9092
    ...
  env:
    ...
    - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
      value: "CONTROLLER:PLAINTEXT,SASL:SASL_PLAINTEXT"
    - name: KAFKA_LISTENERS
      value: SASL://0.0.0.0:9092,CONTROLLER://0.0.0.0:29093


Configure JAAS for the Kafka broker listener. The username and password entries are the credentials the broker itself uses for inter-broker connections, and each user_<username>="<password>" entry defines an account the broker will accept:
YAML
 
- name: KAFKA_LISTENER_NAME_SASL_PLAIN_SASL_JAAS_CONFIG
  value: org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin-secret" user_admin="admin-secret" user_kafkaclient1="kafkaclient1-secret"; 


Client

Create a ConfigMap based on the sasl_client.properties file:
Shell
 
kubectl create configmap kafka-client --from-file sasl_client.properties -n kafka
kubectl describe configmaps -n kafka kafka-client

Output:
configmap/kafka-client created 
Name:         kafka-client 
Namespace:    kafka 
Labels:       <none> 
Annotations:  <none>  

Data 
==== 
sasl_client.properties: 
---- 
sasl.mechanism=PLAIN 
security.protocol=SASL_PLAINTEXT 
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="kafkaclient1" \   
password="kafkaclient1-secret";  

BinaryData 
====  

Events:  <none>


Mount the ConfigMap as a volume:
YAML
 
...
volumeMounts:
  - mountPath: /etc/kafka/secrets/
    name: kafka-client
...
volumes:
  - name: kafka-client
    configMap:
      name: kafka-client


Creating the Deployment

Clone the repo:
Shell
 
git clone https://github.com/rafaelmnatali/kafka-k8s.git
cd ssl

     

Deploy Kafka using the following commands:
Shell
 
kubectl apply -f 00-namespace.yaml
kubectl apply -f 01-kafka-local.yaml
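
The broker pods can take a minute or so to come up. A quick sanity check (the pod names follow the StatefulSet naming, kafka-0 through kafka-2; the output below is illustrative):
Shell
 
kubectl get pods -n kafka
NAME      READY   STATUS    RESTARTS   AGE
kafka-0   1/1     Running   0          2m
kafka-1   1/1     Running   0          2m
kafka-2   1/1     Running   0          2m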


Verify Communication Across Brokers

There should now be three Kafka brokers, each running on separate pods within your cluster. Name resolution for the headless service and the three pods within the StatefulSet is automatically configured by Kubernetes as they are created, allowing for communication across brokers. See the related documentation for more details on this feature.
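
If you want to see this name resolution in action, an optional check is to run a throwaway pod in the same namespace and look up the headless service, which should return one address per broker pod:
Shell
 
kubectl run -it --rm dns-check --image=busybox:1.36 --restart=Never -n kafka -- nslookup kafka-headless.kafka.svc.cluster.local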

You can check the first pod's logs with the following command:
Shell
 
kubectl logs kafka-0


Name resolution for the three pods can take longer to start working than the pods themselves take to start, so you may initially see UnknownHostException warnings in the pod logs:

Shell
 
WARN [RaftManager nodeId=2] Error connecting to node kafka-1.kafka-headless.kafka.svc.cluster.local:29093 (id: 1 rack: null) (org.apache.kafka.clients.NetworkClient) java.net.UnknownHostException: kafka-1.kafka-headless.kafka.svc.cluster.local



But eventually each pod will successfully resolve the pod hostnames, and the logs will show a message stating the broker has been unfenced:
Shell
 
INFO [Controller 0] Unfenced broker: UnfenceBrokerRecord(id=1, epoch=176) (org.apache.kafka.controller.ClusterControlManager)


Create a Topic Using the SASL_PLAINTEXT Endpoint 

The Kafka StatefulSet should now be up and running successfully. Now we can create a topic using the SASL_PLAINTEXT endpoint.

You can deploy the Kafka client using the following command:
Shell
 
kubectl apply -f 02-kafka-client.yaml
      




Check if the pod is running:
Shell
 
kubectl get pods
      




Output:
Shell
 
NAME        READY   STATUS    RESTARTS   AGE
kafka-cli   1/1     Running   0          12m


    


Connect to the pod kafka-cli:
Shell
 
kubectl exec -it kafka-cli -- bash




Create a topic named test-sasl with three partitions and a replication factor of 3:
Shell
 
kafka-topics --create --topic test-sasl --partitions 3 --replication-factor 3 --bootstrap-server ${BOOTSTRAP_SERVER} --command-config /etc/kafka/secrets/sasl_client.properties
Created topic test-sasl.
      


The environment variable BOOTSTRAP_SERVER contains the list of brokers, so we don't have to type it out with every command.
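
Given the advertised listeners configured earlier, BOOTSTRAP_SERVER is presumably set in the client manifest to something along these lines:
Shell
 
BOOTSTRAP_SERVER=kafka-0.kafka-headless.kafka.svc.cluster.local:9092,kafka-1.kafka-headless.kafka.svc.cluster.local:9092,kafka-2.kafka-headless.kafka.svc.cluster.local:9092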

List all the topics in Kafka:
Shell
 
kafka-topics --bootstrap-server ${BOOTSTRAP_SERVER} --list --command-config /etc/kafka/secrets/sasl_client.properties
test
test-sasl
test-ssl
test-test
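
To confirm that authenticated clients can actually produce and consume, you can also run the console producer and consumer from the same pod, pointing both at the SASL client properties file. This is not part of the repository walkthrough, just a quick verification:
Shell
 
kafka-console-producer --topic test-sasl --bootstrap-server ${BOOTSTRAP_SERVER} --producer.config /etc/kafka/secrets/sasl_client.properties

kafka-console-consumer --topic test-sasl --from-beginning --bootstrap-server ${BOOTSTRAP_SERVER} --consumer.config /etc/kafka/secrets/sasl_client.properties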
      


Summary and Next Steps

This tutorial showed you how to get Kafka running in KRaft mode on a Kubernetes cluster with SASL authentication. This is another step toward securing communication between clients and brokers, in addition to the SSL encryption discussed in this article. I invite you to keep studying and investigating how to improve security in your environment.

 

 

 

 
