1-) Introduction
This article reflects my recent experience connecting an IRIS Business Operation to a secure Kafka Server, using SSL tunnels to encrypt the communications and SASL (Simple Authentication and Security Layer) authentication with SCRAM-SHA-512 password hashing.
2-) Background Information
Kafka implementations can be made extremely secure by using encryption and ACLs (Access Control Lists) to control access to topics and other resources in the cluster.
2.1-) SSL Encryption
The first part, encryption, can be easily implemented by configuring SSL on the Kafka Listeners.
The instructions and details can be found at this link: https://kafka.apache.org/41/security/encryption-and-authentication-using-ssl/
From the link, it is important to note that Kafka can implement Client Authentication through mutual authentication of the SSL tunnel (mTLS), and the Listener may be configured to require it.
However, to implement SASL (Simple Authentication and Security Layer) authentication with SCRAM-SHA-512 (or SCRAM-SHA-256) hashing, the Listener must not require SSL Client Authentication.
To stop the SSL Listener from requiring Client Authentication, it is necessary to change the Kafka Server configuration:
ssl.client.auth=requested
ssl.endpoint.identification.algorithm=
With ssl.client.auth=required, any client trying to connect to Kafka has to present a digital certificate to authenticate; requested makes the certificate optional. Although extremely secure, SSL authentication of clients carries the overhead of issuing and managing the expiration of a digital certificate for each individual client that is going to connect. Most customers prefer not to deal with this overhead and instead configure Kafka to use SASL.
The parameter ssl.endpoint.identification.algorithm must be set to an empty string, which disables hostname verification of the certificates used in the SSL tunnel.
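For reference, here is a minimal sketch of the SSL-related part of a broker's server.properties, assuming the KeyStore and TrustStore were created as described in the Kafka documentation (the file paths and passwords below are placeholders, not values from my environment):

# SSL stores used by the broker for the encrypted Listener
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=<keystore-passwd>
ssl.key.password=<key-passwd>
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=<truststore-passwd>
# Client certificates are requested but not required; hostname verification disabled
ssl.client.auth=requested
ssl.endpoint.identification.algorithm=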
2.2-) SASL Authentication
To enable SASL, the required information can be found at this link: https://kafka.apache.org/41/security/authentication-using-sasl/
On my test environment I configured a Listener with the SASL_SSL security protocol on Kafka, using the SCRAM-SHA-512 mechanism for password hashing.
listeners=SASL_SSL://:9093
advertised.listeners=SASL_SSL://<kafka-hostname>:9093
sasl.enabled.mechanisms=SCRAM-SHA-512
My test environment has three Kafka Servers in the cluster, and this configuration needs to be present on all of them.
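Besides the Listener definition, each broker also needs a JAAS configuration for the SCRAM mechanism on that Listener. Here is a minimal sketch in server.properties, assuming the broker user from the super.users list in section 2.3 below is also used for inter-broker connections (the password is a placeholder):

# JAAS configuration for the SCRAM Listener; username/password are the broker's
# own credentials, used when this Listener also carries inter-broker traffic
listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="broker" \
    password="<broker-passwd>";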
2.3-) Authorizer and ACLs
To enable ACLs an Authorizer needs to be configured on each Broker and Controller of the Kafka cluster.
The relevant information can be found at this link: https://kafka.apache.org/41/security/authorization-and-acls/
My tests were executed on a cluster in KRaft mode, so I had to add the following properties to the configuration files of the Kafka Brokers and Controllers:
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:broker;User:client
The authorizer.class.name points to the default Authorizer that is shipped with Kafka.
The allow.everyone.if.no.acl.found=true is a permissive setting that allows any authenticated user to access a resource that has no specific ACL attached to it. The default value of this parameter is false, which denies access to resources without a specific ACL.
The super.users=User:broker;User:client is a list of Principals (aka Users) that are considered Super Users and are allowed to access any resource.
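With the Authorizer in place, ACLs can be attached to individual resources using the Kafka CLI. As an illustration, here is a hedged example granting a user produce access to a topic (the principal and topic names are placeholders):

$ kafka-acls.sh --bootstrap-server <Server URI> \
    --command-config <Config File for Connection> \
    --add --allow-principal User:<USER> \
    --producer --topic <TOPIC>

The --producer convenience option adds the Write, Describe, and Create ACLs needed to publish to the topic.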
2.4-) Create SCRAM Credentials
It is also important to create the SASL credentials on the Kafka cluster for each user that is going to authenticate, by using the Kafka CLI:
$ kafka-configs.sh --bootstrap-server <Server URI> \
--alter --add-config 'SCRAM-SHA-512=[password=<passwd>]' \
--entity-type users --entity-name <USER> \
--command-config <Config File for Connection>
It is important to create the credentials with the same SCRAM mechanism that is configured on the Listener. In my tests I configured the Listener with SCRAM-SHA-512, but it is possible to use SCRAM-SHA-256 instead.
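The <Config File for Connection> passed to --command-config is a standard Kafka client properties file. A minimal sketch, assuming an admin user that already has SCRAM credentials and a local copy of the cluster TrustStore (paths and passwords are placeholders):

# Connection security for the admin client
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="<USER>" \
    password="<passwd>";
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=<truststore-passwd>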
You can verify that the SCRAM Credentials are created on the cluster by running this Kafka CLI:
$ kafka-configs.sh --bootstrap-server <Server URI> \
--describe \
--entity-type users \
--command-config <Config File for Connection>
Each User (aka Principal) with associated SASL credentials will be listed in the output, like the one below:
SCRAM credential configs for user-principal '<USER>' are SCRAM-SHA-512=iterations=4096
At this point, we have done all the configuration required on the Kafka side.
3-) Configuring IRIS
The tests were executed using an IRIS version soon to be GA, 2025.1.3 (Build 457U), which includes some enhancements necessary to use advanced Kafka security options like SASL SCRAM-SHA-512 or SCRAM-SHA-256.
To test this configuration, I created a simple Business Operation in an IRIS Production.
3.1-) Create a Production
Create a new Interoperability Production in your Namespace; if necessary, first create a Namespace that is capable of running Interoperability Productions:

Create a new Production, and select the options to enable Testing and Trace Events:

3.2-) Create a Simple Business Operation
Create a new Business Operation using the out-of-the-box class EnsLib.Kafka.Operation. I named mine “Kafka Producer”. Select the “Enable Now” option.

On the Business Operation Settings make the necessary changes to the Kafka Settings section:

The above example reflects my test environment; make the necessary adjustments to reflect your environment.
- On the Servers setting, change the broker hostname and port number to point to a Kafka Listener that has the SASL_SSL security protocol enabled;
- Credentials points to the Interoperability Credential that holds the userID and password of the user created on the Kafka broker with SCRAM credentials (see details in section 3.3 below);
- SecurityProtocol has to be set to SASL_SSL;
- SASLMechanism is set to SCRAM-SHA-512, or to the mechanism that is configured on your environment;
- TrustStoreLocation and KeyStoreLocation point to the files containing the SSL TrustStore and KeyStore that will be used to create the SSL tunnel. These files are created on the Kafka Server and should be copied over in JKS or PKCS12 (p12) format;
- TrustStoreCredentials, KeyStoreCredentials, and KeyCredentials point to the Interoperability Credentials that hold the passwords to access the TrustStore, KeyStore, and Key files (see below);
3.3-) Create the Credentials
On the Credentials page:

Create two credentials:
The first credential is used to authenticate on the Kafka Broker with the SCRAM credentials created in step 2.4 above.

The second credential holds the password used to access the TrustStore, KeyStore, and Key files.

In my case all files have the same password, so a single credential opens all of them.
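If you prefer scripting over the Portal, the same credentials can also be created programmatically; a minimal ObjectScript sketch, assuming the SetCredential() classmethod of Ens.Config.Credentials (the IDs, user names, and passwords below are placeholders):

// Run in the Production's Namespace; the final 1 overwrites an existing entry
Set sc = ##class(Ens.Config.Credentials).SetCredential("KafkaSCRAM", "<USER>", "<passwd>", 1)
Set sc = ##class(Ens.Config.Credentials).SetCredential("KafkaStores", "stores", "<store-passwd>", 1)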
3.4-) Sending a Test Message
Now we can test the configuration settings and send a Test Message using the Test function of the Business Operation.

If everything is configured properly, you should see the Test Results almost instantly. If the Test Results progress bar takes a while to complete, that is an indication that some misconfiguration has occurred, and you should double-check your settings.
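The same test can also be driven from an IRIS terminal session; a minimal ObjectScript sketch, assuming the Business Operation is named “Kafka Producer” as above and that the message properties follow EnsLib.Kafka.Message (the topic name is a placeholder):

// Build the Kafka message and send it through the testing service
Set msg = ##class(EnsLib.Kafka.Message).%New()
Set msg.topic = "<TOPIC>"
Set msg.value = "Hello from IRIS"
// The final 1 requests a synchronous reply so errors surface immediately
Set sc = ##class(EnsLib.Testing.Service).SendTestRequest("Kafka Producer", msg, .resp, .sessionId, 1)
If $System.Status.IsError(sc) { Do $System.Status.DisplayError(sc) }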

Clicking on the Visual Trace shows the results:

And if you have a Kafka UI available, you can see the message on the Topic, or you can consume the messages using the Kafka CLI:
$ kafka-console-consumer.sh \
--bootstrap-server <Server URI> \
--topic <TOPIC> --from-beginning \
--consumer.config <consumer.config.properties>
The above command shows all messages on the Topic <TOPIC> and runs until it is interrupted with Ctrl-C.
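The <consumer.config.properties> file follows the same client properties format as the --command-config file sketched in section 2.4: security.protocol, sasl.mechanism, sasl.jaas.config, and the TrustStore settings.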
This concludes this quick test.
Best of luck!