Consume and produce data to Apache Kafka using the CLI
Introduction
In this series of articles we will see the different methods that we can use to produce data to a topic and to consume it. We will start by setting up a local environment using Docker and docker-compose. Once the Kafka ecosystem is ready, we will create a topic, then produce some data and consume it via the CLI.
Local environment
Using the following docker-compose file, we will be able to start a local environment that contains a broker and ZooKeeper.
docker-compose.yml
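A minimal single-broker setup, assuming the Confluent Platform images, could look like this (image tags, ports and the advertised listener are illustrative and may need adapting):

```yaml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  broker:
    image: confluentinc/cp-kafka:7.4.0
    depends_on:
      - zookeeper
    ports:
      # expose the broker on localhost:9092 for the CLI running on the host
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      # required when running a single broker
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```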
Start the local environment by executing this command:
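```bash
# start ZooKeeper and the broker in the background
docker-compose up -d
```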
Check if the environment is ready:
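```bash
# one way to check: list the containers and make sure both are in the "Up" state
docker-compose ps
```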
Set up the Kafka CLI
In order to interact with the Kafka broker, Apache Kafka provides a set of client CLI scripts, which ship with the Kafka distribution.
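One way to get them is to download and extract the official binaries (the version and mirror below are only examples):

```bash
# download and extract the Kafka binaries (version shown is just an example)
curl -O https://downloads.apache.org/kafka/3.7.0/kafka_2.13-3.7.0.tgz
tar -xzf kafka_2.13-3.7.0.tgz
```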
Add this export to your shell profile; it will allow you to execute the Kafka scripts from any location on the system.
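Assuming Kafka was extracted into the home directory, the export could look like this:

```bash
# make the Kafka CLI scripts callable from anywhere (path is an example)
export PATH="$PATH:$HOME/kafka_2.13-3.7.0/bin"
```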
Create the topic
In order to create the topic, we will need to use the kafka-topics.sh command and set the required parameters:
- localhost:9092 - the broker address
- newTopic - the topic name
- 3 - the number of partitions for the topic
- 1 - the replication factor for the topic; in our environment we have only one broker
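Putting these parameters together (assuming a Kafka version recent enough to accept --bootstrap-server), the command looks like this:

```bash
# create the topic newTopic with 3 partitions and a replication factor of 1
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --topic newTopic \
  --partitions 3 \
  --replication-factor 1
```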
To check the creation of the topic, we will use the previous command with the --list option.
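For example:

```bash
# list all topics known to the broker; newTopic should appear in the output
kafka-topics.sh --list --bootstrap-server localhost:9092
```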
To have more details about the topic that has been created, we can pass the --describe option to kafka-topics.sh.
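With our topic, the command becomes:

```bash
# show the partitions, leader and replicas of newTopic
kafka-topics.sh --describe --topic newTopic --bootstrap-server localhost:9092
```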
The output of the command gives us the number of partitions and the replicas. In our case we are using only one broker instance, which explains why there is a single replica.
In a future article we can discuss a setup that contains a cluster with multiple brokers.
Produce the data
Now that we have created our topic in the broker and described its configuration, we can start producing messages.
In order to send messages to our topic, we will use kafka-console-producer.sh. An interactive prompt will be shown and we can start writing our messages.
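The producer is started by pointing it at the broker and the topic:

```bash
# open an interactive prompt; every line ended with ENTER is sent as a message
kafka-console-producer.sh --topic newTopic --bootstrap-server localhost:9092
```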
To confirm the sending of a message, we need to hit ENTER and continue. Once we have finished, we can quit the process using CTRL+C.
DETAIL: We can start producing data without creating the topic first. A question we can ask ourselves is the following: why did we take the time to create the topic before sending the messages? In production environments, auto-creation of topics is usually disabled: organizations prefer to keep control and approve the creation of topics. If we want this behavior in our setup, we can set this environment variable in our docker-compose.yml:
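```yaml
  broker:
    environment:
      # the Confluent image maps this variable to the broker's auto.create.topics.enable
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
```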
To make the production of data more interesting, we can use the Meetup streaming API, which exposes an endpoint that streams open events. We need to execute a command that gives us a continuous flow from the Meetup API into our topic.
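A sketch of that pipeline, assuming the historical open-events streaming endpoint of the Meetup API (the exact URL is an assumption and may no longer be available), could be:

```bash
# stream open events from the Meetup API (endpoint URL is assumed / historical),
# keep only a few fields with jq and pipe each JSON line into our topic
curl -s http://stream.meetup.com/2/open_events \
  | jq --unbuffered -c '{id: .id, event_url: .event_url, name: .name}' \
  | kafka-console-producer.sh --topic newTopic --bootstrap-server localhost:9092
```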
We execute a GET request via curl on the Meetup API and we pipe the result to jq to map the output and get the following fields:
- id
- event_url
- name
For each message produced to the topic, a > sign will be printed to the terminal. We keep this command running in a tab to feed our topic. We will end up with JSON objects in our topic that look like the following structure:
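```json
{
  "id": "276584632",
  "event_url": "https://www.meetup.com/some-group/events/276584632/",
  "name": "Apache Kafka for beginners"
}
```

The values above are only illustrative; the real events carry the actual id, event_url and name returned by the Meetup API.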
Consume the data
In this section we will spawn a new terminal to consume the data that has been produced previously. To do so, we will need the kafka-console-consumer.sh binary and set the required parameters to start the consumption.
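To read everything that has already been produced, we can start from the beginning of the topic:

```bash
# consume all messages of newTopic from the beginning and keep listening
kafka-console-consumer.sh --topic newTopic --from-beginning --bootstrap-server localhost:9092
```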
We can play with this command to consume a fixed number of messages from a specific partition, starting at a specific offset, before exiting:
- --offset: rewind the process to the specified offset
- --partition: consume from this specific partition
- --max-messages: total number of messages to consume before exiting the process
--offset accepts an integer, as well as earliest to start from the beginning or latest, which is the default value and means consuming from the end.
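Putting these options together (the partition, offset and number of messages below are arbitrary example values; note that --offset requires --partition to be set):

```bash
# read 5 messages from partition 0 of newTopic, starting at offset 10, then exit
kafka-console-consumer.sh --topic newTopic \
  --bootstrap-server localhost:9092 \
  --partition 0 \
  --offset 10 \
  --max-messages 5
```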
To get the status of each partition, we can execute a command like the following:
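```bash
# one option is Kafka's GetOffsetShell tool; --time -1 asks for the latest offsets
kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 \
  --topic newTopic \
  --time -1
```

Each output line has the form topic:partition:offset.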
We can clearly see that our topic has 3 partitions (0, 1 and 2) and, for each partition, the last offset that has been reached:
- partition 0 offset 34
- partition 1 offset 41
- partition 2 offset 36
Conclusion
As you have seen, putting a local Kafka environment in place is accessible to every developer who is interested in getting into the streaming world. In a few minutes, we managed to set up the cluster and start producing and consuming data.
For testing purposes, working with the CLI is fine and allows us to prototype quickly. But for production applications, the preferred way of producing/consuming data is via a programming language or using a Kafka connector. In the next article we will discuss how to implement this via the Java SDK.
Stay tuned ✌!