
Over the last few months Apache Kafka has gained a lot of traction in the industry, and more and more companies are exploring how to effectively use Kafka in their production environments. If you are not familiar with Apache Kafka, I recommend reading this excellent introduction. To give you a quick overview, here are the core principles:

- Kafka is run as a cluster on one or more servers.
- The Kafka cluster stores streams of records in categories called topics.
- Each record consists of a key, a value, and a timestamp.
- A producer publishes messages to one or many Kafka topics.
- Multiple consumers can work in tandem to form a consumer group (-> parallelization).

There are many tutorials on how to use Kafka within a Java environment. For this blog, however, we will take a different route and explore how to develop a simple Kafka producer and consumer combo within Node.js, while still leveraging the Avro data serialization system.

Setup

First we need to install a version of Kafka on our local system. To do so we download one of the binary packages from the Apache Kafka downloads page and extract it:

$ tar -xvf kafka_2.12-0.10.2.0.tgz

After we have successfully extracted the package we are ready to start up a ZooKeeper instance:

$ bin/zookeeper-server-start.sh config/zookeeper.properties
INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)

Now we can start our Kafka server:

$ bin/kafka-server-start.sh config/server.properties
INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)

Next we need to create a new topic for our tests:

$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic node-test

Let's quickly verify that our topic was created correctly:

$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic node-test
Topic:node-test  PartitionCount:1  ReplicationFactor:1  Configs:

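Since this post will lean on the Avro data serialization system later, it helps to see the core idea in miniature: producer and consumer agree on a schema up front, so only the values (not field names) travel over the wire in a compact binary form. The stdlib-only sketch below is a rough stand-in for that idea, not Avro's actual wire format; in real code you would use an Avro library for Node.js (for example `avsc`, an assumption here since no library has been introduced yet).

```javascript
// Rough stand-in for schema-based binary serialization (what Avro provides
// properly). NOT the Avro wire format -- just the idea: both sides share a
// schema, so the payload carries only length-prefixed values.
const schema = ['id', 'name']; // agreed field order; string fields only

function encode(record) {
  const parts = schema.map(field => {
    const value = Buffer.from(String(record[field]), 'utf8');
    const len = Buffer.alloc(4);
    len.writeUInt32BE(value.length, 0);
    return Buffer.concat([len, value]);
  });
  return Buffer.concat(parts);
}

function decode(buf) {
  const record = {};
  let offset = 0;
  for (const field of schema) {
    const len = buf.readUInt32BE(offset);
    offset += 4;
    record[field] = buf.toString('utf8', offset, offset + len);
    offset += len;
  }
  return record;
}

const roundTripped = decode(encode({ id: '42', name: 'kafka' }));
console.log(roundTripped.id, roundTripped.name); // 42 kafka
```

Real Avro adds what this sketch lacks: typed fields, schema evolution rules, and a standard binary encoding that any consumer with the schema can decode.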