How to find Kafka broker username and password
I’m using Kafka 3.0.0 and following a tutorial to run this command:
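Worth noting: Kafka ships with no built-in username or password; any credentials come from whoever configured SASL on the broker. A minimal sketch, assuming the tutorial's broker uses SASL/PLAIN (the "admin"/"admin-secret" values below are placeholders defined in the broker's JAAS config, not real defaults):

```bash
# Broker side, credentials are defined in the listener's JAAS config, e.g.:
#   listener.name.sasl_plaintext.plain.sasl.jaas.config=\
#     org.apache.kafka.common.security.plain.PlainLoginModule required \
#     username="admin" password="admin-secret" \
#     user_admin="admin-secret";

# Client side: put matching credentials in a properties file...
cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="admin" \
  password="admin-secret";
EOF

# ...and pass it to the CLI tools:
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --command-config client.properties --list
```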
Apache Kafka with KRaft cluster: connection not established
I have set up a 3-node Kafka KRaft cluster with each node acting as both a broker and a controller. However, when I check the metadata quorum status, each node seems to act as a separate leader instead of forming a single consistent cluster. Below are the details of my setup and configuration:
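A common cause of "each node is its own leader" is that every node was formatted with a different cluster ID, or that controller.quorum.voters differs between nodes. A minimal sketch of the shared setup (host names, node IDs, and paths are assumptions):

```bash
# server.properties on EVERY node must list the same voter set:
#   process.roles=broker,controller
#   node.id=1                      # unique per node: 1, 2, 3
#   controller.quorum.voters=1@kafka1:9093,2@kafka2:9093,3@kafka3:9093
#   controller.listener.names=CONTROLLER

# Generate ONE cluster ID and format ALL three nodes with it:
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
echo "$KAFKA_CLUSTER_ID"   # copy this to the other two machines
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties

# Verify the three nodes form a single quorum:
bin/kafka-metadata-quorum.sh --bootstrap-server kafka1:9092 describe --status
```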
Kafka error org.apache.kafka.common.errors.NotLeaderOrFollowerException (NOT_LEADER_OR_FOLLOWER) only on one of two k8s clusters
tl;dr
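For context: NOT_LEADER_OR_FOLLOWER generally means the client sent a produce or fetch request to a broker that is not (or is no longer) the leader for that partition, typically because of stale metadata during a leader election. A quick way to check whether the affected cluster is mid-election (topic name is a placeholder):

```bash
# Healthy output shows a Leader for every partition; "Leader: none"
# or a shrinking Isr suggests broker/controller trouble on that cluster.
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic my-topic
```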
Proper way to send a JSON-with-schema message via kafka-console-producer
I want to test the Kafka/Connect/Schema Registry configuration I have set up locally with docker-compose. Initially, I set up a Connect instance with an S3 sink plugin that writes incoming JSON messages to S3 in Avro format. I was able to send messages with the format
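When Connect uses org.apache.kafka.connect.json.JsonConverter with schemas.enable=true, each message must be a single-line JSON envelope with a "schema" and a "payload" field. A sketch of producing one such message (topic name and fields are placeholders):

```bash
bin/kafka-console-producer.sh --bootstrap-server localhost:9092 \
  --topic my-json-topic <<'EOF'
{"schema":{"type":"struct","fields":[{"field":"id","type":"int32"},{"field":"name","type":"string"}],"optional":false,"name":"record"},"payload":{"id":1,"name":"alice"}}
EOF
```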
In Kafka, with an idempotent producer, is there a possibility of gaps in a topic partition?
Assume a Kafka producer application (C++, librdkafka, with enable.idempotence=true) which produces messages 1, 2, 3, 4 and 5 in order for a single topic partition.
Is there a possibility of Kafka writing (i.e., for a consumer when reading) these messages with a gap?
E.g., is there a possibility of Kafka writing 1, 2, 5 (i.e., when the consumer reads, they get 1, 2, 5), with 3 and 4 missing?
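In short: with idempotence on, the broker tracks a per-producer sequence number for each partition and rejects out-of-order sequences with OutOfOrderSequenceException rather than persisting 5 before 3 and 4, so a consumer should not see such a gap unless the application itself gave up on 3 and 4 after a failed delivery report. One way to verify at the broker is to dump a log segment and inspect the sequence numbers (path below is a placeholder):

```bash
# Each batch line includes producerId, producerEpoch and baseSequence;
# for an idempotent producer these sequences are contiguous per partition.
bin/kafka-dump-log.sh \
  --files /var/lib/kafka/data/my-topic-0/00000000000000000000.log \
  --print-data-log
```

Note that *offsets* can still be non-contiguous (e.g., transaction markers or compaction), but that is not the same as missing messages.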
Kafka 3.7.1 using KRaft with SSL: cannot register yet because the metadata version is still 3.0-IV1
I am trying to set up a 3-machine Kafka cluster.
I'm currently running Kafka 3.7.1 with Scala 2.13 on premises (downloading the tgz from apache.downloads.com).
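"Cannot register yet because the metadata version is still 3.0-IV1" typically means the cluster's metadata.version was never raised past the KRaft baseline, so a 3.7.1 broker refuses to register. A sketch of two possible remedies (flags as I recall them; verify against your release's --help):

```bash
# 1) On an existing cluster, bump the metadata version in place:
bin/kafka-features.sh --bootstrap-server kafka1:9092 \
  upgrade --metadata 3.7

# 2) Or, when (re)formatting fresh storage, pin the version up front:
bin/kafka-storage.sh format -t "$(bin/kafka-storage.sh random-uuid)" \
  -c config/server.properties --release-version 3.7
```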
Kafka compact cleanup policy, and a config that could clean up when a segment gets too big
We have a Kafka topic set with a cleanup.policy of compact. We currently have segment.bytes set to 1GB.
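Two points that may be relevant here: with cleanup.policy=compact alone, segments are compacted but never deleted by size, whereas cleanup.policy=compact,delete enables both; and the active segment is never compacted, so a 1GB segment.bytes delays compaction on low-traffic topics. A sketch of combining the two and rolling segments sooner (topic name and sizes are placeholders):

```bash
# Square brackets group the comma-separated list value for kafka-configs.sh.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-compacted-topic --alter \
  --add-config 'cleanup.policy=[compact,delete],segment.bytes=104857600,retention.bytes=10737418240'
```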