Kafka: InvalidGroupIdException when manually assigning partition to consumer
I am trying to manually assign partitions to a KafkaListener in Spring Boot in order to avoid using group management and the rebalancing that comes with it.
Relevant code:
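For reference, manual assignment with `@KafkaListener` is usually done through the `topicPartitions` attribute, and an `InvalidGroupIdException` typically means offsets are being committed while no `group.id` is configured. A minimal configuration sketch, assuming spring-kafka is on the classpath (topic name, partitions, and listener id are placeholders):

```java
// Manually assigned partitions — no group management takes place.
// Topic name, partition list and listener id are placeholders.
@KafkaListener(
        id = "manualListener",
        topicPartitions = @TopicPartition(topic = "my-topic", partitions = {"0", "1"}))
public void listen(String message) {
    // Even with manual assignment, committing offsets still requires a
    // group.id (spring.kafka.consumer.group-id); either set one or disable
    // auto-commit and container offset commits.
    System.out.println(message);
}
```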
How to configure HTTP version used by Spring Kafka Consumer
I am having an issue with my Kafka consumer in Kotlin: upon message consumption using @KafkaListener, the following error occurs. I am struggling to understand why the error occurs and where I need to make a change.
Getting ProducerFencedException during producer.send in Springboot application during Kafka Split-brain test
We have a Spring Boot application deployed on DC1 that connects to a Kafka cluster with its leader in DC2. There was a 9-second connection outage for Kafka on DC1, during which the application was unable to produce messages using the Spring Kafka transactional producer. After 9 seconds, the Kafka connection was restored, and a new leader election took place. However, the Spring Kafka transactional producer still couldn’t publish messages and encountered a ProducerFencedException. It took around 60 minutes for the application to recover.
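ProducerFencedException is fatal for the current producer instance: the client cannot keep using the same transactional producer, which is consistent with the application staying broken until it was restarted. One commonly used pattern is to treat the exception as non-retriable for that instance, reset the producer factory (Spring's `DefaultKafkaProducerFactory` exposes a `reset()` method that closes cached producers), and retry with a fresh one. A library-independent sketch of that pattern, using a generic exception type as a stand-in for `ProducerFencedException`:

```java
import java.util.function.Supplier;

public class FencedRetry {
    // Runs a transactional operation; if it fails with a fatal,
    // non-retriable error (ProducerFencedException in the real app),
    // resets the producer via the supplied callback and retries once
    // so a fresh producer with a new epoch is used.
    static <T> T runWithReset(Supplier<T> op,
                              Runnable resetProducer,
                              Class<? extends RuntimeException> fatal) {
        try {
            return op.get();
        } catch (RuntimeException ex) {
            if (!fatal.isInstance(ex)) {
                throw ex;               // unrelated error: propagate
            }
            resetProducer.run();        // e.g. DefaultKafkaProducerFactory#reset()
            return op.get();            // retry once with the recreated producer
        }
    }
}
```

In the real application, `op` would wrap the `KafkaTemplate` transactional send and `resetProducer` would call the producer factory's reset; both are abstracted here so the pattern is testable without a broker.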
Kafka broker down
We are experiencing the following scenario: we have a simple Kafka producer application that reads from RabbitMQ and, using KafkaTemplate, produces messages on a Kafka topic. Our producer can no longer deliver messages to the Kafka brokers when one of the brokers goes down for any reason. You would expect the producer to deliver to another broker (we have 5 in the cluster), but this does not work: the producer keeps waiting for the broker to come back. We have also noticed that even after the broker returns, the connection is not restored. To resolve this, we have to manually restart the producer application.
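Two things are worth checking in a setup like this: that the topic's replication factor is greater than 1 (a partition whose only replica lives on the dead broker cannot be served by any other broker), and the producer's timeout and reconnect settings, since with defaults a send can block on metadata for a long time. A hedged sketch of the Spring Boot properties to review; the values shown are illustrative, not recommendations:

```properties
# List every broker so metadata can still be fetched when one is down
spring.kafka.bootstrap-servers=broker1:9092,broker2:9092,broker3:9092,broker4:9092,broker5:9092
# Upper bound on a send, including retries (client default is 120000)
spring.kafka.producer.properties.delivery.timeout.ms=120000
# Per-request timeout before the client gives up on a broker and retries
spring.kafka.producer.properties.request.timeout.ms=30000
# Cap the backoff between reconnect attempts to a failed broker
spring.kafka.producer.properties.reconnect.backoff.max.ms=10000
```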
Springboot connect Kafka get error “No resolvable bootstrap urls given in bootstrap.servers”
I have a Spring Boot project and am trying to connect to Kafka. My Kafka server is running on CentOS 7 rather than in Docker.
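For context, "No resolvable bootstrap urls given in bootstrap.servers" means the hostnames listed in `bootstrap.servers` cannot be resolved by DNS from the client machine. A minimal sketch of the property; the host and port below are placeholders:

```properties
# application.properties — the host must resolve from the machine running
# the app: use the CentOS server's IP, or a hostname known via DNS or /etc/hosts
spring.kafka.bootstrap-servers=192.168.1.100:9092
```

Even once the bootstrap host resolves, the broker's `advertised.listeners` must also point at an address the client can reach, since the client reconnects using the advertised address after the initial bootstrap.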
What happens to a Kafka consumer processing a batch of messages during a rebalance?
I’m working with Apache Kafka in a Spring Boot application and I’m trying to understand the behavior of a Kafka consumer when it is processing a batch of messages and a rebalance occurs.
SSLHandshakeException while running Kafka server in docker container
Below is my Docker Compose file for the Kafka container.
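As a point of comparison, an SSL listener on the `confluentinc/cp-kafka` image is usually configured with environment variables along these lines; the filenames and credential files are placeholders, and the keystore's certificate must match the hostname clients connect with, or the handshake fails:

```yaml
# Sketch of the SSL-related service config, assuming the confluentinc/cp-kafka image
environment:
  KAFKA_LISTENERS: SSL://0.0.0.0:9093
  KAFKA_ADVERTISED_LISTENERS: SSL://kafka:9093
  KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks      # expected under /etc/kafka/secrets
  KAFKA_SSL_KEYSTORE_CREDENTIALS: keystore_creds
  KAFKA_SSL_KEY_CREDENTIALS: key_creds
  KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
  KAFKA_SSL_TRUSTSTORE_CREDENTIALS: truststore_creds
volumes:
  - ./secrets:/etc/kafka/secrets
```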
Spring Kafka RequestReply ListenableFuture Threads Park Issue
Application Overview
How to do exception handling in spring-kafka producer
I have a service which acts as a Kafka producer. It receives a complex object via a REST endpoint and then splits and produces it on two Kafka topics. For the second topic, we extract messages from a list and send each message to the topic. We want to make sure that if any message from the list fails, it does not stop the processing of the remaining messages; failed messages should be logged and possibly sent to a DLT.
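The per-message pattern described above (keep sending when one message fails, collect the failures for logging or a DLT) can be sketched in plain Java with `CompletableFuture`, which is what `KafkaTemplate.send` returns in recent Spring Kafka versions. The send function is abstracted as a parameter so the sketch runs without a broker; in the real service it would be `msg -> kafkaTemplate.send(topic, msg)`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

public class BatchSender {
    // Sends every message in the list; a failed send does not stop the rest.
    // Returns the messages whose send completed exceptionally, so the caller
    // can log them or forward them to a dead-letter topic.
    static List<String> sendAll(List<String> messages,
                                Function<String, CompletableFuture<?>> send) {
        List<String> failed = new ArrayList<>();
        List<CompletableFuture<?>> pending = new ArrayList<>();
        for (String msg : messages) {
            CompletableFuture<?> f = send.apply(msg)
                .whenComplete((result, ex) -> {
                    if (ex != null) {
                        synchronized (failed) { failed.add(msg); }  // record, don't abort
                    }
                });
            pending.add(f);
        }
        // Wait for every send to settle before reporting failures.
        for (CompletableFuture<?> f : pending) {
            try { f.join(); } catch (RuntimeException ignored) { }
        }
        return failed;
    }
}
```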
Exception in thread “main” org.apache.kafka.common.errors.SaslAuthenticationException: Authentication failed
I am trying to learn Confluent Kafka in my project. We have an API key and secret, but I am not able to use them when trying to read messages from a Kafka topic. I went through the link https://docs.confluent.io/cloud/current/access-management/authenticate/api-keys/best-practices-api-keys.html, but it's not clear how to set them in the Java consumer code, and I am also getting the error below.
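For reference, a Confluent Cloud API key and secret are normally wired in through SASL/PLAIN client properties, with the key as the username and the secret as the password. A sketch of the consumer configuration; the bootstrap server, group id, and deserializers below are placeholder assumptions:

```java
import java.util.Properties;

public class ConfluentAuth {
    // Builds consumer properties for a SASL_SSL endpoint such as
    // Confluent Cloud. The bootstrap server and group id are placeholders.
    static Properties consumerProps(String apiKey, String apiSecret) {
        Properties props = new Properties();
        props.put("bootstrap.servers",
                  "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092");  // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // API key as username, API secret as password; note the trailing ';'
        props.put("sasl.jaas.config",
                  "org.apache.kafka.common.security.plain.PlainLoginModule required"
                  + " username=\"" + apiKey + "\" password=\"" + apiSecret + "\";");
        props.put("group.id", "demo-group");                         // placeholder
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```

These properties would then be passed to `new KafkaConsumer<>(props)`; an `SaslAuthenticationException` at that point usually means the key/secret pair is wrong, revoked, or scoped to a different cluster.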