How to handle Kafka connector failures in Flink?
In Apache Flink, how can I catch the exceptions thrown by Kafka Source Connector?
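By default, an exception thrown inside the Kafka source fails the task, and recovery is then governed by the job's restart strategy; it cannot be caught in user code downstream. Record-level failures (e.g. deserialization errors) can, however, be absorbed inside a custom DeserializationSchema that skips bad records instead of failing the job. A minimal sketch of that wrapper pattern in plain JDK types (class and names are illustrative; a real implementation would implement Flink's DeserializationSchema interface and delegate to something like this):

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch of the "skip poison pills" pattern used inside a custom
// Flink DeserializationSchema: wrap the real parser, catch failures,
// and return empty so the caller can drop the record instead of
// letting the exception fail the source task.
class SafeDeserializer<T> {
    private final Function<byte[], T> parser;

    SafeDeserializer(Function<byte[], T> parser) {
        this.parser = parser;
    }

    Optional<T> deserialize(byte[] message) {
        try {
            return Optional.ofNullable(parser.apply(message));
        } catch (RuntimeException e) {
            // A real DeserializationSchema would log and/or count this here.
            return Optional.empty();
        }
    }
}
```

Inside Flink, returning null (or emitting nothing via the Collector variant of deserialize) makes the source skip the record.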
Flink Kubernetes Operator pod resource requests and limits
I have not been able to find a way to set the resource requests and limits for pods while using Flink Kubernetes Operator v1.8.
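With the operator, pod resources are normally declared on the FlinkDeployment resource itself. A minimal sketch, assuming the v1beta1 CRD (names and values are illustrative; the operator derives the container requests from these values, and a podTemplate can be used when requests and limits need to differ):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: example-job          # name is illustrative
spec:
  flinkVersion: v1_18
  jobManager:
    resource:
      memory: "2048m"
      cpu: 1
  taskManager:
    resource:
      memory: "4096m"
      cpu: 2
```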
State is empty when I introduce new sources
I have a Flink application with:
The POJO class passes the test, but is reported as invalid during execution
I define a POJO as follows:
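For reference, Flink's POJO serializer requires a public class, a public no-argument constructor, and fields that are either public or exposed through matching getters and setters; a class that misses any of these at runtime falls back to Kryo or is reported as an invalid POJO even if a unit test against the fields passes. A minimal sketch of a conforming class (name and fields are illustrative):

```java
// Minimal Flink-compatible POJO sketch: public class, public no-arg
// constructor, and getter/setter pairs for every non-public field.
public class SensorReading {
    private String sensorId;
    private long timestamp;
    private double value;

    // Mandatory public no-arg constructor; without it the class is
    // not treated as a POJO by Flink's type extractor.
    public SensorReading() {}

    public SensorReading(String sensorId, long timestamp, double value) {
        this.sensorId = sensorId;
        this.timestamp = timestamp;
        this.value = value;
    }

    public String getSensorId() { return sensorId; }
    public void setSensorId(String sensorId) { this.sensorId = sensorId; }
    public long getTimestamp() { return timestamp; }
    public void setTimestamp(long timestamp) { this.timestamp = timestamp; }
    public double getValue() { return value; }
    public void setValue(double value) { this.value = value; }
}
```

Note that the getter/setter names must match the field names (getX/setX for field x), and generic or non-serializable field types can also disqualify the class.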
How to load a properties file in Flink
I know I can pass application parameters via args or -D, but I would like to have the configuration in a specified properties file and have different Flink applications use different properties files.
How does Apache Flink HA react if a new job version is deployed without stopping the previous job?
I have an application-mode Apache Flink installation deployed as a standalone Docker image on Kubernetes. The job runs in high availability with ZooKeeper.
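With ZooKeeper HA, leadership and job metadata are scoped by the cluster id: if a second job version is started against the same high-availability.cluster-id without stopping the first, the two JobManagers contend for the same leader latch and recovery state, so each deployment should normally get a distinct id. A flink-conf.yaml sketch (hosts and paths are illustrative):

```yaml
# flink-conf.yaml excerpt: ZooKeeper HA for an application-mode cluster
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-0:2181,zk-1:2181,zk-2:2181
high-availability.storageDir: s3://flink/ha          # durable storage for HA metadata
high-availability.cluster-id: /my-flink-app          # should be unique per deployment
```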
Is the savepoint directory enough to restart and resume an Apache Flink job?
In Apache Flink, when using the --fromSavepoint flag, do I have to specify the exact savepoint path, or is state.savepoints.dir, i.e. the root directory, enough?
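state.savepoints.dir is only the root under which new savepoints are written; to resume, --fromSavepoint (-s) needs the concrete savepoint directory, i.e. the one containing the _metadata file. A CLI sketch (bucket, savepoint name, job class, and jar are illustrative):

```shell
# Resuming requires the concrete savepoint directory, not the root dir.
flink run \
  -s s3://bucket/savepoints/savepoint-abc123-0123456789ab \
  -c com.example.MyJob myjob.jar
```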
How to initialise some code before flink task manager starts?
I need to set up a logging SDK before the TaskManager starts. That is easy with the JobManager, since the JM executes the main() method, which the TM does not. Is there a way I can run some code once at the start (and restart) of each TM?
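One common workaround is to trigger the setup from the open() method of a rich function and guard it so it runs only once per JVM; since one TaskManager is one JVM, this also covers TM restarts. A sketch of the guard with plain JDK types (class name is illustrative; in a real job, every operator's open() would call ensureInitialized):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Once-per-JVM initialization guard. Many operator instances may call
// ensureInitialized() concurrently from open(), but the setup runnable
// executes exactly once per TaskManager JVM.
class LoggingSdkBootstrap {
    private static final AtomicBoolean INITIALIZED = new AtomicBoolean(false);

    static void ensureInitialized(Runnable setup) {
        // compareAndSet makes the guard safe under concurrent open() calls
        if (INITIALIZED.compareAndSet(false, true)) {
            setup.run();
        }
    }
}
```

Alternatives outside user code include wrapping the container entrypoint script or using a JVM agent, at the cost of customizing the image.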