Ephemeral storage bloat when submitting multiple jobs to a Flink session cluster
I'm trying to run a Flink session cluster and submit a lot of small jobs to it.
My jar is configurable: its parameters decide what processing should happen, where the input data is, and where the processed data should be written.
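Roughly, the job looks like the sketch below (simplified; the input/output paths and the `--mode` flag are placeholders I'm using for illustration):

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ConfigurableJob {
    public static void main(String[] args) throws Exception {
        // Per-submission configuration comes from the program arguments,
        // e.g. --input s3://bucket/in --output s3://bucket/out --mode uppercase
        ParameterTool params = ParameterTool.fromArgs(args);
        String input = params.getRequired("input");
        String output = params.getRequired("output");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Make the parameters visible in the web UI and to operators.
        env.getConfig().setGlobalJobParameters(params);

        env.readTextFile(input)                       // placeholder source path
           .map(value -> value.toUpperCase())         // stand-in for the configurable processing step
           .returns(Types.STRING)
           .writeAsText(output);                      // placeholder sink path

        env.execute("configurable-job-" + params.get("mode", "default"));
    }
}
```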
The problem I have is that when I try to submit more than 5 jobs at once, the cluster struggles until it fails.
The blob server takes gigabytes of storage for jobs that are very small and simple,
and the JobManager gets overloaded planning all the jobs, even though they are essentially the same job with different configurations.
Is there a better way to handle this with Flink? Or is Flink not the right tool for this?
Caused by: java.lang.ClassCastException: LinkedMap in instance of org.apache.flink.cdc.debezium.DebeziumSourceFunction
I'm trying to connect to MongoDB using Flink SQL. I created a Maven project and implemented the Java code; after building the jar and submitting it to the Flink cluster, I get the exception above.
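For context, the code is essentially a Flink SQL table backed by the MongoDB CDC connector, roughly like this sketch (host, credentials, database and collection are placeholders, and the table/column names are mine, so treat the concrete values as assumptions):

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class MongoCdcToFlinkSql {
    public static void main(String[] args) {
        // Plain Table API environment in streaming mode.
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Register a table backed by the mongodb-cdc connector.
        // Host, credentials, database and collection below are placeholders.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  `_id` STRING," +
                "  amount DOUBLE," +
                "  PRIMARY KEY (`_id`) NOT ENFORCED" +
                ") WITH (" +
                "  'connector' = 'mongodb-cdc'," +
                "  'hosts' = 'mongo-host:27017'," +
                "  'username' = 'flink'," +
                "  'password' = '***'," +
                "  'database' = 'shop'," +
                "  'collection' = 'orders'" +
                ")");

        // Simple query over the change stream; this is where the job fails on the cluster.
        tEnv.executeSql("SELECT `_id`, amount FROM orders").print();
    }
}
```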