Failed to submit job to Flink standalone ZooKeeper HA cluster
I have a standalone Flink cluster with ZooKeeper HA.
First I restarted the ZooKeeper cluster.
Then I submitted 31 jobs from the command line with ./bin/flink run.
After that I caught an exception: specifically, 4 of the jobs failed to submit.
Flink: dynamically adjust the window size based on the completed window
My idea is to keep a parameter whose value is adjusted from the data of each completed window, and to use that value to set the size of the next window. How should I implement this?
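One way to sketch this (not the only way) is with a KeyedProcessFunction that buffers the current "window" in state, fires an event-time timer at the window end, and, when the timer fires, derives the next window size from the completed window's data. The adjustment rule and the element/output types below are hypothetical placeholders, and it assumes event-time timestamps/watermarks are assigned upstream.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class AdaptiveWindowFunction extends KeyedProcessFunction<String, Long, String> {

    private transient ValueState<Long> nextWindowSizeMs; // the parameter adjusted per window
    private transient ValueState<Long> windowEnd;        // end timestamp of the open window
    private transient ListState<Long> buffer;            // elements of the open window

    @Override
    public void open(Configuration parameters) {
        nextWindowSizeMs = getRuntimeContext().getState(
                new ValueStateDescriptor<>("nextWindowSizeMs", Long.class));
        windowEnd = getRuntimeContext().getState(
                new ValueStateDescriptor<>("windowEnd", Long.class));
        buffer = getRuntimeContext().getListState(
                new ListStateDescriptor<>("buffer", Long.class));
    }

    @Override
    public void processElement(Long value, Context ctx, Collector<String> out) throws Exception {
        if (windowEnd.value() == null) {
            // open a new window using the size derived from the previous window
            long size = nextWindowSizeMs.value() == null ? 60_000L : nextWindowSizeMs.value();
            long end = ctx.timestamp() + size;
            windowEnd.update(end);
            ctx.timerService().registerEventTimeTimer(end);
        }
        buffer.add(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        List<Long> elements = new ArrayList<>();
        buffer.get().forEach(elements::add);

        // emit the completed window
        out.collect(ctx.getCurrentKey() + ": window with " + elements.size() + " elements");

        // hypothetical adjustment rule: shrink the next window if this one was busy, else grow it
        nextWindowSizeMs.update(elements.size() > 1000 ? 30_000L : 120_000L);

        buffer.clear();
        windowEnd.clear();
    }
}
```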
Flink CDC 2.4: how to configure snapshot.select.statement.overrides
I need to use Flink CDC 2.4 to capture data from MySQL. I am writing a backfill program, so in initial mode I only want to capture a subset of the table. I found the property snapshot.select.statement.overrides, but it does not work. I found a discussion on GitHub:
https://github.com/apache/flink-cdc/discussions/1131
It says that scan.incremental.snapshot.enabled should be false, but how do I set this property to false in the DataStream API? Thanks.
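A sketch, under the assumption that disabling incremental snapshotting in the DataStream API corresponds to using the legacy, Debezium-based com.ververica.cdc.connectors.mysql.MySqlSource rather than the incremental source, and that snapshot.select.statement.overrides is passed through as a Debezium property. The host, credentials, and table names below are hypothetical.

```java
import java.util.Properties;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import com.ververica.cdc.connectors.mysql.MySqlSource;
import com.ververica.cdc.debezium.StringDebeziumDeserializationSchema;

public class CdcBackfillJob {
    public static void main(String[] args) throws Exception {
        // Debezium snapshot overrides: list the tables, then give a per-table SELECT
        Properties dbzProps = new Properties();
        dbzProps.setProperty("snapshot.select.statement.overrides", "mydb.orders");
        dbzProps.setProperty("snapshot.select.statement.overrides.mydb.orders",
                "SELECT * FROM mydb.orders WHERE create_time > '2023-01-01'");

        // legacy (non-incremental) MySQL CDC source
        SourceFunction<String> source = MySqlSource.<String>builder()
                .hostname("localhost")
                .port(3306)
                .databaseList("mydb")
                .tableList("mydb.orders")
                .username("flinkuser")
                .password("flinkpw")
                .debeziumProperties(dbzProps)
                .deserializer(new StringDebeziumDeserializationSchema())
                .build();

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.addSource(source).print();
        env.execute("cdc-backfill-sketch");
    }
}
```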
How to write a custom plugin for Flink
I am trying to write my own custom Flink plugin. How do I go about it?
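The question does not say which kind of plugin is meant, so here is a hedged sketch of one common type, a custom metric reporter: the compiled jar goes into plugins/&lt;some-dir&gt;/ and the factory is registered via a META-INF/services/org.apache.flink.metrics.reporter.MetricReporterFactory file. All class names here are hypothetical.

```java
import java.util.Properties;
import org.apache.flink.metrics.Metric;
import org.apache.flink.metrics.MetricConfig;
import org.apache.flink.metrics.MetricGroup;
import org.apache.flink.metrics.reporter.MetricReporter;
import org.apache.flink.metrics.reporter.MetricReporterFactory;

public class MyReporterFactory implements MetricReporterFactory {
    @Override
    public MetricReporter createMetricReporter(Properties properties) {
        return new MetricReporter() {
            @Override
            public void open(MetricConfig config) { /* set up connections, buffers, etc. */ }

            @Override
            public void close() { /* release resources */ }

            @Override
            public void notifyOfAddedMetric(Metric metric, String name, MetricGroup group) {
                System.out.println("registered metric: " + name);
            }

            @Override
            public void notifyOfRemovedMetric(Metric metric, String name, MetricGroup group) {
                System.out.println("removed metric: " + name);
            }
        };
    }
}
```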
Add custom metric in JobManager
When I start the Flink JobManager, I see some standard metrics such as flink_jobmanager_Status_JVM_ClassLoader_ClassesLoaded and flink_jobmanager_Status_JVM_Memory_Mapped_TotalCapacity. How can I add my own custom metric to the JobManager?
In Apache Flink, how do I have a sink of sinks?
In Flink 1.14, how can I have a single sink that writes both to Kafka and to some other data source?
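A minimal sketch for Flink 1.14: rather than building one "sink of sinks", the same stream can simply be attached to two sinks, a KafkaSink plus any other sink (a print sink stands in for "some other data source" here). The broker address and topic name are hypothetical.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class DualSinkJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<String> stream = env.fromElements("one", "two", "three");

        KafkaSink<String> kafkaSink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")          // hypothetical broker
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("my-topic")                   // hypothetical topic
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .build();

        stream.sinkTo(kafkaSink);   // sink #1: Kafka
        stream.print();             // sink #2: stand-in for JDBC/Elasticsearch/... sink

        env.execute("dual-sink-sketch");
    }
}
```

If the two writes really must live behind a single operator, the same idea can be wrapped in a custom sink that delegates to both, but attaching two sinks to one stream is the simpler pattern.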
How to achieve even distribution of messages across Flink operator subtasks
I’m attempting to evenly distribute messages across 8 parallel instances (task slots).
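A minimal sketch: rebalance() redistributes records round-robin across all downstream subtasks, which gives an even spread regardless of how the source data is keyed or partitioned. The sample elements and operator here are hypothetical.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class EvenDistributionJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("a", "b", "c", "d", "e", "f", "g", "h")
           .rebalance()                       // round-robin across all downstream subtasks
           .map(String::toUpperCase)
           .setParallelism(8)                 // 8 parallel instances (task slots)
           .print();

        env.execute("even-distribution-sketch");
    }
}
```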
Executing ./bin/start-cluster.sh gives an update#setState idle error
I am struggling to start Flink: running the script shows update#setState idle and a few other commands, opens the code in VS Code, and just displays the start-cluster commands. How can I solve this?
flink-cdc 3.1.0: is something going wrong?
ods_order.yaml:
Metrics for short-lived jobs
I'm launching a job (automatically detected as batch) that reads a CSV file (3 records) and writes the same content to Kafka. The job takes about 3 seconds to run, and at the end I need to get some metrics, including custom ones.
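For the custom metrics part, a minimal sketch of registering a counter inside an operator via the RuntimeContext metric group is shown below. The operator and metric names are hypothetical, and whether a reporter manages to publish the value for a ~3 second batch job depends on the reporter configured.

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

public class CountingMapper extends RichMapFunction<String, String> {
    private transient Counter recordsSeen;

    @Override
    public void open(Configuration parameters) {
        // register a custom counter under this operator's metric group
        recordsSeen = getRuntimeContext().getMetricGroup().counter("recordsSeen");
    }

    @Override
    public String map(String value) {
        recordsSeen.inc();
        return value;
    }
}
```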