Using Flink with Flink CDC reports the error org.apache.flink.shaded.guava31.com.google.common.util.concurrent.ThreadFactoryBuilder
When I used Flink CDC to migrate data to Oracle and SQL Server, with Flink 1.16.0 and Flink CDC 3.2.0, and flink-shaded-guava at 31.1-jre-17.0, an error referencing org.apache.flink.shaded.guava30.com.google.common.collect.Lists was reported. But when I switched to flink-shaded-guava 30.1.1-jre-15.0, the error instead referenced org.apache.flink.shaded.guava31.com.google.common.util.concurrent.ThreadFactoryBuilder. The specific information is as follows:
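The two relocated package names (guava30 vs. guava31) suggest the Flink runtime and the Flink CDC jars were built against different flink-shaded-guava versions, so overriding flink-shaded-guava by hand fixes one lookup and breaks the other. The usual remedy is to pick a Flink CDC release built for your Flink line (or upgrade Flink) rather than pinning the shaded guava artifact. A hypothetical pom.xml sketch, assuming you stay on Flink 1.16 and drop back to a CDC 2.x connector (the exact CDC version is an assumption; check the Flink CDC compatibility table):

```xml
<!-- Sketch only: versions below are assumptions, verify against the
     Flink CDC / Flink compatibility matrix for your setup. -->
<properties>
  <flink.version>1.16.0</flink.version>
  <flink.cdc.version>2.4.2</flink.cdc.version>
</properties>
<dependencies>
  <!-- Connector built against the same Flink line as the runtime;
       no manual flink-shaded-guava override needed. -->
  <dependency>
    <groupId>com.ververica</groupId>
    <artifactId>flink-connector-mysql-cdc</artifactId>
    <version>${flink.cdc.version}</version>
  </dependency>
</dependencies>
```

With runtime and connector aligned, both resolve classes from the same relocated guava package, and the ThreadFactoryBuilder/Lists errors should not appear.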
How to connect Flink CDC to MySQL on Ubuntu?
This should be a simple piece of cake, but it isn't in my case. It feels complicated because I can't get a grasp of how Ubuntu is structured. If it were C, C++, Python, Java, or Julia, it would be easy.
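The connection itself is largely independent of the Ubuntu layout: once the MySQL CDC connector jar is on the classpath and MySQL has binlogs enabled, the source is built in Java. A minimal sketch, assuming the com.ververica flink-connector-mysql-cdc 2.x artifact; host, credentials, and table names below are placeholders:

```java
import com.ververica.cdc.connectors.mysql.source.MySqlSource;
import com.ververica.cdc.debezium.JsonDebeziumDeserializationSchema;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class MySqlCdcJob {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- replace with your own.
        MySqlSource<String> source = MySqlSource.<String>builder()
                .hostname("localhost")
                .port(3306)
                .databaseList("inventory")        // database(s) to capture
                .tableList("inventory.products")  // db.table to capture
                .username("flinkuser")
                .password("flinkpw")
                .deserializer(new JsonDebeziumDeserializationSchema())
                .build();

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // CDC sources rely on checkpointing for exactly-once progress tracking.
        env.enableCheckpointing(3000);
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "MySQL CDC Source")
           .print();
        env.execute("MySQL CDC sketch");
    }
}
```

On the Ubuntu side the only OS-specific steps are installing MySQL (e.g. via apt) and enabling binary logging with a row-based format in the MySQL configuration, per the Flink CDC MySQL connector documentation.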
Why does Flink re-upload the jar to S3 each time I delete the JobManager deployment in Kubernetes?
Prerequisites: I use the S3 filesystem, Kubernetes HA, and application mode.
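For context, the setup the question describes typically involves a configuration along these lines (a sketch; the bucket and paths are placeholders, and option names should be verified against the Flink Kubernetes HA documentation for your version):

```yaml
# flink-conf.yaml fragment -- placeholders, not a verified answer
high-availability: kubernetes
high-availability.storageDir: s3://my-bucket/flink/ha
state.checkpoints.dir: s3://my-bucket/flink/checkpoints
```

In application mode the job artifacts are shipped as part of each deployment, which is why deleting and recreating the JobManager deployment can trigger a fresh upload; the HA storageDir only persists JobGraph/checkpoint metadata, not the user jar itself.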
Flink: how to handle Kafka connector failures?
In Apache Flink, how can I catch the exceptions thrown by the Kafka source connector?
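There is no single catch-all hook, so it helps to separate two cases: connection-level failures inside KafkaSource surface to the runtime and are handled by the configured restart strategy, while per-record problems can be caught in a custom deserializer. A sketch of the latter (class and variable names are illustrative), implementing Flink's KafkaRecordDeserializationSchema so that bad records are dropped instead of failing the job:

```java
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.flink.util.Collector;
import org.apache.kafka.clients.consumer.ConsumerRecord;

/** Skips records that fail to deserialize instead of failing the job. */
public class TolerantStringDeserializer
        implements KafkaRecordDeserializationSchema<String> {

    @Override
    public void deserialize(ConsumerRecord<byte[], byte[]> record,
                            Collector<String> out) {
        try {
            out.collect(new String(record.value(), StandardCharsets.UTF_8));
        } catch (Exception e) {
            // Malformed record: log and drop here, or route it to a
            // dead-letter topic instead of re-throwing.
        }
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return TypeInformation.of(String.class);
    }
}
```

The deserializer is then passed via KafkaSource.builder().setDeserializer(...). Exceptions that escape the deserializer, or failures in the connector's fetcher threads, still fail the task and are retried according to the job's restart strategy, which is the intended recovery path for them.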