Is there a way to use IterativeStream in PyFlink?
I'm looking for a way to introduce loops in dataflows. The way I currently do this is by using the same Kafka topic as both source and sink. However, I'd like to reduce latency by not having to go through Kafka and instead looping back directly within the dataflow.
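For reference, here is a minimal sketch of the Kafka loopback workaround described above. The topic name, broker address, group id, and the per-iteration `map` step are placeholders, not my real job:

```python
# Sketch of the Kafka "loopback" workaround: read from a topic, transform,
# and write back to the same topic. Names and logic are placeholders.
from pyflink.common import Types, WatermarkStrategy
from pyflink.common.serialization import SimpleStringSchema
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.kafka import (
    KafkaOffsetsInitializer,
    KafkaRecordSerializationSchema,
    KafkaSink,
    KafkaSource,
)

env = StreamExecutionEnvironment.get_execution_environment()
# The Kafka connector jar must be on the classpath; path is environment-specific:
# env.add_jars("file:///path/to/flink-sql-connector-kafka-<version>.jar")

# Read from the feedback topic ...
source = (KafkaSource.builder()
          .set_bootstrap_servers("localhost:9092")
          .set_topics("feedback-topic")
          .set_group_id("loop-consumer")
          .set_starting_offsets(KafkaOffsetsInitializer.latest())
          .set_value_only_deserializer(SimpleStringSchema())
          .build())

ds = env.from_source(source, WatermarkStrategy.no_watermarks(), "feedback-source")

# ... apply the per-iteration transformation (placeholder step) ...
updated = ds.map(lambda value: value.upper(), output_type=Types.STRING())

# ... and write the result back to the same topic, closing the loop.
sink = (KafkaSink.builder()
        .set_bootstrap_servers("localhost:9092")
        .set_record_serializer(
            KafkaRecordSerializationSchema.builder()
            .set_topic("feedback-topic")
            .set_value_serialization_schema(SimpleStringSchema())
            .build())
        .build())

updated.sink_to(sink)
env.execute("kafka-loopback")
```

Every record makes a round trip through the broker, which is the latency I'd like to avoid by looping inside the dataflow itself.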
PyFlink data transform: how to combine three streams into one with matching ids?
I'm struggling with a data transformation using PyFlink and the DataStream API. I'm open to using the Table API if that is more suitable. A sketch of the kind of join I have in mind is below.
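For illustration only, this is roughly what I'm after expressed with the Table API; the schemas, sample rows, and the join key `id` are made up stand-ins for my real three streams:

```python
# Sketch: three inputs joined into one table on a shared id (Table API).
# Schemas and sample rows are placeholders for the real streams.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Stand-ins for the three real input streams.
a = t_env.from_elements([(1, "a1"), (2, "a2")], ["id", "a_val"])
b = t_env.from_elements([(1, "b1"), (2, "b2")], ["id", "b_val"])
c = t_env.from_elements([(1, "c1"), (2, "c2")], ["id", "c_val"])

t_env.create_temporary_view("a", a)
t_env.create_temporary_view("b", b)
t_env.create_temporary_view("c", c)

# Combine the three inputs into one row per matching id.
result = t_env.sql_query("""
    SELECT a.id, a.a_val, b.b_val, c.c_val
    FROM a
    JOIN b ON a.id = b.id
    JOIN c ON a.id = c.id
""")

result.execute().print()
```

What I can't work out is the idiomatic way to do the equivalent with the DataStream API (connected streams, keyed state, etc.), or whether I should just switch to the Table API for this step.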