An Apache Flink® job that produces no results often indicates a watermark issue. Watermarks are required for handling out-of-order messages in time-based aggregations over data from Kafka, and they balance data loss against latency by defining how long to wait for late messages. Incorrectly setting...
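As a minimal sketch of the idea (not code from the post), a bounded-out-of-orderness watermark strategy in Flink's Java API defines exactly that wait: events arriving more than the configured duration late may be dropped, while a larger bound increases latency. The `ClickEvent` type and the 5-second bound here are assumptions for illustration.

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class WatermarkExample {

    // Hypothetical event type; the post's actual schema is not shown here.
    public static class ClickEvent {
        public long timestampMillis;
    }

    public static WatermarkStrategy<ClickEvent> strategy() {
        // Wait up to 5 seconds for out-of-order events before advancing the watermark.
        // A larger bound tolerates more lateness (less data loss) at the cost of higher latency.
        return WatermarkStrategy
                .<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, recordTs) -> event.timestampMillis);
    }
}
```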
The post discusses the Dual-Write Problem in distributed systems, where atomically updating multiple systems, such as a database and a messaging system (e.g., Apache Kafka), is challenging and can lead to inconsistencies. It outlines common anti-patterns that fail to address the issue...
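For context, a rough sketch of the naive dual write (my illustration, not the post's example, assuming a JDBC connection with auto-commit disabled and a plain Kafka producer): the database write and the Kafka send are two separate, non-atomic steps, so a crash between them leaves the two systems disagreeing.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DualWriteAntiPattern {

    // Anti-pattern for illustration only: two independent writes with no shared transaction.
    public static void placeOrder(Connection db,
                                  KafkaProducer<String, String> producer,
                                  String orderId,
                                  String orderJson) throws Exception {
        try (PreparedStatement insert =
                     db.prepareStatement("INSERT INTO orders (id, payload) VALUES (?, ?)")) {
            insert.setString(1, orderId);
            insert.setString(2, orderJson);
            insert.executeUpdate();
            db.commit(); // step 1: the database now has the order

            // <-- a crash here means the event below is never published,
            //     so downstream consumers never learn about the order

            producer.send(new ProducerRecord<>("orders", orderId, orderJson)); // step 2
            producer.flush();
        }
    }
}
```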
Learn how to build a Java pipeline that consumes clickstream data from Apache Kafka®. Clickstream consumption is a common business need, and the same approach generalizes to other kinds of streaming data.
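The core of such a pipeline is a Kafka consumer poll loop. The sketch below uses the standard Java client; the broker address, group id, and `clickstream` topic name are placeholders, and the processing step is left as a stub.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ClickstreamConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("group.id", "clickstream-pipeline");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("clickstream")); // placeholder topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Replace with real processing, e.g. parsing the click event payload.
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                }
            }
        }
    }
}
```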
This blog post discusses the Two Generals' Problem, how it impacts message delivery guarantees, and how those guarantees would affect a futuristic technology such as teleportation.
Building data streaming applications and growing them beyond a single team is challenging. Data silos develop easily and can be difficult to break down. The tools in Confluent’s Stream Governance platform can help tear down those walls and make your data accessible to those who need it.