Why replace ZooKeeper with an internal log for Apache Kafka® metadata management? This post explores the rationale behind the replacement, examines why a quorum-based consensus protocol like Raft was used […]
Classic relational database management systems (RDBMS) distribute and organize data in a relatively static storage layer. When a query is issued, it runs over the stored data and then returns results […]
Stream processing has become an important part of the big data landscape, a new programming paradigm bringing asynchronous, long-lived computations to unbounded data in motion. But many people still think […]
Robust data governance through Schema Validation on write is now supported in Confluent Platform 5.4. Schema Validation enables the broker to verify that data produced to an Apache Kafka® […]
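As a rough sketch of what enabling this feature can look like, the snippet below creates a topic with broker-side value schema validation turned on via the AdminClient. The topic name, partition and replication settings, and the `confluent.value.schema.validation` / `confluent.key.schema.validation` keys reflect Confluent Platform topic-level configuration as I understand it, not an excerpt from the post, so treat them as assumptions to verify against your installation.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class SchemaValidatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical topic with value-schema validation enabled on the broker side,
            // so records whose value schema is not registered are rejected on write.
            NewTopic topic = new NewTopic("orders", 3, (short) 3)
                    .configs(Map.of(
                            "confluent.value.schema.validation", "true",
                            "confluent.key.schema.validation", "false"));
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

The brokers would also need to know where Schema Registry lives for validation to take effect; see the full post for the broker-side configuration.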
With the release of Apache Kafka® 2.1.0, Kafka Streams introduced the processor topology optimization framework at the Kafka Streams DSL layer. This framework opens the door for various optimization techniques […]
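For context, the sketch below shows one way an application can opt in to the optimization framework: set `topology.optimization` to `all` and pass the configuration to `StreamsBuilder#build` so the DSL can rewrite the topology before it runs. The topic names and application id are placeholders of my own, not taken from the post.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;

public class OptimizedTopologyExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "optimized-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Optimizations are off by default; "all" opts in to the DSL rewrites.
        props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

        StreamsBuilder builder = new StreamsBuilder();
        builder.table("input-topic").toStream().to("output-topic");

        // Passing the config to build() lets the optimizer rewrite the physical topology.
        Topology topology = builder.build(props);
        System.out.println(topology.describe());

        KafkaStreams streams = new KafkaStreams(topology, props);
        streams.start();
    }
}
```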
We are happy to announce that our paper, Streams and Tables: Two Sides of the Same Coin, has been published and is available for free download. The paper was presented at the […]
This blog post is the third and last in a series about exactly-once semantics for Apache Kafka®. See Exactly-once Semantics are Possible: Here’s How Kafka Does it for the […]
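As a hedged illustration of the producer side of exactly-once, the sketch below enables idempotence and wraps two sends in a single transaction, following the standard KafkaProducer usage pattern. The topic names, keys, and `transactional.id` are made-up placeholders; the series itself covers the mechanics in depth.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.OutOfOrderSequenceException;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Idempotence deduplicates broker-side retries; a transactional.id enables
        // atomic writes across partitions and fencing of zombie producer instances.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "payments-producer-1");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.initTransactions();
        try {
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "order-42", "charged"));
            producer.send(new ProducerRecord<>("audit", "order-42", "charged"));
            producer.commitTransaction();
        } catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
            // Fatal errors: the producer must be closed, not reused.
            producer.close();
        } catch (KafkaException e) {
            // Transient errors: abort and retry the whole transaction.
            producer.abortTransaction();
        }
        producer.close();
    }
}
```

Consumers that should only see committed records would additionally set `isolation.level=read_committed`.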
Here comes the July 2016 edition of Log Compaction, a monthly digest of highlights in the Apache Kafka and stream processing community. Want to share some exciting news on this […]
After months of testing and seven rounds of voting, Apache Kafka 0.10.0 and the corresponding Confluent Platform 3.0 have finally been released. Cheers!
There has been much renewed interest in using log-centric architectures to scale distributed systems that provide efficient durability and high availability. In this approach, a collection of distributed servers can […]