The session will discuss how Uber evolved its stream processing system to handle a number of use cases in the Uber Marketplace, with a focus on how Apache Kafka and Apache Samza played an important role in building a robust and efficient data pipeline. The use cases include, but are not limited to, real-time aggregation of geospatial time series, computing key metrics and forecasting marketplace dynamics, and extracting patterns from various event streams. The session will present how Kafka and Samza are used to meet the requirements of these use cases, what additional tools are needed, and lessons learned from operating the pipeline.
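To make the pipeline shape concrete, below is a minimal sketch of what a Kafka-fed Samza job for real-time aggregation of geospatial time series could look like. The topic names, event fields, and grid-cell bucketing are illustrative assumptions, not Uber's actual implementation; it only shows the general Samza pattern of processing each Kafka message and emitting windowed aggregates downstream.

```java
// Hypothetical sketch: count events per geospatial cell per window and
// write the aggregates back to Kafka. Stream names, fields, and the
// geo-bucketing scheme are assumptions for illustration only.
import java.util.HashMap;
import java.util.Map;

import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskCoordinator;
import org.apache.samza.task.WindowableTask;

public class GeoAggregationTask implements StreamTask, WindowableTask {
  // Output topic name is illustrative.
  private static final SystemStream OUTPUT = new SystemStream("kafka", "geo-demand-counts");

  // In-memory counts per geo cell for the current window.
  private final Map<String, Long> countsByCell = new HashMap<>();

  @Override
  public void process(IncomingMessageEnvelope envelope,
                      MessageCollector collector,
                      TaskCoordinator coordinator) {
    // Assume the upstream serde yields a map with lat/lng fields.
    @SuppressWarnings("unchecked")
    Map<String, Object> event = (Map<String, Object>) envelope.getMessage();
    double lat = ((Number) event.get("lat")).doubleValue();
    double lng = ((Number) event.get("lng")).doubleValue();

    // Toy geo bucketing: round coordinates onto a coarse grid cell.
    String cell = Math.round(lat * 100) + ":" + Math.round(lng * 100);
    countsByCell.merge(cell, 1L, Long::sum);
  }

  @Override
  public void window(MessageCollector collector, TaskCoordinator coordinator) {
    // At each window boundary, emit one message per cell and reset the counts.
    for (Map.Entry<String, Long> entry : countsByCell.entrySet()) {
      collector.send(new OutgoingMessageEnvelope(OUTPUT, entry.getKey(), entry.getValue()));
    }
    countsByCell.clear();
  }
}
```

In a real deployment the window interval, input topic, and serdes would be set in the job configuration, and durable state (rather than an in-memory map) would typically back the counts so they survive container restarts.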