We hosted our first-ever Confluent AI Day on October 23 in San Francisco and virtually. Sponsored by Confluent, AWS, and MongoDB, the full-day event brought together 200 attendees, including AI developers, technology leaders, and startup innovators, to explore how data streaming powers generative AI (GenAI) applications. Nine speakers from across the AI stack shared insights into cutting-edge technologies and showed how to build RAG-enabled applications. The day also saw more than a dozen impressive GenAI hackathon entries, with participants demoing their creations on stage. From panel discussions to hands-on workshops, here’s a look at the highlights from AI Day.
In case you missed it, you can catch up with the full AI Day livestream on demand.
AI Day kicked off with a keynote from Andrew Sellers, Head of Technology Strategy at Confluent, and Tim Graczewski, Global Head of Confluent for Startups, on how data streaming helps build and scale GenAI applications.
“The bigger data gets, the more specialized one has to be in how it's organized and queried,” said Sellers. “The specialization of data streaming is that it makes data consumable. To take all that data to flatten it out, denormalize it, make it very well contextualized—and that way, it can have meaning to the LLM.”
Confluent’s data streaming platform provides the minimum set of capabilities necessary to make data a reusable asset. It streams data across any environment, with pre-built connectors to connect data systems and applications, Apache Flink® to process data, and Stream Governance to ensure data quality and security. Sellers explained how these capabilities enable retrieval-augmented generation (RAG), which can be implemented in four key steps:
Data augmentation – Integrating disparate operational data from wherever it lives in the enterprise, and making it available in real time so that it is reliable, discoverable, and trustworthy.
Inference – Connecting relevant information with each prompt, contextualizing what users are asking for and ensuring GenAI applications are built to handle those responses.
Workflows – Parsing natural language, synthesizing the necessary information, and using a reasoning agent to determine what to do next to optimize performance.
Post-processing – Validating LLM outputs and enforcing business logic and compliance requirements to detect hallucinations and ensure the LLM has returned a trustworthy answer.
Dive into the details behind the four steps in this blog.
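The four steps above can be sketched, in a deliberately simplified form, in plain Python. Everything here (the two-document corpus, the bag-of-words `embed`, the `mock_llm` that echoes its context) is an illustrative stand-in for the real components, not Confluent’s implementation; in production the corpus would be fed by streaming connectors and the model would be a hosted LLM:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    source: str

# 1. Data augmentation: pull disparate operational data into one corpus.
#    (In a real system this data arrives in real time via connectors.)
CORPUS = [
    Doc("Order 1042 shipped on 2024-10-20.", "orders"),
    Doc("Refunds are processed within 5 business days.", "policy"),
]

def embed(text: str) -> set:
    """Toy 'embedding' as a bag of lowercase words; a stand-in for a model."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    """2. Inference: fetch the documents most relevant to the user's prompt."""
    return sorted(corpus,
                  key=lambda d: len(embed(d.text) & embed(query)),
                  reverse=True)[:k]

def mock_llm(prompt: str) -> str:
    """3. Workflow: stand-in for a real LLM call; echoes the context line."""
    context = prompt.split("Question:")[0]
    return "ANSWER: " + context.strip().splitlines()[-1]

def post_process(answer: str) -> str:
    """4. Post-processing: enforce a simple output contract before returning."""
    assert answer.startswith("ANSWER: "), "unexpected LLM output"
    return answer.removeprefix("ANSWER: ")

def rag(query: str) -> str:
    """Wire the four steps together: retrieve, contextualize, call, validate."""
    context = "\n".join(d.text for d in retrieve(query, CORPUS))
    prompt = f"Context:\n{context}\nQuestion: {query}"
    return post_process(mock_llm(prompt))
```

For example, `rag("Are refunds processed quickly?")` retrieves the refund-policy document rather than the order record, and the grounded answer passes the post-processing check before it is returned.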
“Data streaming turns our workflows from highly synchronous to asynchronous things that allow for decoupling teams, technologies, and systems,” said Sellers. “They can scale confidently and independently, both in their development and in their operations.”
He explained how a data streaming platform acts as the data orchestration substrate that works with any vector store, embedding model, LLM, or other component in a decoupled architecture. This accelerates time to market and makes it easy to adopt the latest AI components. Confluent also supports Flink AI Model Inference, which:
Simplifies development by using familiar SQL syntax to work directly with AI/ML models, reducing the need for specialized ML tools and languages
Enables seamless coordination between data processing and ML workflows to improve efficiency and reduce operational complexity
Allows for accurate, real-time AI-driven decision-making by leveraging fresh, contextual streaming data to enable patterns like RAG
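The underlying pattern is that model inference becomes just another step in the data-processing pipeline, applied to each record as it streams through. As a rough plain-Python illustration of that idea (the actual feature is expressed in Flink SQL against registered models; `infer_stream` and `toy_sentiment` below are hypothetical stand-ins, not Confluent APIs):

```python
from typing import Callable, Iterable, Iterator

def infer_stream(records: Iterable[dict],
                 model: Callable[[str], str],
                 field: str) -> Iterator[dict]:
    """Enrich each streaming record with a model prediction, inline."""
    for rec in records:
        yield {**rec, "prediction": model(rec[field])}

# Stand-in model: a real deployment would call a hosted LLM or ML endpoint.
def toy_sentiment(text: str) -> str:
    return "negative" if "refund" in text.lower() else "positive"

reviews = [
    {"id": 1, "text": "Great product, fast shipping"},
    {"id": 2, "text": "I want a refund"},
]
scored = list(infer_stream(reviews, toy_sentiment, "text"))
```

Because the model is passed in as a plain function, swapping providers does not change the pipeline, which is the decoupling benefit Sellers described.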
In the second half of the keynote, Tim Graczewski announced the launch of AI Accelerator, a specialized 10-week program for AI startups that provides early access to Confluent’s newest AI tools and features along with one-on-one technical and business mentoring. Applications are currently open and the first cohort of AI startups will be announced in early 2025.
From there, an engaging panel discussion on “Is your data ready for trustworthy GenAI?” featured insights and first-hand experiences shared by guest speakers including:
Garvan Doyle, Head of Applied AI, Anthropic
Amit Singh, Global Head, GTM & Use Cases, GenAI & ML Partners, Amazon Web Services (AWS)
Prakul Agarwal, AI Product Manager, MongoDB
Adam Watkins, Co-founder and CTO, Reworkd AI
To help put everything into practice, AWS and MongoDB held two GenAI workshops with live demos and Q&A:
“Data augmentation for RAG with Flink AI Model Inference and MongoDB Atlas” [GitHub Repo] – with Prakul Agarwal, AI Product Manager, MongoDB and Braeden Quirante, Strategic Solutions Engineer, Confluent.
“Building a real-time RAG-enabled GenAI application with Amazon Bedrock” [GitHub Repo] – with Vijay Pawar, Principal Solutions Architect, AWS and Austin Groeneveld, Streaming Analytics Specialist, Solutions Architect at AWS.
Both workshops can be accessed in the livestream recording.
Then it was time for the hackathon, with a chance to win prizes including an Apple Vision Pro. Developers worked individually and in teams to build their GenAI applications. In a matter of hours, they were able to leverage Confluent’s pre-built connectors, write Flink statements to process data, and use Stream Governance with Schema Registry to quickly tap into data streams for a wide variety of use cases, including implementing a RAG pattern. Participants then came on stage to demo the applications they had built on Confluent’s data streaming platform. Here are the hackathon winners and honorable mentions, with their innovative GenAI use cases:
3D customer service agent from Xian and Yosun
In less than three hours, Xian and Yosun built a 3D service agent, with Jay Kreps as the avatar, that generates audio replies to customer questions. They leveraged Confluent connectors to write relevant data to topics (including customer chats, order history, product reviews, and social media messages), providing the agent with real-time context for generating the most helpful responses. The potential impact: saving time and cost for service teams while delivering personalized customer interactions.
Churn prevention from Arvind
Arvind created a GenAI application that detects and helps prevent user churn based on users’ orders and support conversations. As seen in Confluent’s Stream Lineage view below, his application used connectors to stream data from different sources and Flink to join and aggregate that data, flagging at-risk users and creating appropriate follow-ups.
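A minimal sketch of that join-and-aggregate pattern (not Arvind’s actual code; the sample data, churn keywords, and threshold are invented for illustration):

```python
from collections import defaultdict

orders = [
    {"user": "ana", "total": 120.0},
    {"user": "ben", "total": 15.0},
]
support_msgs = [
    {"user": "ben", "text": "How do I cancel my subscription?"},
    {"user": "ben", "text": "Still waiting on a refund"},
    {"user": "ana", "text": "Love the new feature!"},
]

# Arbitrary keywords assumed to signal churn risk.
CHURN_WORDS = {"cancel", "refund"}

def flag_at_risk(orders: list, msgs: list, min_complaints: int = 2) -> list:
    """Aggregate complaint counts per user, then 'join' against order history."""
    complaints = defaultdict(int)
    for m in msgs:
        if CHURN_WORDS & set(m["text"].lower().split()):
            complaints[m["user"]] += 1
    # Join: only flag users who actually have order history.
    buyers = {o["user"] for o in orders}
    return sorted(u for u, n in complaints.items()
                  if n >= min_complaints and u in buyers)
```

In a streaming setting, the same join and aggregation would run continuously in Flink as new order and support events arrive, rather than over static lists.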
Developer productivity tool from Samuel
Samuel built a chatbot that models how someone would reason through a ticket that has already been resolved in the codebase, helping accelerate onboarding, transfer domain knowledge to junior developers, and boost productivity. It features a streaming RAG pipeline with connectors for a GitHub source and a MongoDB sink.
Security tool to detect and respond to identity-based anomalies – from Anupam
R&D assistant that analyzes AI/ML posts to generate relevant insights and ideas – from Adrian, Benedict, and Tan
Healthcare chatbot that helps update doctor availability, schedule appointments, and provide real-time monitoring of patient health data – from Shaik
Keep it tidy app that prioritizes Slack messages received in real time, letting users know which messages and mentions deserve immediate attention and which can wait for a response later – from Yeop
Thank you to everyone who joined us—all attendees and speakers, as well as Ram Dhakne, Staff Solutions Engineer at Confluent, who was a fantastic emcee. This was our first AI Day and we look forward to seeing you at the next one.
Watch the full livestream on demand:
To learn more, visit the GenAI hub for developer resources and stay tuned for future AI events—both online as well as coming to a city near you.
Join Confluent at AWS re:Invent 2024 to learn how to stream, connect, process, and govern data, unlocking its full potential. Explore innovations like GenAI use cases, Apache Iceberg, and seamless integration with AWS services. Visit our booth for demos, sessions, and more.