Streaming Data Fuels Real-time AI & Analytics: Connect with Confluent Q1 Program Entrants

Written by Greg Murphy

In today’s fast-moving digital economy, organizations need real-time intelligence to power AI, analytics, and increasingly fast-paced decision-making. But to successfully deploy AI and advanced analytics, businesses must operate on trusted, up-to-date data streams that provide an accurate picture of what’s happening right now. That’s where Confluent’s data streaming platform comes in—enabling businesses to connect, process, and govern data in motion to fuel the AI-driven enterprise.

This quarter, we welcome three new partner integrations into the Connect with Confluent technology partner program, each bringing unique capabilities to enhance AI and analytics with streaming data: Amazon Redshift, SAS, and Vectara. Their integrations open up exciting possibilities, making it easier than ever to connect real-time data with the tools and platforms today’s enterprises rely on.

Meet the Q1 2025 Connect with Confluent entrants

New members of the CwC partner program in Q1 are Amazon Redshift, SAS, and Vectara. Additionally, the HiveMQ and Onibex integrations have each been updated.

  • Amazon Redshift: Securely query real-time data streams from Confluent Cloud within Amazon Redshift with mutual TLS (mTLS) authentication, enabling up-to-the-second analytics and decision-making.

  • SAS: Connect Confluent with SAS Event Stream Processing to deliver high-performance analytics that power real-time insights in milliseconds.

  • Vectara: Accelerate development of real-time AI assistants and agents with fresh, highly contextualized streaming data, integrated with Vectara’s enterprise retrieval-augmented generation (RAG) platform (see the sketch after this list).
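
To make the Vectara entry concrete, here is a minimal sketch of the hand-rolled glue that a native integration like this replaces: a Python consumer reads fresh events from a Confluent Cloud topic and forwards them to a RAG indexing endpoint. The endpoint URL, topic name, credentials, and payload shape are illustrative placeholders, not Vectara’s documented API.

```python
# Sketch only: the indexing endpoint and payload below are hypothetical
# placeholders standing in for a RAG platform's document-indexing API.
import json

import requests
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<broker>.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",        # placeholder credentials
    "sasl.password": "<api-secret>",
    "group.id": "rag-indexer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["support-tickets"])  # illustrative topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        doc = json.loads(msg.value())
        # Hypothetical indexing call; a native integration handles this,
        # plus batching, retries, and schema handling, for you.
        requests.post(
            "https://<rag-endpoint>/v1/index",  # placeholder URL
            json={"id": f"{msg.topic()}-{msg.offset()}",
                  "text": doc.get("body", "")},
            timeout=10,
        )
finally:
    consumer.close()
```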

Beyond new program entrants, our existing Connect with Confluent partners continue to enhance their integrations:

HiveMQ allows businesses to create a real-time IoT data foundation with streaming data. With support for HiveMQ Cloud recently added, more Confluent customers can now deploy secure, bidirectional data exchange between IoT devices and Apache Kafka® clusters, powering use cases in industrial automation, smart cities, connected vehicles, and more. The native integration ensures reliable message delivery for event processing, real-time analytics, and AI-driven insights—helping businesses extract maximum value from their IoT and machine data.
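
As a hedged illustration of the device side of this pattern, the sketch below publishes sensor readings over MQTT with paho-mqtt; HiveMQ’s Kafka integration (configured on the broker, not shown here) then mirrors the MQTT topic into a Kafka topic on Confluent Cloud. The broker host, topic, and payload shape are assumptions for illustration.

```python
# Sketch only: broker host, MQTT topic, and payload shape are illustrative.
# The MQTT-to-Kafka mapping itself is configured in HiveMQ, not in this code.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2,
                     client_id="sensor-042")  # paho-mqtt 2.x constructor
client.connect("<hivemq-broker-host>", 1883)
client.loop_start()

while True:
    reading = {"device": "sensor-042", "temp_c": 21.7, "ts": time.time()}
    # HiveMQ bridges this MQTT topic to a Kafka topic per its extension config.
    client.publish("factory/line1/temperature", json.dumps(reading), qos=1)
    time.sleep(5)
```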

The One Connect Snowflake JDBC connector from Onibex facilitates the real-time transfer of both historical and current SAP data from Kafka topics on Confluent’s platform to Snowflake tables. Leveraging Onibex’s streamlined connectivity of SAP ECC or S/4HANA data with Confluent Cloud, the new integration with Snowflake enables idempotent writes via upserts and supports auto-creation of tables and automatic schema evolution using Schema Registry.
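
For a sense of how such a sink is wired up, here is a hedged sketch of registering a connector through the standard Kafka Connect REST API. The connector class and several property names are guesses for illustration; Onibex’s documentation has the real ones.

```python
# Sketch only: the connector class and Onibex-specific property names are
# hypothetical; the REST call itself is the standard Kafka Connect API.
import requests

connector = {
    "name": "sap-to-snowflake",
    "config": {
        # Hypothetical class name for illustration.
        "connector.class": "com.onibex.connect.snowflake.OneConnectSinkConnector",
        "topics": "sap.s4hana.orders",            # illustrative topic
        "connection.url": "jdbc:snowflake://<account>.snowflakecomputing.com",
        "insert.mode": "upsert",                  # idempotent writes
        "auto.create": "true",                    # auto-create target tables
        "auto.evolve": "true",                    # evolve with Schema Registry
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "value.converter.schema.registry.url": "https://<sr-endpoint>",
    },
}

resp = requests.post("http://<connect-host>:8083/connectors",
                     json=connector, timeout=30)
resp.raise_for_status()
print(resp.json())
```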

Confluent’s growing network of data streaming partners

As real-time AI and analytics adoption accelerates, organizations need a modern data architecture that ensures reliable, governed, and accessible data streams. The Connect with Confluent partner network is evolving to support this need—bringing together leading analytics, AI, and cloud-based data platforms that help enterprises extract maximum value from streaming data.

The Connect with Confluent (CwC) Q1 ’25 program snapshot. Check out the full library of CwC integrations.

By integrating with Confluent, these partners enable businesses to break free from static, outdated data and operate on always-fresh, high-quality data that fuels AI models, analytics platforms, and operational systems.

AI is only as good as the data that fuels it

Successfully deploying AI across an enterprise requires real-time, trusted data products—not just raw data, but governed, reusable assets designed to power AI and analytics, regardless of their source. This is where challenges arise. Enterprises know that providing AI applications with internally and externally generated operational data is key to unlocking high-impact use cases like anomaly detection, predictive analytics, and hyper-personalization. However, many struggle to operationalize AI because their data remains siloed in disparate systems that weren’t built to work together.

Most organizations have two critical data silos:

  • Operational systems that power applications, transactions, and real-time events.

  • Analytical systems that drive data intelligence and AI for better decision-making.

AI models need fresh, real-time operational data to generate accurate predictions, while real-time applications depend on AI-driven insights from analytical systems to refine accuracy and automate decisions. However, data still moves between these environments via slow, fragile, and manual batch jobs, often losing governance and lineage along the way. While this might have worked for offline model training in traditional machine learning, it’s a serious problem for LLMs and agentic AI—where outdated or incorrect data can lead to flawed reasoning and unreliable outcomes.

Span the operational and analytical divide in real time

Bridging the gap between operational and analytical systems requires a real-time data streaming foundation that continuously delivers high-quality, trusted data products to fuel AI and analytics. Confluent’s data streaming platform, built by the original creators of Apache Kafka, enables enterprises to break down data silos by streaming real-time data from applications, transactions, and events into AI and analytics systems. With 120+ pre-built connectors spanning the entire data ecosystem, businesses can seamlessly integrate real-time data from every corner of their operations, ensuring AI models and real-time applications are always working with the freshest, most relevant information.
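
As a minimal sketch of what governed, real-time data looks like in practice, the snippet below produces Avro events validated against Schema Registry using Confluent’s Python client. The broker and Schema Registry endpoints, credentials, and the schema itself are placeholders.

```python
# Sketch only: endpoints, credentials, and the schema are placeholders.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import MessageField, SerializationContext

schema_str = """
{"type": "record", "name": "Order", "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"}
]}
"""

sr_client = SchemaRegistryClient({
    "url": "https://<sr-endpoint>",
    "basic.auth.user.info": "<sr-key>:<sr-secret>",
})
serializer = AvroSerializer(sr_client, schema_str)

producer = Producer({
    "bootstrap.servers": "<broker>.confluent.cloud:9092",
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<api-key>",
    "sasl.password": "<api-secret>",
})

order = {"order_id": "o-1001", "amount": 42.5}
producer.produce(
    topic="orders",
    # Serialization registers/validates the schema with Schema Registry,
    # so malformed events never reach downstream consumers.
    value=serializer(order, SerializationContext("orders", MessageField.VALUE)),
)
producer.flush()
```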

Beyond just moving data, Confluent provides powerful tools like Apache Flink® stream processing and built-in governance to transform raw data into AI-ready assets. Together, these tools enable businesses to clean, process, and enrich data in motion—ensuring that only high-quality, curated data lands in data lakes and analytics platforms. And with Tableflow (now in early access), organizations can effortlessly convert Kafka topics into Apache Iceberg or Delta tables, blending real-time Kafka logs with open-format tables for seamless integration into any warehouse, lake, or analytics engine. Paired with direct integrations into downstream governance suites, Confluent ensures that real-time data products remain governed, traceable, and compliant—so enterprises can confidently power AI with trusted data.

Convert streaming Kafka data and associated schemas to Apache Iceberg or Delta tables in a few clicks—to feed any data warehouse, data lake, or analytics engine.
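
To illustrate the stream-processing step, here is a hedged, self-managed PyFlink sketch of the kind of in-flight cleanup described above: filtering malformed events so only high-quality records land downstream. Confluent Cloud exposes the same idea as managed Flink SQL; the topic names, endpoints, and connector options here are illustrative.

```python
# Sketch only: assumes self-managed PyFlink with the Kafka SQL connector on
# the classpath; all endpoints and topic names are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: raw events arriving on a Kafka topic.
t_env.execute_sql("""
    CREATE TABLE raw_orders (
        order_id STRING,
        amount DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = '<broker>:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Sink: a cleaned topic that downstream lakes and warehouses can trust.
t_env.execute_sql("""
    CREATE TABLE clean_orders (
        order_id STRING,
        amount DOUBLE
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders_clean',
        'properties.bootstrap.servers' = '<broker>:9092',
        'format' = 'json'
    )
""")

# Launch a continuous job that drops malformed or implausible records
# before they ever land downstream.
t_env.execute_sql("""
    INSERT INTO clean_orders
    SELECT order_id, amount
    FROM raw_orders
    WHERE order_id IS NOT NULL AND amount > 0
""")
```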

Learn more about Confluent’s recently expanded partnership with Databricks, which will provide streamlined, bidirectional data flow between Tableflow and Databricks’ Unity Catalog for Delta Lake tables.

Learn more about the development of Snowflake’s open source Polaris Catalog, which provides organizations and the Iceberg community with centralized, secure read and write access to Iceberg tables.

Go “Real-time” with Confluent

The Connect with Confluent program accelerates innovation by integrating real-time data streams across your most popular data systems, simplifying Kafka management, and enabling widespread adoption of Confluent Cloud’s powerful capabilities. The program provides:

Native Confluent integrations

CwC integrations embed Confluent’s data streaming directly into leading enterprise systems, providing fully managed solutions that eliminate the complexity of self-managing Apache Kafka. This allows businesses to power real-time, low-latency experiences efficiently and cost-effectively with Confluent Cloud—recognized as a leader in The Forrester Wave™: Streaming Data Platforms. Powered by the Confluent Kora engine, Confluent Cloud provides elastic scaling, an available 99.99% uptime SLA, 120+ pre-built connectors, Flink stream processing, stream governance, enterprise-grade security, and global availability—reducing Kafka TCO by up to 60% (Forrester TEI).

Expanding data streaming adoption

These integrations make real-time data more accessible across teams, reducing reliance on Kafka specialists and enabling new use cases to emerge organically. By embedding Confluent within widely used tools, businesses can unlock more value from their data without the burden of managing complex infrastructure.

More real-time data, more impact

With each new CwC integration, customers can instantly share and access data across Confluent’s vast streaming network. This ensures that real-time insights are readily available across business applications, making data more actionable and valuable than ever before.

Find and configure your next integration

Ready to get started? Check out the full library of Connect with Confluent partner integrations to easily integrate your application with fully managed data streams.

Not seeing what you need? Not to worry. Check out our repository of 120+ pre-built source and sink connectors, including 80+ that are fully managed.

Are you building an application that needs real-time data? Interested in joining the CwC program? Become a Confluent partner and give your customers the absolute best experience for working with data streams—right within your application, supported by the Kafka experts.

Apache®, Apache Kafka®, Kafka®, the Kafka logo, Apache Flink®, Flink®, the Flink logo, Apache Iceberg™, the Iceberg logo, and associated open source project names are either registered trademarks or trademarks of the Apache Software Foundation.

  • Greg Murphy is the Staff Product Marketing Manager focused on developing and evangelizing Confluent’s technology partner program. He helps customers better understand how Confluent’s data streaming platform fits within the larger partner ecosystem. Prior to Confluent, Greg held product marketing and product management roles at Salesforce and Google Cloud.
