The Q3 Cloud Bundle Launch comes to you from Current 2024, where data streaming industry experts have come together to show you why data streaming is critical today, especially in the age of AI, and how it will become even more important in shaping tomorrow’s businesses. This year’s Current event attracted over 2,500 attendees, both in-person and virtual, and featured 140+ learning sessions by industry experts.
This launch introduces a suite of enhancements across the four key pillars of a data streaming platform—stream, connect, process, and govern—alongside some significant work we have been doing with our partner ecosystem to help customers unlock new possibilities.
Confluent has helped more than 4,900 global enterprises start their data streaming journey and was recently named a Leader by Forrester Research in The Forrester Wave™: Streaming Data Platforms, Q4 2023.
Join us on October 17 for the Q3 Launch webinar and demo to see these new features in action.
Confluent Cloud for Apache Flink® is all about making stream processing easier, whether you're a SQL pro or a Java/Python enthusiast. Earlier this spring, we announced support for Flink SQL, giving you a powerful, easy-to-use tool for filtering, aggregations, and joins using the SQL syntax you already know.
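As a sketch of the kind of query Flink SQL makes easy, the following combines a filter with a windowed aggregation using the standard windowing table-valued function syntax (the table and column names are illustrative, not from a real deployment):

```sql
-- Count high-value orders per customer over 5-minute tumbling windows.
-- The `orders` table and its columns are hypothetical examples.
SELECT customer_id,
       window_start,
       COUNT(*) AS high_value_orders
FROM TABLE(
       TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '5' MINUTES))
WHERE amount > 100
GROUP BY customer_id, window_start, window_end;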
We’re excited to extend Flink to more of the languages and tools that developers love with Flink Table API, now in open preview. This API brings the power of Java and Python into the mix, offering developers the programmatic control they need for intricate data transformations and custom logic. With the Table API, you can work in your preferred language, take advantage of modern IDE tools, and integrate seamlessly into your existing codebases. Whether you're handling simple queries or complex processing tasks, Confluent Cloud for Apache Flink has you covered, offering the flexibility to choose the right tool for the job.
Support for the Flink Table API allows customers to:
Enhance language flexibility by enabling developers to use their preferred programming languages, taking advantage of language-specific features and custom operations
Improve the developer experience through better IDE support, featuring auto-completion, refactoring tools, and compile-time checks to ensure higher code quality and minimize runtime issues
Facilitate easier debugging with an iterative approach to data processing and streamlined CI/CD integration
In the data streaming paradigm, schemas play a pivotal role in defining the structure of data, enabling systems to efficiently interpret, validate, and manipulate streams of information.
We launched Confluent Cloud for Apache Flink® with support for Schema Registry, but what if you already have your schemas defined outside of Confluent’s Schema Registry? Now we’re introducing flexible schema management in Flink so that you can use our serverless Flink offering regardless of your schema management practices and without complex schema migrations or laborious changes to serialization formats. This new feature is crucial for organizations that manage schemas independently or are in the prototype stage, where schema definition isn't yet solidified.
With flexible schema management, you can:
Maintain schema flexibility by leveraging Flink's powerful stream processing capabilities while adhering to established organizational schema management practices
Streamline operations by using existing Avro, Protobuf, or JSON schemas with Flink without needing to re-encode the data into a different format
Ensure data consistency and integrity by maintaining the original wire encoding throughout the data lifecycle, reducing the risk of errors or data loss during conversion processes
In addition, Confluent Cloud for Apache Flink can now also accommodate multiple schemas or event types within a single table, leading to simplified data management, more efficient processing, and greater flexibility.
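To illustrate the idea, here is how a topic carrying plain JSON (with no Schema Registry lookup) can be declared as a table in open-source Flink DDL; Confluent Cloud infers tables from topics automatically, so the connector options below are shown purely for illustration and all names are assumptions:

```sql
-- Illustrative open-source Flink DDL: consume plain JSON from a Kafka
-- topic without any registry-backed serialization. Table, topic, and
-- broker names are hypothetical.
CREATE TABLE clicks (
  user_id STRING,
  url     STRING,
  ts      TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'clicks',
  'properties.bootstrap.servers' = 'broker:9092',
  'format' = 'json',
  'scan.startup.mode' = 'earliest-offset'
);
```

The key point is the `'format' = 'json'` option: the data stays in its original wire encoding end to end, with no re-encoding into a registry-managed format.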
AI, especially generative AI, is all the buzz. You’re building AI models and we’re excited to integrate them into the data streaming platform as first-class citizens.
Today, we’re excited to announce that AI model inference in Confluent Cloud for Apache Flink® is available in open preview. This milestone makes the new functionality widely accessible, allowing users to test and experiment with integrating AI into their stream processing workflows. By entering the open preview phase, we invite customers to explore this advanced capability and provide feedback, which will help us refine and enhance the feature before its general release.
By adding support for AI model inference, our Flink service allows you to:
Simplify development by using familiar SQL syntax to interact directly with AI/ML models, including large language models (LLMs), thereby reducing the need for specialized ML tools and languages
Coordinate data processing and ML workflows more effectively, minimizing operational complexity and improving efficiency
Enable accurate, real-time AI-driven decision-making by leveraging fresh, contextual streaming data to support scenarios like retrieval-augmented generation (RAG), which grounds LLM responses in real-time information
This integration allows you to use ML models as first-class resources in Flink, enabling you to call remote ML model endpoints like OpenAI, GCP Vertex AI, AWS SageMaker, and Azure. You can manage these models using SQL data definition language (DDL) statements, eliminating the need to handle the underlying infrastructure. This approach enhances the flexibility and scalability of your ML applications, making it easier than ever to bring AI into your real-time data pipelines.
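The flow can be sketched in SQL along these lines; the model name, provider options, and table columns below are illustrative assumptions, so check the product documentation for the exact syntax your environment supports:

```sql
-- Register a remote model as a first-class Flink resource.
-- Provider options and connection name are illustrative.
CREATE MODEL review_sentiment
  INPUT (text STRING)
  OUTPUT (sentiment STRING)
  WITH (
    'provider' = 'openai',
    'task' = 'classification',
    'openai.connection' = 'my-openai-connection'
  );

-- Invoke the model on streaming rows with ML_PREDICT.
SELECT r.id, p.sentiment
FROM reviews AS r,
     LATERAL TABLE(ML_PREDICT('review_sentiment', r.text)) AS p;
```

Because the model is managed through DDL like any other resource, there is no inference infrastructure for you to provision or operate.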
Confluent Cloud already offers robust enterprise-grade security features like data-at-rest encryption, data-in-transit encryption via Transport Layer Security (TLS), and role-based access controls (RBAC). But for organizations in regulated industries such as financial services, healthcare, and the public sector, there's often a need for even tighter data protection, especially for sensitive information like PII.
That’s why we’re excited to introduce Client-Side Field Level Encryption (CSFLE) to help teams set all their data in motion, including their most sensitive workloads. You can encrypt individual fields within messages on the producer side, preventing unwanted access by even system admins and other highly privileged users, for enhanced security and compliance.
With CSFLE, organizations can:
Improve the security of sensitive data and adhere to strict compliance requirements
Maintain flexible, granular control over which specific fields to encrypt
Lower total cost of ownership and operational complexity by reducing the need for topic duplication
CSFLE supports multiple client languages including Java, Go, and C#/.NET with Node.js and Python coming soon. You can also integrate with select fully managed connectors such as Amazon S3 sink and Snowflake sink connectors.
Upgrade to Stream Governance Advanced to start using CSFLE in limited availability, recommended for production workloads, with general availability to follow in the coming weeks.
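Under the hood, CSFLE is driven by tagging sensitive fields in the schema and attaching an encryption rule in Schema Registry. The fragment below sketches what such a rule set might look like; the rule name, tag, and key encryption key (KEK) name are illustrative assumptions:

```json
{
  "ruleSet": {
    "domainRules": [
      {
        "name": "encryptPII",
        "kind": "TRANSFORM",
        "type": "ENCRYPT",
        "mode": "WRITEREAD",
        "tags": ["PII"],
        "params": { "encrypt.kek.name": "my-kek" }
      }
    ]
  }
}
```

Any field tagged `PII` in the schema is then encrypted by the producer's serializer before the message ever leaves the client.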
We’re taking data streaming in your own cloud to the next level with WarpStream. Now, our customers can get streaming data any way they need it—self-managed, fully managed, or bring your own cloud (BYOC).
“Confluent wants to offer data streaming to all customers with all requirements and workloads. I've been deeply impressed with WarpStream—it’s BYOC done right. With this acquisition, we have a data streaming offering for everyone.” — Jay Kreps, CEO and co-founder.
Get the full scoop in Jay’s blog.
We're excited to introduce a new program designed for managed service providers (MSPs), cloud service providers (CSPs), and independent software vendors (ISVs) worldwide, making it easier than ever to bring data streaming to your products and services. Built on the trusted foundation of Apache Kafka® and Apache Flink®, and backed by the original creators, this program allows partners to provide Confluent’s data streaming platform everywhere your business is run.
With this program, Confluent will offer ongoing technical support, expert implementation guidance, and certification to help our partners launch enterprise-ready offerings and ensure long-term success for their customers. For more information or to learn how to join the program, check out our blog.
Confluent is committed to being a strategic ally in overcoming customers’ business challenges. We’ve added even more partner solutions to Build with Confluent (BwC) and new migration offerings to our Confluent Migration Accelerator to better serve you.
Build with Confluent
Partners can quickly develop streaming use case offerings, allowing you to jump-start new projects and innovate faster.
We’re excited to launch 11 new partner solutions this quarter from BearingPoint, Converge, EPAM, GoodLabs Studio, Improving, Informula, Mindlabs, Ness Digital Engineering, Seynur, and Synthesis Software Technologies (Pty) Ltd.
Check out the entire portfolio of Build with Confluent solutions here.
Confluent’s Migration Accelerator
Transition smoothly from traditional messaging systems or Apache Kafka® to Confluent, ensuring minimal disruption and maximum efficiency.
We’re happy to announce that Marionete and Synthesis Software Solutions (Pty) Ltd have joined the accelerator. Learn more about our partner offerings here.
By working hand in hand with Confluent and our partners, you can meet the growing demand for real-time customer experiences and applications, unlocking new opportunities for growth and success.
Now one year old and comprising 50+ integrations, our Connect with Confluent (CwC) partner program further extends the global data streaming ecosystem and brings Confluent data streams directly to developers’ doorsteps within the tools where they are already working. In just 12 short months, the program has transformed from an ambitious initiative into a thriving and ever-expanding connector portfolio that’s rapidly increasing the value of your real-time data.
New members to the CwC partner program in Q3 include Amazon OpenSearch, Cribl, MongoDB Relational Migrator, Qdrant, and Synthesis.
Check out the CwC Q3 announcement blog to learn more about these connectors and major milestones achieved through the CwC program this past year.
To help Kafka operators identify and resolve issues before they escalate, we've introduced new client observability metrics and tools that provide greater visibility into cluster performance. These new metrics include:
Hot partition warning metrics: Available for all clusters, this feature helps identify partitions with unusually high loads that self-balancing clusters (SBC) can't balance automatically, allowing you to manually address potential issues before they cause throttling or availability problems.
CKU count metric: For Dedicated clusters, this metric helps determine the cluster's capacity by tracking the number of Confluent Units for Kafka (CKUs), ensuring your cluster is right-sized for its workload.
Cluster load metric aggregations: Also for Dedicated clusters, this metric provides a clear view of how utilized your cluster is, with a load graph in the Cloud Console that shows usage percentages, helping you monitor and manage performance to avoid latency spikes.
Additionally, you can now easily identify outdated client versions and view producer-side latency directly in the UI.
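Metrics like these are exposed through the Confluent Cloud Metrics API, so a cluster-load query might look like the payload below; the metric identifier and cluster ID are illustrative assumptions, so consult the Metrics API reference for the exact names:

```json
{
  "aggregations": [
    { "metric": "io.confluent.kafka.server/cluster_load_percent" }
  ],
  "filter": {
    "field": "resource.kafka.id",
    "op": "EQ",
    "value": "lkc-12345"
  },
  "granularity": "PT1M",
  "intervals": ["2024-10-01T00:00:00Z/PT1H"]
}
```

Posting a query like this returns per-minute load percentages you can feed into your own dashboards or alerting alongside the Cloud Console's load graph.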
Earlier this year, we announced that our cloud-native Apache Flink service supports private networking on AWS for Enterprise clusters. This represents an important first step in providing a secure environment for data streaming workloads and ensuring compliance with regulations like GDPR and CCPA. Today, we're thrilled to extend this capability to Dedicated clusters, which are used by many customers for large-scale, mission-critical applications.
We plan to extend Flink private networking support to additional cloud platforms and cluster types soon.
HTTP Source and Sink V2 connectors
Two of Confluent Cloud’s most popular connectors among our customers are the fully managed HTTP Source and Sink connectors, which integrate API-based applications (e.g., SaaS apps and microservices) with Confluent via HTTP or HTTPS. Today, we are excited to introduce the upgraded HTTP Source and Sink V2 connectors, which deliver key improvements such as streamlined configuration via OpenAPI standards, the ability to handle multiple API paths, and support for the OAuth 2.0 Authorization Code grant flow coming later this year.
We recommend that current users of the HTTP connectors migrate to V2 using custom offsets and that new users start with the HTTP V2 connectors to get access to all the latest enhancements.
Ready to get started? If you haven’t done so already, sign up for a free trial of Confluent Cloud to explore the new features. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CL60BLOG for an additional $60 of free usage.*
The preceding outlines our general product direction and is not a commitment to deliver any material, code, or functionality. The development, release, timing, and pricing of any features or functionality described may change. Customers should make their purchase decisions based on services, features, and functions that are currently available.
Confluent and associated marks are trademarks or registered trademarks of Confluent, Inc.
Apache®, Apache Kafka®, and Apache Flink® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by the Apache Software Foundation is implied by using these marks. All other trademarks are the property of their respective owners.