As part of our recent Q3 Launch for Confluent Cloud, we announced the general availability of the Confluent Terraform Provider. HashiCorp Terraform is an Infrastructure as Code tool that lets you define both cloud and on-prem resources in human-readable configuration files that you can version, reuse, and share, using standardized resource management tooling, pipelines, and processes.
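For example, telling Terraform to use the Confluent provider takes only a few lines of configuration. Here is a minimal sketch; the version constraint is an assumption, so check the Terraform Registry for the current release:

# Minimal Terraform configuration pulling in the Confluent provider
terraform {
  required_providers {
    confluent = {
      source  = "confluentinc/confluent"
      version = "~> 1.0" # assumption: pin to a recent 1.x release
    }
  }
}

provider "confluent" {
  # Credentials can also be supplied via environment variables,
  # as shown later in this post.
}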
The new Confluent Terraform Provider, built in partnership with HashiCorp, lets teams safely accelerate data streaming initiatives in the cloud, with infrastructure management that is fully automated through code and integrated within continuous delivery workflows.
Let’s dive a bit deeper to understand more about this new integration.
Resources are the most important element in the Terraform language. A Terraform resource describes one or more infrastructure objects, such as an Environment, Kafka Cluster, or API Key. With the Confluent Terraform Provider, you can manage Confluent Cloud resources like environments, Kafka clusters, topics, service accounts, API keys, and role bindings directly in Terraform.
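For instance, declaring a Confluent Cloud Environment takes just a single resource block. A minimal sketch, where the display name is hypothetical:

# Declare a Confluent Cloud Environment managed by Terraform
resource "confluent_environment" "staging" {
  display_name = "Staging" # hypothetical environment name
}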
Data sources allow you to load data from APIs or other Terraform workspaces without managing the underlying objects. The Confluent Terraform Provider includes data sources for reading existing Confluent Cloud objects, such as environments and Kafka clusters.
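For example, you can reference an Environment that already exists in Confluent Cloud and was created outside of Terraform. A sketch, where the ID is hypothetical:

# Look up an existing Environment by its ID (read-only)
data "confluent_environment" "existing" {
  id = "env-abc123" # hypothetical ID of an existing environment
}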
Now that you’ve learned a bit about the Confluent Terraform provider, let’s get started and put it to work.
You’ll need these prerequisites to follow along: a Confluent Cloud account with a Cloud API Key and Secret, Terraform installed locally, and the Confluent CLI (v2.0 or later).
Let’s test one of the pre-built sample configurations in the Confluent Terraform Provider GitHub repository.
git clone https://github.com/confluentinc/terraform-provider-confluent.git
cd terraform-provider-confluent/examples/configurations/standard-kafka-rbac
The main.tf file contains the Terraform resources that represent the infrastructure we will be building, including an Environment, a Kafka cluster, a Kafka topic, three Service Accounts, two API Keys, and four RBAC role bindings.
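To give a flavor of what’s inside, here is a simplified sketch of a Standard Kafka cluster resource similar to the one in main.tf (the name and region here are illustrative, not the sample’s exact values):

# A Standard Kafka cluster inside the Environment declared earlier
resource "confluent_kafka_cluster" "standard" {
  display_name = "inventory" # illustrative cluster name
  availability = "SINGLE_ZONE"
  cloud        = "AWS"
  region       = "us-east-2"
  standard {} # selects the Standard cluster type

  environment {
    id = confluent_environment.staging.id
  }
}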
Now let’s actually apply the Terraform configurations!
terraform init
export TF_VAR_confluent_cloud_api_key="<cloud_api_key>"
export TF_VAR_confluent_cloud_api_secret="<cloud_api_secret>"
# Note: the quotes ("") around the values are required
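The TF_VAR_ prefix is how Terraform maps environment variables to input variables; the sample configuration declares them along these lines (a sketch; the descriptions are mine):

variable "confluent_cloud_api_key" {
  description = "Confluent Cloud API Key"
  type        = string
  sensitive   = true # keeps the credential out of plan/apply output
}

variable "confluent_cloud_api_secret" {
  description = "Confluent Cloud API Secret"
  type        = string
  sensitive   = true
}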
terraform plan
terraform apply # then enter the confirmation: yes
Let’s make sure everything works as we expect!
confluent version
Your output should resemble:
...
Version: v2.5.1 # any version >= v2.0 is OK
...
terraform output resource-ids
# Alternatively, you could also run: terraform output -json resource-ids
Your output should resemble:
# 1. Log in to Confluent Cloud
$ confluent login

# 2. Produce key-value records to topic '<TOPIC_NAME>' by using <APP-PRODUCER'S NAME>'s Kafka API Key
$ confluent kafka topic produce <TOPIC_NAME> --environment <ENVIRONMENT_ID> --cluster <CLUSTER_ID> --api-key "<APP-PRODUCER'S KAFKA API KEY>" --api-secret "<APP-PRODUCER'S KAFKA API SECRET>"

# 3. Copy and paste each sample record row one at a time, then hit Enter:
{"number":1,"date":18500,"shipping_address":"899 W Evelyn Ave, Mountain View, CA 94041, USA","cost":15.00}
{"number":2,"date":18501,"shipping_address":"1 Bedford St, London WC2E 9HG, United Kingdom","cost":5.00}
{"number":3,"date":18502,"shipping_address":"3307 Northland Dr Suite 400, Austin, TX 78731, USA","cost":10.00}
# Press 'Ctrl-C' to stop the producer and return to your terminal prompt.

# 4. Consume records from topic '<TOPIC_NAME>' by using <APP-CONSUMER'S NAME>'s Kafka API Key
$ confluent kafka topic consume <TOPIC_NAME> --from-beginning --environment <ENVIRONMENT_ID> --cluster <CLUSTER_ID> --api-key "<APP-CONSUMER'S KAFKA API KEY>" --api-secret "<APP-CONSUMER'S KAFKA API SECRET>"
# When you are done, press 'Ctrl-C'.

# Note: values surrounded by <> will be dynamically updated for your specific deployment.
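These instructions come from an output block in the sample configuration. Conceptually it looks something like the sketch below; the value expression is hypothetical, since the sample emits the full instruction text shown above:

output "resource-ids" {
  sensitive = true # the value embeds Kafka API secrets
  value     = "Environment ID: ${confluent_environment.staging.id}" # hypothetical value expression
}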
Run the following command to destroy all the resources you created:
terraform destroy
This command destroys all the resources specified in your Terraform state. terraform destroy doesn’t destroy resources running elsewhere that aren’t managed by the current Terraform project.
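If there are resources you never want a destroy to remove, Terraform’s lifecycle meta-argument can guard them. A sketch, with a hypothetical production Environment:

resource "confluent_environment" "prod" {
  display_name = "Production" # hypothetical

  lifecycle {
    prevent_destroy = true # terraform destroy will error out instead of deleting this
  }
}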
We will continue to invest in Terraform to help our customers reduce complexity, risk, and time-to-market for data streaming infrastructure deployments. Look for new resources, data sources, bug fixes, and optimizations by following the Confluent Terraform Provider changelog on GitHub.
If you haven’t done so already, sign up for a free trial of Confluent Cloud, our cloud-native data streaming platform built by the founders of Apache Kafka®. New sign-ups receive $400 to spend within Confluent Cloud during their first 30 days. Use the code CL60BLOG for an additional $60 of free usage. Once you’ve created an account, you can try out our Terraform Sample Project guide yourself!
Please reach out to our team with any questions or requests by email at cflt-tf-access@confluent.io, or in the #terraform channel on the Confluent Community Slack.