Prerequisites
- Apache Kafka 3.2 or later installed in your environment
- (Optional) Confluent Cloud account if deploying on Confluent Cloud
Features
- Append-only writes with at-least-once delivery semantics
- Schema Registry support for Kafka message values
- Developed and maintained by Firebolt; verified by Confluent
- Supports all Firebolt data types except STRUCT and GEOGRAPHY
Quickstart
Follow this guide to set up the Firebolt Kafka Connect Sink on Confluent Cloud.
Firebolt details
To connect to Firebolt, you need the following information:
- Service account client ID and client secret
- Database name – the database that will contain the tables populated from Kafka topics
- Engine name – the engine that will run INSERT queries
- Account name – the Firebolt account that has access to the database
Kafka details
- Topic names – the topics that will be synced to Firebolt tables
- Kafka API key and secret – when deployed on Confluent Cloud, used to authenticate to Kafka
- Schema Registry API key and secret – if using Schema Registry on Confluent Cloud, used to authenticate to Schema Registry
Firebolt connector configuration
- Mandatory attributes
  - `firebolt.clientId` – client ID used to authenticate to Firebolt
  - `firebolt.clientSecret` – client secret corresponding to the client ID
  - `jdbc.connection.url` – JDBC connection URL used to connect to Firebolt. It must include the database name, account name, and engine name. Do not put the client ID and client secret in the JDBC connection URL; this attribute is not obfuscated when the connector definition is displayed.
  - `topics` – comma-delimited list of topics the connector listens to (for example: `mytopic1,mytopic2,mytopic3`)
  - `value.converter` – set to `io.confluent.connect.json.JsonSchemaConverter`
  - `key.converter` – set to `org.apache.kafka.connect.storage.StringConverter`
- Optional attributes
  - `topic.to.table.mapping` – if your topic names do not match your table names, use this property to map topics to tables. It is a comma-separated list of `topic_name:table_name` pairs (for example: `mytopic1:mytable1,mytopic2:mytable2`).
  - `value.converter.schema.registry.url` – URL of your Schema Registry, if used for the value schema
  - `value.converter.basic.auth.credentials.source` – set to `USER_INFO` if using an API key and secret to communicate with Schema Registry
  - `value.converter.schema.registry.basic.auth.user.info` – credentials in the format `api_key:api_secret`
  - `errors.deadletterqueue.topic.name` – dead-letter queue topic for messages that cannot be processed
  - `errors.deadletterqueue.context.headers.enable` – set to `true` to include failure context headers in the dead-letter queue
  - `errors.tolerance` – set to `all` so that Kafka messages that cannot be processed are sent to the dead-letter queue

An example configuration combining these attributes is shown below.
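The following is a minimal sketch of a connector configuration, in the JSON format accepted by the Kafka Connect REST API. All values are placeholders, and the exact `jdbc.connection.url` syntax is an assumption here; confirm it against the Firebolt JDBC driver documentation.

```json
{
  "name": "firebolt-sink",
  "config": {
    "connector.class": "com.firebolt.kafka.connect.FireboltSinkConnector",
    "topics": "mytopic1,mytopic2",
    "firebolt.clientId": "<service-account-client-id>",
    "firebolt.clientSecret": "<service-account-client-secret>",
    "jdbc.connection.url": "jdbc:firebolt:<database>?account=<account>&engine=<engine>",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",
    "value.converter.schema.registry.url": "https://<schema-registry-endpoint>",
    "value.converter.basic.auth.credentials.source": "USER_INFO",
    "value.converter.schema.registry.basic.auth.user.info": "<api_key>:<api_secret>",
    "topic.to.table.mapping": "mytopic1:mytable1,mytopic2:mytable2",
    "errors.tolerance": "all",
    "errors.deadletterqueue.topic.name": "firebolt-sink-dlq",
    "errors.deadletterqueue.context.headers.enable": "true"
  }
}
```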
Install Firebolt connector on Confluent Cloud
- In Confluent Cloud, navigate to the target cluster. Select Connectors in the left navigation and search for "Firebolt".

- The connector is verified by Confluent but is not managed by Confluent, so you need to download the archive.

- Create a new Custom Connector using the downloaded artifact.

- Configure the Firebolt connector.

- Connector plugin name – choose a name for your connector
- Connector class – `com.firebolt.kafka.connect.FireboltSinkConnector` (the custom connector class for Firebolt)
- Type – select `Sink` (Firebolt implements the Sink functionality)
- Connector archive – select the JAR file you downloaded in step 2
- Sensitive properties – the Firebolt Connect Sink has two sensitive properties (they are not shown in the UI or via REST):
  - `firebolt.clientId` – client ID used to authenticate to Firebolt
  - `firebolt.clientSecret` – client secret corresponding to the client ID




- You should now see the connector running on the Connectors page.

Troubleshoot installing Firebolt connector on Confluent Cloud
- Networking endpoints troubleshooting – the Kafka connector must declare in advance which egress endpoints it will call so that those IP addresses can be allowlisted.
Set endpoints for Firebolt authentication (`id.app.firebolt.io`) and for Firebolt backend API calls (`api.app.firebolt.io`). Some endpoints are dynamic (the account URL is specific to the account in your JDBC URL). Each endpoint may be served by multiple IP addresses because a reverse proxy sits in front of the services.
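For illustration, the egress endpoints declared for the custom connector could look like the list below. Port 443 is assumed because Firebolt's endpoints are served over HTTPS, and the account-specific hostname is a hypothetical placeholder that depends on the account in your JDBC URL.

```
id.app.firebolt.io:443
api.app.firebolt.io:443
<account-specific-hostname>.app.firebolt.io:443
```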



- Log messages indicating that the request is too large
  - Reduce the number of messages ingested in a Kafka batch by setting the `consumer.override.max.poll.records` property to a smaller value, as shown in the sketch after this list.
  - Contact support@firebolt.io to request an increase in the maximum payload request size.
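For example, the override can be added to the connector configuration alongside the other properties; the value of 500 below is an arbitrary illustration and should be tuned to your message sizes.

```json
{
  "consumer.override.max.poll.records": "500"
}
```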
The Kafka Sink connector is under active development, and the following features will be added in upcoming versions:
- Change data capture (CDC): not currently supported
- Schema evolution: not currently supported
- Avro format and Kafka message keys with Schema Registry: not currently supported