Sequin comes with managed consumers you can connect to your streams that will forward data to popular targets. One of those targets is a Kafka cluster.

You can filter the data you want a consumer to handle by specifying syncs, collections, and event types. You can also route data to different topics based on its properties.

Kafka is also a great option if you want to build your own consumer. You can route all events or records to Kafka first, then forward from Kafka to your application or a database.


Your Kafka consumer can consume records from the records stream or events from the events stream.

You’ll create Kafka topics on your cluster according to your needs. For example, you can create a topic for each collection you want to consume. Then, you’ll connect Sequin to your Kafka cluster and specify how you want Sequin to map records or events to topics.

A single Kafka consumer can consume records or events from multiple syncs and collections.

You can choose to initialize your Kafka consumer at the beginning or at the end of the stream.

If a Kafka consumer is offline or unreachable for a long time, Sequin’s system will automatically turn the consumer off. You’ll see a “disabled” status in the console and Management API. You can turn the consumer back on at any time.

Sequin’s Kafka consumer is located in AWS’ us-west-2.


Create your Kafka topic(s)

You’ll route records or events from Sequin to topics in your Kafka cluster. Create those topics so Sequin can push messages to them. You can change the topics Sequin pushes to at any time.
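For a self-hosted cluster, you can create topics with Kafka’s kafka-topics.sh CLI. The hostname, topic name, partition count, and replication factor below are illustrative; adjust them for your cluster and workload:

```shell
# Create one topic per collection you plan to consume
kafka-topics.sh --create \
  --bootstrap-server <kafka-host>:9092 \
  --topic salesforce-contacts \
  --partitions 3 \
  --replication-factor 3
```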

Create a Kafka user for Sequin

You’ll create a Kafka user for Sequin and then supply Sequin with the credentials. Sequin will use this Kafka user to connect to your cluster and write messages to your target topics.

Below are the instructions for setting up Sequin with a self-hosted Kafka setup. If you’re using a managed Kafka service, consult their documentation for how to create a user and grant access to topics.

1. Enable SASL/SCRAM for Authentication (if needed)

First, ensure that your Kafka cluster is configured for SASL/SCRAM authentication. This is typically done in the Kafka server properties file (server.properties).

You’ll also need to enable ACL for authorization.

Add or update the following properties:
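As a sketch, the relevant server.properties entries might look like the following. The listener configuration and mechanism choice are illustrative, and the authorizer class name shown is the one used by Kafka 2.4+ (older versions used kafka.security.auth.SimpleAclAuthorizer):

```properties
# Enable SASL/SCRAM on the broker listener (illustrative listener config)
listeners=SASL_PLAINTEXT://0.0.0.0:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

# Enable ACL authorization and deny access by default
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```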


2. Restart Zookeeper and Kafka (if needed)

If you changed the server properties in the prior step, you’ll need to find a convenient time to restart your cluster for the changes to take effect.

3. Create a Kafka user

Use the kafka-configs.sh script to add a user: kafka-configs.sh --zookeeper &lt;zookeeper-host&gt;:&lt;port&gt; --alter --add-config 'SCRAM-SHA-256=[password=&lt;password&gt;]' --entity-type users --entity-name sequin
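You can confirm the credential was stored with the same script’s --describe flag (hostnames are placeholders for your cluster):

```shell
# Confirm the SCRAM credential exists for the sequin user
kafka-configs.sh --zookeeper <zookeeper-host>:<port> \
  --describe --entity-type users --entity-name sequin
```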

4. Set ACLs for the user

You can choose to give Sequin read and write access to each topic or just write access. Write access is all that’s strictly necessary, although sometimes read access can be helpful for debugging purposes.

For read access: kafka-acls.sh --authorizer-properties zookeeper.connect=&lt;zookeeper-host&gt;:&lt;port&gt; --add --allow-principal User:sequin --operation Read --topic {{topicname}}

For write access: kafka-acls.sh --authorizer-properties zookeeper.connect=&lt;zookeeper-host&gt;:&lt;port&gt; --add --allow-principal User:sequin --operation Write --topic {{topicname}}
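After adding the grants, you can list the ACLs attached to a topic to verify them (hostnames are placeholders for your cluster):

```shell
# List the ACLs now attached to the topic to confirm the grants
kafka-acls.sh --authorizer-properties zookeeper.connect=<zookeeper-host>:<port> \
  --list --topic {{topicname}}
```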

Add the Kafka cluster to Sequin (console)

After creating a Kafka user for Sequin, you’ll connect Sequin to your Kafka cluster.

Click on “Targets” in the Sequin console and then click on “Add new → Kafka.” That will bring up Sequin’s Kafka connection flow.

You’ll be prompted for details like your cluster’s hostname, port, and the credentials for the Kafka user you created for Sequin. You can have Sequin connect to your cluster via a bastion host if you wish.

Add the Kafka cluster to Sequin (API)

Alternatively, you can add clusters to Sequin via our Management API.

Set up the consumer

The consumer is the worker cluster that will pull objects off your Sequin streams and push them to your Kafka cluster. You can have one consumer pull data from multiple collections across multiple syncs and write to one Kafka cluster.

In the Sequin console, click on “Consumers” and then click on “Add new → Kafka.” That will bring up Sequin’s Kafka consumer setup flow:

1. Select target

Select your Kafka cluster as the target.

2. Select stream

Select which stream you want to consume from, the record or event stream.

The record stream is great if downstream consumers only need to know the current state of an API object. This is the case in situations where you’ll be pulling from Kafka into a database. You can configure your Kafka topic to use log compaction so that the topic efficiently retains the latest copy of every record.
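For example, assuming Sequin writes each record keyed by its ID, you might create the record-stream topic with compaction enabled (hostname and topic name are illustrative):

```shell
# A compacted topic retains at least the latest message per key,
# so it can hold a current copy of every record
kafka-topics.sh --create \
  --bootstrap-server <kafka-host>:9092 \
  --topic salesforce-contacts \
  --config cleanup.policy=compact
```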

Otherwise, the event stream will give you each change event for all API objects.

3. Select syncs

Select which syncs you want this consumer to pull data from. You can change this list of syncs at any time in the future.

You can have one Kafka consumer pull data from one sync or thousands of syncs.

4. Select collections

Select which collections you want this consumer to consume, for example Salesforce contacts or Stripe subscriptions.

5. Map syncs and/or collections to topics

Next, specify how you want Sequin to map records or events to topics. You can map all records or events to one topic, or split them across topics based on the sync or collection.

6. Select start position

You can have your Kafka consumer start consuming from the beginning of the stream or from the end.
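Kafka’s own consumer tooling offers an analogous choice when you later read from your topics. With the stock console consumer, --from-beginning reads from the start of the topic, while omitting the flag reads only newly produced messages (hostname and topic are illustrative):

```shell
# Read everything in the topic from the beginning
kafka-console-consumer.sh --bootstrap-server <kafka-host>:9092 \
  --topic salesforce-contacts --from-beginning

# Omit --from-beginning to read only messages produced from now on
kafka-console-consumer.sh --bootstrap-server <kafka-host>:9092 \
  --topic salesforce-contacts
```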

7. Save

Save your consumer. Sequin will provision a consumer cluster and begin populating your Kafka topic right away.

Don’t have a Kafka cluster?

If you don’t have a Kafka cluster, we recommend you check out Upstash. They offer a fully managed Kafka service that’s easy to set up and use.