Sequin comes with managed consumers that forward API data from your streams to popular targets. One of those targets is a Postgres database.

Querying your API data in Postgres gives you higher throughput, lower latency, and more expressive filtering than reading it over HTTP.

Sequin’s Postgres consumer is simple, fast, and efficient. It’s great for most read-at-rest use cases where you want to query your raw API data without the limitations of HTTP. You can build more complex read patterns on top of the Postgres consumer with views and SQL joins against your internal data.

However, if you have a more complex use case, such as augmenting your API data with data from other sources, you can build your own consumer. See the limitations section to learn more.


To set up a Postgres consumer, you’ll first register your Postgres database as a target and then configure your Postgres consumer.

During target setup, you’ll create a user on your Postgres database that Sequin will use to connect. You can restrict Sequin’s user to only have access to the schema and/or tables it needs to write to.

During consumer setup, you’ll specify which provider or sync you want to forward to your database (e.g., all Salesforce syncs or a specific Salesforce sync). Sequin will give you the create table statement you need to run to create the table Sequin will write to.

For more details on setup, see the setup section below.

Note that you provide only a single table name in the consumer configuration. Sequin’s Postgres consumer writes all of your API data to a single table:

                  Table "my_schema.my_table"
       Column        |           Type           | Nullable
---------------------+--------------------------+----------
 sequin_id           | uuid                     | not null
 sync_id             | uuid                     | not null
 sync_external_id    | text                     | null
 sync_metadata       | jsonb                    | not null
 collection_id       | text                     | not null
 upstream_id         | text                     | not null
 payload             | jsonb                    | not null
 updated_idx         | bigint                   | not null
 upstream_updated_at | timestamp with time zone | not null
 upstream_created_at | timestamp with time zone | not null
 inserted_at         | timestamp with time zone | not null
 updated_at          | timestamp with time zone | not null
Indexes:
    "my_table_pkey" PRIMARY KEY, btree (sequin_id)
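Based on the columns above, the create table statement Sequin provides will look roughly like the following sketch (the Sequin console generates the exact statement for your consumer, so treat this as illustrative):

```sql
-- Illustrative sketch only: the console provides the exact statement.
create table my_schema.my_table (
  sequin_id           uuid primary key,
  sync_id             uuid not null,
  sync_external_id    text,
  sync_metadata       jsonb not null,
  collection_id       text not null,
  upstream_id         text not null,
  payload             jsonb not null,
  updated_idx         bigint not null,
  upstream_updated_at timestamp with time zone not null,
  upstream_created_at timestamp with time zone not null,
  inserted_at         timestamp with time zone not null,
  updated_at          timestamp with time zone not null
);
```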

See views to learn how to build filtered and transformed tables on top of the target table.

This table has a generic schema so that a row can contain data from any sync or collection. The raw API data is stored in the payload column as JSONB.
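Because each record’s raw API data lives in the payload column, you can query any API field with Postgres’s JSONB operators. A sketch, assuming a hypothetical contact collection with an email field:

```sql
-- The 'contact' collection and 'email' field are hypothetical examples.
select
  upstream_id,
  payload ->> 'email' as email
from my_schema.my_table
where collection_id = 'contact'
  and payload ->> 'email' like '%@example.com';
```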

Views allow you to customize table schemas on top of this generic structure. There are many benefits of this approach, outlined below.

After setup, Sequin will backfill this table with all the records for the sync or syncs. It will then work continuously in real time, pulling records off your records stream and performing creates, updates, and deletes in your database.

If Sequin’s Postgres consumer falls behind, you’ll see an unhealthy status for your consumer in the console and Management API. Furthermore, Sequin will send you an email alert.

If a target is offline or unreachable, Sequin will automatically pause any attached consumers until the target is healthy again. You’ll see an “unhealthy” status in the console and Management API with reference to the unhealthy target. Because Sequin’s Postgres consumer uses the records stream for its sync, it can be offline indefinitely and catch up when it comes back online.

Sequin’s Postgres consumer is located in AWS’ us-west-2.


Create a schema for your Sequin tables

Although not strictly required, we highly recommend you create a separate schema for your Sequin tables. This will make it easier to manage permissions for Sequin’s user and prevent you from accidentally granting Sequin access to sensitive internal tables:

create schema YOUR_SCHEMA;

Create a Postgres user for Sequin

You’ll create a Postgres user for Sequin and then supply Sequin with the credentials. Sequin will use this Postgres user to connect to your database and perform upserts to your target tables:

1. Create a user

create user sequin with encrypted password '▀▀▀▀▀▀';

2. Grant permissions

Sequin needs permission to:

  • Connect to your database
  • Read from and write to tables currently in your Sequin schema
  • Read from and write to tables that you add to your Sequin schema in the future

Those permissions:

grant connect, create on database YOUR_DBNAME to sequin;
-- only usage is strictly necessary, though `create` helps Sequin in diagnostic situations
grant usage, create on schema YOUR_SCHEMA to sequin;
-- grant permissions on all tables currently present in the schema to Sequin
grant all privileges on all tables in schema YOUR_SCHEMA to sequin;
-- grant permissions on all tables created in the future to Sequin
alter default privileges in schema YOUR_SCHEMA grant all on tables to sequin;

Add the database to Sequin (console)

After creating a Postgres user for Sequin, you’ll connect Sequin to your Postgres database.

Click on “Targets” in the Sequin console and then click on “Add target.” That will bring up Sequin’s database connection flow:

You’ll be prompted for details like your database’s hostname, port, and database name as well as the credentials for the Postgres user you created for Sequin. You can have Sequin connect to your database via a bastion host if you wish.

Whitelist Sequin’s IP addresses

Sequin will connect to your database from the following IP addresses:


Add the database to Sequin (API)

Alternatively, you can add databases to Sequin via our Management API.

Set up the consumer

The consumer is the worker that will pull events off your Sequin records stream and perform upserts to your database. You can have one consumer pull data from multiple syncs writing to one database target.

To make table setup and migrations easy, Sequin will provide you with the create table statement you need to run to create the tables Sequin will write to.

In the Sequin console, click on “Consumers” and then click on “Add consumer.” That will bring up Sequin’s Postgres consumer setup flow:

1. Select target

Select your Postgres database as the target.

2. Configure filtering

Postgres consumers filter either by provider or sync.

If you select a provider, the consumer will forward records for all syncs from that provider. This scales to thousands of syncs.

If you select a sync, the consumer will only forward records from that sync.

3. Configure your table

Enter the Postgres schema and table name of the table that Sequin should write to.

Ensure that the Postgres user used during the target setup has access to this schema and table.

All records are upserted to a single table for the consumer. See views to learn how to build filtered and transformed tables on top of the target table.

4. Create tables

Next, Sequin will provide you with the create table statement you need to run to create the table Sequin will write to. You can run this statement using your database client of choice.

Sequin will verify that the schema is correct. Once everything is ready, click “Start consumer” to create the consumer and start consuming records to your database.


Postgres views are a powerful feature when used with Sequin’s Postgres consumer.

What the Postgres docs say about views

Making liberal use of views is a key aspect of good SQL database design. Views allow you to encapsulate the details of the structure of your tables, which might change as your application evolves, behind consistent interfaces.

Views can be used in almost any place a real table can be used.

Postgres views act almost identically to Postgres tables. They allow you to manage the schema of your API data in your database while Sequin manages the inserts, updates, and deletes.

With views, you can:

  • Filter and transform the Sequin-managed table into views with just the records and columns that matter to your application.
  • Manage your migrations in your application code without needing to involve Sequin.

Filter and transform

Views allow you to create a “table” of leads by filtering on HubSpot records:

	create view hubspot_leads as
	select * from my_schema.my_table
	where payload ->> 'lifecyclestage' = 'lead';

Which gives you a view of just HubSpot leads. You can also select just the fields relevant to your application:

	create view hubspot_leads as
	select
		upstream_id,
		payload ->> 'hubspot_owner_id' as hubspot_owner_id,
		(payload ->> 'createdate')::timestamp as createdate
	from my_schema.my_table
	where payload ->> 'lifecyclestage' = 'lead';

You can then query this view exactly like a table of leads with the three specified columns.

                  View "public.hubspot_leads"
      Column      |            Type             | Nullable
------------------+-----------------------------+----------
 upstream_id      | text                        |
 hubspot_owner_id | text                        |
 createdate       | timestamp without time zone |
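For example, you could query the view like any other table (the seven-day window is just an illustration):

```sql
-- Find leads created in the last week.
select createdate, hubspot_owner_id
from hubspot_leads
where createdate > now() - interval '7 days';
```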

Manage your migrations

Sequin manages one table with all of your API data and you manage your views. This allows you to decouple your view migrations from Sequin.

The advantages of this decoupling are:

  • You keep your view migrations in version control alongside your application code.
  • You choose when to deploy and run view migrations without relying on Sequin.
  • You decouple the schema of your data from the contents of the data.


Sequin’s Postgres consumer was designed to provide a fast, turn-key method to mirror API data to your database. As such, it has some limitations to be aware of:

No soft deletes: When a record is deleted in the API, Sequin will delete the corresponding record in your database.

All-to-1 mapping: All records are upserted to a single table per consumer. See views to learn how to build filtered and transformed tables on top of the target table.

Schema restrictions: Sequin’s Postgres consumer expects the target table to have certain columns and specific constraints. (Sequin will specify what these are when you set up the database.) Straying from this structure will cause the consumer to fail.

No constraints: You shouldn’t add your own constraints to the tables Sequin writes to. For example, you shouldn’t add a unique constraint to a column. All too often, such constraints will break the consumer.

How do constraints break consumers?

There are two common reasons:

  1. Often, APIs do not enforce constraints on their data in the same way Postgres does. For example, in Salesforce, you can specify a custom field as not null but Salesforce will let old records with a null value for that field persist. Postgres, on the other hand, does not allow null values for not null fields. This mismatch between API constraints and database constraints causes issues.
  2. There are race conditions that can cause constraints to fail. While Sequin’s streams guarantee strict ordering of events for a given record, they don’t guarantee strict ordering of events across records. This out-of-order arrival of events means there can be brief periods where, e.g., two records have the same value for a field. If you have a unique constraint on that field, this will cause the consumer to fail.
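For example, a unique index on a payload field like the following sketch (the email field is hypothetical) can fail during one of these brief out-of-order windows and halt the consumer:

```sql
-- Don't do this on a Sequin-managed table:
create unique index my_table_email_uniq
  on my_schema.my_table ((payload ->> 'email'));
```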

No foreign keys: As with constraints, you shouldn’t add any foreign keys to the table Sequin writes to. There’s no guarantee that Sequin will forward the parent of a record before the child. This means that you can end up with brief periods where a child record references a parent record that doesn’t exist in the database yet.

These “parent-child” races are usually not an issue as parents normally arrive before their children. But they can crop up and cause the consumer to halt.

If any of these limitations are a deal-breaker for you, you can consider building your own consumer.

Schema lifecycle


Syncs

You can add syncs without modifying your consumer. A consumer that is configured with a provider will automatically start forwarding records from new syncs with the same provider.

You can configure a new consumer per sync if you want each sync’s data in a separate database, schema, or table.

Removing a sync from Sequin will not automatically delete the records from your target table. This allows you to control the lifecycle of deleted syncs.

You can safely run a delete statement on your target table where sync_id = <deleted-sync-id>.
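For example, assuming the target table from the setup section:

```sql
-- Replace <deleted-sync-id> with the id of the sync you removed.
delete from my_schema.my_table
where sync_id = '<deleted-sync-id>';
```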


Collections

You can add collections to syncs and the data will automatically populate in your target table.

If you remove a collection from a sync, new data is no longer forwarded but existing records are not automatically deleted from target tables.

This allows you to control the lifecycle of deleted collections.

You can safely run a delete statement on your target table where collection_id = <deleted-collection-id>.

Development databases

In development, every developer on your team can set up a separate consumer that writes to their own database.

Developer databases are valid targets if either:

  • They are cloud-hosted, or
  • The developer exposes their local database using a reverse proxy like ngrok

Sequin’s Postgres consumers always use the same table format, and you control your view schemas. This means developers can manage views and related application code in a feature branch and safely merge and deploy those changes to staging and production.

If you need automated workflows to create or modify consumers or targets, you can write a script using our Management API to achieve this.

We’re working on a Terraform provider that will make it easy to create and manage development databases for Sequin. That way, new developers on your team can create consumers with configurations that match everyone else’s. And when a developer makes a change to their consumer, they can easily share that change with the rest of the team.

If you’re interested in being an early tester of this Terraform provider, please let us know.