Kafka Connector

The Kafka connector enables near real-time catalog synchronization by consuming messages from your Kafka topics and ingesting them into Constructor. Unlike batch-based connectors, it processes data continuously: messages are buffered and ingested as they arrive.

Why use it?

  • Near real-time ingestion - data is ingested shortly after it's published to your Kafka topics, no scheduled syncs required
  • Event-driven architecture - only changed data is processed, using a delta ingestion strategy
  • Flexible message format - send messages in any JSON structure; Constructor maps them to the catalog model
  • Entity support - both items and variations are supported
  • Confluent compatible - works with Confluent Cloud and self-managed Kafka clusters

How it works

  1. Your system publishes catalog update messages to one or more Kafka topics (see the producer sketch after this list)
  2. Constructor's Kafka listener consumes messages continuously
  3. Messages are buffered and flushed in batches (every 60 seconds or at 10,000 messages, whichever comes first)
  4. Each batch is transformed and ingested into Constructor via the streaming API using a delta strategy - only changes are applied to your catalog
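For illustration, step 1 might look like the sketch below, using the confluent-kafka Python client. The broker addresses, topic name, and payload fields are all hypothetical; your actual message structure is agreed during onboarding.

```python
import json

from confluent_kafka import Producer

# Hypothetical broker list; substitute your own cluster addresses.
producer = Producer({"bootstrap.servers": "broker-1:9092,broker-2:9092"})

# Illustrative payload only - the message structure is up to you and is
# mapped to Constructor's catalog model during onboarding.
message = {
    "id": "SKU-12345",
    "name": "Trail Running Shoe",
    "price": 129.99,
    "in_stock": True,
}

# Keying by item ID keeps all updates for the same item on the same
# partition, preserving their relative order.
producer.produce(
    "catalog-updates",
    key=message["id"],
    value=json.dumps(message),
)
producer.flush()  # block until the message is delivered
```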
📘

Heads up!

This is a push-based connector: Constructor listens to your topics and consumes messages as they arrive.

There is no polling, and there are no scheduled syncs involved.

Supported message formats

Messages must be valid JSON and must include the data to be ingested or updated.

For Confluent instances, the connector is also compatible with the Confluent Schema Registry wire format; the magic-byte prefix is handled automatically.
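The connector does this for you, so no action is needed on your side. For reference, the wire format prepends a 5-byte header to each message: a zero "magic byte" followed by a 4-byte big-endian schema ID. A sketch of what stripping it looks like:

```python
import json
import struct

MAGIC_BYTE = 0

def decode_value(raw: bytes) -> dict:
    """Decode a message value that may carry the Confluent Schema Registry
    wire-format prefix (1 magic byte + 4-byte big-endian schema ID)."""
    if len(raw) > 5 and raw[0] == MAGIC_BYTE:
        schema_id = struct.unpack(">I", raw[1:5])[0]  # available if needed
        raw = raw[5:]  # strip the prefix, leaving the JSON payload
    return json.loads(raw)
```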

📘

Need help?

For details on structuring your message payloads, contact our team.

Supported compression codecs

The following compression codecs are supported on your Kafka topics:

  • GZIP
  • Snappy
  • LZ4
  • ZSTD
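Compression is typically enabled on the producer (or via the topic-level compression.type setting); the connector decompresses messages transparently. A minimal sketch with the confluent-kafka Python client, using ZSTD as the example codec:

```python
from confluent_kafka import Producer

# 'compression.type' is a librdkafka setting; it accepts
# gzip, snappy, lz4, or zstd (zstd shown here as an example).
producer = Producer({
    "bootstrap.servers": "broker-1:9092,broker-2:9092",
    "compression.type": "zstd",
})
```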

Connection setup

To configure a Kafka connector, the following information is required:

  • Brokers - comma-separated broker addresses (host:port)
  • Topics - comma-separated list of Kafka topic names to consume from
  • Group ID - Kafka consumer group ID for offset tracking
  • Client ID - OAuth client identifier
  • Client Secret - OAuth client secret
  • Auth Token Endpoint - OAuth token URL for authentication
  • Cluster ID - logical cluster identifier (e.g., Confluent Cloud cluster ID)
  • Identity Pool ID - identity pool identifier (e.g., Confluent Cloud identity pool)
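For illustration, a Confluent Cloud setup might gather values like the following (all placeholders):

```python
# Hypothetical Confluent Cloud values; substitute your own.
connection = {
    "brokers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "topics": "catalog-items,catalog-variations",
    "group_id": "constructor-catalog-sync",
    "client_id": "my-oauth-client-id",
    "client_secret": "my-oauth-client-secret",
    "auth_token_endpoint": "https://idp.example.com/oauth2/token",
    "cluster_id": "lkc-xxxxx",
    "identity_pool_id": "pool-xxxxx",
}
```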

Authentication

The connector authenticates with your Kafka cluster via OAuth 2.0 (client credentials grant) using the SASL OAUTHBEARER mechanism over SSL. All connections are encrypted - plaintext connections are not supported.
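Constructor's listener configuration is managed for you, but as a sketch of how these settings fit together (useful, for example, to verify that your cluster accepts OAuth connections), here is an equivalent consumer built with the confluent-kafka Python client. The OIDC keys shown are librdkafka configuration settings, and all values are placeholders:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "group.id": "constructor-catalog-sync",
    # Encrypted transport with OAuth 2.0 client-credentials auth.
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "OAUTHBEARER",
    "sasl.oauthbearer.method": "oidc",
    "sasl.oauthbearer.client.id": "my-oauth-client-id",
    "sasl.oauthbearer.client.secret": "my-oauth-client-secret",
    "sasl.oauthbearer.token.endpoint.url": "https://idp.example.com/oauth2/token",
    # Confluent Cloud routes the connection using these extensions.
    "sasl.oauthbearer.extensions": "logicalCluster=lkc-xxxxx,identityPoolId=pool-xxxxx",
})
consumer.subscribe(["catalog-items", "catalog-variations"])
```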

📘

Using a different setup?

We're happy to support your use case - for details on configuring authentication for your specific environment, contact our team.

Data mapping and transformation

Messages published to your Kafka topics can have any structure. During onboarding, Constructor configures a custom transformation that maps your message payload into Constructor's catalog model.

Supported entity types:

  • Items - your core product catalog entries
  • Variations - product variants (e.g., size, color)
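As a purely illustrative example (the actual field names and mapping are defined with our team during onboarding), a message carrying an item together with one of its variations might look like:

```python
# Hypothetical message shape; your real schema is mapped to
# Constructor's catalog model by the onboarding transformation.
message = {
    "type": "item",
    "id": "SHOE-001",
    "data": {
        "name": "Trail Running Shoe",
        "url": "https://example.com/products/shoe-001",
    },
    "variations": [
        {
            "id": "SHOE-001-BLUE-42",
            "data": {"color": "blue", "size": "42"},
        }
    ],
}
```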