# Kafka Connector
The Kafka connector enables near real-time catalog synchronization by consuming messages from your Kafka topics and ingesting them into Constructor. Unlike batch-based connectors, data flows continuously - messages are buffered and ingested as they arrive.
## Why use it?
- Near real-time ingestion - data is ingested shortly after it's published to your Kafka topics, no scheduled syncs required
- Event-driven architecture - only changed data is processed, using a delta ingestion strategy
- Flexible message format - send messages in any JSON structure; Constructor maps them to the catalog model
- Entity support - supports items and variations
- Confluent compatible - works with Confluent Cloud and self-managed Kafka clusters
## How it works
- Your system publishes catalog update messages to one or more Kafka topics
- Constructor's Kafka listener consumes messages continuously
- Messages are buffered and flushed in batches (every 60 seconds or at 10,000 messages, whichever comes first)
- Each batch is transformed and ingested into Constructor via the streaming API using a delta strategy - only changes are applied to your catalog
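The buffer-and-flush behavior above can be sketched as follows. This is a minimal illustration, not Constructor's actual implementation: the thresholds mirror the documented 60-second / 10,000-message limits, and `ingest_batch` is a hypothetical callback standing in for the call to the streaming API.

```python
import time

FLUSH_INTERVAL_S = 60        # flush at least once a minute...
FLUSH_MAX_MESSAGES = 10_000  # ...or as soon as 10,000 messages are buffered


class MessageBuffer:
    """Buffers consumed Kafka messages and flushes them in batches."""

    def __init__(self, ingest_batch, clock=time.monotonic):
        self.ingest_batch = ingest_batch  # callback that ingests one batch
        self.clock = clock                # injectable clock (eases testing)
        self.messages = []
        self.last_flush = clock()

    def add(self, message):
        self.messages.append(message)
        if self.should_flush():
            self.flush()

    def should_flush(self):
        # Whichever threshold is reached first triggers a flush.
        return (len(self.messages) >= FLUSH_MAX_MESSAGES
                or self.clock() - self.last_flush >= FLUSH_INTERVAL_S)

    def flush(self):
        if self.messages:
            self.ingest_batch(self.messages)
            self.messages = []
        self.last_flush = self.clock()
```

In a real consumer loop, `add` would be called once per consumed record, with a periodic timer ensuring the time-based flush fires even when no new messages arrive.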
**Heads up!** This is a push-based connector: Constructor listens to your topics. No polling or scheduled syncs are involved.
## Supported message formats
Messages must be valid JSON and must include the data to be ingested or updated.
For Confluent instances, the connector is also compatible with the Confluent Schema Registry wire format (the magic-byte prefix is handled automatically).
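The Schema Registry wire format prepends a single magic byte (`0x00`) and a 4-byte big-endian schema ID to the serialized payload. A minimal sketch of handling both framed and plain JSON messages on the consumer side:

```python
import json
import struct

MAGIC_BYTE = 0


def decode_message(raw: bytes) -> dict:
    """Parse a Kafka message value that may carry the Confluent Schema
    Registry wire-format prefix (magic byte + 4-byte schema ID)."""
    if len(raw) > 5 and raw[0] == MAGIC_BYTE:
        schema_id = struct.unpack(">I", raw[1:5])[0]  # big-endian schema ID
        payload = raw[5:]
    else:
        payload = raw  # plain JSON message, no prefix
    return json.loads(payload)
```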
**Need help?** For details on structuring your message payloads, contact our team.
## Supported compression codecs
The following compression codecs are supported on your Kafka topics:
- GZIP
- Snappy
- LZ4
- ZSTD
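Compression is configured on the producer side. As an illustrative sketch (librdkafka-style configuration keys; the broker addresses are placeholders), any of the codecs above may be set:

```python
# Codecs the connector accepts on consumed topics.
SUPPORTED_CODECS = {"gzip", "snappy", "lz4", "zstd"}

# Producer-side configuration (confluent-kafka / librdkafka style).
producer_config = {
    "bootstrap.servers": "broker1:9092,broker2:9092",  # placeholder addresses
    "compression.type": "zstd",  # or "gzip" / "snappy" / "lz4"
}
```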
## Connection setup
To configure a Kafka connector, the following information is required:
| Field | Description |
|---|---|
| Brokers | Comma-separated broker addresses (host:port) |
| Topics | Comma-separated list of Kafka topic names to consume from |
| Group ID | Kafka consumer group ID for offset tracking |
| Client ID | OAuth client identifier |
| Client Secret | OAuth client secret |
| Auth Token Endpoint | OAuth token URL for authentication |
| Cluster ID | Logical cluster identifier (e.g., Confluent Cloud cluster ID) |
| Identity Pool ID | Identity pool identifier (e.g., Confluent Cloud identity pool) |
## Authentication
The connector authenticates with your Kafka cluster via OAuth 2.0 (client credentials grant) using the SASL OAUTHBEARER mechanism over SSL. All connections are encrypted - plaintext connections are not supported.
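For a sense of how the connection-setup fields map onto client configuration, here is a sketch assuming a librdkafka-based client (e.g. confluent-kafka-python) with OIDC-based OAUTHBEARER support. All values are illustrative placeholders, and this mirrors, rather than defines, Constructor's internal setup:

```python
consumer_config = {
    # Brokers and consumer group (Brokers / Group ID fields)
    "bootstrap.servers": "pkc-xxxxx.us-east-1.aws.confluent.cloud:9092",
    "group.id": "constructor-catalog-sync",
    # OAuth 2.0 client-credentials grant over SASL_SSL (no plaintext)
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "OAUTHBEARER",
    "sasl.oauthbearer.method": "oidc",
    "sasl.oauthbearer.client.id": "my-client-id",          # Client ID field
    "sasl.oauthbearer.client.secret": "my-client-secret",  # Client Secret field
    "sasl.oauthbearer.token.endpoint.url": "https://auth.example.com/oauth/token",
    # Cluster ID and Identity Pool ID, passed as SASL extensions
    "sasl.oauthbearer.extensions": "logicalCluster=lkc-xxxxx,identityPoolId=pool-xxxxx",
}
```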
**Using a different setup?** We're happy to support your use case. For details on configuring authentication for your specific environment, contact our team.
## Data mapping and transformation
Messages published to your Kafka topics can have any structure. During onboarding, Constructor configures a custom transformation that maps your message payload into Constructor's catalog model.
Supported entity types:
- Items - your core product catalog entries
- Variations - product variants (e.g., size, color)
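Since the payload structure is flexible, the shapes below are purely illustrative: every field name here is hypothetical, and the onboarding transformation maps your actual structure to Constructor's catalog model.

```python
import json

# Hypothetical item update message; your real field names can differ.
item_message = {
    "type": "item",
    "id": "SKU-12345",
    "name": "Trail Running Shoe",
    "data": {"price": 129.99, "url": "https://example.com/sku-12345"},
}

# Hypothetical variation message referencing its parent item.
variation_message = {
    "type": "variation",
    "id": "SKU-12345-RED-42",
    "item_id": "SKU-12345",
    "data": {"color": "red", "size": "42"},
}

# Messages are published to the topic as JSON-encoded bytes.
encoded = json.dumps(item_message).encode("utf-8")
```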