Work in Progress: This page is under development.

Connectors

Laminar supports connectors for ingesting streaming data and writing results to data lakes.

Overview

Connectors in Laminar are categorized into:

  • Sources: Where your streaming data originates
  • Sinks: Where your processed data is written

Available Connectors

Sources

Kafka

The Apache Kafka distributed event streaming platform. Supports standard Kafka clusters with various authentication mechanisms.

Learn more →
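
To make that configuration surface concrete, here is a minimal sketch of what a Kafka source definition typically involves: broker addresses, a topic, and one of several authentication mechanisms. The key names below are illustrative placeholders, not Laminar's actual connector schema; see the Kafka connector page for the real options.

```python
# Illustrative sketch only: these key names are hypothetical and do not
# reflect Laminar's actual Kafka connector schema.

# A plaintext connection to a self-hosted cluster.
kafka_source_plain = {
    "connector": "kafka",
    "bootstrap_servers": "broker-1:9092,broker-2:9092",
    "topic": "events",
    "auth": {"mechanism": "none"},
}

# The same source using SASL/SCRAM, a common authentication mechanism
# for standard Kafka clusters.
kafka_source_sasl = {
    "connector": "kafka",
    "bootstrap_servers": "broker-1:9093",
    "topic": "events",
    "auth": {
        "mechanism": "sasl_scram_sha_256",
        "username": "pipeline-reader",
        "password": "********",  # better kept in a Connection Profile (see below)
    },
}
```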

Confluent Cloud

A fully managed Kafka service from Confluent. Supports Confluent Schema Registry integration.

Learn more →

AWS Kinesis

Amazon Kinesis, AWS's managed streaming service for real-time data ingestion.

Learn more →

Sinks

Apache Iceberg

The Apache Iceberg open table format for data lakes. Supports S3, HDFS, and other storage backends.

Learn more →
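
As a rough sketch, an Iceberg sink boils down to a catalog, a target table, and a warehouse location, with the storage backend determined by the warehouse URI. The keys below are illustrative, not Laminar's actual schema; see the Iceberg connector page for the real options.

```python
# Illustrative sketch only: hypothetical keys, not Laminar's actual schema.

# An Iceberg sink writing to an S3-backed warehouse. Switching the
# storage backend typically means changing only the warehouse URI.
iceberg_sink = {
    "connector": "iceberg",
    "catalog": "my_catalog",
    "namespace": "analytics",
    "table": "processed_events",
    "warehouse": "s3://my-bucket/warehouse",  # or "hdfs://namenode:8020/warehouse"
}
```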

Connection Profiles

Many connectors support Connection Profiles, which let you store authentication credentials and connection settings separately from your pipeline configuration.

This enables:

  • Reusable connection configurations
  • Secure credential management
  • Easy switching between environments (dev/staging/prod)
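
As an illustration of the idea, the sketch below shows a hypothetical profile store and a pipeline source that references a profile by name. The structure is invented for this example and is not Laminar's actual API.

```python
# Illustrative sketch only: hypothetical structure, not Laminar's actual API.

# Connection Profiles keep credentials and connection settings in one
# place, defined once per environment.
profiles = {
    "kafka-dev": {
        "bootstrap_servers": "localhost:9092",
        "auth": {"mechanism": "none"},
    },
    "kafka-prod": {
        "bootstrap_servers": "broker-1:9093,broker-2:9093",
        "auth": {
            "mechanism": "sasl_scram_sha_256",
            "username": "pipeline-reader",
            "password": "********",  # sourced from a secret store, not the pipeline
        },
    },
}

# The pipeline definition references a profile by name. Promoting the
# pipeline from dev to prod means changing only this reference.
pipeline_source = {
    "connector": "kafka",
    "connection_profile": "kafka-prod",
    "topic": "events",
}
```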

Best Practice

Always use Connection Profiles for production deployments to keep your credentials secure and separate from your pipeline definitions.

Connector Configuration

Each connector has its own set of configuration parameters. Generally, you'll need to specify:

  1. Connection details: Host, port, credentials
  2. Data format: JSON, Avro, Protobuf, etc.
  3. Schema: The structure of your data
  4. Advanced options: Buffer sizes, timeout settings, etc.
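
Putting the four categories together, a complete source configuration might look like the sketch below. Again, the key names are illustrative rather than Laminar's actual schema; consult each connector's page for its real parameters.

```python
# Illustrative sketch only: hypothetical keys, not Laminar's actual schema.
# One configuration touching all four parameter categories listed above.
source_config = {
    # 1. Connection details: host, port, credentials
    "bootstrap_servers": "broker-1:9093",
    "connection_profile": "kafka-prod",

    # 2. Data format
    "format": "json",

    # 3. Schema: the structure of the data
    "schema": {
        "event_id": "string",
        "user_id": "string",
        "amount": "double",
        "occurred_at": "timestamp",
    },

    # 4. Advanced options: buffering, timeouts
    "max_buffer_size": 10_000,
    "request_timeout_ms": 30_000,
}
```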

Next Steps