# Connectors

Laminar supports connectors for ingesting streaming data and writing results to data lakes.
## Overview

Connectors in Laminar are categorized into:

- Sources: where your streaming data originates
- Sinks: where your processed data is written
## Available Connectors

### Sources

#### Kafka

The Apache Kafka distributed streaming platform. Supports standard Kafka clusters with a range of authentication mechanisms.
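Laminar's Kafka connector syntax is covered in its own documentation; as a general illustration of the connection and authentication settings involved, here is what a SASL-authenticated connection looks like with the standard confluent-kafka Python client. The broker address, topic, and credentials are placeholders:

```python
# Minimal sketch of typical Kafka connection settings, shown with the
# confluent-kafka Python client. All values below are placeholders.
from confluent_kafka import Consumer

conf = {
    "bootstrap.servers": "broker-1.example.com:9092",  # host:port of the cluster
    "group.id": "laminar-demo",                        # consumer group for offset tracking
    "auto.offset.reset": "earliest",                   # where to start with no stored offset
    # SASL/SCRAM over TLS; PLAINTEXT or mTLS setups are also common
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "SCRAM-SHA-256",
    "sasl.username": "ingest-user",
    "sasl.password": "change-me",
}

consumer = Consumer(conf)
consumer.subscribe(["events"])
```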
#### Confluent Cloud

Fully managed Kafka service by Confluent. Supports Confluent Schema Registry integration.
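To illustrate what Schema Registry integration entails, here is a minimal sketch using the confluent-kafka Python client's Schema Registry support; the registry URL and API key/secret pair are placeholders:

```python
# Sketch of Confluent Schema Registry access. Confluent Cloud
# authenticates registry requests with an API key/secret pair.
from confluent_kafka.schema_registry import SchemaRegistryClient

sr = SchemaRegistryClient({
    "url": "https://psrc-xxxxx.us-east-1.aws.confluent.cloud",
    "basic.auth.user.info": "SR_API_KEY:SR_API_SECRET",
})

# Fetch the latest registered schema for a topic's value subject
latest = sr.get_latest_version("events-value")
print(latest.schema.schema_str)
```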
### Sinks

#### Apache Iceberg

The Apache Iceberg table format for data lakes. Supports S3, HDFS, and other storage backends.
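Writing to Iceberg is handled by the connector itself, but a short sketch with the pyiceberg library illustrates the catalog and object-store settings involved; the catalog URI, endpoint, and credentials here are placeholders:

```python
# Sketch of reaching an Iceberg table on S3 via a REST catalog with
# pyiceberg. All connection values below are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "default",
    **{
        "uri": "http://rest-catalog.example.com:8181",      # REST catalog endpoint
        "s3.endpoint": "https://s3.us-east-1.amazonaws.com",
        "s3.access-key-id": "AKIA...",
        "s3.secret-access-key": "change-me",
    },
)

table = catalog.load_table("analytics.events")
```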
## Connection Profiles

Many connectors support Connection Profiles, which let you store authentication credentials and connection settings separately from your pipeline configuration. This enables the following, as the sketch after the list illustrates:
- Reusable connection configurations
- Secure credential management
- Easy switching between environments (dev/staging/prod)
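Laminar's actual profile format is documented separately; as a purely hypothetical sketch of the separation involved, consider profiles keyed by name, with pipelines referring to them only by that name:

```python
# Hypothetical sketch of what a Connection Profile separates out.
# These field names are illustrative, not Laminar's actual schema.
PROFILES = {
    "kafka-prod": {
        "bootstrap_servers": "broker-1.prod.example.com:9092",
        "auth": {"mechanism": "SCRAM-SHA-256", "username": "svc-prod", "password": "***"},
    },
    "kafka-dev": {
        "bootstrap_servers": "localhost:9092",
        "auth": None,  # no auth against a local dev broker
    },
}

# A pipeline references a profile by name, so switching environments
# means changing one string rather than editing credentials inline.
pipeline = {"source": {"profile": "kafka-dev", "topic": "events"}}
```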
> **Best Practice:** Always use Connection Profiles for production deployments to keep your credentials secure and separate from your pipeline definitions.
## Connector Configuration

Each connector has its own set of configuration parameters. Generally, you'll need to specify the following; a hypothetical example combining them appears after the list:
- Connection details: Host, port, credentials
- Data format: JSON, Avro, Protobuf, etc.
- Schema: The structure of your data
- Advanced options: Buffer sizes, timeout settings, etc.
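As a rough illustration of how these four categories fit together, here is a hypothetical source configuration expressed as a Python dict. The field names and structure are illustrative only and do not reflect Laminar's actual configuration schema:

```python
# Hypothetical connector configuration tying the four categories
# together; every field name here is illustrative.
source_config = {
    # Connection details, or a reference to a Connection Profile
    "profile": "kafka-prod",
    "topic": "events",
    # Data format of the incoming messages
    "format": "json",
    # Schema: the structure of each record
    "schema": {
        "fields": [
            {"name": "user_id", "type": "string"},
            {"name": "amount", "type": "double"},
            {"name": "ts", "type": "timestamp"},
        ]
    },
    # Advanced options: buffering and timeout tuning
    "advanced": {"buffer_size": 16384, "read_timeout_ms": 5000},
}
```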
## Next Steps
- Explore individual connector documentation
- Learn about Connection Tables
- Set up Connection Profiles