# LMNR CLI

LMNR is a kubectl-style command-line tool for managing Laminar pipelines, connection profiles, and tables. It communicates with the Laminar API via HTTP and uses YAML manifests for declarative resource management.
## Quick Reference

| Command | Description |
|---|---|
| `lmnr apply -f <file>` | Create or update resources from YAML |
| `lmnr list <type>` | List resources (pipelines, profiles, tables, jobs) |
| `lmnr get <type> <id>` | Get detailed resource information |
| `lmnr describe <type> <id>` | Show comprehensive resource details |
| `lmnr delete <type> <id>` | Delete a resource by ID |
| `lmnr delete -f <file>` | Delete resources defined in YAML |
| `lmnr summary` | Display cluster overview |
| `lmnr cleanup` | Remove all resources |
## Configuration

### Environment Variables

```bash
# API endpoint (default: <laminar_backend>/api/v1)
export LAMINAR_API_URL="<laminar_backend>/api/v1"

# Optional: authentication token
export LAMINAR_TOKEN="your-auth-token"
```

### Global Options

All commands support these options:

| Option | Description | Default |
|---|---|---|
| `--api-url <URL>` | Laminar API URL | `<laminar_backend>/api/v1` |
| `--token <TOKEN>` | Authentication token | - |
| `-o, --output <FORMAT>` | Output format: `table`, `yaml`, `json` | `table` |
| `-h, --help` | Print help | - |
| `-V, --version` | Print version | - |
### Configuration Precedence

Options are resolved in this order (highest to lowest priority):

1. Command-line flags (`--api-url`, `--token`)
2. Environment variables (`LAMINAR_API_URL`, `LAMINAR_TOKEN`)
3. Default values
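The precedence rules can be sketched in shell. The resolver below is an illustrative model of the lookup order, not lmnr's actual implementation, and the default URL is a hypothetical stand-in for `<laminar_backend>/api/v1`:

```shell
# Illustrative model of option resolution: a command-line flag wins,
# then the environment variable, then the built-in default.
resolve_api_url() {
  local flag_value="$1"
  local default_url="http://localhost:8000/api/v1"   # hypothetical default
  if [ -n "$flag_value" ]; then
    echo "$flag_value"                        # 1. command-line flag
  else
    echo "${LAMINAR_API_URL:-$default_url}"   # 2. env var, else 3. default
  fi
}

export LAMINAR_API_URL="http://env.example.com/api/v1"
resolve_api_url ""                                  # env var beats default
resolve_api_url "http://flag.example.com/api/v1"    # flag beats env var
```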
## Commands

### apply

Create or update resources from YAML manifest files. Supports both individual files and directories.

```bash
lmnr apply -f <FILE_OR_DIRECTORY>
```

Options:

| Option | Description |
|---|---|
| `-f, --file <PATH>` | Path to YAML manifest file or directory (required) |

Features:

- Directory support: When given a directory, recursively processes all `.yaml` and `.yml` files
- Profile name resolution: Tables can reference profiles by name instead of ID; LMNR automatically resolves the name to the correct ID
- Multi-resource files: Process multiple resources in a single file, separated by `---`
- Idempotent: Safely re-run without creating duplicates
Examples:

```bash
# Apply a single manifest file
lmnr apply -f pipeline.yaml

# Apply all manifests in a directory
lmnr apply -f ./manifests/

# Apply with custom API URL
lmnr apply -f pipeline.yaml --api-url http://laminar.example.com:8000/api/v1
```

Output:

```
Parsed 3 resource(s) from pipeline.yaml
[1/3] Applying Table 'events'...
✓ Created Table 'events' (ID: ct_7DYJlexDEz)
[2/3] Applying Table 'output'...
✓ Created Table 'output' (ID: ct_Uyukf2ajPK)
[3/3] Applying Pipeline 'my-first-pipeline'...
✓ Created Pipeline 'my-first-pipeline' (ID: pl_Irnc9kACAC)

All resources applied successfully
```
### list

List Laminar resources of a specific type. Automatically handles pagination for large result sets.

```bash
lmnr list <RESOURCE_TYPE> [OPTIONS]
```

Resource Types:

| Type | Description |
|---|---|
| `pipelines` | List all pipelines |
| `profiles` | List connection profiles |
| `tables` | List connection tables |
| `jobs` | List jobs (optionally filtered by pipeline) |

Options:

| Option | Description |
|---|---|
| `--pipeline-id <ID>` | Filter jobs by pipeline ID (only for `jobs`) |

Examples:

```bash
# List all pipelines
lmnr list pipelines

# List all tables
lmnr list tables

# List all profiles
lmnr list profiles

# List jobs for a specific pipeline
lmnr list jobs --pipeline-id pl_Irnc9kACAC

# Output as YAML
lmnr list pipelines -o yaml

# Output as JSON for scripting
lmnr list pipelines -o json
```

Output Examples:
Pipelines:

```
┌───────────────────┬───────────────┬─────────────┬────────────┬─────────┐
│ NAME              ┆ ID            ┆ PARALLELISM ┆ CHECKPOINT ┆ STATUS  │
╞═══════════════════╪═══════════════╪═════════════╪════════════╪═════════╡
│ my-first-pipeline ┆ pl_Irnc9kACAC ┆ 1           ┆ 1s         ┆ Running │
└───────────────────┴───────────────┴─────────────┴────────────┴─────────┘
```

Tables:

```
┌────────┬───────────────┬───────────┬─────────┬─────────┐
│ NAME   ┆ ID            ┆ CONNECTOR ┆ PROFILE ┆ CREATED │
╞════════╪═══════════════╪═══════════╪═════════╪═════════╡
│ output ┆ ct_Uyukf2ajPK ┆ preview   ┆ -       ┆ 5m      │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┤
│ events ┆ ct_7DYJlexDEz ┆ mock      ┆ -       ┆ 5m      │
└────────┴───────────────┴───────────┴─────────┴─────────┘
```

Profiles:

```
┌────────────┬────────────────┬───────────┐
│ NAME       ┆ ID             ┆ CONNECTOR │
╞════════════╪════════════════╪═══════════╡
│ kafka-prod ┆ prof_abc123def ┆ kafka     │
└────────────┴────────────────┴───────────┘
```

Jobs:

```
┌─────────────────┬─────────────────┬─────────┬───────┬─────────┐
│ ID              ┆ RUN_ID          ┆ STATUS  ┆ TASKS ┆ AGE     │
╞═════════════════╪═════════════════╪═════════╪═══════╪═════════╡
│ job_xyz789      ┆ run_001         ┆ Running ┆ 4     ┆ 2h      │
└─────────────────┴─────────────────┴─────────┴───────┴─────────┘
```
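Because `-o json` emits machine-readable output, listings can feed scripts. This is a hypothetical sketch: the array-of-objects shape (each object carrying an `id` field) is an assumption, not a documented schema, and the `sample` variable stands in for a real `lmnr list pipelines -o json` call:

```shell
# Hypothetical sketch: pull resource IDs out of `-o json` output for
# scripting. The JSON shape here is an assumption; in real use, replace
# `echo "$sample"` with `lmnr list pipelines -o json`.
sample='[{"name":"my-first-pipeline","id":"pl_Irnc9kACAC"}]'

pipeline_ids=$(echo "$sample" | python3 -c '
import json, sys
for item in json.load(sys.stdin):
    print(item["id"])
')

for pid in $pipeline_ids; do
  echo "inspecting $pid"    # e.g. lmnr get pipelines "$pid"
done
```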
### get

Get detailed information about a specific resource.

```bash
lmnr get <RESOURCE_TYPE> <ID>
```

Resource Types: `pipelines`, `profiles`, `tables`, `jobs`

Examples:

```bash
# Get pipeline details
lmnr get pipelines pl_Irnc9kACAC

# Get table details as YAML
lmnr get tables ct_7DYJlexDEz -o yaml

# Get profile details as JSON
lmnr get profiles prof_456 -o json

# Get job details
lmnr get jobs job_xyz789
```

Output (pipeline):
```
Pipeline Details
==================================================
Name: my-first-pipeline
ID: pl_Irnc9kACAC
Status: Running
Parallelism: 1
Checkpoint Interval: 1s
Preview: false
Created: 2025-12-03 02:24:50 UTC (5m)

Query:
--------------------------------------------------
INSERT INTO output SELECT id, value * 2 as doubled_value, timestamp FROM events WHERE value > 50

Execution Graph:
--------------------------------------------------
Nodes: 3
  Mock<SchemaDriven<3 fields, 100 eps>> (mock) - parallelism: 1
  events -> watermark -> sink projection (chained_op) - parallelism: 1
  PreviewSink (preview) - parallelism: 1
Edges: 2
  0 -> 1
  1 -> 4
```

Output (job):

```
Job Details
==================================================
ID: job_xyz789
Run ID: run_001
State: Running
Running Desired: true
Start Time: 2025-12-03 02:24:50 UTC
Tasks: 4
Created: 2025-12-03 02:24:50 UTC (2h)
```
### describe

Show comprehensive details about a resource, including configuration, status, execution graph, and related resources.

```bash
lmnr describe <RESOURCE_TYPE> <ID>
```

Resource Types: `pipelines`, `jobs` (only these two types are supported)

Examples:

```bash
# Describe a pipeline (includes recent jobs, operators, execution graph)
lmnr describe pipelines pl_Irnc9kACAC

# Describe a job (includes execution details, failure messages if any)
lmnr describe jobs job_xyz789
```

Output (pipeline describe):
```
Pipeline Details
==================================================
Name: my-first-pipeline
ID: pl_Irnc9kACAC
Status: Running
Parallelism: 1
Checkpoint Interval: 1s
Preview: false
Created: 2025-12-03 02:24:50 UTC (5m)

Query:
--------------------------------------------------
INSERT INTO output SELECT id, value * 2 as doubled_value, timestamp FROM events WHERE value > 50

Execution Graph:
--------------------------------------------------
Nodes: 3
  Mock<SchemaDriven<3 fields, 100 eps>> (mock) - parallelism: 1
  events -> watermark -> sink projection (chained_op) - parallelism: 1
  PreviewSink (preview) - parallelism: 1
Edges: 2
  0 -> 1
  1 -> 4

Recent Jobs:
--------------------------------------------------
┌─────────────────┬─────────────────┬─────────┬───────┬─────────┐
│ ID              ┆ RUN_ID          ┆ STATUS  ┆ TASKS ┆ AGE     │
╞═════════════════╪═════════════════╪═════════╪═══════╪═════════╡
│ job_xyz789      ┆ run_001         ┆ Running ┆ 4     ┆ 5m      │
└─────────────────┴─────────────────┴─────────┴───────┴─────────┘
```
### delete

Delete Laminar resources by ID or from a manifest file.

```bash
# Delete by ID
lmnr delete <RESOURCE_TYPE> <ID>

# Delete from manifest file
lmnr delete -f <FILE>
```

Resource Types: `pipelines`, `profiles`, `tables`, `jobs`

Options:

| Option | Description |
|---|---|
| `-f, --file <FILE>` | Delete resources defined in a YAML file |

Deletion Order (file mode):

When deleting from a manifest file, resources are deleted in reverse dependency order:

1. Pipelines (first)
2. Tables
3. Profiles (last)

Examples:

```bash
# Delete a pipeline by ID
lmnr delete pipelines pl_Irnc9kACAC

# Delete a table by ID
lmnr delete tables ct_7DYJlexDEz

# Delete a profile by ID
lmnr delete profiles prof_456

# Delete a job by ID
lmnr delete jobs job_xyz789

# Delete all resources defined in a manifest
lmnr delete -f pipeline.yaml
```

### summary
Display a cluster overview showing all resources.

```bash
lmnr summary
```

Output:

```
Cluster Summary

Connection Profiles
No profiles found

Connection Tables
┌────────┬───────────────┬───────────┬─────────┬─────────┐
│ NAME   ┆ ID            ┆ CONNECTOR ┆ PROFILE ┆ CREATED │
╞════════╪═══════════════╪═══════════╪═════════╪═════════╡
│ output ┆ ct_Uyukf2ajPK ┆ preview   ┆ -       ┆ 5m      │
├╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌┤
│ events ┆ ct_7DYJlexDEz ┆ mock      ┆ -       ┆ 5m      │
└────────┴───────────────┴───────────┴─────────┴─────────┘

Pipelines
┌───────────────────┬───────────────┬─────────────┬────────────┬─────────┐
│ NAME              ┆ ID            ┆ PARALLELISM ┆ CHECKPOINT ┆ STATUS  │
╞═══════════════════╪═══════════════╪═════════════╪════════════╪═════════╡
│ my-first-pipeline ┆ pl_Irnc9kACAC ┆ 1           ┆ 1s         ┆ Running │
└───────────────────┴───────────────┴─────────────┴────────────┴─────────┘

Jobs
No jobs found
```
### cleanup

Remove all resources from the cluster. Use with caution, as this operation cannot be undone.

```bash
lmnr cleanup [--force]
```

Options:

| Option | Description |
|---|---|
| `--force` | Skip the confirmation prompt |

Operation Order:

The cleanup command removes resources in the following order to respect dependencies:

1. Stop all running pipelines (with retry logic)
2. Delete all pipelines (with retry logic)
3. Delete all tables
4. Delete all profiles

Examples:

```bash
# With confirmation prompt
lmnr cleanup

# Skip confirmation (useful for scripts)
lmnr cleanup --force
```

Output:

```
⚠️ This will delete ALL resources from the cluster.
Are you sure you want to continue? [y/N]: y

Stopping pipelines...
✓ Stopped pipeline 'my-first-pipeline'

Deleting pipelines...
✓ Deleted pipeline 'my-first-pipeline'

Deleting tables...
✓ Deleted table 'output'
✓ Deleted table 'events'

Deleting profiles...
No profiles to delete

Cleanup complete
```
## YAML Manifest Format

LMNR uses Kubernetes-style YAML manifests for defining resources.

### Resource Structure

```yaml
---
apiVersion: laminar.io/v1
kind: <ResourceKind>
spec:
  name: <resource-name>
  # ... resource-specific configuration
```

Supported Resource Kinds:

- `Profile` - Connection profiles
- `Table` - Connection tables
- `Pipeline` - Data pipelines
### Multiple Resources in One File

Combine multiple resources using the `---` separator:

```yaml
---
apiVersion: laminar.io/v1
kind: Profile
spec:
  name: kafka-local
  # ...
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: events
  # ...
---
apiVersion: laminar.io/v1
kind: Pipeline
spec:
  name: my-pipeline
  # ...
```

### Profile
Connection profiles define how to connect to external systems like Kafka, Iceberg, or cloud services.

```yaml
---
apiVersion: laminar.io/v1
kind: Profile
spec:
  name: kafka-prod
  connector: kafka
  config:
    bootstrap_servers: kafka.example.com:9092
    authentication:
      type: sasl
      sasl_config:
        protocol: SASL_SSL
        mechanism: SCRAM-SHA-512
        username: my-user
        password: my-password
```

Supported Connectors:

| Connector | Description | Requires Profile |
|---|---|---|
| `kafka` | Apache Kafka | Yes |
| `confluent` | Confluent Cloud | Yes |
| `iceberg` | Apache Iceberg | Yes |
| `kinesis` | AWS Kinesis | Yes |
| `delta` | Delta Lake | Yes |
| `filesystem` | Local/S3/GCS filesystem | Yes |
| `mock` | Mock data generator | No |
| `preview` | Preview sink | No |
| `stdout` | Standard output sink | No |
### Table

Tables define source and sink connections with their schemas.

Profile Reference:

Tables can reference profiles by name or ID. When using a name, LMNR automatically resolves it to the correct profile ID:

```yaml
# Reference by name (recommended)
connection_profile_id: kafka-prod

# Reference by ID
connection_profile_id: prof_abc123def
```

Examples:
#### Source Table (Mock)

```yaml
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: events
  connector: mock
  config:
    SchemaDriven:
      name: events
      fields:
        - name: id
          type: int64
          generator:
            generator_type: sequence
            start: 1
        - name: value
          type: float64
          generator:
            generator_type: range
            min: 0
            max: 100
        - name: timestamp
          type: timestamp
          generator:
            generator_type: datetime
      generation:
        mode: streaming
        rate: 100.0
  schema:
    format:
      json: {}
    fields:
      - field_name: id
        field_type:
          type:
            primitive: Int64
        nullable: false
      - field_name: value
        field_type:
          type:
            primitive: F64
        nullable: false
      - field_name: timestamp
        field_type:
          type:
            primitive: DateTime
        nullable: false
```

#### Source Table (Kafka)
```yaml
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: orders-source
  connector: kafka
  connection_profile_id: kafka-prod
  config:
    topic: orders
    type:
      source_config:
        offset: earliest
  schema:
    format:
      json:
        timestampFormat: rfc3339
    fields:
      - field_name: order_id
        field_type:
          type:
            primitive: String
        nullable: false
      - field_name: amount
        field_type:
          type:
            primitive: F64
        nullable: false
      - field_name: timestamp
        field_type:
          type:
            primitive: DateTime
        nullable: false
```

#### Sink Table (Preview)
```yaml
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: output
  connector: preview
  config: {}
  schema:
    format:
      json: {}
    fields:
      - field_name: id
        field_type:
          type:
            primitive: Int64
        nullable: false
      - field_name: result
        field_type:
          type:
            primitive: F64
        nullable: false
```

#### Sink Table (Iceberg)
```yaml
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: events-sink
  connector: iceberg
  connection_profile_id: iceberg-lakehouse
  config:
    type: sink
    sink_table_config:
      namespace: my_database
      table_name: events
      rolling_policy:
        interval_seconds: 10
  schema:
    format:
      parquet: {}
    fields:
      - field_name: id
        field_type:
          type:
            primitive: Int64
        nullable: false
      - field_name: value
        field_type:
          type:
            primitive: F64
        nullable: false
```

### Pipeline
Pipelines define SQL transformations that process data from source tables to sink tables.

```yaml
---
apiVersion: laminar.io/v1
kind: Pipeline
spec:
  name: my-pipeline
  query: |
    INSERT INTO output
    SELECT id, value * 2 as doubled_value, timestamp
    FROM events
    WHERE value > 50
  parallelism: 1
  checkpoint_interval_micros: 1000000
```

Pipeline Fields:

| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Pipeline name |
| `query` | string | Yes | SQL query with `INSERT INTO ... SELECT` |
| `parallelism` | integer | No | Number of parallel tasks (default: 1) |
| `checkpoint_interval_micros` | integer | No | Checkpoint interval in microseconds (default: 1000000 = 1s) |
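Since `checkpoint_interval_micros` takes microseconds (1 s = 1,000,000 µs), a small helper can compute the value for an interval given in seconds; the function name here is just for illustration:

```shell
# Convert a checkpoint interval from seconds to microseconds
# (the unit checkpoint_interval_micros expects).
seconds_to_micros() {
  echo $(( $1 * 1000000 ))
}

seconds_to_micros 1    # -> 1000000 (the 1 s default)
seconds_to_micros 30   # -> 30000000 (a 30 s checkpoint interval)
```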
## Complete Example

### 1. Create the manifest

Create `my-pipeline.yaml`:
```yaml
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: events
  connector: mock
  config:
    SchemaDriven:
      name: events
      fields:
        - name: id
          type: int64
          generator:
            generator_type: sequence
            start: 1
        - name: value
          type: float64
          generator:
            generator_type: range
            min: 0
            max: 100
        - name: timestamp
          type: timestamp
          generator:
            generator_type: datetime
      generation:
        mode: streaming
        rate: 100.0
  schema:
    format:
      json: {}
    fields:
      - field_name: id
        field_type:
          type:
            primitive: Int64
        nullable: false
      - field_name: value
        field_type:
          type:
            primitive: F64
        nullable: false
      - field_name: timestamp
        field_type:
          type:
            primitive: DateTime
        nullable: false
---
apiVersion: laminar.io/v1
kind: Table
spec:
  name: output
  connector: preview
  config: {}
  schema:
    format:
      json: {}
    fields:
      - field_name: id
        field_type:
          type:
            primitive: Int64
        nullable: false
      - field_name: doubled_value
        field_type:
          type:
            primitive: F64
        nullable: false
      - field_name: timestamp
        field_type:
          type:
            primitive: DateTime
        nullable: false
---
apiVersion: laminar.io/v1
kind: Pipeline
spec:
  name: my-first-pipeline
  query: |
    INSERT INTO output
    SELECT id, value * 2 as doubled_value, timestamp
    FROM events
    WHERE value > 50
  parallelism: 1
```

### 2. Deploy
```bash
lmnr apply -f my-pipeline.yaml
```

### 3. Verify
```bash
# Check cluster status
lmnr summary

# Get pipeline details
lmnr list pipelines
lmnr get pipelines <pipeline_id>

# View comprehensive details
lmnr describe pipelines <pipeline_id>
```

### 4. Clean up
```bash
# Delete resources from the manifest
lmnr delete -f my-pipeline.yaml

# Or delete everything
lmnr cleanup --force
```