
Logging

Laminar uses structured JSON logging with Vector for collection and GreptimeDB for storage.

Collection Architecture

Desktop Mode

Laminar Backend ──► Log files ──► Vector ──► GreptimeDB
                   (/tmp/laminar/logs)

Vector reads log files and parses JSON (both Rust and Pino formats).

Kubernetes Mode

Pod stdout/stderr ──► Vector (DaemonSet) ──► GreptimeDB
                      kubernetes_logs source

Vector uses the kubernetes_logs source to collect logs from all pods.

Log Format

Laminar components emit structured JSON logs.

Rust Components (Backend)

{
  "timestamp": "2024-01-15T10:30:00.000Z",
  "level": "INFO",
  "target": "laminar_controller::scheduler",
  "fields": {
    "message": "Starting pipeline",
    "pipeline_id": "pl_abc123",
    "job_id": "job_xyz789"
  }
}

Node.js Components (Console)

Pino format with numeric levels:

{
  "level": 30,
  "time": 1705315800000,
  "msg": "Request completed",
  "method": "GET",
  "path": "/api/pipelines"
}

Log Fields

After Vector processing, both formats are normalized (Pino's numeric levels are mapped to level strings) and logs are stored with these fields:

Field               Type           Description
greptime_timestamp  Timestamp      Log timestamp
level               String         Log level (info, warn, error)
target              String         Logger target (e.g., laminar_controller)
message             String         Log message
fields              String (JSON)  Additional structured fields
namespace           String         Kubernetes namespace (K8s only)
pod                 String         Pod name (K8s only)
container           String         Container name (K8s only)
node                String         Node name (K8s only)
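
Because the Kubernetes metadata is stored as ordinary columns, it can be queried directly. A minimal sketch (the pod name below is hypothetical):

-- Logs from a single pod (Kubernetes mode; hypothetical pod name)
SELECT greptime_timestamp, level, message
FROM laminar_logs.logs
WHERE pod = 'laminar-controller-0'
ORDER BY greptime_timestamp DESC
LIMIT 50;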

Log Storage

Logs are stored in GreptimeDB:

  • Database: laminar_logs
  • Table: logs
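
To verify the setup, you can list databases and tables over the MySQL protocol (a minimal check, assuming the defaults above):

-- Confirm the database and table exist
SHOW DATABASES;
SHOW TABLES FROM laminar_logs;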

Pre-defined Views

The laminar_log_views database provides pre-defined views for common queries:

View                    Filter
api_logs                target LIKE 'laminar_api%'
controller_logs         target LIKE 'laminar_controller%'
operator_logs           target LIKE 'laminar_operator%'
worker_logs             target LIKE 'laminar_worker%'
scheduler_logs          target LIKE '%scheduler%'
error_logs              level = 'error'
warn_logs               level = 'warn'
panic_logs              message/fields LIKE '%panic%'
events                  target = 'laminar_event'
job_logs                fields LIKE '%job_%'
pipeline_logs           Pipeline-related logs
job_failures_logs       message LIKE '%failed%' OR message LIKE '%error%'
state_transitions_logs  State change logs
startup_logs            Startup messages
task_logs               Task-related logs
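
Each view is a thin filter over laminar_logs.logs. As a sketch of the pattern (assuming GreptimeDB's CREATE VIEW support; the shipped view definitions may differ):

CREATE VIEW laminar_log_views.error_logs AS
SELECT *
FROM laminar_logs.logs
WHERE level = 'error';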

Querying Logs

Grafana

Use the GreptimeDB-Logs datasource (MySQL protocol):

-- Recent errors
SELECT greptime_timestamp, level, target, message
FROM laminar_logs.logs
WHERE level = 'error'
  AND greptime_timestamp > NOW() - INTERVAL '1 hour'
ORDER BY greptime_timestamp DESC
LIMIT 100;
 
-- Using pre-defined views
SELECT * FROM laminar_log_views.error_logs
WHERE greptime_timestamp > NOW() - INTERVAL '1 hour'
LIMIT 100;
 
-- Controller logs for specific pipeline
SELECT greptime_timestamp, message, fields
FROM laminar_log_views.controller_logs
WHERE fields LIKE '%pl_abc123%'
ORDER BY greptime_timestamp DESC;

GreptimeDB HTTP API

curl -X POST "http://localhost:4000/v1/sql" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "sql=SELECT * FROM laminar_log_views.error_logs LIMIT 10"

Log Levels

Level  Numeric (Pino)  Description
trace  10              Detailed tracing
debug  20              Debug information
info   30              Normal operations
warn   40              Warnings
error  50+             Errors
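
Since both formats are normalized to the same string level column, a single query summarizes log volume by severity:

-- Log volume by level over the last 24 hours
SELECT level, COUNT(*) AS n
FROM laminar_logs.logs
WHERE greptime_timestamp > NOW() - INTERVAL '24 hours'
GROUP BY level
ORDER BY n DESC;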

Retention

Log retention is managed by GreptimeDB. Configure retention policies via GreptimeDB settings or by periodically purging old data:

-- Delete logs older than 7 days
DELETE FROM laminar_logs.logs
WHERE greptime_timestamp < NOW() - INTERVAL '7 days';
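
If periodic deletes are undesirable, GreptimeDB also supports a table-level ttl option that expires old rows automatically (a sketch; verify the option against your GreptimeDB version):

-- Expire rows older than 7 days automatically (assumes ttl table option support)
ALTER TABLE laminar_logs.logs SET 'ttl' = '7d';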