Logging
Laminar uses structured JSON logging with Vector for collection and GrepTimeDB for storage.
Collection Architecture
Desktop Mode
Laminar Backend ──► Log files ──► Vector ──► GrepTimeDB
(/tmp/laminar/logs)
Vector reads log files and parses JSON (both Rust and Pino formats).
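For reference, the desktop-mode collection step can be sketched as a Vector configuration. This is a minimal illustration, not Laminar's shipped config: the component names and the *.log glob are assumptions.

```toml
# Desktop mode: tail Laminar's log files (illustrative component names)
[sources.laminar_files]
type = "file"
include = ["/tmp/laminar/logs/*.log"]

[transforms.parse_json]
type = "remap"
inputs = ["laminar_files"]
# With drop_on_error = false, lines that fail to parse are forwarded unchanged
drop_on_error = false
source = '''
# Parse each line as JSON; works for both the Rust and Pino formats
. = object!(parse_json!(string!(.message)))
'''
```

A sink pointing at GrepTimeDB would follow the transform; its exact configuration depends on the deployment.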
Kubernetes Mode
Pod stdout/stderr ──► Vector (DaemonSet) ──► GrepTimeDB
kubernetes_logs source
Vector uses the kubernetes_logs source to collect logs from all pods.
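A sketch of that source (the component name is illustrative, not Laminar's actual config):

```toml
# Kubernetes mode: collect stdout/stderr from every pod on the node.
# The DaemonSet deployment gives each Vector instance access to its node's logs.
[sources.pod_logs]
type = "kubernetes_logs"
# Downstream JSON parsing and the GrepTimeDB sink attach via inputs = ["pod_logs"]
```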
Log Format
Laminar components emit structured JSON logs.
Rust Components (Backend)
```json
{
  "timestamp": "2024-01-15T10:30:00.000Z",
  "level": "INFO",
  "target": "laminar_controller::scheduler",
  "fields": {
    "message": "Starting pipeline",
    "pipeline_id": "pl_abc123",
    "job_id": "job_xyz789"
  }
}
```
Node.js Components (Console)
Pino format with numeric levels:
```json
{
  "level": 30,
  "time": 1705315800000,
  "msg": "Request completed",
  "method": "GET",
  "path": "/api/pipelines"
}
```
Log Fields
After Vector processing, logs are stored with these fields:
| Field | Type | Description |
|---|---|---|
| greptime_timestamp | Timestamp | Log timestamp |
| level | String | Log level (info, warn, error) |
| target | String | Logger target (e.g., laminar_controller) |
| message | String | Log message |
| fields | String (JSON) | Additional structured fields |
| namespace | String | Kubernetes namespace (K8s only) |
| pod | String | Pod name (K8s only) |
| container | String | Container name (K8s only) |
| node | String | Node name (K8s only) |
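As an illustration of how raw log lines map onto these stored fields, here is a small Python sketch. The normalize function and its exact rules are assumptions for clarity, not Vector's actual pipeline:

```python
import json

# Standard Pino numeric levels -> names
PINO_LEVELS = {10: "trace", 20: "debug", 30: "info",
               40: "warn", 50: "error", 60: "fatal"}

def normalize(line: str) -> dict:
    """Map one raw JSON log line (Rust or Pino format) onto the
    stored field layout: level, target, message, fields."""
    raw = json.loads(line)
    if isinstance(raw.get("level"), int):
        # Pino (Node.js): numeric level, flat extra keys
        extra = {k: v for k, v in raw.items()
                 if k not in ("level", "time", "msg")}
        return {
            "level": PINO_LEVELS.get(raw["level"], str(raw["level"])),
            "target": "",  # Pino lines carry no logger target
            "message": raw.get("msg", ""),
            "fields": json.dumps(extra),
        }
    # Rust: string level, nested "fields" object holding message + extras
    fields = dict(raw.get("fields", {}))
    return {
        "level": str(raw.get("level", "")).lower(),
        "target": raw.get("target", ""),
        "message": fields.pop("message", ""),
        "fields": json.dumps(fields),
    }
```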
Log Storage
Logs are stored in GrepTimeDB:
- Database: laminar_logs
- Table: logs
Pre-defined Views
Views in laminar_log_views database for common queries:
| View | Filter |
|---|---|
| api_logs | target LIKE 'laminar_api%' |
| controller_logs | target LIKE 'laminar_controller%' |
| operator_logs | target LIKE 'laminar_operator%' |
| worker_logs | target LIKE 'laminar_worker%' |
| scheduler_logs | target LIKE '%scheduler%' |
| error_logs | level = 'error' |
| warn_logs | level = 'warn' |
| panic_logs | message/fields LIKE '%panic%' |
| events | target = 'laminar_event' |
| job_logs | fields LIKE '%job_%' |
| pipeline_logs | Pipeline-related logs |
| job_failures_logs | message LIKE '%failed%' or '%error%' |
| state_transitions_logs | State change logs |
| startup_logs | Startup messages |
| task_logs | Task-related logs |
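These views are plain SQL over the logs table. As a sketch of how one might be defined (assuming GrepTimeDB's CREATE VIEW support; the definitions Laminar ships may differ):

```sql
-- Illustrative only: the shipped view definition may differ
CREATE VIEW laminar_log_views.error_logs AS
SELECT greptime_timestamp, level, target, message, fields
FROM laminar_logs.logs
WHERE level = 'error';
```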
Querying Logs
Grafana
Use the GrepTimeDB-Logs datasource (MySQL protocol):
```sql
-- Recent errors
SELECT greptime_timestamp, level, target, message
FROM laminar_logs.logs
WHERE level = 'error'
  AND greptime_timestamp > NOW() - INTERVAL '1 hour'
ORDER BY greptime_timestamp DESC
LIMIT 100;

-- Using pre-defined views
SELECT * FROM laminar_log_views.error_logs
WHERE greptime_timestamp > NOW() - INTERVAL '1 hour'
LIMIT 100;

-- Controller logs for a specific pipeline
SELECT greptime_timestamp, message, fields
FROM laminar_log_views.controller_logs
WHERE fields LIKE '%pl_abc123%'
ORDER BY greptime_timestamp DESC;
```
GrepTimeDB HTTP API
```shell
curl -X POST "http://localhost:4000/v1/sql" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "sql=SELECT * FROM laminar_log_views.error_logs LIMIT 10"
```
Log Levels
| Level | Numeric (Pino) | Description |
|---|---|---|
| trace | 10 | Detailed tracing |
| debug | 20 | Debug information |
| info | 30 | Normal operations |
| warn | 40 | Warnings |
| error | 50+ | Errors |
Retention
Log retention is managed by GrepTimeDB. Configure retention policies via GrepTimeDB settings or by periodically purging old data:
```sql
-- Delete logs older than 7 days
DELETE FROM laminar_logs.logs
WHERE greptime_timestamp < NOW() - INTERVAL '7 days';
```
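A periodic purge like the DELETE above can also be issued through the HTTP API shown earlier. A minimal Python sketch using only the standard library (endpoint and form encoding match the curl example; no authentication is assumed):

```python
import urllib.parse
import urllib.request

GREPTIME_SQL_URL = "http://localhost:4000/v1/sql"

def build_sql_request(sql: str) -> urllib.request.Request:
    """Build a POST to GrepTimeDB's /v1/sql endpoint,
    form-encoded like the curl example above."""
    data = urllib.parse.urlencode({"sql": sql}).encode()
    return urllib.request.Request(
        GREPTIME_SQL_URL,
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_sql_request(
        "DELETE FROM laminar_logs.logs "
        "WHERE greptime_timestamp < NOW() - INTERVAL '7 days'"
    )
    # Requires a reachable GrepTimeDB instance; prints the JSON response body
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode())
```

Run from cron (or a Kubernetes CronJob) if GrepTimeDB-side retention policies are not configured.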