Work in Progress: This page is under development.

Multi-tenancy

Laminar supports multi-tenant deployments where each tenant runs in complete isolation with dedicated resources.

Architecture

Shared Infrastructure
├── laminar-monitoring    # Grafana, GreptimeDB, Vector
├── laminar-connectors    # MinIO, Redpanda
├── laminar-product       # Product website
└── ingress-nginx         # Ingress controller

Per-Tenant (tenant-{name} namespace)
├── Console               # Web UI
├── Controller            # Stream processing controller
├── Workers               # Dynamic worker pods
├── PostgreSQL            # Metadata database
└── ConfigMap             # Engine configuration

What Gets Deployed Per Tenant

When you create a new tenant, the laminar-core Helm chart deploys the following resources into a dedicated tenant-{name} namespace:
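
For example, once a tenant named acme has synced, the per-tenant resources can be inspected in its namespace (a sketch; the resource names follow the laminar-{tenant} prefix used throughout this page):

# List the core per-tenant resources for the acme tenant
kubectl get deploy,statefulset,svc,configmap,secret,ingress -n tenant-acme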

Console (Web UI)

| Resource | Name | Description |
|----------|------|-------------|
| Deployment | laminar-{tenant}-console | Next.js web application |
| Service | laminar-{tenant}-console | ClusterIP on port 3000 |
| Secret | laminar-{tenant}-console-auth | NextAuth + Google OAuth credentials |

Configuration:

  • Image: 792306802931.dkr.ecr.us-east-1.amazonaws.com/laminar/console
  • Port: 3000
  • Resources: 128Mi memory, 100m CPU (default)
  • Environment: NEXT_PUBLIC_API_URL, GREPTIME_URL, AUTH_SECRET, OAuth credentials
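
A minimal values snippet covering these settings might look like the sketch below; only console.image.tag is confirmed by the configuration reference later on this page, and the nesting under image and resources is an assumption:

console:
  image:
    repository: 792306802931.dkr.ecr.us-east-1.amazonaws.com/laminar/console
    tag: latest                # console.image.tag, see the reference below
  resources:                   # layout assumed; defaults are 128Mi memory, 100m CPU
    requests:
      memory: 128Mi
      cpu: 100m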

Controller (Stream Processing Engine)

| Resource | Name | Description |
|----------|------|-------------|
| Deployment | laminar-{tenant}-controller | Rust engine controller |
| Service | laminar-{tenant} | gRPC (8001), HTTP API (8000), Admin (8004) |
| ConfigMap | laminar-{tenant}-config | Engine configuration (config.yaml) |
| ServiceAccount | laminar-{tenant} | IRSA for AWS access |
| ClusterRole | laminar-{tenant} | Pod/Service management permissions |
| ClusterRoleBinding | laminar-{tenant} | Binds role to service account |

Configuration:

  • Image: 792306802931.dkr.ecr.us-east-1.amazonaws.com/laminar/backend
  • Ports: gRPC 8001, Admin 8004, HTTP 8000
  • Resources: 256Mi memory, 500m CPU (EKS default)
  • Init Container: Runs database migrations on startup
  • Strategy: Recreate (not rolling update)
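
A corresponding values sketch for the controller; engine.image.tag and engine.controller.resources appear in the configuration reference below, while the requests layout is an assumption:

engine:
  image:
    tag: latest                # engine.image.tag, see the reference below
  controller:
    resources:                 # EKS default: 256Mi memory, 500m CPU
      requests:
        memory: 256Mi
        cpu: 500m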

Workers (Dynamic)

Workers are created dynamically by the controller, not by Helm directly.

| Configuration | Default | EKS |
|---------------|---------|-----|
| Task slots per pod | 2 | 16 |
| Memory | 2Gi | 8Gi |
| CPU | 2000m | 2000m |

Workers run the same backend image with command: /app/laminar --config /config/config.yaml worker
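
As a rough sketch, a worker pod created by the controller for an acme tenant on EKS would look approximately like this; the pod name and exact spec layout are illustrative, while the image, command, resources, and ConfigMap name follow the values above:

apiVersion: v1
kind: Pod
metadata:
  name: laminar-acme-worker-0            # illustrative name
  namespace: tenant-acme
spec:
  containers:
    - name: worker
      image: 792306802931.dkr.ecr.us-east-1.amazonaws.com/laminar/backend:latest
      command: ["/app/laminar", "--config", "/config/config.yaml", "worker"]
      resources:
        requests:
          memory: 8Gi                    # EKS default
          cpu: 2000m
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      configMap:
        name: laminar-acme-config        # tenant ConfigMap described above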

PostgreSQL

| Resource | Name | Description |
|----------|------|-------------|
| StatefulSet | laminar-{tenant}-postgresql | Bitnami PostgreSQL |
| Service | laminar-{tenant}-postgresql | Port 5432 |
| PVC | data-laminar-{tenant}-postgresql-0 | 10Gi persistent storage |
| Secret | laminar-{tenant}-postgresql | Database credentials |

Configuration:

  • Username: laminar_{tenant} (underscores, not hyphens)
  • Database: laminar_{tenant}
  • Storage: 10Gi on gp3 StorageClass (EKS)
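
To sanity-check the tenant database (a sketch assuming a tenant named acme; the pod name follows the StatefulSet naming above, and the password is the one set in the tenant values file):

kubectl exec -it laminar-acme-postgresql-0 -n tenant-acme -- \
  env PGPASSWORD=<secure-password> psql -U laminar_acme -d laminar_acme -c '\conninfo'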

Ingress

| Resource | Name | Description |
|----------|------|-------------|
| Ingress | laminar-{tenant} | Routes traffic to console and API |

Paths:

  • /console/* → Console service (port 3000)
  • /api/* → Controller service (port 8000)

Hosts: Configured per tenant (e.g., e6data.lmnr.cloud)
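
The rendered Ingress for an acme tenant would look roughly like the sketch below; ingressClassName and pathType are assumptions, while the host, paths, service names, and ports follow the tables above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: laminar-acme
  namespace: tenant-acme
spec:
  ingressClassName: nginx            # assumed; shared ingress-nginx controller
  rules:
    - host: acme.lmnr.cloud
      http:
        paths:
          - path: /console
            pathType: Prefix
            backend:
              service:
                name: laminar-acme-console
                port:
                  number: 3000
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: laminar-acme
                port:
                  number: 8000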

Storage Paths

Each tenant has isolated storage:

| Type | Local | EKS (S3) |
|------|-------|----------|
| Artifacts | file:///tmp/laminar/tenants/{tenant}/artifacts | s3://laminar-dev/tenants/{tenant}/artifacts |
| Checkpoints | file:///tmp/laminar/tenants/{tenant}/checkpoints | s3://laminar-dev/tenants/{tenant}/checkpoints |
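
On EKS, a tenant's objects can be inspected directly (assuming AWS CLI access to the laminar-dev bucket):

aws s3 ls s3://laminar-dev/tenants/acme/checkpoints/ --recursive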

Tenant Isolation

| Layer | Mechanism |
|-------|-----------|
| Namespace | Each tenant in tenant-{name} namespace |
| Database | Separate PostgreSQL instance per tenant |
| Storage | Isolated S3 paths per tenant |
| Network | Service-level isolation within namespace |
| RBAC | ServiceAccount scoped to tenant namespace |
| Configuration | Separate ConfigMap per tenant |

Shared Resources:

  • Kubernetes cluster
  • Ingress controller (nginx)
  • Monitoring stack (GreptimeDB, Grafana)
  • Connector services (MinIO, Redpanda)
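
A quick way to confirm these boundaries on a running cluster (a sketch, assuming a tenant named acme):

# Each tenant lives in its own namespace
kubectl get namespaces | grep '^tenant-'

# ServiceAccount, ConfigMap, and database Secret are scoped to that namespace
kubectl get serviceaccount,configmap,secret -n tenant-acme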

Adding a New Tenant

Step 1: Create Values File

cd laminar-infra/k8s/laminar-core
 
# Create tenant directory
mkdir -p tenants/acme
 
# Copy template
cp tenants/e6data/values-eks.yaml tenants/acme/values-eks.yaml

Step 2: Configure Tenant Values

Edit tenants/acme/values-eks.yaml:

nameOverride: acme
fullnameOverride: laminar-acme
 
ingress:
  enabled: true
  hosts:
    - acme.lmnr.cloud
 
engine:
  artifactUrl: "s3://laminar-dev/tenants/acme/artifacts"
  checkpointUrl: "s3://laminar-dev/tenants/acme/checkpoints"
 
postgresql:
  fullnameOverride: laminar-acme-postgresql
  auth:
    username: laminar_acme
    password: <secure-password>
    database: laminar_acme
  primary:
    persistence:
      enabled: true
      storageClass: gp3
      size: 10Gi

Step 3: Create ArgoCD Application

Create argocd/applications/tenants/acme.yaml:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tenant-acme
  namespace: argo
  labels:
    tenant: acme
    type: persistent
spec:
  project: laminar
  source:
    repoURL: git@github.com:e6data/laminar-infra.git
    targetRevision: main
    path: k8s/laminar-core
    helm:
      valueFiles:
        - values-eks.yaml
        - tenants/acme/values-eks.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: tenant-acme
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

Step 4: Commit and Deploy

# Commit both files
git add k8s/laminar-core/tenants/acme/values-eks.yaml
git add argocd/applications/tenants/acme.yaml
git commit -m "Add acme tenant"
git push
 
# ArgoCD will automatically sync, or manually:
task argocd:apply

Step 5: Configure DNS

Add DNS record pointing acme.lmnr.cloud to the ingress load balancer.
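
To find the load balancer address to point the record at (assuming the shared controller's service is named ingress-nginx-controller in the ingress-nginx namespace):

kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'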

Tenant Configuration Reference

Required Values

| Value | Description | Example |
|-------|-------------|---------|
| nameOverride | Short tenant name | acme |
| fullnameOverride | Full resource prefix | laminar-acme |
| ingress.hosts[] | Tenant domain(s) | acme.lmnr.cloud |
| engine.artifactUrl | S3 path for artifacts | s3://laminar-dev/tenants/acme/artifacts |
| engine.checkpointUrl | S3 path for checkpoints | s3://laminar-dev/tenants/acme/checkpoints |
| postgresql.auth.* | Database credentials | See example above |

Optional Values

| Value | Default | Description |
|-------|---------|-------------|
| console.image.tag | latest | Console image version |
| engine.image.tag | latest | Engine image version |
| engine.controller.resources | 256Mi/500m | Controller resources |
| engine.worker.resources | 8Gi/2000m | Worker resources |
| engine.worker.slots | 16 | Task slots per worker |
| postgresql.primary.persistence.size | 10Gi | Database storage |
| ingress.tls.enabled | true | Enable TLS |

Console Authentication

console:
  authUrl: "https://acme.lmnr.cloud"
  auth:
    secret: "<base64-nextauth-secret>"
    googleClientId: "<google-oauth-client-id>"
    googleClientSecret: "<google-oauth-client-secret>"
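
The NextAuth secret can be any sufficiently random string; one common way to generate one is shown below. The Google client ID and secret come from an OAuth client created in the Google Cloud console.

openssl rand -base64 32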

Custom Environment Variables

engine:
  env:
    - name: RUST_LOG
      value: "info"
    - name: CDC_JAR_PATH
      value: "/app/cdc-bridge-1.0.0.jar"
    - name: JVM_HEAP_SIZE
      value: "512m"

Node Affinity (Optional)

engine:
  controller:
    nodeSelector:
      node-type: controller
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "controller"
        effect: "NoSchedule"
  worker:
    nodeSelector:
      node-type: worker
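
For these selectors and tolerations to take effect, the target nodes need matching labels and taints, for example (node names are placeholders):

kubectl label nodes <controller-node> node-type=controller
kubectl taint nodes <controller-node> dedicated=controller:NoSchedule
kubectl label nodes <worker-node> node-type=worker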

Removing a Tenant

Via ArgoCD

# Delete the application (removes all resources)
argocd app delete tenant-acme --cascade
 
# Or via Taskfile
task argocd:delete APP=tenant-acme

Via Helm

helm uninstall laminar-acme -n tenant-acme
kubectl delete namespace tenant-acme

Cleanup

After removing a tenant:

  1. Delete S3 data: aws s3 rm s3://laminar-dev/tenants/acme/ --recursive
  2. Remove DNS record
  3. Delete ArgoCD application file from git
  4. Delete tenant values directory from git

Directory Structure

laminar-infra/
├── k8s/laminar-core/
│   ├── Chart.yaml
│   ├── values.yaml              # Base defaults
│   ├── values-eks.yaml          # EKS defaults
│   └── tenants/
│       ├── e6data/
│       │   ├── values.yaml      # Local dev
│       │   └── values-eks.yaml  # EKS production
│       └── highradius/
│           ├── values.yaml
│           └── values-eks.yaml
└── argocd/applications/tenants/
    ├── e6data.yaml
    └── highradius.yaml

Values Merge Order

Helm merges values in this order (later overrides earlier):

  1. k8s/laminar-core/values.yaml (chart defaults)
  2. k8s/laminar-core/values-eks.yaml (environment defaults)
  3. k8s/laminar-core/tenants/{tenant}/values-eks.yaml (tenant overrides)
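
To preview the merged output for a tenant before committing (a sketch, run from the repository root; the chart's own values.yaml is applied automatically by Helm):

helm template laminar-acme k8s/laminar-core \
  -f k8s/laminar-core/values-eks.yaml \
  -f k8s/laminar-core/tenants/acme/values-eks.yaml \
  --namespace tenant-acme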