Multi-tenancy
Laminar supports multi-tenant deployments where each tenant runs in its own namespace with dedicated resources; only cluster-level infrastructure (ingress, monitoring, connectors) is shared.
Architecture
Shared Infrastructure
├── laminar-monitoring   # Grafana, GreptimeDB, Vector
├── laminar-connectors   # MinIO, Redpanda
├── laminar-product      # Product website
└── ingress-nginx        # Ingress controller
Per-Tenant (tenant-{name} namespace)
├── Console       # Web UI
├── Controller    # Stream processing controller
├── Workers       # Dynamic worker pods
├── PostgreSQL    # Metadata database
└── ConfigMap     # Engine configuration
What Gets Deployed Per Tenant
When you create a new tenant, the laminar-core Helm chart deploys the following resources into a dedicated tenant-{name} namespace:
Console (Web UI)
| Resource | Name | Description |
|---|---|---|
| Deployment | laminar-{tenant}-console | Next.js web application |
| Service | laminar-{tenant}-console | ClusterIP on port 3000 |
| Secret | laminar-{tenant}-console-auth | NextAuth + Google OAuth credentials |
Configuration:
- Image: 792306802931.dkr.ecr.us-east-1.amazonaws.com/laminar/console
- Port: 3000
- Resources: 128Mi memory, 100m CPU (default)
- Environment: NEXT_PUBLIC_API_URL, GREPTIME_URL, AUTH_SECRET, OAuth credentials
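To see exactly what a running console received, the environment can be read back off the Deployment. A minimal sketch, assuming a tenant named acme (substitute your tenant name):
# List the environment variables set on the console Deployment
kubectl set env deployment/laminar-acme-console -n tenant-acme --list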
Controller (Stream Processing Engine)
| Resource | Name | Description |
|---|---|---|
| Deployment | laminar-{tenant}-controller | Rust engine controller |
| Service | laminar-{tenant} | gRPC (8001), HTTP API (8000), Admin (8004) |
| ConfigMap | laminar-{tenant}-config | Engine configuration (config.yaml) |
| ServiceAccount | laminar-{tenant} | IRSA for AWS access |
| ClusterRole | laminar-{tenant} | Pod/Service management permissions |
| ClusterRoleBinding | laminar-{tenant} | Binds role to service account |
Configuration:
- Image: 792306802931.dkr.ecr.us-east-1.amazonaws.com/laminar/backend
- Ports: gRPC 8001, Admin 8004, HTTP 8000
- Resources: 256Mi memory, 500m CPU (EKS default)
- Init Container: Runs database migrations on startup
- Strategy: Recreate (not rolling update)
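For ad-hoc debugging, the controller Service can be port-forwarded locally. A sketch assuming a tenant named acme; the Service name and ports come from the table above:
# Forward the controller's HTTP API (8000) and admin (8004) ports to localhost
kubectl port-forward svc/laminar-acme -n tenant-acme 8000:8000 8004:8004
# In another shell, probe the HTTP API locally (exact endpoint paths depend on the engine version)
curl -s http://localhost:8000/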
Workers (Dynamic)
Workers are created dynamically by the controller, not by Helm directly.
| Configuration | Default | EKS |
|---|---|---|
| Task slots per pod | 2 | 16 |
| Memory | 2Gi | 8Gi |
| CPU | 2000m | 2000m |
Workers run the same backend image with the command: /app/laminar --config /config/config.yaml worker
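Because workers only exist as controller-created pods, the quickest way to see them and the command they were launched with is via kubectl. A sketch for a tenant named acme; pod names are assigned by the controller and will vary:
# Worker pods live in the tenant namespace alongside the controller
kubectl get pods -n tenant-acme
# Print the command and args of one worker pod (substitute a real pod name)
kubectl get pod <worker-pod-name> -n tenant-acme \
  -o jsonpath='{.spec.containers[0].command} {.spec.containers[0].args}{"\n"}'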
PostgreSQL
| Resource | Name | Description |
|---|---|---|
| StatefulSet | laminar-{tenant}-postgresql | Bitnami PostgreSQL |
| Service | laminar-{tenant}-postgresql | Port 5432 |
| PVC | data-laminar-{tenant}-postgresql-0 | 10Gi persistent storage |
| Secret | laminar-{tenant}-postgresql | Database credentials |
Configuration:
- Username: laminar_{tenant} (underscores, not hyphens)
- Database: laminar_{tenant}
- Storage: 10Gi on gp3 StorageClass (EKS)
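To inspect a tenant's metadata database directly, you can exec into its PostgreSQL pod. A sketch for a tenant named acme, assuming the Bitnami chart's usual password secret key (verify the key name on your chart version):
# Read the tenant user's password from the chart-managed Secret
PGPASSWORD=$(kubectl get secret laminar-acme-postgresql -n tenant-acme \
  -o jsonpath='{.data.password}' | base64 -d)
# Open a psql session inside the tenant's PostgreSQL pod
kubectl exec -it laminar-acme-postgresql-0 -n tenant-acme -- \
  env PGPASSWORD="$PGPASSWORD" psql -U laminar_acme -d laminar_acme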
Ingress
| Resource | Name | Description |
|---|---|---|
| Ingress | laminar-{tenant} | Routes traffic to console and API |
Paths:
- /console/* → Console service (port 3000)
- /api/* → Controller service (port 8000)
Hosts: Configured per tenant (e.g., e6data.lmnr.cloud)
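Once DNS resolves, both path prefixes can be checked from outside the cluster. A sketch using the e6data host above; status codes depend on the tenant's auth configuration:
# Console is served under /console/* (port 3000 behind the ingress)
curl -I https://e6data.lmnr.cloud/console/
# Controller HTTP API is served under /api/* (port 8000 behind the ingress)
curl -I https://e6data.lmnr.cloud/api/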
Storage Paths
Each tenant has isolated storage:
| Type | Local | EKS (S3) |
|---|---|---|
| Artifacts | file:///tmp/laminar/tenants/{tenant}/artifacts | s3://laminar-dev/tenants/{tenant}/artifacts |
| Checkpoints | file:///tmp/laminar/tenants/{tenant}/checkpoints | s3://laminar-dev/tenants/{tenant}/checkpoints |
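On EKS, the per-tenant prefixes can be listed and sized with the AWS CLI. A sketch for a tenant named acme:
# List and summarize a tenant's artifact and checkpoint prefixes
aws s3 ls s3://laminar-dev/tenants/acme/artifacts/ --recursive --summarize
aws s3 ls s3://laminar-dev/tenants/acme/checkpoints/ --recursive --summarize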
Tenant Isolation
| Layer | Mechanism |
|---|---|
| Namespace | Each tenant in tenant-{name} namespace |
| Database | Separate PostgreSQL instance per tenant |
| Storage | Isolated S3 paths per tenant |
| Network | Service-level isolation within namespace |
| RBAC | ServiceAccount scoped to tenant namespace |
| Configuration | Separate ConfigMap per tenant |
Shared Resources:
- Kubernetes cluster
- Ingress controller (nginx)
- Monitoring stack (GreptimeDB, Grafana)
- Connector services (MinIO, Redpanda)
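The split between shared and per-tenant pieces is visible directly in the namespace list, for example:
# Shared infrastructure namespaces plus one namespace per tenant
kubectl get namespaces | grep -E '^(laminar-|tenant-|ingress-nginx)'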
Adding a New Tenant
Step 1: Create Values File
cd laminar-infra/k8s/laminar-core
# Create tenant directory
mkdir -p tenants/acme
# Copy template
cp tenants/e6data/values-eks.yaml tenants/acme/values-eks.yaml
Step 2: Configure Tenant Values
Edit tenants/acme/values-eks.yaml:
nameOverride: acme
fullnameOverride: laminar-acme
ingress:
  enabled: true
  hosts:
    - acme.lmnr.cloud
engine:
  artifactUrl: "s3://laminar-dev/tenants/acme/artifacts"
  checkpointUrl: "s3://laminar-dev/tenants/acme/checkpoints"
postgresql:
  fullnameOverride: laminar-acme-postgresql
  auth:
    username: laminar_acme
    password: <secure-password>
    database: laminar_acme
  primary:
    persistence:
      enabled: true
      storageClass: gp3
      size: 10Gi
Step 3: Create ArgoCD Application
Create argocd/applications/tenants/acme.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tenant-acme
  namespace: argo
  labels:
    tenant: acme
    type: persistent
spec:
  project: laminar
  source:
    repoURL: git@github.com:e6data/laminar-infra.git
    targetRevision: main
    path: k8s/laminar-core
    helm:
      valueFiles:
        - values-eks.yaml
        - tenants/acme/values-eks.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: tenant-acme
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
Step 4: Commit and Deploy
# Commit both files
git add k8s/laminar-core/tenants/acme/values-eks.yaml
git add argocd/applications/tenants/acme.yaml
git commit -m "Add acme tenant"
git push
# ArgoCD will automatically sync, or manually:
task argocd:apply
Step 5: Configure DNS
Add DNS record pointing acme.lmnr.cloud to the ingress load balancer.
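The record should point at the address the ingress controller exposes. One way to look it up (resource names follow the conventions above; adjust to your cluster):
# Address reported on the tenant's Ingress once the controller has picked it up
kubectl get ingress laminar-acme -n tenant-acme \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'
# Or read it from the ingress-nginx LoadBalancer Service directly
kubectl get svc -n ingress-nginx \
  -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].hostname}{"\n"}'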
Tenant Configuration Reference
Required Values
| Value | Description | Example |
|---|---|---|
| nameOverride | Short tenant name | acme |
| fullnameOverride | Full resource prefix | laminar-acme |
| ingress.hosts[] | Tenant domain(s) | acme.lmnr.cloud |
| engine.artifactUrl | S3 path for artifacts | s3://laminar-dev/tenants/acme/artifacts |
| engine.checkpointUrl | S3 path for checkpoints | s3://laminar-dev/tenants/acme/checkpoints |
| postgresql.auth.* | Database credentials | See example above |
Optional Values
| Value | Default | Description |
|---|---|---|
| console.image.tag | latest | Console image version |
| engine.image.tag | latest | Engine image version |
| engine.controller.resources | 256Mi/500m | Controller resources |
| engine.worker.resources | 8Gi/2000m | Worker resources |
| engine.worker.slots | 16 | Task slots per worker |
| postgresql.primary.persistence.size | 10Gi | Database storage |
| ingress.tls.enabled | true | Enable TLS |
Console Authentication
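The auth.secret value is an arbitrary random string used by NextAuth. One way to generate it (a sketch; any sufficiently random value works), which then goes into console.auth.secret below:
# Generate a random 32-byte, base64-encoded secret for NextAuth
openssl rand -base64 32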
console:
  authUrl: "https://acme.lmnr.cloud"
  auth:
    secret: "<base64-nextauth-secret>"
    googleClientId: "<google-oauth-client-id>"
    googleClientSecret: "<google-oauth-client-secret>"
Custom Environment Variables
engine:
  env:
    - name: RUST_LOG
      value: "info"
    - name: CDC_JAR_PATH
      value: "/app/cdc-bridge-1.0.0.jar"
    - name: JVM_HEAP_SIZE
      value: "512m"
Node Affinity (Optional)
engine:
  controller:
    nodeSelector:
      node-type: controller
    tolerations:
      - key: "dedicated"
        operator: "Equal"
        value: "controller"
        effect: "NoSchedule"
  worker:
    nodeSelector:
      node-type: worker
Removing a Tenant
Via ArgoCD
# Delete the application (removes all resources)
argocd app delete tenant-acme --cascade
# Or via Taskfile
task argocd:delete APP=tenant-acme
Via Helm
helm uninstall laminar-acme -n tenant-acme
kubectl delete namespace tenant-acme
Cleanup
After removing a tenant:
- Delete S3 data: aws s3 rm s3://laminar-dev/tenants/acme/ --recursive
- Remove DNS record
- Delete ArgoCD application file from git
- Delete tenant values directory from git
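After cleanup, you can verify nothing is left behind. A sketch for a tenant named acme:
# Namespace and ArgoCD application should be gone (expect NotFound / no match)
kubectl get namespace tenant-acme
argocd app list | grep tenant-acme
# S3 prefix should return no objects
aws s3 ls s3://laminar-dev/tenants/acme/ --recursive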
Directory Structure
laminar-infra/
├── k8s/laminar-core/
│   ├── Chart.yaml
│   ├── values.yaml          # Base defaults
│   ├── values-eks.yaml      # EKS defaults
│   └── tenants/
│       ├── e6data/
│       │   ├── values.yaml      # Local dev
│       │   └── values-eks.yaml  # EKS production
│       └── highradius/
│           ├── values.yaml
│           └── values-eks.yaml
└── argocd/applications/tenants/
    ├── e6data.yaml
    └── highradius.yaml
Values Merge Order
Helm merges values in this order (later overrides earlier):
1. k8s/laminar-core/values.yaml (chart defaults)
2. k8s/laminar-core/values-eks.yaml (environment defaults)
3. k8s/laminar-core/tenants/{tenant}/values-eks.yaml (tenant overrides)
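This is the same ordering the ArgoCD Application encodes in valueFiles. To inspect the merged result locally, the chart can be rendered with the value files in that order (a sketch, run from laminar-infra/k8s/laminar-core):
# Render the chart with the same value-file precedence ArgoCD uses
helm template laminar-acme . \
  -f values.yaml \
  -f values-eks.yaml \
  -f tenants/acme/values-eks.yaml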