This page is intended for integrators who install Popsink in their own Kubernetes cluster. The chart deploys a self-hosted Popsink data plane — the runtime that moves data from your sources to your targets. It connects to a control plane (managed by Popsink at
control-plane.popsink.com, or
your own) which orchestrates and monitors the deployment.
Architecture overview
The chart deploys the following components:

| Component | Purpose | Default |
|---|---|---|
| data-plane | Main API + UI. Talks to the control plane, manages connectors | enabled |
| tansu | Stateless Kafka-compatible broker, S3-backed | enabled |
| karapace | Schema Registry (Avro / JSON Schema) for Tansu | enabled |
| metrics-exporter | Exports pipeline & connector metrics to the control plane | enabled |
| kafka-ui | Web UI for Kafka inspection | enabled |
| postgresql | Bitnami sub-chart, in-cluster Postgres for the data-plane DB | enabled |
Each in-cluster dependency can be disabled in favour of your own:

- External Kafka — tansu.enabled=false, configure defaultKafka.*
- External Schema Registry — karapace.enabled=false, configure schemaRegistry.*
- External PostgreSQL — postgresql.enabled=false, configure externalDatabase.*
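Sketched as values — the top-level keys come from this page, but the nested field names below are illustrative and should be checked against the chart's annotated values.yaml:

```yaml
tansu:
  enabled: false
karapace:
  enabled: false
postgresql:
  enabled: false

# Nested field names are illustrative — verify against values.yaml
defaultKafka:
  bootstrapServers: kafka.internal:9092
schemaRegistry:
  url: http://schema-registry.internal:8081
externalDatabase:
  host: postgres.internal
  password: "<db password>"
```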
Prerequisites
Cluster
| Requirement | Minimum |
|---|---|
| Kubernetes | 1.23+ |
| Helm | 3.8.0+ |
| Worker nodes | 2 nodes, amd64 |
| Per-node | 4 vCPU, 16 GB RAM |
| Persistent volume | SSD-backed StorageClass, ≥ 200 GB available |
| Ingress | Traefik / NGINX / Istio (or any IngressClass-compatible) |
| Object storage | S3-compatible bucket (recommended for Tansu) |
Access to Popsink images
All images are published to a private GAR registry. There are two ways to authenticate:

- Chart-managed pull secret
- Bring your own pull secret
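For the bring-your-own option, you can create the pull secret yourself. A sketch — the registry host, secret name, and key file path are illustrative; `_json_key` is the standard username for GAR service-account JSON keys:

```shell
# key.json: a GAR service-account JSON key downloaded from Popsink / GCP
kubectl create secret docker-registry popsink-gar \
  --namespace popsink \
  --docker-server=europe-docker.pkg.dev \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)"
```

Reference the secret via global.imagePullSecrets in your values.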
For the chart-managed option, let the chart create the imagePullSecret for you by setting imagePullSecret.token to your GAR service-account JSON key.

Onboarding flow (control plane)
Before installing the chart, you need a deployment ID and a deployment JWT token issued by the control plane.

Fill the wizard
- Optional pre-fill of PostgreSQL credentials (not stored).
- Retention strategy — pick S3-compatible storage (recommended) or PostgreSQL.
- FQDN — the public URL where the data plane will be reachable (e.g. https://popsink.your-company.com).
Copy the values snippet
The wizard outputs a values-control-plane.yaml snippet containing controlPlaneUrl, deploymentId and deploymentJwtToken. Use these values verbatim — they identify and authenticate your data plane against the control plane.

Required secrets to generate yourself
Beyond the values handed by the control plane, you must generate four secrets locally:

| Secret | Format | How to generate |
|---|---|---|
| adminCredentials.username / password | any | Pick a strong password |
| jwt.secret | random string ≥ 32 chars | openssl rand -base64 48 |
| connectorConfigEncryptionKey.key | URL-safe base64-encoded 32-byte Fernet key | python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" |
| postgresql.auth.password (or externalDatabase.password) | any | openssl rand -base64 24 |
If you already manage these values as Kubernetes Secrets, reference them through the corresponding existingSecret field instead.
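If you prefer to generate the random secrets without the cryptography package, a stdlib-only sketch — a Fernet key is just the URL-safe base64 encoding of 32 random bytes:

```python
import base64
import os
import secrets

# JWT signing secret: random URL-safe string, comfortably >= 32 chars
jwt_secret = secrets.token_urlsafe(48)

# Fernet key: URL-safe base64 encoding of 32 random bytes
# (byte-for-byte equivalent to cryptography's Fernet.generate_key())
fernet_key = base64.urlsafe_b64encode(os.urandom(32)).decode()

print(jwt_secret)
print(fernet_key)
```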
Tansu storage (S3)
Tansu is a stateless Kafka broker that stores log segments in S3. You must provide an S3-compatible bucket and a way to authenticate.

If you don’t want Tansu, set tansu.enabled=false and karapace.enabled=false, then point the data plane at your existing Kafka via defaultKafka.* and schemaRegistry.*.

Ingress
The chart’s built-in ingress.enabled flag is off by default, and we recommend leaving it that way: production ingress usually needs company-specific annotations (cert-manager, WAF, allow-lists). Provide your own Ingress resource that exposes:
- the data-plane HTTP service (<release>-data-plane, port 80),
- on the FQDN you declared in the control plane (ingressUrl).
If you prefer the chart to create the Ingress for you, set ingress.enabled=true and configure ingress.hostname, ingress.ingressClassName, ingress.tls.
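A minimal sketch of such an Ingress — the release name popsink, the nginx class, the hostname, and the cert-manager annotation are all illustrative; add your company-specific annotations as needed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: popsink-data-plane
  namespace: popsink
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt   # illustrative
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - popsink.your-company.com
      secretName: popsink-tls
  rules:
    - host: popsink.your-company.com   # must match ingressUrl
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: popsink-data-plane   # <release>-data-plane
                port:
                  number: 80
```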
Putting it together — minimal values.yaml
values.yaml
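A sketch assembled from the values discussed above — every value below is a placeholder, and the exact key nesting should be verified against the chart's annotated values.yaml:

```yaml
controlPlaneUrl: https://control-plane.popsink.com
deploymentId: "<from the wizard>"
deploymentJwtToken:
  token: "<from the wizard>"
ingressUrl: https://popsink.your-company.com

imagePullSecret:
  token: "<GAR service-account JSON key>"

adminCredentials:
  username: admin
  password: "<strong password>"
jwt:
  secret: "<openssl rand -base64 48>"
connectorConfigEncryptionKey:
  key: "<Fernet key>"

postgresql:
  auth:
    password: "<openssl rand -base64 24>"
```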
Install
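A sketch of the install, assuming an illustrative repo alias popsink and chart name popsink-data-plane — substitute the repo URL and chart name you were given, and pin the chart version:

```shell
helm repo add popsink <chart-repo-url>   # URL provided by Popsink

helm upgrade --install popsink popsink/popsink-data-plane \
  --namespace popsink --create-namespace \
  --version <chart-version> \
  -f values-control-plane.yaml \
  -f values.yaml
```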
Once the release is installed, head back to the control-plane deployment page — the status should flip from Awaiting connection… to Connected within a minute. Open ingressUrl and log in with the adminCredentials you set.
Production checklist
Versions & images
- Pin the chart version (--version) and the application image (image.tag) — never deploy latest.
State
- Use external PostgreSQL (postgresql.enabled=false, externalDatabase.*) backed by managed snapshots, not the in-cluster Bitnami chart.
- Use external S3 with versioning + lifecycle policy for Tansu.
Resources & availability
- Set explicit resources.requests/limits on every component (defaults target small-medium clusters; tune for your workload).
- Configure replicaCount ≥ 2 on data-plane (default); keep tansu.replicaCount=3 (default).
- Enable pdb.create: true (default) — at least 1 pod stays up during node drains.
Secrets
- Provide imagePullSecrets via a sealed secret / external-secrets controller, not plain values.
- Back up the four secrets (admin credentials, JWT secret, Fernet key, DB password) in your secret manager.
Safety switches
- Keep allowDesignLogin: false (default) and pipelineMode: false unless explicitly told otherwise — these are dev-only switches.
Network
- Ingress: TLS-only, with cert-manager (or equivalent); if your data plane is exposed to the internet, restrict source ranges to known IPs.
Upgrade
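With versions pinned as above, an upgrade is a sketch like the following (release, repo, and chart names as in the install example, all illustrative):

```shell
helm upgrade popsink popsink/popsink-data-plane \
  --namespace popsink \
  --version <new-chart-version> \
  -f values-control-plane.yaml \
  -f values.yaml
```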
Uninstall
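A sketch, assuming release popsink in namespace popsink. Note that PVCs created by the PostgreSQL sub-chart typically survive a Helm uninstall; delete them explicitly only if you also want to drop the data (the label selector below is the Bitnami convention and may differ):

```shell
helm uninstall popsink --namespace popsink

# Only if you want to remove persisted data as well:
kubectl delete pvc -n popsink -l app.kubernetes.io/instance=popsink
```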
Troubleshooting
| Symptom | Likely cause |
|---|---|
| Control plane stays on “Awaiting connection…” | Wrong controlPlaneUrl / deploymentId / deploymentJwtToken, or egress to the control plane is blocked |
| data-plane pods crash-loop on connector-config-encryption-key | Fernet key is not a valid URL-safe base64-encoded 32-byte string |
| tansu pods CrashLoopBackOff with S3 errors | storage.engine, region, or IAM role/credentials are wrong; bucket missing |
| ImagePullBackOff on every pod | imagePullSecret.token not set / not a valid GAR service-account JSON |
| Login at ingressUrl fails with redirect loop | ingressUrl doesn’t match the public URL fronted by your ingress |
| Workers (connectors) never become LIVE | The data-plane pod cannot reach Tansu / Karapace inside the cluster — check NetworkPolicy |
| Ingress 502 on /api/livez | Service name in your Ingress doesn’t match <release>-data-plane |
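A few kubectl one-liners for triage — the release name popsink, namespace popsink, and the deployment name are illustrative:

```shell
# Pod status and restart counts
kubectl get pods -n popsink

# Recent events often explain ImagePullBackOff / scheduling issues
kubectl get events -n popsink --sort-by=.lastTimestamp | tail -20

# Logs of the previous (crashed) container of a data-plane pod
kubectl logs -n popsink deploy/popsink-data-plane --previous

# Probe the liveness endpoint from inside the cluster
kubectl run curl --rm -it --image=curlimages/curl -n popsink -- \
  curl -s http://popsink-data-plane/api/livez
```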
Reference — most useful values
A full list lives in values.yaml (annotated with @param blocks). The most common knobs are:
| Path | What it does |
|---|---|
| controlPlaneUrl / controlPlaneFrontendUrl | Control plane API and UI URLs |
| deploymentId, deploymentJwtToken.token | Identity against the control plane |
| ingressUrl | Public URL of this data plane |
| image.tag | Pin the data-plane image version |
| replicaCount | Data-plane API replicas (default 2) |
| resources | Data-plane CPU / memory requests / limits |
| imagePullSecret.* / global.imagePullSecrets | How to authenticate to the GAR registry |
| adminCredentials.* | First admin login |
| jwt.secret | Signs user session tokens |
| connectorConfigEncryptionKey.key | Encrypts connector credentials at rest |
| tansu.enabled, tansu.storage.* | In-cluster Kafka or BYO; S3 backend |
| karapace.enabled / schemaRegistry.* | In-cluster Schema Registry or BYO |
| postgresql.enabled / externalDatabase.* | In-cluster Postgres or BYO |
| metricsExporter.* | Pushes metrics to the control plane |
| kafkaUi.* | Optional Kafka inspection UI |
| pdb.create | PodDisruptionBudget (default true) |
| autoscaling.hpa.*, autoscaling.vpa.* | HPA / VPA — disabled by default |
Further reading
Installation prerequisites
Hardware, ingress and storage requirements for a Popsink-ready cluster.
Self-hosted onboarding
The control-plane wizard that issues your deploymentId and JWT token.