Popsink source connectors continuously ingest operational data to power your real-time use cases. Use them to synchronize transactional databases, SaaS applications, or event streams with Popsink pipelines.

Configuration flow

Each connector follows the same configuration flow, which ensures reliable and secure ingestion.
Before you start, allow outbound communication to Popsink endpoints (443/TCP), authorize your deployment IP at the database layer, and supply limited-privilege credentials (read + CDC). Review the deployment section for detailed network matrices.
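For a PostgreSQL source, the read + CDC credentials can be provisioned with a short script such as the sketch below; the role name (popsink_cdc), schema (public), and DSN are illustrative assumptions, not Popsink requirements.

    # Sketch: provision a limited-privilege read + CDC role on a PostgreSQL source.
    # Role name, schema, and DSN are illustrative; run with superuser rights.
    import psycopg2  # pip install psycopg2-binary

    ADMIN_DSN = "host=db.internal dbname=app user=postgres password=***"

    statements = [
        # LOGIN + REPLICATION lets the connector open a logical replication slot.
        "CREATE ROLE popsink_cdc WITH LOGIN REPLICATION PASSWORD 'change-me'",
        # Read-only access to the schema(s) you plan to capture.
        "GRANT USAGE ON SCHEMA public TO popsink_cdc",
        "GRANT SELECT ON ALL TABLES IN SCHEMA public TO popsink_cdc",
    ]

    with psycopg2.connect(ADMIN_DSN) as conn:
        conn.autocommit = True
        with conn.cursor() as cur:
            for sql in statements:
                cur.execute(sql)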
1. Prepare the source: enable the required transaction log or event export (for example, wal_level=logical for PostgreSQL) and confirm that expected volumes remain compatible with continuous capture (>50k events/min supported by default); a preparation sketch for PostgreSQL follows this list.
2. Create the connector: in the Popsink interface, select the appropriate family, fill in the connection parameters, and test access to the source.
3. Define the stream: choose the schemas, tables, or topics to capture, apply any required filters or masking, and define the destination area.
4. Validate and launch: start the connector, verify that the initial full sync completes, and monitor catch-up latency (<5 s on average) from the dashboard.
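To make step 1 concrete for a PostgreSQL source, the sketch below switches the server to logical decoding; the DSN is illustrative, superuser rights are required, and the new wal_level only takes effect after a server restart. Other source families rely on their own mechanisms (binary logs, change tables, or topic exports).

    # Sketch: enable logical decoding on PostgreSQL before creating the connector.
    import psycopg2

    DSN = "host=db.internal dbname=app user=postgres password=***"  # illustrative

    with psycopg2.connect(DSN) as conn:
        conn.autocommit = True  # ALTER SYSTEM cannot run inside a transaction block
        with conn.cursor() as cur:
            cur.execute("ALTER SYSTEM SET wal_level = 'logical'")
            cur.execute("SHOW wal_level")
            current = cur.fetchone()[0]
            print(f"wal_level is '{current}' now; 'logical' applies after a restart")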

Use cases

  • Real-time customer activation: replicate events from your CRM (Salesforce or HubSpot) to marketing destinations to trigger instant campaigns.
  • Reliable finance reporting: capture transactions from Oracle or PostgreSQL and feed a Snowflake warehouse without loading source systems.
  • Product observability: ingest Kafka or Event Hubs streams to compute metrics in under two seconds.
  • Mainframe modernization: use the IBM i connector to expose RPG/DB2 data within your cloud applications.

Monitoring

Combine the Popsink console with native alerting to guarantee service continuity.
  • Pipeline dashboard: track throughput (events/s), catch-up latency, and offset state.
  • Proactive alerts: configure thresholds on latency increases, authentication errors, or stream interruptions in the Alerting section (a polling sketch follows this list).
  • Audit and traceability: export execution logs to your SIEM to investigate rejected records or applied transformations.
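Outside the console, the same thresholds can also be checked from a script. The sketch below polls a hypothetical metrics endpoint; the base URL, path, field names, and token handling are assumptions, not the documented Popsink API.

    # Sketch: poll connector health and flag latency regressions.
    # Endpoint, connector name, and field names are hypothetical placeholders.
    import os
    import requests

    API = "https://api.popsink.example/v1"   # hypothetical base URL
    TOKEN = os.environ["POPSINK_TOKEN"]      # hypothetical auth scheme
    LATENCY_THRESHOLD_S = 5.0                # the <5 s target from step 4

    resp = requests.get(
        f"{API}/connectors/orders-cdc/metrics",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    metrics = resp.json()

    latency = metrics.get("catch_up_latency_s", 0.0)   # hypothetical field name
    if latency > LATENCY_THRESHOLD_S:
        # Forward to your own alerting channel (Slack, PagerDuty, ...) as needed.
        print(f"ALERT: catch-up latency {latency:.1f}s exceeds {LATENCY_THRESHOLD_S}s")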

Common questions

  • Credential rotation: schedule the rotation in your secrets manager, update the connector through the Popsink API or interface, then rerun a connection test to validate the new pair (a scripted sketch follows this list).
  • Source unavailability: connectors buffer events locally for 15 minutes and resume automatically thanks to offset management. Schedule a maintenance window for extended outages.
  • Limiting load on the source: set an events-per-second quota and enable adaptive throttling to avoid overloading the source database while preserving recovery guarantees.
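A rotation can be scripted end to end, as in the hypothetical sketch below: the endpoint paths, payload shape, and connector name are placeholders standing in for the Popsink API or interface steps described above.

    # Sketch: rotate source credentials, then rerun a connection test.
    # Endpoints and payload fields are hypothetical; adapt to the actual Popsink API.
    import os
    import requests

    API = "https://api.popsink.example/v1"   # hypothetical base URL
    HEADERS = {"Authorization": f"Bearer {os.environ['POPSINK_TOKEN']}"}
    NEW_PASSWORD = os.environ["NEW_DB_PASSWORD"]   # issued by your secrets manager

    # 1. Update the stored credentials on the connector.
    resp = requests.patch(
        f"{API}/connectors/orders-cdc",
        headers=HEADERS,
        json={"credentials": {"username": "popsink_cdc", "password": NEW_PASSWORD}},
        timeout=10,
    )
    resp.raise_for_status()

    # 2. Rerun a connection test to validate the new pair before revoking the old one.
    test = requests.post(f"{API}/connectors/orders-cdc/test", headers=HEADERS, timeout=30)
    test.raise_for_status()
    print("connection test:", test.json())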