Isolation and Combination of Connectors in Deployed Environment


Exploring how connectors are isolated and combined in a deployed environment, specifically in the context of managing multiple connectors with distinct endpoints.


Hi folks, how are connectors isolated/combined in a deployed environment? If I need to make 400 connectors with 400 distinct endpoints (think talking to 400 different Salesforce instances), do people set up 400 k8s pods with 400 configurations?

This topic has been created from a Slack thread to give it more visibility.
It is in read-only mode here.

Tags: connectors, isolated, combined, deployed-environment, k8s-pods, configurations

In Airbyte, each connector runs in its own Docker container, which isolates connectors from one another. When deployed on Kubernetes, the scheduler launches connector-related pods across the cluster's nodes, allowing sync workloads to scale horizontally [source].

For resources, there are three scopes at which connector resource requirements can be configured for a sync: instance-wide, connector-specific, and connection-specific. The narrower the scope, the higher the precedence, so connection-specific configuration overrides the other two; it is recommended only on a case-by-case basis [source].

However, the extracts do not say whether you would need to set up 400 Kubernetes pods with 400 configurations for 400 connectors with distinct endpoints. For more specific guidance, refer to the Airbyte documentation or ask in the Airbyte community forum.
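To make the precedence rule concrete, here is a minimal sketch of "narrower scope wins". The function name and dict layout are illustrative only, not Airbyte's actual implementation; only the three scope names come from the docs.

```python
# Illustrative only: resolve resource requirements by scope, where a
# narrower scope (connection > connector > instance) overrides a wider one.

def resolve_resource_requirements(instance_wide, connector_specific=None,
                                  connection_specific=None):
    """Merge settings, letting narrower scopes override wider ones."""
    resolved = dict(instance_wide)
    for override in (connector_specific, connection_specific):
        if override:
            resolved.update(override)
    return resolved

reqs = resolve_resource_requirements(
    instance_wide={"cpu_request": "0.5", "memory_request": "1Gi"},
    connector_specific={"memory_request": "2Gi"},   # overrides instance-wide
    connection_specific={"cpu_request": "2"},       # overrides everything
)
# reqs -> {"cpu_request": "2", "memory_request": "2Gi"}
```

In this toy model the connection-specific CPU request and the connector-specific memory request win, mirroring the precedence order described above.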

You can check this doc:

- Source and Destination on the diagram are connectors.
- Pods for the source/destination are launched only while a connection is running. With Kubernetes cluster autoscaling you should be able to reduce costs if running in the cloud.
- You configure sources/destinations/connections in Airbyte; the rest is handled on Kubernetes by the Airbyte "components" shown on the diagram.
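On the original 400-endpoint question, the replies above imply one Airbyte deployment holding 400 source configurations (sharing the same connector image), not 400 separate deployments. A hedged sketch of what that looks like, assuming the Airbyte Configuration API's `POST /api/v1/sources/create` endpoint; the IDs and the `instance_url` field below are placeholders, and the real `connectionConfiguration` schema comes from the connector's spec:

```python
# Build the payloads you would POST to /api/v1/sources/create -- one source
# configuration per distinct endpoint, all within a single Airbyte deployment.

def build_source_payload(workspace_id, definition_id, name, config):
    """Request body for POST /api/v1/sources/create (Configuration API)."""
    return {
        "workspaceId": workspace_id,            # placeholder ID
        "sourceDefinitionId": definition_id,    # placeholder ID
        "name": name,
        "connectionConfiguration": config,      # schema depends on connector spec
    }

# 400 distinct endpoints -> 400 source *configurations*, not 400 deployments
payloads = [
    build_source_payload(
        "ws-1", "sf-def", f"salesforce-{i}",
        {"instance_url": f"https://org{i}.my.salesforce.com"},  # placeholder field
    )
    for i in range(400)
]
```

Since the source/destination pods are launched only while a sync runs, 400 configured sources do not mean 400 long-running pods; pods come and go per sync, and cluster autoscaling keeps the node count in line with concurrent work.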