Running Airbyte job pods in separate VPC from UI/Scheduler


An Airbyte user wants to run job pods in a separate VPC from the UI/Scheduler to keep sensitive binlog data from leaving the VPCs of the source databases. They are considering a self-managed Airbyte deployment on Kubernetes in AWS EKS, syncing binlog data from AWS Aurora DBs spread across multiple VPCs.


Hello all, we are considering using self-managed Airbyte deployed on Kubernetes in AWS EKS. We would be syncing binlog data from AWS Aurora DBs that are spread across many different VPCs. We are concerned about sensitive data exiting the VPC of the Aurora DBs (it's in the binlog). Is it possible to run the job pods in a separate VPC from the UI/Scheduler depending on the source? Ideally the job pods would run within the VPC of the DB they are extracting data from, while we keep a centralized Kubernetes cluster as a single point of management for the UI/Scheduler.
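One approach worth noting: Airbyte exposes environment variables such as `JOB_KUBE_NODE_SELECTORS` and `JOB_KUBE_TOLERATIONS` that control where job pods are scheduled within the same cluster. Job pods cannot natively be launched into a different cluster than the workers, but they can be pinned to a dedicated node group whose subnets are peered with (or routed to) the database VPCs. The sketch below shows hypothetical Helm values for this setup; the node-group label `airbyte.example.com/job-pool` and taint key `jobs-only` are illustrative names, not Airbyte defaults:

```yaml
# Hedged sketch: pin Airbyte job pods to a dedicated, tainted node group.
# Assumes an EKS managed node group labeled airbyte.example.com/job-pool=aurora
# and tainted with jobs-only=true:NoSchedule, in subnets that have VPC-peering
# routes to the Aurora VPCs. Key names under `worker.extraEnv` follow the
# documented JOB_KUBE_* variables; the chart path may differ by chart version.
worker:
  extraEnv:
    # Schedule job pods only onto the peered node group.
    - name: JOB_KUBE_NODE_SELECTORS
      value: "airbyte.example.com/job-pool=aurora"
    # Allow job pods onto the tainted nodes (JSON list of tolerations).
    - name: JOB_KUBE_TOLERATIONS
      value: '[{"key":"jobs-only","operator":"Equal","value":"true","effect":"NoSchedule"}]'
```

With this layout the UI/Scheduler stays in the central cluster while sync traffic egresses from nodes whose routing stays inside the peered database VPCs; per-source isolation would still require separate node groups (or separate Airbyte deployments) per VPC.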

This topic has been created from a Slack thread to give it more visibility.
It will remain in read-only mode here.


["airbyte", "kubernetes", "aws-eks", "vpc", "binlog", "aurora-db", "job-pods", "ui", "scheduler"]