Controlling Airbyte Pod Initialization Wait Time

Summary

An Airbyte user is encountering intermittent sync errors caused by pod initialization delays on AWS EKS with Karpenter as the node provisioner. They want to control how long Airbyte waits for pods to reach a running state before it returns an error.


Question

Hi everyone,

I just upgraded our Airbyte Helm chart from version 0.49.6 to 0.344.2. We had been on 0.49.6 for a long time since it was the stable version for our setup, and I moved to 0.344.2 to keep us up to date.

Our instance is deployed on AWS EKS with Karpenter as the node provisioner. We're encountering intermittent errors when completing a sync. So far, my findings point to pod initialization delays: while Karpenter is still provisioning a node, Airbyte concludes that something is wrong with the job pods. I'm aware we could improve how pods are assigned to nodes (see the sketch below), but that alone isn't a scalable solution for us.
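For context, by "improving the assignment of pods to nodes" I mean something like pinning job pods to a dedicated node pool through the Helm values. A rough sketch, assuming the chart passes `worker.extraEnv` through to the worker; `JOB_KUBE_NODE_SELECTORS` is Airbyte's documented env var for this, while the `workload=airbyte-jobs` label is just a hypothetical example for a Karpenter node pool:

```yaml
# values.yaml (sketch): steer Airbyte job pods onto a dedicated node pool
# so sync pods land on pre-labelled capacity.
# JOB_KUBE_NODE_SELECTORS is a documented Airbyte env var; the label
# "workload=airbyte-jobs" is a hypothetical example.
worker:
  extraEnv:
    - name: JOB_KUBE_NODE_SELECTORS
      value: "workload=airbyte-jobs"
```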

Is there a way to control how long Airbyte waits for job pods to be properly running before it returns an error?
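To illustrate the kind of knob I'm looking for, something I could set through the chart values along these lines (the variable name below is hypothetical; I couldn't find a documented equivalent):

```yaml
# values.yaml (sketch): the kind of setting I'm after. The env var name is
# hypothetical -- I'm asking whether a real equivalent exists.
worker:
  extraEnv:
    - name: JOB_POD_START_TIMEOUT_MINUTES  # hypothetical name
      value: "15"                          # wait up to 15 min for pod start
```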

My current workaround is to run all our Airbyte connections at once so that Karpenter provisions a large instance and every sync has a node immediately, but I don't think that's a scalable solution either :sweat_smile:
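The only scalable alternative I've seen suggested is the usual Karpenter over-provisioning pattern (not Airbyte-specific): a low-priority "placeholder" deployment that keeps a warm node around and gets preempted the moment real job pods need the space. A minimal sketch, with all names and sizes illustrative:

```yaml
# Over-provisioning placeholder (common Karpenter pattern, not an Airbyte
# feature). Negative-priority pause pods hold a node open; the scheduler
# preempts them as soon as higher-priority (default 0) job pods need capacity.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: placeholder
value: -10
globalDefault: false
description: "Low-priority placeholder pods that real workloads may preempt."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airbyte-warm-capacity
spec:
  replicas: 1
  selector:
    matchLabels:
      app: airbyte-warm-capacity
  template:
    metadata:
      labels:
        app: airbyte-warm-capacity
    spec:
      priorityClassName: placeholder
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "2"       # size to roughly one sync's worth of pods
              memory: 4Gi
```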



This topic was created from a Slack thread to give it more visibility. It is in read-only mode here; the original thread is available on Slack.


["airbyte", "pod-initialization", "aws-eks", "karpenter", "sync", "pod-assignment"]