Summary
User is seeking insights, lessons learned, and common pitfalls for deploying self-hosted Airbyte on a Kubernetes cluster running on AWS spot instances.
Question
Hey everyone,
I’m currently experimenting with self-hosted Airbyte and considering deploying it to production on my Kubernetes cluster, which runs on AWS spot instances. Has anyone here tried this setup before? I’d love to hear any insights, lessons learned, or common pitfalls to watch out for.
Thanks in advance!
This topic has been created from a Slack thread to give it more visibility.
It is in read-only mode here.
["deploying", "airbyte-self-hosted", "kubernetes-cluster", "aws-spot-instances"]
I advise against using spot instances; they can be too unstable for this. They might work for very short synchronizations, but for longer ones your EC2 instance can get terminated in the middle of a sync, forcing a retry from the last saved state.
Have you considered using EC2 instances for the core Airbyte services and Fargate for the synchronization job pods?
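One way to sketch that split on EKS is with a Fargate profile that matches only Airbyte's job pods, so the core services stay on an EC2 node group while sync jobs land on Fargate. This is a minimal illustration, not the setup from the thread: the cluster name, region, instance type, and especially the `airbyte: job-pod` selector label are assumptions — check which labels your Airbyte version actually applies to its job pods before relying on this.

```yaml
# Sketch of an eksctl ClusterConfig (all names/values are illustrative).
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: airbyte-cluster          # assumed cluster name
  region: us-east-1              # assumed region
managedNodeGroups:
  # On-demand EC2 capacity for the long-running core services
  # (server, webapp, temporal, database, workers).
  - name: airbyte-core
    instanceType: m5.large
    desiredCapacity: 2
fargateProfiles:
  # Pods in the airbyte namespace carrying this label are scheduled
  # onto Fargate instead of the EC2 node group.
  - name: airbyte-jobs
    selectors:
      - namespace: airbyte
        labels:
          airbyte: job-pod       # assumed label; verify against your job pods
```

The design idea is that only the short-lived sync job pods pay Fargate's per-pod pricing, while the always-on services sit on cheaper reserved or on-demand EC2 capacity.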
Thanks for replying! I haven’t considered that actually. Do you have that kind of setup in production?
Yes, I have that kind of setup in production and it works pretty well. Stable and quite cost efficient.
For Fargate, you need to ensure that containers are cleaned up as soon as possible after a synchronization finishes.
In `values.yaml` it’s good to have something like this:

```yaml
enabled: true
timeToDeletePods:
  succeeded: 1
  unsuccessful: 1
```