Summary
Airbyte instance on GKE with an external AlloyDB Postgres database is encountering errors in the v0-server and v0-temporal pods, causing Airbyte to be temporarily unavailable.
Question
I am trying to deploy a new Airbyte instance on GKE using an external AlloyDB postgres database.
On startup, Airbyte is able to create the tables, but the v0-server and v0-temporal pods are in an error state, resulting in "Airbyte is temporarily unavailable."
GKE and AlloyDB are on the same subnetwork. Has anyone encountered this issue?
["deploy", "airbyte-instance", "gke", "alloydb-postgres", "v0-server", "v0-temporal", "error"]
I haven’t tried AlloyDB, but we’re running it on Cloud SQL Postgres just fine.
I’d check the logs, especially for the bootloader container; if you don’t find anything there, look at all of them. It could well be a config error, a connectivity error, or a timeout, and the logs will help you narrow down the potential issues.
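If it helps, here’s a minimal sketch of pulling those logs with kubectl. The namespace and pod names are placeholders; substitute whatever `kubectl get pods` actually shows in your cluster:

```sh
# List the Airbyte pods and their statuses/restart counts
kubectl get pods -n airbyte

# Tail the bootloader's logs (schema/config errors usually surface here first)
kubectl logs -n airbyte <bootloader-pod-name>

# Same for the failing server and temporal pods
kubectl logs -n airbyte <server-pod-name>
kubectl logs -n airbyte <temporal-pod-name>

# If a pod is crash-looping, the previous container's logs often hold the real error
kubectl logs -n airbyte <pod-name> --previous

# Pod events frequently explain scheduling or connectivity problems that logs don't
kubectl describe pod -n airbyte <pod-name>
```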
(Not sure how much you’ve worked with Kubernetes or GKE, but most of this you can get by going to Workloads in the GKE subnav and then the Logs tab at the top, or to the specific deployment/pod and then Logs in its nav; this is effectively equivalent to docker container logs. Sometimes you may also find it useful to look at Cloud Logging/Logs Explorer around the timestamp of your startup, as you’ll see any potential auth errors there.)
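For the Cloud Logging route, here’s a rough equivalent from the CLI (cluster name and namespace are placeholders, and it assumes gcloud is authenticated against the right project):

```sh
# Pull recent error-level container logs from the cluster around your startup window
gcloud logging read '
  resource.type="k8s_container"
  resource.labels.cluster_name="<your-cluster>"
  resource.labels.namespace_name="airbyte"
  severity>=ERROR
' --freshness=1h --limit=50
```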
Also, if you’re using an HTTP(S) load balancer, make sure you increase the backend timeouts: they default to 30 seconds, which is far too low when the Airbyte server has actual work to do (or has to wait for nodes to spin up for new containers, as during connection checks or schema discovery). This is another common source of 502s on GCP.
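For reference, here’s a sketch of raising that backend timeout with a GKE BackendConfig, assuming you’re fronting Airbyte with a GKE Ingress; the service and BackendConfig names are illustrative, so match them to your deployment:

```sh
# Create a BackendConfig with a longer timeout (default is 30s)
kubectl apply -n airbyte -f - <<'EOF'
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: airbyte-backendconfig
spec:
  timeoutSec: 600   # give long-running requests like schema discovery room to finish
EOF

# Attach the BackendConfig to the Service the Ingress points at
kubectl annotate service <airbyte-webapp-service> -n airbyte \
  cloud.google.com/backend-config='{"default": "airbyte-backendconfig"}' --overwrite
```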