Airbyte Server unable to start after configuring an external database backend

Hi all,

I am running Airbyte locally (hosted on a VM on GCP) and would like to set up an external database backend for Airbyte. The external DB is Cloud SQL (Postgres hosted on GCP), accessible from my VM via its public IP. My Airbyte version is 0.35.65-alpha.

So far, I have deactivated the db service in the docker-compose.yaml file as well as the db volume in the volumes section. I also modified the variables in the .env file as follows:

DATABASE_USER=db-user
DATABASE_PASSWORD=db-password
DATABASE_HOST=XX.XX.XX.XX #PUBLIC IP OF THE POSTGRES DB MASKED
DATABASE_PORT=5432
DATABASE_DB=db-instance-airbyte
DATABASE_URL=jdbc:postgresql://XX.XX.XX.XX:5432/db-instance-airbyte

After restarting Airbyte, I get this error message showing that the server cannot start:

airbyte-server      | 2022-04-10 17:51:19 WARN i.a.d.Databases(createPostgresDatabaseWithRetryTimeout):74 - Waiting for database to become available...
airbyte-server      | 2022-04-10 17:51:19 INFO i.a.d.i.BaseDatabaseInstance(lambda$isDatabaseConnected$5):127 - Testing airbyte configs database connection...
airbyte-server      | 2022-04-10 17:51:19 INFO i.a.d.Databases(createPostgresDatabaseWithRetryTimeout):83 - Database is not ready yet. Please wait a moment, it might still be initializing...
airbyte-scheduler   | 2022-04-10 17:51:21 INFO i.a.s.a.SchedulerApp(waitForServer):218 - Waiting for server to become available...

Please note that I can see new tables being created by Airbyte in my Postgres DB on GCP, so it doesn't look like an access problem. I am also able to access the DB using psql from the VM running Airbyte.
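For reference, the psql check from the VM was essentially along these lines (host masked, same values as in the .env above):

psql "host=XX.XX.XX.XX port=5432 dbname=db-instance-airbyte user=db-user" -c "\conninfo"

It connects and reports the connection details, so basic reachability and credentials look fine.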

Any idea what is preventing Airbyte from starting after this change?
Thanks


Hi @sinwise,
Do you mind sharing your docker-compose.yaml file and a longer sample of your server logs?
If your database was successfully provisioned, it means the bootloader ran as expected, but the server might be trying to connect to the wrong database. Do you mind re-adding the db service in your docker-compose and restarting? If it works, it means the server is still trying to access the Docker db…

I guess you have already read this documentation, but if you have not, please have a look.
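If it helps, re-adding the service is just a matter of uncommenting the db block and the db volume and then recreating the containers, e.g. from the Airbyte directory:

docker-compose down
docker-compose up -d
docker-compose logs -f server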

Thanks @augustin. I just tried re-adding the db service, and the server still fails to start. So it seems the server is pointed at the right db but gets stuck somewhere. I did read the documentation above, but it doesn't help further. I am attaching my docker-compose.yaml and my server logs. Thanks in advance for your help!

Docker-compose.yaml

version: "3.7"
#https://github.com/compose-spec/compose-spec/blob/master/spec.md#using-extensions-as-fragments
x-logging: &default-logging
  options:
    max-size: "100m"
    max-file: "5"
  driver: json-file
services:
  # hook in case we need to add init behavior
  # every root service (no depends_on) should depend on init
  init:
    image: airbyte/init:${VERSION}
    logging: *default-logging
    container_name: init
    command: /bin/sh -c "./scripts/create_mount_directories.sh /local_parent ${HACK_LOCAL_ROOT_PARENT} ${LOCAL_ROOT}"
    environment:
      - LOCAL_ROOT=${LOCAL_ROOT}
      - HACK_LOCAL_ROOT_PARENT=${HACK_LOCAL_ROOT_PARENT}
    volumes:
      - ${HACK_LOCAL_ROOT_PARENT}:/local_parent
  bootloader:
    image: airbyte/bootloader:${VERSION}
    logging: *default-logging
    container_name: airbyte-bootloader
    environment:
      - AIRBYTE_VERSION=${VERSION}
      - CONFIG_DATABASE_PASSWORD=${CONFIG_DATABASE_PASSWORD:-}
      - CONFIG_DATABASE_URL=${CONFIG_DATABASE_URL:-}
      - CONFIG_DATABASE_USER=${CONFIG_DATABASE_USER:-}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_URL=${DATABASE_URL}
      - DATABASE_USER=${DATABASE_USER}
      - LOG_LEVEL=${LOG_LEVEL}
      - RUN_DATABASE_MIGRATION_ON_STARTUP=${RUN_DATABASE_MIGRATION_ON_STARTUP}
  #db:
  #image: airbyte/db:${VERSION}
  #logging: *default-logging
  #container_name: airbyte-db
  #restart: unless-stopped
  #environment:
  #- CONFIG_DATABASE_PASSWORD=${CONFIG_DATABASE_PASSWORD:-}
  #- CONFIG_DATABASE_URL=${CONFIG_DATABASE_URL:-}
  #- CONFIG_DATABASE_USER=${CONFIG_DATABASE_USER:-}
  #- DATABASE_PASSWORD=${DATABASE_PASSWORD}
  #- DATABASE_URL=${DATABASE_URL}
  #- DATABASE_USER=${DATABASE_USER}
  #- POSTGRES_PASSWORD=${DATABASE_PASSWORD}
  #- POSTGRES_USER=${DATABASE_USER}
  #volumes:
  #- db:/var/lib/postgresql/data
  scheduler:
    image: airbyte/scheduler:${VERSION}
    logging: *default-logging
    container_name: airbyte-scheduler
    restart: unless-stopped
    environment:
      - AIRBYTE_ROLE=${AIRBYTE_ROLE:-}
      - AIRBYTE_VERSION=${VERSION}
      - CONFIG_DATABASE_PASSWORD=${CONFIG_DATABASE_PASSWORD:-}
      - CONFIG_DATABASE_URL=${CONFIG_DATABASE_URL:-}
      - CONFIG_DATABASE_USER=${CONFIG_DATABASE_USER:-}
      - CONFIG_ROOT=${CONFIG_ROOT}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_URL=${DATABASE_URL}
      - DATABASE_USER=${DATABASE_USER}
      - INTERNAL_API_HOST=${INTERNAL_API_HOST}
      - JOB_MAIN_CONTAINER_CPU_LIMIT=${JOB_MAIN_CONTAINER_CPU_LIMIT}
      - JOB_MAIN_CONTAINER_CPU_REQUEST=${JOB_MAIN_CONTAINER_CPU_REQUEST}
      - JOB_MAIN_CONTAINER_MEMORY_LIMIT=${JOB_MAIN_CONTAINER_MEMORY_LIMIT}
      - JOB_MAIN_CONTAINER_MEMORY_REQUEST=${JOB_MAIN_CONTAINER_MEMORY_REQUEST}
      - LOCAL_ROOT=${LOCAL_ROOT}
      - LOCAL_DOCKER_MOUNT=${LOCAL_DOCKER_MOUNT}
      - LOG_LEVEL=${LOG_LEVEL}
      - NEW_SCHEDULER=${NEW_SCHEDULER}
      - SECRET_PERSISTENCE=${SECRET_PERSISTENCE}
      - SYNC_JOB_MAX_ATTEMPTS=${SYNC_JOB_MAX_ATTEMPTS}
      - SYNC_JOB_MAX_TIMEOUT_DAYS=${SYNC_JOB_MAX_TIMEOUT_DAYS}
      - SUBMITTER_NUM_THREADS=${SUBMITTER_NUM_THREADS}
      - TEMPORAL_HOST=${TEMPORAL_HOST}
      - TRACKING_STRATEGY=${TRACKING_STRATEGY}
      - WEBAPP_URL=${WEBAPP_URL}
      - WORKER_ENVIRONMENT=${WORKER_ENVIRONMENT}
      - WORKSPACE_DOCKER_MOUNT=${WORKSPACE_DOCKER_MOUNT}
      - WORKSPACE_ROOT=${WORKSPACE_ROOT}
    volumes:
      - data:${CONFIG_ROOT}
      - workspace:${WORKSPACE_ROOT}
      - ${LOCAL_ROOT}:${LOCAL_ROOT}
  worker:
    image: airbyte/worker:${VERSION}
    logging: *default-logging
    container_name: airbyte-worker
    restart: unless-stopped
    environment:
      - AIRBYTE_VERSION=${VERSION}
      - AUTO_DISABLE_FAILING_CONNECTIONS=${AUTO_DISABLE_FAILING_CONNECTIONS}
      - CONFIG_DATABASE_PASSWORD=${CONFIG_DATABASE_PASSWORD:-}
      - CONFIG_DATABASE_URL=${CONFIG_DATABASE_URL:-}
      - CONFIG_DATABASE_USER=${CONFIG_DATABASE_USER:-}
      - CONFIG_ROOT=${CONFIG_ROOT}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_URL=${DATABASE_URL}
      - DATABASE_USER=${DATABASE_USER}
      - JOB_MAIN_CONTAINER_CPU_LIMIT=${JOB_MAIN_CONTAINER_CPU_LIMIT}
      - JOB_MAIN_CONTAINER_CPU_REQUEST=${JOB_MAIN_CONTAINER_CPU_REQUEST}
      - JOB_MAIN_CONTAINER_MEMORY_LIMIT=${JOB_MAIN_CONTAINER_MEMORY_LIMIT}
      - JOB_MAIN_CONTAINER_MEMORY_REQUEST=${JOB_MAIN_CONTAINER_MEMORY_REQUEST}
      - LOCAL_DOCKER_MOUNT=${LOCAL_DOCKER_MOUNT}
      - LOCAL_ROOT=${LOCAL_ROOT}
      - LOG_LEVEL=${LOG_LEVEL}
      - MAX_CHECK_WORKERS=${MAX_CHECK_WORKERS}
      - MAX_DISCOVER_WORKERS=${MAX_DISCOVER_WORKERS}
      - MAX_SPEC_WORKERS=${MAX_SPEC_WORKERS}
      - MAX_SYNC_WORKERS=${MAX_SYNC_WORKERS}
      - SECRET_PERSISTENCE=${SECRET_PERSISTENCE}
      - SYNC_JOB_MAX_ATTEMPTS=${SYNC_JOB_MAX_ATTEMPTS}
      - SYNC_JOB_MAX_TIMEOUT_DAYS=${SYNC_JOB_MAX_TIMEOUT_DAYS}
      - TEMPORAL_HOST=${TEMPORAL_HOST}
      - TRACKING_STRATEGY=${TRACKING_STRATEGY}
      - WEBAPP_URL=${WEBAPP_URL}
      - WORKER_ENVIRONMENT=${WORKER_ENVIRONMENT}
      - WORKSPACE_DOCKER_MOUNT=${WORKSPACE_DOCKER_MOUNT}
      - WORKSPACE_ROOT=${WORKSPACE_ROOT}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - workspace:${WORKSPACE_ROOT}
      - ${LOCAL_ROOT}:${LOCAL_ROOT}
  server:
    image: airbyte/server:${VERSION}
    logging: *default-logging
    container_name: airbyte-server
    restart: unless-stopped
    environment:
      - AIRBYTE_ROLE=${AIRBYTE_ROLE:-}
      - AIRBYTE_VERSION=${VERSION}
      - CONFIG_DATABASE_PASSWORD=${CONFIG_DATABASE_PASSWORD:-}
      - CONFIG_DATABASE_URL=${CONFIG_DATABASE_URL:-}
      - CONFIG_DATABASE_USER=${CONFIG_DATABASE_USER:-}
      - CONFIGS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION=${CONFIGS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION:-}
      - CONFIG_ROOT=${CONFIG_ROOT}
      - DATABASE_PASSWORD=${DATABASE_PASSWORD}
      - DATABASE_URL=${DATABASE_URL}
      - DATABASE_USER=${DATABASE_USER}
      - JOB_MAIN_CONTAINER_CPU_LIMIT=${JOB_MAIN_CONTAINER_CPU_LIMIT}
      - JOB_MAIN_CONTAINER_CPU_REQUEST=${JOB_MAIN_CONTAINER_CPU_REQUEST}
      - JOB_MAIN_CONTAINER_MEMORY_LIMIT=${JOB_MAIN_CONTAINER_MEMORY_LIMIT}
      - JOB_MAIN_CONTAINER_MEMORY_REQUEST=${JOB_MAIN_CONTAINER_MEMORY_REQUEST}
      - JOBS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION=${JOBS_DATABASE_MINIMUM_FLYWAY_MIGRATION_VERSION:-}
      - LOG_LEVEL=${LOG_LEVEL}
      - NEW_SCHEDULER=${NEW_SCHEDULER}
      - SECRET_PERSISTENCE=${SECRET_PERSISTENCE}
      - TEMPORAL_HOST=${TEMPORAL_HOST}
      - TRACKING_STRATEGY=${TRACKING_STRATEGY}
      - WEBAPP_URL=${WEBAPP_URL}
      - WORKER_ENVIRONMENT=${WORKER_ENVIRONMENT}
      - WORKSPACE_ROOT=${WORKSPACE_ROOT}
    ports:
      - 8001:8001
    volumes:
      - workspace:${WORKSPACE_ROOT}
      - data:${CONFIG_ROOT}
      - ${LOCAL_ROOT}:${LOCAL_ROOT}
  webapp:
    image: airbyte/webapp:${VERSION}
    logging: *default-logging
    container_name: airbyte-webapp
    restart: unless-stopped
    ports:
      - 8000:80
    environment:
      - AIRBYTE_ROLE=${AIRBYTE_ROLE:-}
      - AIRBYTE_VERSION=${VERSION}
      - API_URL=${API_URL:-}
      - FULLSTORY=${FULLSTORY:-}
      - INTERNAL_API_HOST=${INTERNAL_API_HOST}
      - IS_DEMO=${IS_DEMO:-}
      - OPENREPLAY=${OPENREPLAY:-}
      - PAPERCUPS_STORYTIME=${PAPERCUPS_STORYTIME:-}
      - TRACKING_STRATEGY=${TRACKING_STRATEGY}
  airbyte-temporal:
    image: airbyte/temporal:${VERSION}
    logging: *default-logging
    container_name: airbyte-temporal
    restart: unless-stopped
    ports:
      - 7233:7233
    environment:
      - DB=postgresql
      - DB_PORT=${DATABASE_PORT}
      - DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development.yaml
      - LOG_LEVEL=${LOG_LEVEL}
      - POSTGRES_PWD=${DATABASE_PASSWORD}
      - POSTGRES_SEEDS=${DATABASE_HOST}
      - POSTGRES_USER=${DATABASE_USER}
    volumes:
      - ./temporal/dynamicconfig:/etc/temporal/config/dynamicconfig
volumes:
  workspace:
    name: ${WORKSPACE_DOCKER_MOUNT}
  # the data volume is only needed for backward compatibility; when users upgrade
  # from an old Airbyte version that relies on file-based configs, the server needs
  # to read this volume to copy their configs to the database
  data:
    name: ${DATA_DOCKER_MOUNT}
  db:
    name: ${DB_DOCKER_MOUNT}

Server logs:

WARNING: The RUN_DATABASE_MIGRATION_ON_STARTUP variable is not set. Defaulting to a blank string.
WARNING: The SECRET_PERSISTENCE variable is not set. Defaulting to a blank string.
WARNING: The WORKER_ENVIRONMENT variable is not set. Defaulting to a blank string.
Creating network "airbyte_default" with the default driver
Creating airbyte-server     ... done
Creating airbyte-worker     ... done
Creating airbyte-temporal   ... done
Creating init               ... done
Creating airbyte-scheduler  ... done
Creating airbyte-webapp     ... done
Creating airbyte-bootloader ... done
Attaching to init, airbyte-webapp, airbyte-scheduler, airbyte-temporal, airbyte-bootloader, airbyte-server, airbyte-worker
airbyte-temporal    | Start init
airbyte-temporal    | + DBNAME=temporal
airbyte-temporal    | + VISIBILITY_DBNAME=temporal_visibility
airbyte-temporal    | + DB_PORT=5432
airbyte-temporal    | + POSTGRES_SEEDS=34.118.52.236
airbyte-temporal    | + POSTGRES_USER=db-user
airbyte-temporal    | + POSTGRES_PWD=db-password
airbyte-temporal    | + SCHEMA_DIR=/etc/temporal/schema/postgresql/v96/temporal/versioned
airbyte-temporal    | + VISIBILITY_SCHEMA_DIR=/etc/temporal/schema/postgresql/v96/visibility/versioned
airbyte-temporal    | + SKIP_DEFAULT_NAMESPACE_CREATION=false
airbyte-temporal    | + DEFAULT_NAMESPACE=default
airbyte-temporal    | + DEFAULT_NAMESPACE_RETENTION=1
airbyte-temporal    | + init_entry_point
airbyte-temporal    | + echo 'Start init'
airbyte-temporal    | ++ hostname -i
airbyte-temporal    | + export BIND_ON_IP=172.20.0.2
airbyte-temporal    | + BIND_ON_IP=172.20.0.2
airbyte-temporal    | + [[ 172.20.0.2 =~ : ]]
airbyte-temporal    | + export TEMPORAL_CLI_ADDRESS=172.20.0.2:7233
airbyte-temporal    | + TEMPORAL_CLI_ADDRESS=172.20.0.2:7233
airbyte-temporal    | + dockerize -template ./config/config_template.yaml:./config/docker.yaml
airbyte-temporal    | Done init
airbyte-temporal    | + echo 'Done init'
airbyte-temporal    | + wait_for_postgres
airbyte-temporal    | + nc -z 34.118.52.236 5432
airbyte-temporal    | + echo 'PostgreSQL started.'PostgreSQL started.
airbyte-temporal    |
airbyte-temporal    | + update_postgres_schema
airbyte-temporal    | + CONTAINER_ALREADY_STARTED=CONTAINER_ALREADY_STARTED_PLACEHOLDER
airbyte-temporal    | + '[' '!' -e CONTAINER_ALREADY_STARTED_PLACEHOLDER ']'
airbyte-temporal    | + touch CONTAINER_ALREADY_STARTED_PLACEHOLDER
airbyte-temporal    | touch: CONTAINER_ALREADY_STARTED_PLACEHOLDER: Permission denied
airbyte-temporal    | + temporal-sql-tool --plugin postgres --ep 34.118.52.236 -u db-user -p 5432 create --db temporal
airbyte-webapp      | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
airbyte-webapp      | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
airbyte-webapp      | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
airbyte-webapp      | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
airbyte-webapp      | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
airbyte-webapp      | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
airbyte-webapp      | 20-envsubst-on-templates.sh: Running envsubst on /etc/nginx/templates/default.conf.template to /etc/nginx/conf.d/default.conf
airbyte-webapp      | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
airbyte-webapp      | /docker-entrypoint.sh: Configuration complete; ready for start up
init                | MOUNT: /local_parent
init                | ROOT_PARENT: /tmp
init                | ROOT: /tmp/airbyte_local
init                | MOUNT_ROOT: /local_parent//airbyte_local
init exited with code 0
airbyte-temporal    | 2022-04-11T18:01:18.429Z	ERROR	Unable to create SQL database.	{"error": "pq: database \"temporal\" already exists", "logging-call-at": "handler.go:97"}
airbyte-temporal    | + temporal-sql-tool --plugin postgres --ep 34.118.52.236 -u db-user -p 5432 --db temporal setup-schema -v 0.0
airbyte-temporal    | 2022-04-11T18:01:18.497Z	INFO	Starting schema setup	{"config": {"SchemaFilePath":"","InitialVersion":"0.0","Overwrite":false,"DisableVersioning":false}, "logging-call-at": "setuptask.go:57"}
airbyte-temporal    | 2022-04-11T18:01:18.498Z	DEBUG	Setting up version tables	{"logging-call-at": "setuptask.go:67"}
airbyte-temporal    | 2022-04-11T18:01:18.500Z	ERROR	Unable to setup SQL schema.	{"error": "pq: relation \"schema_version\" already exists", "logging-call-at": "handler.go:57"}
airbyte-temporal    | + temporal-sql-tool --plugin postgres --ep 34.118.52.236 -u db-user -p 5432 create --db temporal_visibility
airbyte-temporal    | 2022-04-11T18:01:18.585Z	ERROR	Unable to create SQL database.	{"error": "pq: database \"temporal_visibility\" already exists", "logging-call-at": "handler.go:97"}
airbyte-temporal    | + temporal-sql-tool --plugin postgres --ep 34.118.52.236 -u db-user -p 5432 --db temporal_visibility setup-schema -v 0.0
airbyte-temporal    | 2022-04-11T18:01:18.672Z	INFO	Starting schema setup	{"config": {"SchemaFilePath":"","InitialVersion":"0.0","Overwrite":false,"DisableVersioning":false}, "logging-call-at": "setuptask.go:57"}
airbyte-temporal    | 2022-04-11T18:01:18.674Z	DEBUG	Setting up version tables	{"logging-call-at": "setuptask.go:67"}
airbyte-temporal    | 2022-04-11T18:01:18.676Z	ERROR	Unable to setup SQL schema.	{"error": "pq: relation \"schema_version\" already exists", "logging-call-at": "handler.go:57"}
airbyte-temporal    | + echo 'Starting to update the temporal DB'
airbyte-temporal    | Starting to update the temporal DB
airbyte-temporal    | + temporal-sql-tool --plugin postgres --ep 34.118.52.236 -u db-user -p 5432 --db temporal update-schema -d /etc/temporal/schema/postgresql/v96/temporal/versioned
airbyte-temporal    | 2022-04-11T18:01:18.775Z	INFO	UpdateSchemeTask started	{"config": {"DBName":"","TargetVersion":"","SchemaDir":"/etc/temporal/schema/postgresql/v96/temporal/versioned","IsDryRun":false}, "logging-call-at": "updatetask.go:98"}
airbyte-temporal    | 2022-04-11T18:01:18.777Z	DEBUG	found zero updates from current version 1.6	{"logging-call-at": "updatetask.go:128"}
airbyte-temporal    | 2022-04-11T18:01:18.777Z	INFO	UpdateSchemeTask done	{"logging-call-at": "updatetask.go:121"}
airbyte-temporal    | + echo 'Update the temporal DB is done'
airbyte-temporal    | Update the temporal DB is done
airbyte-temporal    | + echo 'Starting to update the temporal visibility DB'
airbyte-temporal    | Starting to update the temporal visibility DB
airbyte-temporal    | + temporal-sql-tool --plugin postgres --ep 34.118.52.236 -u db-user -p 5432 --db temporal_visibility update-schema -d /etc/temporal/schema/postgresql/v96/visibility/versioned
airbyte-temporal    | 2022-04-11T18:01:18.873Z	INFO	UpdateSchemeTask started	{"config": {"DBName":"","TargetVersion":"","SchemaDir":"/etc/temporal/schema/postgresql/v96/visibility/versioned","IsDryRun":false}, "logging-call-at": "updatetask.go:98"}
airbyte-temporal    | 2022-04-11T18:01:18.875Z	DEBUG	found zero updates from current version 1.1	{"logging-call-at": "updatetask.go:128"}
airbyte-temporal    | 2022-04-11T18:01:18.876Z	INFO	UpdateSchemeTask done	{"logging-call-at": "updatetask.go:121"}
airbyte-temporal    | + echo 'Update the temporal visibility DB is done'
airbyte-temporal    | Update the temporal visibility DB is done
airbyte-temporal    | + echo 'starting temporal server'
airbyte-temporal    | starting temporal server
airbyte-temporal    | + ./start-temporal.sh
airbyte-temporal    | + setup_server
airbyte-temporal    | + echo 'Temporal CLI address: 172.20.0.2:7233.'
airbyte-temporal    | Temporal CLI address: 172.20.0.2:7233.
airbyte-temporal    | + grep SERVING
airbyte-temporal    | + tctl cluster health
airbyte-temporal    | 2022/04/11 18:01:19 Loading config; env=docker,zone=,configDir=config
airbyte-temporal    | 2022/04/11 18:01:19 Loading config files=[config/docker.yaml]
airbyte-temporal    | {"level":"info","ts":"2022-04-11T18:01:19.112Z","msg":"Updated dynamic config","logging-call-at":"file_based_client.go:143"}
airbyte-temporal    | {"level":"info","ts":"2022-04-11T18:01:19.113Z","msg":"Starting server for services","value":["history","matching","frontend","worker"],"logging-call-at":"server.go:123"}
airbyte-temporal    | + echo 'Waiting for Temporal server to start...'
airbyte-temporal    | Waiting for Temporal server to start...
airbyte-temporal    | + sleep 1
airbyte-temporal    | {"level":"info","ts":"2022-04-11T18:01:19.144Z","msg":"PProf not started due to port not set","logging-call-at":"pprof.go:67"}
airbyte-temporal    | [Fx] SUPPLY	*resource.BootstrapParams
airbyte-temporal    | [Fx] SUPPLY	chan struct {}
airbyte-temporal    | [Fx] PROVIDE	*client.factoryImpl <= go.temporal.io/server/common/persistence/client.NewFactoryImplProvider()
airbyte-temporal    | [Fx] PROVIDE	client.Factory <= go.temporal.io/server/common/persistence/client.BindFactory()
airbyte-temporal    | [Fx] PROVIDE	resource.SnTaggedLogger <= go.temporal.io/server/common/resource.SnTaggedLoggerProvider()
airbyte-temporal    | [Fx] PROVIDE	resource.ThrottledLogger <= go.temporal.io/server/common/resource.ThrottledLoggerProvider()
airbyte-temporal    | [Fx] PROVIDE	*config.Persistence <= go.temporal.io/server/common/resource.PersistenceConfigProvider()
airbyte-temporal    | [Fx] PROVIDE	tally.Scope <= go.temporal.io/server/common/resource.MetricsScopeProvider()
airbyte-temporal    | [Fx] PROVIDE	resource.HostName <= go.temporal.io/server/common/resource.HostNameProvider()
airbyte-temporal    | [Fx] PROVIDE	resource.ServiceName <= go.temporal.io/server/common/resource.ServiceNameProvider()

Thanks. I investigated the issue further and found the solution. Basically, the number of connections Airbyte opened to the Postgres DB on GCP was hitting the instance's connection limit. I decreased the values of the SQL_MAX_CONNS and SQL_MAX_IDLE_CONNS variables, and I also added a flag on the Cloud SQL instance to allow a higher max_connections.

This solved the issue!
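For anyone hitting the same problem, the change looks roughly like this (the values are only examples, not necessarily what you need; as far as I can tell SQL_MAX_CONNS and SQL_MAX_IDLE_CONNS belong in the airbyte-temporal environment section of docker-compose.yaml, and the Cloud SQL flag is set with gcloud):

# docker-compose.yaml, under airbyte-temporal -> environment (example values):
- SQL_MAX_CONNS=10
- SQL_MAX_IDLE_CONNS=10

# Cloud SQL side: allow more connections (instance name is a placeholder; note that
# --database-flags replaces any flags already set on the instance, so include existing ones):
gcloud sql instances patch my-airbyte-postgres --database-flags=max_connections=200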

Great! I saw in your docker-compose file that you're also keeping the CONFIG_DATABASE_* variables. If you are using the same database for configs and jobs, you can remove these variables, as they default to the DATABASE_* values; this could prevent some confusion in the future if you run into this type of error again.

By default, the Config Database and the Job Database use the same database instance based on the above settings. It is possible, however, to separate the former from the latter by specifying separate parameters. For example:
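Something along these lines (the values are placeholders for your own config database):

CONFIG_DATABASE_USER=config-db-user
CONFIG_DATABASE_PASSWORD=config-db-password
CONFIG_DATABASE_URL=jdbc:postgresql://<config-db-host>:5432/<config-db-name>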
