Trouble deploying Airbyte on EKS

Summary

Unable to deploy Airbyte on EKS: the Helm installation fails with a timeout error, and the pods are stuck in Pending and Error states.


Question

Hello Team,

I am trying to deploy Airbyte on EKS, but it is not working.
I have followed the steps in this doc: https://docs.airbyte.com/deploying-airbyte/on-kubernetes-via-helm#custom-deployment
At the step `helm install %release_name% airbyte/airbyte`, this error occurred:

```
* timed out waiting for the condition
```
The pod status is as follows:
```
$ kubectl get pod -n airbyte2
NAME                            READY   STATUS    RESTARTS   AGE
airbyte-db-0                    0/1     Pending   0          16m
airbyte-minio-0                 0/1     Pending   0          16m
my-airbyte-airbyte-bootloader   0/1     Error     0          16m
```
I added the EBS CSI driver add-on and updated the VPC CNI and CoreDNS versions, but the situation did not change.
Please tell me what to do.


---

This topic has been created from a Slack thread to give it more visibility.
It is in read-only mode here. [Click here](https://airbytehq.slack.com/archives/C021JANJ6TY/p1722232984884749) if you want to access the original thread.


Have you checked the details of the pods?
```
kubectl describe pod airbyte-db-0 -n airbyte2
kubectl describe pod airbyte-minio-0 -n airbyte2
kubectl describe pod my-airbyte-airbyte-bootloader -n airbyte2
```

What exactly did your `helm install ...` command look like?
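
It may also be worth looking at the claims and storage classes directly, since all three pods are blocked before start. These are standard kubectl commands; the PVC names are generated by the chart, so adjust them to whatever `kubectl get pvc` shows:
```
# List the chart's PersistentVolumeClaims and their binding status
kubectl get pvc -n airbyte2

# Check whether any StorageClass exists and which one is marked (default)
kubectl get storageclass

# Inspect a specific claim for binding events (name is an example)
kubectl describe pvc airbyte-volume-db-airbyte-db-0 -n airbyte2
```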

Thank you for your reply.
The exact installation command is `helm install my-airbyte airbyte/airbyte -n airbyte2 --debug`.

When I checked the PVC details, I got a message that no storage class is set, so I tried adding the EBS CSI driver, but it didn’t work.

```
...
Events:
  Type    Reason         Age                   From                         Message
  ----    ------         ----                  ----                         -------
  Normal  FailedBinding  114s (x182 over 47m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
```
Pod details are shown below. What seems to be the problem?
```
Name:             airbyte-db-0
Namespace:        airbyte2
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app.kubernetes.io/instance=my-airbyte
                  app.kubernetes.io/name=my-airbyte-db
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=airbyte-db-598486464f
                  statefulset.kubernetes.io/pod-name=airbyte-db-0
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/airbyte-db
Containers:
  airbyte-db-container:
    Image:           airbyte/db:0.63.11
    Port:            5432/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Environment:
      POSTGRES_DB:        db-airbyte
      POSTGRES_PASSWORD:  airbyte
      POSTGRES_USER:      airbyte
      PGDATA:             /var/lib/postgresql/data/pgdata
    Mounts:
      /var/lib/postgresql/data from airbyte-volume-db (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d2b8h (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  airbyte-volume-db:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  airbyte-volume-db-airbyte-db-0
    ReadOnly:   false
  kube-api-access-d2b8h:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  4m29s (x3 over 9m56s)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
```
```
Name:             airbyte-minio-0
Namespace:        airbyte2
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app.kubernetes.io/instance=my-airbyte
                  app.kubernetes.io/name=my-airbyte-minio
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=airbyte-minio-67c865784c
                  statefulset.kubernetes.io/pod-name=airbyte-minio-0
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    StatefulSet/airbyte-minio
Containers:
  airbyte-minio:
    Image:           minio/minio:RELEASE.2023-11-20T22-40-07Z
    Port:            9000/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Args:
      server
      /storage
    Limits:
      cpu:     200m
      memory:  1Gi
    Requests:
      cpu:     200m
      memory:  1Gi
    Environment:
      MINIO_ROOT_USER:      <set to the key 'DEFAULT_MINIO_ACCESS_KEY' in secret 'my-airbyte-airbyte-secrets'>  Optional: false
      MINIO_ROOT_PASSWORD:  <set to the key 'DEFAULT_MINIO_SECRET_KEY' in secret 'my-airbyte-airbyte-secrets'>  Optional: false
    Mounts:
      /storage from airbyte-minio-pv-claim (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-spmz7 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  airbyte-minio-pv-claim:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  airbyte-minio-pv-claim-airbyte-minio-0
    ReadOnly:   false
  kube-api-access-spmz7:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  95s (x3 over 12m)  default-scheduler  0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
```
```
Name:             my-airbyte-airbyte-bootloader
Namespace:        airbyte2
Priority:         0
Service Account:  airbyte-admin
Node:             ip-192-168-**-***.ap-northeast-1.compute.internal/192.168.**.***
Start Time:       Tue, 30 Jul 2024 01:22:58 +0000
Labels:           app.kubernetes.io/instance=my-airbyte
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=airbyte-bootloader
                  app.kubernetes.io/version=0.63.11
                  helm.sh/chart=airbyte-bootloader-0.363.0
Annotations:      helm.sh/hook: pre-install,pre-upgrade
                  helm.sh/hook-weight: 0
Status:           Failed
IP:               192.168.xx.xx
IPs:
  IP:  192.168.xx.xx
Containers:
  airbyte-bootloader-container:
    Container ID:    containerd://047758858370b6f25b5d270b23ab7b1c82a567ab5da1cd86d6b08262e284xxxx
    Image:           airbyte/bootloader:0.63.11
    Image ID:        docker.io/airbyte/bootloader@sha256:fe7facffcc4425210f6a00fa07bca5b0e81c0f8e6b4b7b4eadfbbe86842cxxxx
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    State:           Terminated
      Reason:        Error
      Exit Code:     255
      Started:       Tue, 30 Jul 2024 01:23:12 +0000
      Finished:      Tue, 30 Jul 2024 01:29:22 +0000
    Ready:           False
    Restart Count:   0
    Environment:
      AIRBYTE_VERSION:                    <set to the key 'AIRBYTE_VERSION' of config map 'my-airbyte-airbyte-env'>                    Optional: false
      RUN_DATABASE_MIGRATION_ON_STARTUP:  <set to the key 'RUN_DATABASE_MIGRATION_ON_STARTUP' of config map 'my-airbyte-airbyte-env'>  Optional: false
      DATABASE_HOST:                      <set to the key 'DATABASE_HOST' of config map 'my-airbyte-airbyte-env'>                      Optional: false
      DATABASE_PORT:                      <set to the key 'DATABASE_PORT' of config map 'my-airbyte-airbyte-env'>                      Optional: false
      DATABASE_DB:                        <set to the key 'DATABASE_DB' of config map 'my-airbyte-airbyte-env'>                        Optional: false
      DATABASE_USER:                      <set to the key 'DATABASE_USER' in secret 'my-airbyte-airbyte-secrets'>                      Optional: false
      DATABASE_PASSWORD:                  <set to the key 'DATABASE_PASSWORD' in secret 'my-airbyte-airbyte-secrets'>                  Optional: false
      DATABASE_URL:                       <set to the key 'DATABASE_URL' of config map 'my-airbyte-airbyte-env'>                       Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z66lr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-z66lr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned airbyte2/my-airbyte-airbyte-bootloader to ip-192-168-58-217.ap-northeast-1.compute.internal
  Normal  Pulling    12m   kubelet            Pulling image "airbyte/bootloader:0.63.11"
  Normal  Pulled     12m   kubelet            Successfully pulled image "airbyte/bootloader:0.63.11" in 12.872s (12.872s including waiting). Image size: 487669286 bytes.
  Normal  Created    12m   kubelet            Created container airbyte-bootloader-container
  Normal  Started    12m   kubelet            Started container airbyte-bootloader-container
```

I haven’t come across the error `0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.` before, so you’ll need to search the internet, or maybe somebody will jump in with advice on how to fix it.
Another option is to use an external database and S3 instead of `airbyte-db-0` and `airbyte-minio-0`, by configuring them via `values.yaml` for the Helm chart.
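
For example, a minimal `values.yaml` sketch along those lines. The host, secret, and bucket names are placeholders, and the exact key names vary between chart versions, so verify against `helm show values airbyte/airbyte` and the Airbyte docs before applying:
```
# Sketch only: replace the bundled Postgres and MinIO with managed services.
# Key names follow the 0.x chart layout; confirm them for your chart version.

# Disable the bundled Postgres (airbyte-db-0) and point at e.g. RDS:
postgresql:
  enabled: false
externalDatabase:
  host: your-rds-endpoint.ap-northeast-1.rds.amazonaws.com  # placeholder
  port: 5432
  database: db-airbyte
  user: airbyte
  existingSecret: airbyte-db-secret  # placeholder secret holding the password

# Disable the bundled MinIO (airbyte-minio-0) and use S3 for logs/state:
minio:
  enabled: false
global:
  storage:
    type: "S3"
    bucket:
      log: your-airbyte-bucket             # placeholder
      state: your-airbyte-bucket
      workloadOutput: your-airbyte-bucket
    s3:
      region: ap-northeast-1
```
You would then install with `helm install my-airbyte airbyte/airbyte -n airbyte2 -f values.yaml`.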

Second thing: what logs do you get for the bootloader? `kubectl logs my-airbyte-airbyte-bootloader -n airbyte2`

The `my-airbyte-airbyte-bootloader` log is shown below. Are there any hints in this log?

```
...
2024-07-30 01:23:15 INFO i.m.c.e.DefaultEnvironment(<init>):168 - Established active environments: [k8s, cloud]
2024-07-30 01:23:18 INFO i.m.r.Micronaut(start):101 - Startup completed in 4699ms. Server Running: http://my-airbyte-airbyte-bootloader:9002
2024-07-30 01:23:19 INFO i.a.f.ConfigFileClient(<init>):105 - path /flags does not exist, will return default flag values
2024-07-30 01:23:19 WARN i.a.m.l.MetricClientFactory(initialize):72 - MetricClient was not recognized or not provided. Accepted values are `datadog` or `otel`.
2024-07-30 01:23:19 INFO i.a.c.s.RemoteDefinitionsProvider(<init>):75 - Creating remote definitions provider for URL 'https://connectors.airbyte.com/files/' and registry 'OSS'...
2024-07-30 01:23:19 INFO i.a.c.i.c.SeedBeanFactory(seedDefinitionsProvider):39 - Using local definitions provider for seeding
2024-07-30 01:23:19 INFO i.a.b.Bootloader(load):107 - Initializing databases...
2024-07-30 01:23:19 INFO i.a.b.Bootloader(initializeDatabases):232 - Initializing databases...
2024-07-30 01:23:19 WARN i.a.d.c.DatabaseAvailabilityCheck(check):38 - Waiting for database to become available...
2024-07-30 01:23:19 INFO i.a.d.c.DatabaseAvailabilityCheck(lambda$isDatabaseConnected$1):75 - Testing airbyte configs database connection...
2024-07-30 01:23:26 WARN i.m.s.r.u.Loggers$Slf4JLogger(warn):299 - [0d3288ad, L:/127.0.0.1:44793 - R:localhost/127.0.0.1:8125] An exception has been observed post termination, use DEBUG level to see the full stack: java.net.PortUnreachableException: recvAddress(..) failed: Connection refused

--- Repeated similar output ---
2024-07-30 01:23:50 ERROR i.a.d.c.DatabaseAvailabilityCheck(lambda$isDatabaseConnected$1):78 - Failed to verify database connection.
org.jooq.exception.DataAccessException: Error getting connection from data source HikariDataSource (HikariPool-1)
        at org.jooq_3.19.7.POSTGRES.debug(Unknown Source) ~[?:?]
        at org.jooq.impl.DataSourceConnectionProvider.acquire(DataSourceConnectionProvider.java:90) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.DefaultExecuteContext.connection(DefaultExecuteContext.java:651) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.AbstractQuery.connection(AbstractQuery.java:388) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:308) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.AbstractResultQuery.fetchLazy(AbstractResultQuery.java:301) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.AbstractResultQuery.fetchLazyNonAutoClosing(AbstractResultQuery.java:322) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.SelectImpl.fetchLazyNonAutoClosing(SelectImpl.java:3256) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.ResultQueryTrait.fetchOne(ResultQueryTrait.java:509) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.Tools.attach(Tools.java:1652) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.DefaultDSLContext.fetchOne(DefaultDSLContext.java:5019) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.DefaultDSLContext.lambda$fetchValue$55(DefaultDSLContext.java:5039) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.Tools.attach(Tools.java:1652) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.DefaultDSLContext.fetchValue(DefaultDSLContext.java:5039) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.DefaultDSLContext.fetchValue(DefaultDSLContext.java:5058) ~[jooq-3.19.7.jar:?]
        at org.jooq.impl.DefaultDSLContext.fetchExists(DefaultDSLContext.java:5150) ~[jooq-3.19.7.jar:?]
        at io.airbyte.db.check.DatabaseAvailabilityCheck.lambda$isDatabaseConnected$0(DatabaseAvailabilityCheck.java:76) ~[io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.db.Database.query(Database.java:23) ~[io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.db.check.DatabaseAvailabilityCheck.lambda$isDatabaseConnected$1(DatabaseAvailabilityCheck.java:76) ~[io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.db.check.DatabaseAvailabilityCheck.check(DatabaseAvailabilityCheck.java:47) [io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.db.init.DatabaseInitializer.initialize(DatabaseInitializer.java:45) [io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.bootloader.Bootloader.initializeDatabases(Bootloader.java:233) [io.airbyte-airbyte-bootloader-0.63.11.jar:?]
        at io.airbyte.bootloader.Bootloader.load(Bootloader.java:108) [io.airbyte-airbyte-bootloader-0.63.11.jar:?]
        at io.airbyte.bootloader.Application.main(Application.java:22) [io.airbyte-airbyte-bootloader-0.63.11.jar:?]
Caused by: java.sql.SQLTransientConnectionException: HikariPool-1 - Connection is not available, request timed out after 30001ms (total=0, active=0, idle=0, waiting=0)
        at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:686) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:179) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:144) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:99) ~[HikariCP-5.1.0.jar:?]
        at org.jooq.impl.DataSourceConnectionProvider.acquire(DataSourceConnectionProvider.java:87) ~[jooq-3.19.7.jar:?]
        ... 22 more
Caused by: org.postgresql.util.PSQLException: Connection to airbyte-db-svc:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:346) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:54) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:273) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.Driver.makeConnection(Driver.java:446) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.Driver.connect(Driver.java:298) ~[postgresql-42.7.3.jar:42.7.3]
        at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:137) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:360) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:202) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:461) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:724) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:703) ~[HikariCP-5.1.0.jar:?]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.base/java.lang.Thread.run(Thread.java:1583) ~[?:?]
Caused by: java.net.ConnectException: Connection refused
        at java.base/sun.nio.ch.Net.pollConnect(Native Method) ~[?:?]
        at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:682) ~[?:?]
        at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:542) ~[?:?]
        at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:592) ~[?:?]
        at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327) ~[?:?]
        at java.base/java.net.Socket.connect(Socket.java:751) ~[?:?]
        at org.postgresql.core.PGStream.createSocket(PGStream.java:243) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.core.PGStream.<init>(PGStream.java:98) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:136) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:262) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:54) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:273) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.Driver.makeConnection(Driver.java:446) ~[postgresql-42.7.3.jar:42.7.3]
        at org.postgresql.Driver.connect(Driver.java:298) ~[postgresql-42.7.3.jar:42.7.3]
        at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:137) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:360) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:202) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:461) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:724) ~[HikariCP-5.1.0.jar:?]
        at com.zaxxer.hikari.pool.HikariPool$PoolEntryCreator.call(HikariPool.java:703) ~[HikariCP-5.1.0.jar:?]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
        at java.base/java.lang.Thread.run(Thread.java:1583) ~[?:?]
2024-07-30 01:23:50 INFO i.a.d.c.DatabaseAvailabilityCheck(check):49 - Database is not ready yet. Please wait a moment, it might still be initializing...
------------------------------

...
2024-07-30 01:29:20 ERROR i.a.b.Application(main):25 - Unable to bootstrap Airbyte environment.
io.airbyte.db.init.DatabaseInitializationException: Database availability check failed.
        at io.airbyte.db.init.DatabaseInitializer.initialize(DatabaseInitializer.java:54) ~[io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.bootloader.Bootloader.initializeDatabases(Bootloader.java:233) ~[io.airbyte-airbyte-bootloader-0.63.11.jar:?]
        at io.airbyte.bootloader.Bootloader.load(Bootloader.java:108) ~[io.airbyte-airbyte-bootloader-0.63.11.jar:?]
        at io.airbyte.bootloader.Application.main(Application.java:22) [io.airbyte-airbyte-bootloader-0.63.11.jar:?]
Caused by: io.airbyte.db.check.DatabaseCheckException: Unable to connect to the database.
        at io.airbyte.db.check.DatabaseAvailabilityCheck.check(DatabaseAvailabilityCheck.java:40) ~[io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        at io.airbyte.db.init.DatabaseInitializer.initialize(DatabaseInitializer.java:45) ~[io.airbyte.airbyte-db-db-lib-0.63.11.jar:?]
        ... 3 more
2024-07-30 01:29:20 INFO i.m.r.Micronaut(lambda$start$0):118 - Embedded Application shutting down
2024-07-30T01:29:22.591943352Z Thread-3 INFO Loading mask data from '/seed/specs_secrets_mask.yaml
2024-07-30T01:29:22.611072163Z Thread-3 WARN Unable to register Log4j shutdown hook because JVM is shutting down. Using SimpleLogger
```

The bootloader fails because the database is not ready yet, and `airbyte-db-0` cannot start because of `0/2 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.`

As I mentioned previously, you can try to use S3 and an external database.
You can find details on how to configure them in the docs:
https://docs.airbyte.com/deploying-airbyte/on-kubernetes-via-helm#custom-deployment
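
Alternatively, if you want to keep the bundled db and MinIO, the unbound-PVC error usually means there is no (default) StorageClass that can actually provision volumes. A sketch of how to check and fix that on EKS (this assumes the EBS CSI add-on is installed; `gp2` is the StorageClass EKS clusters typically ship with):
```
# Confirm the EBS CSI driver pods are running
kubectl get pods -n kube-system | grep ebs-csi

# List StorageClasses; one should be marked "(default)"
kubectl get storageclass

# If gp2 exists but is not the default, mark it as such
kubectl patch storageclass gp2 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# PVCs created before the fix will stay Pending; delete them (or uninstall and
# reinstall the release) so they can re-bind using the default class
kubectl get pvc -n airbyte2
```
If the CSI controller pods are not running or are crash-looping, also check that the add-on's service account has an IAM role with the AmazonEBSCSIDriverPolicy attached; a missing role is a common reason the driver is "added" but still cannot provision EBS volumes.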

Thank you for your advice. I’ll try it.