Minio is in pending state on Kubernetes

  • Is this your first time deploying Airbyte?: No
  • OS Version / Instance: EKS cluster, Kubernetes version 1.19
  • Deployment: Kubernetes deployment
  • Airbyte Version: 0.38.3-alpha
  • Description: When installing the Helm chart for the first time, the following log is displayed and the minio pod is stuck in the Pending state.
    Airbyte itself runs, but connectors cannot connect.
    Internal Server Error: Unable to execute HTTP request: Connect to airbyte-minio:9000 [airbyte-minio/10.100.122.189] failed: Connection refused
Warning  FailedScheduling  2m3s (x45 over 7h29m)  default-scheduler  error while running "VolumeBinding" prebind plugin for pod "airbyte-minio-7fcf9845b-d2z4n": Failed to bind volumes: timed out waiting for the condition
5m6s        Warning   FailedScheduling         pod/airbyte-minio-7fcf9845b-d2z4n                                     error while running "VolumeBinding" prebind plugin for pod "airbyte-minio-7fcf9845b-d2z4n": Failed to bind volumes: timed out waiting for the condition
4m9s        Normal    Provisioning             persistentvolumeclaim/airbyte-minio                                   External provisioner is provisioning volume for claim "airbyte/airbyte-minio"
2m49s       Normal    ExternalProvisioning     persistentvolumeclaim/airbyte-minio                                   waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator

On the other hand, for Postgres both the PVC and PV were created and bound normally.
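For reference, the claim and the provisioner can be checked with commands along these lines (namespace assumed to be airbyte):

kubectl -n airbyte get pvc airbyte-minio
kubectl -n airbyte describe pvc airbyte-minio
kubectl get storageclass
kubectl -n kube-system get pods | grep ebs-csi    # are the EBS CSI driver pods actually running?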

Hey, can you share the complete scheduler and server logs here?

The server log shows the following errors related to minio.

Cannot start publish with com.van.logging.aws.S3PublishHelper@f460af5 due to error: Cannot start publishing: Unable to execute HTTP request: Connect to airbyte-minio:9000 [airbyte-minio/10.100.122.189] failed: Connection refused
Cannot end publish with com.van.logging.aws.S3PublishHelper@f460af5 due to error: Cannot end publishing: Cannot publish to S3: Unable to execute HTTP request: Connect to airbyte-minio:9000 [airbyte-minio/10.100.122.189] failed: Connection refused

I think it’s related to the S3 connection, but I installed the Helm chart with the default settings. Should I set the values related to the S3 credentials?

Hey, airbyte-minio is one more container we launch along with all our other services. Can you check whether there is an airbyte-minio container?
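Something like this should show it (namespace assumed to be airbyte):

kubectl -n airbyte get pods | grep minio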

Of course there is an airbyte-minio pod. However, the point is that this pod is in the Pending state.
Helm was installed with the default settings. Do I need any separate configuration to run minio?

I don’t think you need any separate settings. Can you describe the pod and see why it’s in the Pending state?

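I described the pod roughly like this (pod name taken from the events above, namespace assumed to be airbyte):

kubectl -n airbyte describe pod airbyte-minio-7fcf9845b-d2z4n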

Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/airbyte-minio-7fcf9845b
Containers:
  minio:
    Image:      docker.io/bitnami/minio:2021.9.3-debian-10-r2
    Port:       9000/TCP
    Host Port:  0/TCP
    Liveness:   http-get http://:minio/minio/health/live delay=5s timeout=5s period=5s #success=1 #failure=5
    Readiness:  tcp-socket :minio delay=5s timeout=1s period=5s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:               false
      MINIO_SCHEME:                http
      MINIO_FORCE_NEW_KEYS:        no
      MINIO_ACCESS_KEY:            <set to the key 'access-key' in secret 'airbyte-minio'>  Optional: false
      MINIO_SECRET_KEY:            <set to the key 'secret-key' in secret 'airbyte-minio'>  Optional: false
      MINIO_BROWSER:               on
      MINIO_PROMETHEUS_AUTH_TYPE:  public
    Mounts:
      /data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from airbyte-minio-token-8tdgp (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  airbyte-minio
    ReadOnly:   false
  airbyte-minio-token-8tdgp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  airbyte-minio-token-8tdgp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  7m32s (x173 over 29h)  default-scheduler  error while running "VolumeBinding" prebind plugin for pod "airbyte-minio-7fcf9845b-d2z4n": Failed to bind volumes: timed out waiting for the condition

I think it failed when binding the volume.

Got it. Is it possible to create the PV first and then deploy minio? Also, can you deploy using the Kustomize YAMLs? That is easier to debug.
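As a rough sketch (not the exact manifest), a statically provisioned PV bound to a pre-created EBS volume could look like this; the size, storageClassName, volume ID, and zone are placeholders that must match the airbyte-minio PVC and your cluster:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: airbyte-minio-pv
spec:
  capacity:
    storage: 10Gi                         # at least the size requested by the airbyte-minio PVC
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2                   # must match the PVC's storageClassName
  csi:
    driver: ebs.csi.aws.com
    fsType: ext4
    volumeHandle: vol-0123456789abcdef0   # ID of a manually created EBS volume (placeholder)
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - ap-northeast-2a         # availability zone of that EBS volume (placeholder)

With a PV like this in place, the existing airbyte-minio PVC should be able to bind to it as long as the storage class and size match.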

I think it’s not an Airbyte problem, it’s a minio problem.

The same phenomenon as in this link is occurring.

Oh, got it. Did the resolution they suggested help?

@harshith
Hey, I solved this problem by proceeding that way.
There is one more question: is there any way to write the logs to S3 without going through minio?
In values.yaml, even if the minio enabled item is set to false and s3 is set to true, the minio pod still runs.

You can refer to On Kubernetes (Beta) | Airbyte Documentation. Those minio env variables should still be present and should be left empty.
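Roughly, that (Kustomize) setup ends up with log variables like the ones below; the names are as documented for the Kubernetes deployment at the time (double-check against that page), the values are placeholders, and the minio-specific ones are left empty:

S3_LOG_BUCKET=your-log-bucket
S3_LOG_BUCKET_REGION=your-bucket-region
AWS_ACCESS_KEY_ID=your-aws-access-key-id
AWS_SECRET_ACCESS_KEY=your-aws-secret-access-key
S3_MINIO_ENDPOINT=
S3_PATH_STYLE_ACCESS=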

I already saw that document.
But that document describes the Kustomize deployment.
For Helm, it just refers you to the GitHub README.

In values.yaml, even if the minio enabled item is set to false and s3 is set to true, the minio pod still runs.

And this is set up by referring to the GitHub README.

Is it possible to share the values.yaml file?

Ok, I changed the values.yaml file as below.

## @section Logs parameters
logs:
  ## @param logs.accessKey.password Logs Access Key
  ## @param logs.accessKey.existingSecret
  ## @param logs.accessKey.existingSecretKey
  accessKey:
    password: accessKey of AWS credential
    existingSecret: ""
    existingSecretKey: ""
  ## @param logs.secretKey.password Logs Secret Key
  ## @param logs.secretKey.existingSecret
  ## @param logs.secretKey.existingSecretKey
  secretKey:
    password: secret key of AWS credential
    existingSecret: ""
    existingSecretKey: ""

  ## @param logs.minio.enabled Switch to enable or disable the Minio helm chart
  minio:
    enabled: false

  ## @param logs.externalMinio.enabled Switch to enable or disable an external Minio instance
  ## @param logs.externalMinio.host External Minio Host
  ## @param logs.externalMinio.port External Minio Port
  externalMinio:
    enabled: false
    host: localhost
    port: 9000

  ## @param logs.s3.enabled Switch to enable or disable custom S3 Log location
  ## @param logs.s3.bucket Bucket name where logs should be stored
  ## @param logs.s3.bucketRegion Region of the bucket (must be empty if using minio)
  s3:
    enabled: true
    bucket: bucket-name
    bucketRegion: "bucket Region"
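And applied it with roughly this command (the release name and chart reference are assumptions; adjust them to however the chart was originally installed):

helm upgrade --install airbyte airbyte/airbyte --namespace airbyte -f values.yaml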

So right now you are saying logs are written to S3 and the minio pod is also up?

Yes, exactly. Is this a normal case?

I don’t think this is normal. Can you check whether the condition for the minio dependency in Chart.yaml is minio.enabled? If so, can you change it to logs.minio.enabled?
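The dependency entry in Chart.yaml would look roughly like this (the repository and version shown are placeholders; the point is the condition key):

dependencies:
  - name: minio
    repository: https://charts.bitnami.com/bitnami   # placeholder, check the actual chart
    version: x.x.x                                    # placeholder
    condition: logs.minio.enabled   # if this still reads minio.enabled, setting logs.minio.enabled=false has no effect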

Minio no longer appears when installing with Helm as of this version (0.39.17).
I don’t know why yet.

Hey, was minio enabled?