Troubleshooting custom connector pod creation failure due to CPU limits in abctl migration

Summary

A user migrating from the Docker-native deployment to abctl is hitting pod creation failures for a custom connector due to CPU limits. With little Kubernetes or Helm experience, they are looking for guidance on adjusting CPU requests and limits.


Question

[abctl issues]

Hi, quite a long read.

Recently saw this whole move from Docker-native to abctl. OK, tried running it. The migration kinda failed, so I started over on a fresh machine on GCP (N2D with 4 cores, 16 GB RAM, so I think quite plentiful).

I am using some custom connectors that I wrote some time ago with the Python CDK. They were working fine right before the migration. Now… I get this recurring “Failed to create pod for read step” OR “Failed to create pod for write step” error on that custom connector. Native connectors work fine.

I dug deeper. Got into the container with docker exec. Checked the pods. Saw that the writer pod fails to schedule due to CPU limits:

0/1 nodes are available: 1 Insufficient cpu

AHA!

But now… I have close to zero experience with Kubernetes or Helm charts. How do I adjust things to give this the best chance of working?
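For anyone who wants to reproduce this check, here is a minimal sketch of how to inspect the scheduling failure inside the kind cluster that abctl creates. The cluster and namespace names (`airbyte-abctl`) match current abctl releases, but verify them on your install with `kind get clusters` and `kubectl get namespaces`:

```sh
# Point kubectl at the kind cluster that abctl manages.
kind get kubeconfig --name airbyte-abctl > /tmp/abctl.kubeconfig
export KUBECONFIG=/tmp/abctl.kubeconfig

# Find the Pending read/write job pod.
kubectl get pods -n airbyte-abctl

# The Events section should show "0/1 nodes are available: 1 Insufficient cpu".
kubectl describe pod <pending-pod-name> -n airbyte-abctl

# See how much CPU the other pods have already reserved on the single node.
kubectl describe node | grep -A 8 'Allocated resources'
```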



This topic has been created from a Slack thread to give it more visibility.
It will be in read-only mode here.


["abctl", "custom-connector", "pod-creation", "cpu-limits", "kubernetes", "helm-charts", "optimization"]

Thanks for the response! If you have a spare minute to look at my limits, does that look like anything? I’ve tried asking ChatGPT what those limits could be, but I don’t yet understand the logic of what they do, and hence what approximate values make sense (especially for kind).

I’d recommend reading the docs: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

when it comes to values, you need to find them by trial and error
e.g. for worker and jobs, different values might be better depending on how much data you’re processing, whether you are using Java-based or Python-based connectors, and whether you are running multiple synchronizations simultaneously
maybe using a lower replicaCount value for worker and more “computing power” per replica will work better

don’t forget about specifying requests/limits for other Airbyte pods
good luck :wink:
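To unpack what those blocks actually mean (a hedged summary of the Kubernetes docs linked above): `requests` are what the scheduler reserves for a pod on a node, and `limits` are the runtime cap. The “Insufficient cpu” error means no node had the requested CPU left to reserve. A minimal annotated sketch, using the same shape as the chart values below:

```yaml
resources:
  requests:         # reserved at scheduling time; a pod stays Pending with
    cpu: 250m       # "Insufficient cpu" if no node has this much unreserved
    memory: 256Mi   # CPU uses millicores (250m = 0.25 core); memory uses Mi/Gi
  limits:           # enforced at runtime: CPU over the limit is throttled,
    cpu: "1"        # memory over the limit gets the container OOM-killed
    memory: 1Gi
```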

here is the custom config (values.yaml) I am blindly trying, to see if it changes anything:

```yaml
# Global
global:
  # -- Auth configuration
  auth:
    # -- Whether auth is enabled
    enabled: false

  # -- Environment variables
  # env_vars: {}

  # Jobs resource requests and limits, see http://kubernetes.io/docs/user-guide/compute-resources/
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube.
  jobs:
    resources:
      requests:
        cpu: 75m
        memory: 150Mi
      limits:
        cpu: 250m
        memory: 500Mi

## @section Webapp Parameters
webapp:

  ## Web app resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  resources:
    limits:
      cpu: 200m
      memory: 1Gi


## @section Server parameters

server:
  enabled: true
  # -- Number of server replicas
  replicaCount: 1


  ## server resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  resources:
    requests:
      cpu: 150m
      memory: 250Mi
    limits:
      cpu: 500m
      memory: 500Mi

worker:
  enabled: true
  # -- Number of worker replicas
  replicaCount: 5

  ## worker resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ## We usually recommend not to specify default resources and to leave this as a conscious
  ## choice for the user. This also increases chances charts run on environments with little
  ## resources, such as Minikube. If you do want to specify resources, uncomment the following
  ## lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  resources:
    requests:
      cpu: 75m
      memory: 150Mi
    limits:
      cpu: 250m
      memory: 500Mi
```
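For completeness, this is how a custom values file is applied with abctl (flag name per recent abctl releases; check `abctl local install --help` on your version):

```sh
abctl local install --values ./values.yaml
```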

Does this make any sense?

yes, changing resource requests/limits for the Airbyte pods is the way to go
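One more knob worth knowing about, since the failing pods here are the per-sync read/write job pods rather than the long-running services: Airbyte also reads job-pod resources from environment variables (names per Airbyte’s connector-resource configuration docs). Passing them through `global.env_vars`, which this chart exposes (it is commented out in the values above), is a sketch rather than a verified config for every chart version:

```yaml
global:
  env_vars:
    # Resources for the read/write job pods spawned per sync.
    JOB_MAIN_CONTAINER_CPU_REQUEST: "250m"
    JOB_MAIN_CONTAINER_CPU_LIMIT: "1"
    JOB_MAIN_CONTAINER_MEMORY_REQUEST: "256Mi"
    JOB_MAIN_CONTAINER_MEMORY_LIMIT: "1Gi"
```

If the job pods still go Pending after lowering their request, check the node’s “Allocated resources” (see the kubectl sketch above) to confirm how much CPU the other pods have already reserved.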