Issue with abctl migration and 'Workload is claimed' error

Summary

A user is facing an issue after migrating their EC2 instance from docker-compose to abctl, encountering a 'Workload is claimed' error. They attempted to update values.yaml with resource configurations, but the problem persists.


Question

Hello, I just migrated my EC2 instance from docker-compose to abctl. I’m facing an issue where I’m getting `Workload 9c209cb8-34f2-44f0-85fd-3615fc9bdcd3_4_1_sync is claimed` indefinitely. I tried updating the values.yaml with the following inside the global section, since I thought it might be a resource-management problem, but I still got the same error.

I’m using a t3a.large (2 CPU and 8 GB of RAM). It has always worked fine for our purposes.

    resources:
      requests:
        memory: 256Mi
        cpu: 250m
      limits:
        memory: 2Gi
        cpu: 2
I’d appreciate the help :slightly_smiling_face:

I'm uploading the full logs and my user_data.sh


---

This topic has been created from a Slack thread to give it more visibility.
It will be on Read-Only mode here. [Click here](https://airbytehq.slack.com/archives/C021JANJ6TY/p1727888142560189) if you want to access the original thread.

[Join the conversation on Slack](https://slack.airbyte.com)

<sub>
["abctl", "migration", "workload-is-claimed", "values.yaml", "resource-management", "t3a.large", "logs", "user_data.sh"]
</sub>

user_data.sh

#!/bin/bash
sudo yum update -y
sudo yum install -y docker
sudo usermod -a -G docker $USER
sudo systemctl start docker
sudo systemctl enable docker

mkdir airbyte && cd airbyte
# install airbyte
abctl local install --values ./values.yaml --secret ./secrets.yaml
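
The `# install airbyte` line above is only a placeholder comment; the script as written assumes abctl is already on the instance. A typical way to fetch it in user data, per Airbyte's quickstart (treat the exact URL and flags as an assumption and verify against the current docs), would be:

    # Download and install abctl (per Airbyte's quickstart; verify against current docs)
    curl -LsfS https://get.airbyte.com | bash -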

If you’re familiar with kubectl you can poke around the cluster with something like
docker exec -it airbyte-abctl-control-plane kubectl -n airbyte-abctl get pods
(you can also use kubectl directly if you have it installed on the host)
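
For the stuck "claimed" workload specifically, a few more commands in the same style can show why the job pod isn't starting (the pod name below is a placeholder; the workload-launcher deployment name is taken from the pod listing later in this thread):

    # Describe the stuck workload pod and check recent scheduling events
    docker exec -it airbyte-abctl-control-plane kubectl -n airbyte-abctl describe pod <workload-pod-name>
    docker exec -it airbyte-abctl-control-plane kubectl -n airbyte-abctl get events --sort-by=.lastTimestamp

    # The workload launcher logs show claim/launch activity for syncs
    docker exec -it airbyte-abctl-control-plane kubectl -n airbyte-abctl logs deploy/airbyte-abctl-workload-launcher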

2 CPU and 8 GB is fairly small for Airbyte at the moment. You could try setting the job requests to zero:

    resources:
      requests:
        memory: 0
        cpu: 0
which should help the job get scheduled; it's unclear how well it will run, though. Note that the `--low-resource-mode` flag on abctl does this.
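
If you'd rather set this in the values file yourself, a minimal sketch, assuming your chart version exposes job resources under `global.jobs.resources` (check your chart's values reference), looks like:

    global:
      jobs:
        resources:
          requests:
            memory: 0
            cpu: 0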

I ran with `--low-resource-mode` and these are the resource requests for the pod. Is this normal?

Container Name: orchestrator
Requests:{"cpu":"1","memory":"2Gi"}
Limits:{"cpu":"3","memory":"4Gi"}
Container Name: source
Requests:{"cpu":"200m","memory":"1Gi"}
Limits:{"cpu":"3","memory":"4Gi"}
Container Name: destination
Requests:{"cpu":"200m","memory":"1Gi"}
Limits:{"cpu":"3","memory":"4Gi"}```

These are the resources in the cluster:

  Namespace                   Name                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                       ------------  ----------  ---------------  -------------  ---
  airbyte-abctl               airbyte-abctl-connector-builder-server-59db485bc8-54wzz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-cron-7fcffb4964-jf8hv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-pod-sweeper-pod-sweeper-59c9f5966f-8tmst     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-server-54669b7d45-qflbw                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
  airbyte-abctl               airbyte-abctl-temporal-868dfc7b65-qxmsg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-webapp-695d887c4b-vsmcg                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-worker-679d694d55-84k49                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-workload-api-server-56bfccc948-hx4mz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-abctl-workload-launcher-d78b6bcd7-ctm5f            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  airbyte-abctl               airbyte-minio-0                                            200m (10%)    200m (10%)  1Gi (12%)        1Gi (12%)      13m
  ingress-nginx               ingress-nginx-controller-f884455f-blc27                    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         9m19s
  kube-system                 coredns-76f75df574-nz4pc                                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
  kube-system                 coredns-76f75df574-nzvst                                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
  kube-system                 etcd-airbyte-abctl-control-plane                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
  kube-system                 kindnet-sp5qk                                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
  kube-system                 kube-apiserver-airbyte-abctl-control-plane                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kube-controller-manager-airbyte-abctl-control-plane        200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kube-proxy-stvmx                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
  kube-system                 kube-scheduler-airbyte-abctl-control-plane                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
  local-path-storage          local-path-provisioner-888b7757b-sh4tc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                1250m (62%)   300m (15%)
  memory             1404Mi (17%)  1414Mi (17%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
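
That listing is the node-level view; you can reproduce it against the abctl kind node with something like this (the node name is assumed to match the control-plane container name used above):

    docker exec -it airbyte-abctl-control-plane kubectl describe node airbyte-abctl-control-plane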

> the resource requests for the pod. Is this normal?

Those seem wrong. Did you use the `--low-resource-mode` flag, or did you set those to zero in the values file?

I used --low-resource-mode like this:

abctl local install --values ./values.yaml --secret ./secrets.yaml --low-resource-mode

Can you try deleting the server and workload-launcher pods? Sometimes you have to manually bounce these pods for config changes to take effect.
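
For example, using the pod names from the listing above (yours will differ; the Deployments recreate the pods automatically):

    docker exec -it airbyte-abctl-control-plane kubectl -n airbyte-abctl delete pod \
      airbyte-abctl-server-54669b7d45-qflbw \
      airbyte-abctl-workload-launcher-d78b6bcd7-ctm5f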

This does seem to work :slightly_smiling_face:

The job was attempting to use the previous resource requests. Thanks a lot!! :purple_heart:

Great! Sorry, that’s a rough edge in the platform we need to work out.