Sync job not respecting CPU/Memory limits after Airbyte upgrade on Kubernetes

Summary

After upgrading a self-hosted Airbyte instance on Kubernetes, sync jobs are not respecting the CPU/memory limits set in the values.yml configuration. The environment variables show the correct values, but the sync-job logs report much higher resource requests.


Question

Hey everyone, I recently upgraded a self-hosted Airbyte instance running on Kubernetes via Helm to chart 0.293.4 and app version 0.63.8. In my values.yml configuration, I have some restrictions on the requested CPU/memory limits, but my sync jobs no longer seem to be respecting these configurations. Is this potentially a bug, or is there another place I need to add these restrictions?

My values.yml:

```yaml
jobs:
  resources:
    requests:
      cpu: 75m
      memory: 150m
    limits:
      cpu: 150m
      memory: 500m
```
I correctly see these values set in the environment variables for the Airbyte installation, but when looking at the logs for the sync jobs, I still see it requesting far more CPU than it needs:

```
[cpuRequest=1,cpuLimit=2,memoryRequest=1Gi,memoryLimit=2Gi,additionalProperties={}]
```
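As a sketch of how the env-var check can be done: the worker passes job resource settings to job pods via `JOB_MAIN_CONTAINER_*` environment variables, so dumping the worker's environment shows what it actually picked up. The namespace and deployment names below are assumptions; adjust them to match your Helm release. This requires access to the cluster.

```shell
# Assumed names: namespace "airbyte", deployment "airbyte-worker" --
# substitute your own release's names.
# Lists the JOB_MAIN_CONTAINER_* variables (e.g. JOB_MAIN_CONTAINER_CPU_REQUEST)
# that the worker uses to size sync-job pods.
kubectl -n airbyte exec deploy/airbyte-worker -- env | grep JOB_MAIN_CONTAINER
```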


---

This topic has been created from a Slack thread to give it more visibility.
It will be in read-only mode here. [Click here](https://airbytehq.slack.com/archives/C021JANJ6TY/p1722452745655889) if you want to access the original thread.

[Join the conversation on Slack](https://slack.airbyte.com)

<sub>
["sync-job", "cpu-memory-limits", "kubernetes", "upgrade", "values.yml", "environment-variables"]
</sub>

Hmm, this appears to have just started working after some time away from the machine. I'm not sure what caused it to kick in, as I had already restarted the workers after applying the config change, and they did not restart between then and now.

It did make me realize I had a typo in the amount of memory I was allocating, though! Use M for megabytes, not m; in Kubernetes, lowercase m is the milli suffix, so 150m memory is a fraction of a byte :wink:
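For reference, a corrected version of the values.yml block, assuming megabyte-scale allocations were intended (use Mi instead of M if mebibytes are preferred):

```yaml
jobs:
  resources:
    requests:
      cpu: 75m       # 75 millicores -- lowercase m is correct for CPU
      memory: 150M   # 150 megabytes; lowercase 150m would mean 0.15 bytes
    limits:
      cpu: 150m
      memory: 500M
```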