Sync Jobs Failing: Non-Zero Error Code 143

  • Is this your first time deploying Airbyte?: No
  • OS Version / Instance: Docker
  • Memory / Disk: 4 GB
  • Deployment: Kubernetes
  • Airbyte Version: 0.35.65-alpha
  • Source name/version: Gitlab
  • Destination name/version: Snowflake
  • Step: During Sync
  • Description:

Two particular connections are continuing to fail syncs with the following error message:

Source process exited with non-zero exit code 143

The jobs were previously running and completing successfully (as are other Gitlab connections), but I have no idea how to remediate the connections/jobs that are in this state. Once they start failing, they typically don’t recover and complete successfully again.
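
For context, exit code 143 is 128 + 15, i.e. the source container received SIGTERM: something outside the process asked it to stop, typically Kubernetes deleting or evicting the pod (a hard OOM kill usually surfaces as 137, SIGKILL, instead). A quick way to look for the cause, assuming the default airbyte namespace and substituting your actual pod name:

```
# Find the sync/source pod and check why its container last terminated.
kubectl get pods -n airbyte
kubectl describe pod <source-pod> -n airbyte | grep -A 5 "Last State"

# Recent events often show evictions, memory pressure, or restarts.
kubectl get events -n airbyte --sort-by=.lastTimestamp | tail -20
```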

Hi @adamatzip,
Could you please share your full sync log file?

Amusingly and frustratingly, the last sync for one of the connections suffering from this issue suddenly worked.

logs-1063.txt (93.8 KB)

This job has been consistently failing for the last couple of days.

logs-1073.txt (89.8 KB)

Hi @adamatzip,
The error looks related to the connection to the Snowflake destination. Could you please check whether the connection succeeds when you click “Retest destination” in the destination settings?
What loading method are you using for Snowflake?
What is the available memory on the Kubernetes node on which the sync is running?
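
If it helps, one way to answer the memory question, with <node-name> standing in for whichever node the sync pod lands on:

```
# Shows requests/limits already claimed on the node.
kubectl describe node <node-name> | grep -A 6 "Allocated resources"

# Live usage; requires metrics-server to be installed.
kubectl top node
```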

Hey @alafanechere,

Ran the checks as suggested:

  • Retest destination works OK
  • Loading method is set to “Internal staging”
  • Pod memory is set to 1 GB (request) and 2 GB (limit)

Do you mind trying to increase the memory given to your sync pods by raising the JOB_MAIN_CONTAINER_MEMORY_REQUEST env var on the scheduler and worker pods?
You can try something like 4 GB.
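
A minimal sketch of that change, assuming the kustomize-style kube overlay Airbyte ships (the file location and deployment names below are assumptions; adjust to your setup):

```
# In the overlay's .env consumed by the scheduler and worker pods:
JOB_MAIN_CONTAINER_MEMORY_REQUEST=4Gi
JOB_MAIN_CONTAINER_MEMORY_LIMIT=8Gi

# Then restart the pods that read these variables, e.g.:
#   kubectl rollout restart deployment/airbyte-worker deployment/airbyte-scheduler -n airbyte
```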

The memory settings for those pods are already set to a 4 GB request and an 8 GB limit.
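
Worth noting: those env vars size the job (sync) pods that the worker launches, not the worker/scheduler pods themselves, so it can be worth confirming that a running sync pod actually picked the values up (the pod name below is a placeholder):

```
# Print the resources Kubernetes actually applied to a sync pod.
kubectl get pod <sync-pod> -n airbyte -o jsonpath='{.spec.containers[*].resources}'
```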

@adamatzip could you please try replicating a single stream and share the sync logs?
I don’t think it’s a load problem, as the volume you sync is quite small (Total records read: 42512 (36 MB)).
But since the error is transient, it might be worth checking how the connection behaves in a simpler scenario, with a single stream.
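
For anyone triaging the attached logs, a quick way to surface the termination context (the file name is from this thread; the search patterns are guesses at the relevant keywords):

```
grep -n -i -E "exit code|sigterm|oom|killed" logs-1073.txt
```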
