Summary
User encounters a timeout issue with the Google Ads connector in Airbyte, where replication jobs fail due to a 5-minute timeout in the request_records_job method. They seek advice on increasing the timeout without source code changes, optimizations for the connector, and possible retries.
Question
Hi everyone,
I’m experiencing an issue with a replication job in Airbyte using the Google Ads connector. Here’s the context:
• The job fails due to a timeout in the `request_records_job` method, which is limited to 5 minutes (via the `@detached(timeout_minutes=5)` decorator in the source code).
• The resources allocated to the pods (`source`, `destination`, `orchestrator`) seem more than sufficient:
  ◦ Source: CPU 14m, memory 103Mi (well below the set limits).
• I’ve checked the logs from the `source` container, but I can’t find any clear indication of why the operation takes so long.
• The data being processed comes from the `shopping_performance_view`, and I suspect the issue might be related to the size of the data or the response time of the Google Ads API.
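For reference, the timeout mechanism I’m describing follows the usual “run the work in a worker, give up after N minutes” pattern. The sketch below is a generic reconstruction for illustration only, not the connector’s actual implementation; the decorator name and `request_records_job` come from the Airbyte source, everything else is assumed:

```python
# Illustrative sketch of a @detached-style timeout decorator.
# NOT the real Airbyte code: only the names detached / timeout_minutes /
# request_records_job come from the connector source.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
from functools import wraps


def detached(timeout_minutes=5):
    """Run the wrapped function in a worker thread and stop waiting for it
    after `timeout_minutes`, raising TimeoutError to the caller."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            with ThreadPoolExecutor(max_workers=1) as pool:
                future = pool.submit(func, *args, **kwargs)
                try:
                    return future.result(timeout=timeout_minutes * 60)
                except FutureTimeout:
                    raise TimeoutError(
                        f"{func.__name__} exceeded {timeout_minutes} minute(s)"
                    )
        return wrapper
    return decorator


@detached(timeout_minutes=5)
def request_records_job(query):
    # Placeholder for the Google Ads API call that streams report rows.
    return f"rows for {query}"
```

As I understand it, the 5 is hard-coded at the decoration site, which is why a slow `shopping_performance_view` report gets killed regardless of pod resources.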
Steps I’ve already tried:
- Fragmenting the data by reducing the date range in the connector configuration.
- Increasing the resources of the Kubernetes pods (no impact, as the resources aren’t fully utilized).
- Monitoring metrics in real-time to confirm the job isn’t constrained by CPU or memory.
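To clarify the first step above, the fragmentation I tried amounts to splitting one large sync window into smaller date slices so each request covers fewer days. A minimal sketch of that idea (my own helper, not part of the connector configuration):

```python
# Sketch of date-range fragmentation: split [start, end] into
# non-overlapping slices of at most `days_per_slice` days each.
from datetime import date, timedelta


def split_date_range(start, end, days_per_slice):
    """Yield (slice_start, slice_end) pairs covering [start, end] inclusively."""
    step = timedelta(days=days_per_slice)
    cur = start
    while cur <= end:
        slice_end = min(cur + step - timedelta(days=1), end)
        yield cur, slice_end
        cur = slice_end + timedelta(days=1)
```

Even with fairly small slices, the job still hits the 5-minute limit on the heavier windows.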
Questions:
- Is there any way to increase the `timeout_minutes` value of the `@detached` decorator without modifying Airbyte’s source code?
- Has anyone faced similar issues with the Google Ads connector and found specific optimizations for requests?
- Is there a way to add retries or dynamically increase the timeout for this type of operation via the connector’s configuration?
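For the last question, the kind of retry behavior I have in mind would look roughly like this. This is a hypothetical sketch of the idea, not something Airbyte exposes as far as I know:

```python
# Hypothetical retry wrapper: re-run a slow/flaky operation a few times
# with exponential backoff before giving up for good.
import time


def with_retries(func, max_attempts=3, base_delay=1.0):
    """Call `func` up to `max_attempts` times, doubling the pause between
    attempts; re-raise the last TimeoutError if every attempt fails."""
    for attempt in range(1, max_attempts + 1):
        try:
            return func()
        except TimeoutError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Ideally something equivalent would be configurable on the connector itself rather than patched into the source.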
Thank you in advance for your help and advice!
This topic has been created from a Slack thread to give it more visibility. It will be in Read-Only mode here. Click here if you want to access the original thread.