Hey @alafanechere,
Good idea on splitting the job into several connections, but I'm benchmarking the solution with just one big table (20 million rows), as I described here.
You can see the numbers there as well, though you'll find them quite scattered depending on which figures you take as the baseline.
What's more, during this sync we only utilize <10% of the instance's resources.
The logs were huge, so I removed a few thousand lines like:
Table _AIRBYTE_TMP_YWW_VALIDATION_CONFIG column _AIRBYTE_AB_ID (type VARCHAR[16777216]) -> Json type io.airbyte.protocol.models.JsonSchemaType@49c17ba4
logs-159.txt (3.1 MB)