Source Postgres - Is it possible to progressively push data to the destination to save memory?

Hi,

I have Airbyte deployed via Docker on an AWS EC2 instance (t2.xlarge, 16 GB RAM, 8 cores).

I am using it to sync our Postgres production database to our Google BigQuery data warehouse.
After running into several issues, I found that syncs were failing because of excessive memory usage.
The logs indeed show that memory usage keeps growing table after table, and it only drops once all tables have been processed.
Wouldn't it be possible to push tables to the destination progressively, to keep memory usage reasonable?
As it stands, the memory needed to fully sync a database is roughly equal to its size on disk, which does not seem reasonable.
It also means that the initial sync requires a huge amount of RAM that later syncs won't need if you run in incremental mode, and resizing an EC2 instance after it has been created is not straightforward.

Hi @anatole, thanks for your post. Have you tried creating multiple connections for different tables that all sync to the same destination? This would parallelize the syncs at the connector level, so each job only has to handle a subset of your tables.
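If setting up many connections through the UI is tedious, here is a rough sketch of how you might script that split with the Airbyte Configuration API. The endpoint paths, payload fields, chunk size, and IDs below are assumptions based on the public API docs, not a tested recipe, so please check them against your Airbyte version before running anything:

```python
# Sketch: split a Postgres source's tables across several Airbyte connections
# so each sync job only processes a subset of tables at a time.
# Assumptions: the Configuration API is reachable at localhost:8000 and the
# endpoint/payload shapes below match your Airbyte version; the IDs and the
# chunk size are placeholders.
import requests

API = "http://localhost:8000/api/v1"
SOURCE_ID = "<your-postgres-source-id>"            # placeholder
DESTINATION_ID = "<your-bigquery-destination-id>"  # placeholder
TABLES_PER_CONNECTION = 10                         # tune to your memory budget

# 1. Discover the source schema (the full stream catalog).
catalog = requests.post(
    f"{API}/sources/discover_schema",
    json={"sourceId": SOURCE_ID},
).json()["catalog"]

streams = catalog["streams"]

# 2. Create one connection per chunk of streams.
# Depending on your version you may also need to set each stream's config
# (selected flag, sync modes, cursor field) and a schedule on the connection.
for i in range(0, len(streams), TABLES_PER_CONNECTION):
    chunk = streams[i:i + TABLES_PER_CONNECTION]
    resp = requests.post(
        f"{API}/connections/create",
        json={
            "name": f"postgres-to-bigquery-part-{i // TABLES_PER_CONNECTION + 1}",
            "sourceId": SOURCE_ID,
            "destinationId": DESTINATION_ID,
            "syncCatalog": {"streams": chunk},
            "status": "active",
        },
    ).json()
    print("created connection", resp.get("connectionId"))
```

The idea is that each connection runs as its own job, so only that chunk of tables is held in memory at once, rather than the whole database in a single sync.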