Issue with cron syncs after restoring Airbyte OSS DB

Summary

After restoring the Airbyte OSS DB onto another EC2 instance, cron syncs are not executing properly. The user suspects that the cron jobs that did run correctly were the ones modified after the restore, and is looking for where the state of the scheduled jobs can be found in the DB or OS.


Question

For Airbyte OSS, I backed up the DB and restored it into another EC2 instance, and I ran into a problem: most of my cron syncs did not execute.
I’m thinking that the couple that did run correctly on schedule were modified after the restore, and saving them did something to fully re-schedule them.
Has anyone else run into this after a DB restore or know where in the DB or OS I can find the state of the scheduled jobs?



This topic has been created from a Slack thread to give it more visibility.
It will be in read-only mode here.


["airbyte-oss", "db-restore", "cron-syncs", "scheduled-jobs", "ec2-instance"]

Did you restore the Temporal databases as well? That may be where the disconnect is.

Temporal handles a lot of the orchestration under the hood. Airbyte has documented some of <https://airbyte.com/blog/scale-workflow-orchestration-with-temporal|their reasons for and how they are using Temporal>, and you can also see how it fits into the bigger picture in their <https://docs.airbyte.com/understanding-airbyte/high-level-view|Architecture Overview>.

Temporal creates two databases in the configured DB (whether internal or external): temporal and temporal_visibility.
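When backing up for a migration, all three databases need to travel together. A minimal sketch (the host and user names are hypothetical placeholders; adjust to your environment) that prints the pg_dump commands for the default database names:

```python
# Airbyte's config DB plus the two databases Temporal creates alongside it.
DATABASES = ["airbyte", "temporal", "temporal_visibility"]

def dump_command(db, host="airbyte-db.example.com", user="airbyte"):
    """Build a pg_dump command for one database (custom-format archive)."""
    return f"pg_dump -h {host} -U {user} -Fc -f {db}.dump {db}"

for db in DATABASES:
    print(dump_command(db))
```

Restoring only the `airbyte` dump leaves Temporal with no record of the scheduled workflows, which matches the symptom described above.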

Nope, just the airbyte DB. I can see now how that would cause this. I’ll try updating each of them, since it’s been a couple of days and I wouldn’t want to move those now.

Thank you <@U035912NS77>!

FYI, I found this reference that didn’t mention those https://discuss.airbyte.io/t/how-to-import-export-airbyte-to-a-new-instance-docker-to-docker-deploy/3514

Yeah, I’ve migrated the full DB before (including Temporal), but I wouldn’t have noticed any scheduling issues since we have all of ours set as manual and then trigger them via the API based on things that happen in our platform.

I’m sort of guessing most things would snap back if you edited each connection in any way. If you have a ton and don’t want to do something manually, you might be able to loop over all the active connections in the API and reset them to the same schedule as they currently are and I bet you Airbyte would recreate all the Temporal stuff.
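That loop could look something like the sketch below, against the Airbyte OSS config API. This is untested against a live instance: the endpoint paths and payload fields are my best understanding of the v1 API, and the base URL and workspace ID are placeholders.

```python
import json
import urllib.request

# Default local Airbyte OSS API base; adjust for your deployment.
API = "http://localhost:8000/api/v1"

def post(path, payload):
    """POST a JSON payload to the Airbyte config API and return the parsed response."""
    req = urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def resync_payload(conn):
    """Build an update payload that re-saves a connection with its current
    schedule, which should prompt Airbyte to recreate the Temporal workflow."""
    return {
        "connectionId": conn["connectionId"],
        "syncCatalog": conn["syncCatalog"],
        "status": conn["status"],
        "schedule": conn.get("schedule"),
    }

def resync_all(workspace_id):
    """Re-save every active connection in a workspace with its existing schedule."""
    conns = post("/connections/list", {"workspaceId": workspace_id})["connections"]
    for conn in conns:
        if conn["status"] == "active":
            post("/connections/update", resync_payload(conn))

# Example (requires a running Airbyte instance):
# resync_all("YOUR-WORKSPACE-ID")
```

The key idea is that the update keeps the schedule exactly as it already is; the write itself is what should trigger Airbyte to re-register the workflow with Temporal.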

I’d also look for any bugs or issues and see if maybe it’s supposed to do that on its own. Feels like if Temporal gets reset, it would know that and could reset the state there during initialization.

I’m guessing people doing things like restoring from backups wouldn’t notice, because they’re probably restoring the whole DB instance (at least if they’re using the internal DB). Or if they’re using a dedicated DB, probably a snapshot of the instance which would also include Temporal. So it may be “accidentally” working for a lot of folks simply because of how they restore or migrate. But at minimum it should probably be documented somewhere what you need to do when you migrate to ensure continuity.