Summary
After upgrading connectors on a self-hosted Airbyte platform running on a GCP VM, the workspace became empty. The user had upgraded the platform using the ‘abctl local install’ command.
Question
Hi Team
I have been using a self-hosted version of Airbyte on a GCP VM. I upgraded all the connectors yesterday, and then ingestion started failing, asking me to either downgrade the connectors or upgrade the Airbyte platform version, so I upgraded the platform using the new abctl local install command.
Now all my integrations are gone and my workspace is empty. What HAPPENED? I have no clue.
This topic has been created from a Slack thread to give it more visibility.
It is read-only here; the original conversation continues on Slack.
["self-hosted", "airbyte-platform", "GCP-VM", "upgrade", "connectors", "workspace", "empty", "ingestion", "abctl-local-install"]
Let me know how it goes. If the migration is unsuccessful, I can try to help recover the database from the Docker Compose instance.
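For anyone following along, here is a minimal sketch of that backup-and-migrate path, assuming the Docker Compose defaults for the container, user, and database names (`airbyte-db`, `docker`, `airbyte`); check your `.env` if you changed them:

```bash
# Back up the old docker compose database while those containers are still running.
# Container, user, and database names are the docker compose defaults and may differ
# in your .env.
docker exec airbyte-db pg_dump -U docker -d airbyte > airbyte_compose_backup.sql

# Re-run the install with the migration flag so abctl imports the existing
# docker compose data into the new abctl-managed deployment.
abctl local install --migrate
```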
I found a YAML file for my custom connector. It's a bit old, but it still has the configuration I need, so I can import it and recreate my source in the Builder. But now I am facing a new problem: the Airbyte connection fails, stating “Unable to start the destination”.
What size (CPU and memory) are you running on?
I am running an n2-standard-4, i.e., 4 vCPUs (2 cores) and 16 GB of memory.
Did you run abctl local install --low-resource-mode? You could try that and see if you have more success.
I haven’t tried that yet, but this configuration was more than enough when I was running on docker-compose, so why the problem with abctl?
It should be more than enough with abctl as well. We have tweaked some of the default settings to try to use more resources for better performance, though. The --low-resource-mode flag changes that configuration so it doesn’t request as many resources when launching the job pods.
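For reference, a minimal sketch of re-installing in low-resource mode and then watching the job pods being scheduled, assuming the default abctl kubeconfig path and namespace (the documented defaults at the time of writing; adjust if your setup differs):

```bash
# Re-install with reduced resource requests for the platform and job pods.
abctl local install --low-resource-mode

# abctl runs Airbyte inside a local kind cluster; watching the pods shows how long
# scheduling takes, which helps explain slow job start-up. The kubeconfig path and
# namespace below are the abctl defaults (assumption: unchanged in your setup).
kubectl --kubeconfig ~/.airbyte/abctl/abctl.kubeconfig \
  --namespace airbyte-abctl get pods --watch
```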
Ok, yeah, I will give that a try and see. It is also taking way too long just to create a pod and kick off the job; before, it used to finish the ingestion within ~4 minutes.
Let me know if you are still having performance issues with the low resource mode.
Earlier not even a single stream was working, but now one finished in under 2 minutes, awesome! That was helpful; now I will try to run all the streams.
Another “never-encountered-before” error for one of the streams:
Internal Server Error: com.fasterxml.jackson.databind.JsonMappingException: String value length (20019002) exceeds the maximum allowed (20000000, from `StreamReadConstraints.getMaxStringLength()`)
What are you trying to do? It looks to me like some JSON unmarshalling is failing because a field contains some data that is too large.
Is it possible to override the limit? I have never encountered this error before.
I am pulling data that has a JSON data type, but I don’t think there should be a string that long.
This error occurs during testing in the Builder section, but only for this one stream. I tried the same request in Postman and I am able to get the results.
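As a quick sanity check (not from the thread), you could measure how large the offending response actually is, since the error reports a single string value of roughly 20 MB; the URL and auth header below are hypothetical placeholders for whatever endpoint the stream reads:

```bash
# Fetch the same page the stream requests and count the bytes in the response body.
# Both the URL and the token are placeholders, not values from the original thread.
curl -s -H "Authorization: Bearer $API_TOKEN" \
  "https://api.example.com/v1/records?page=1" | wc -c
```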
So you are seeing this using the connector builder?
Any solution for it? I really appreciate your support, even during the weekend; you don’t need to answer it today.
I don’t know as much about the connector builder, so I don’t think I can help much here. I would suggest opening a new thread in this channel with the error and seeing if anyone else can help troubleshoot.
Builder team checking in. Yeah, we’ve seen this; it happens when your response is bigger than the backend can handle in one go. To work around it, in the Builder there’s a cog in the top right corner (near “testing values”): set the record limit to something much smaller, e.g. 100 records on 2 pages max, and that should do it.
This error is Builder-specific, so your actual connector will be fine once you publish.
<@U065RJ879QT> - I refreshed the schema and this time it pulled all the keys, even the ones with null values.