Error setting up Open Source Salesforce connector

Summary

Error encountered when setting up the Open Source Salesforce connector. The user is receiving an internal error message from the Airbyte platform and is unsure whether the issue is with their setup or with Airbyte processing the connection request.


Question

Hey community! Newer Airbyte user here, and I am getting an error when setting up the Open Source Salesforce connector. I created a connected app in Salesforce with the appropriate settings, then retrieved the client ID, client secret, and refresh token. When I test the connection I receive the error:

“Check failed because of an internal error, Internal message: Check failed because of an internal error
Failure origin: airbyte_platform”

Is this an issue with my setup or with Airbyte processing my request to connect? Please let me know what your thoughts are and how I can resolve this issue!
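(Side note for anyone landing here: the connected-app credentials themselves can be sanity-checked outside Airbyte against Salesforce’s standard OAuth token endpoint; the values below are placeholders, and sandbox orgs use test.salesforce.com instead of login.salesforce.com.)

```
curl -s https://login.salesforce.com/services/oauth2/token \
  -d grant_type=refresh_token \
  -d client_id="YOUR_CLIENT_ID" \
  -d client_secret="YOUR_CLIENT_SECRET" \
  -d refresh_token="YOUR_REFRESH_TOKEN"
```

A working setup returns a JSON payload containing an access_token; an error response here points at the Salesforce side rather than at Airbyte.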



This topic has been created from a Slack thread to give it more visibility.

["error", "open-source", "salesforce-connector", "airbyte-platform", "internal-error"]

I’m using Salesforce heavily on OSS and haven’t run into this one. You might want to check the deployment or server logs and see if there’s an error there.
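If it’s a Docker Compose deployment, something along these lines should surface the relevant errors (the container names are assumptions here; adjust them to whatever docker ps shows):

```
docker ps --format '{{.Names}}'        # list the running Airbyte containers
docker logs --tail 500 airbyte-worker 2>&1 | grep -iE 'salesforce|check|error'
docker logs --tail 500 airbyte-server 2>&1 | grep -iE '502|timeout|error'
```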

A common issue during connector checks is that the timeout on your load balancer or gateway/reverse proxy is too low. By the time the container spins up (potentially having to wait for a node to provision in Kubernetes deployments) and then runs the check, the gateway has already timed out with an HTTP 502 error (something to look for in your logs!). You don’t notice this in normal sync runs because they happen unattended, but for connection check and schema discovery tasks the client is actively waiting on the response, so you run into an error. So make sure your timeouts are set suitably high (I usually set this to 600-1200 seconds to be safe) and the issue may go away.

The how of this varies by deployment method and platform, but if you provide some details on that I may be able to point you in the right direction.
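For example, on AWS an Application Load Balancer’s idle timeout can be raised from the CLI (the ARN below is a placeholder), and an nginx reverse proxy has an equivalent proxy_read_timeout directive:

```
# Sketch only: raise the ALB idle timeout so connector checks have time to finish
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn "$AIRBYTE_ALB_ARN" \
  --attributes Key=idle_timeout.timeout_seconds,Value=1200
```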

I’d start there since it’s an easy fix and you’re likely to run into it eventually. Report back on any errors you see on the deployment side and whether the timeout change fixes it for you, and we can work through the next steps of troubleshooting.

<@U035912NS77> Awesome insight! For reference, I am running Airbyte on an Amazon Linux VM in an EC2 instance. Here’s the error from the logs: 2024-07-03 17:12:43 ERROR i.a.w.t.FailureConverter(getFailureReason-Q2Q30fc):32 - exception classified as NOT_A_TIMEOUT

Did you try checking another source connection?
I got the same issue but with all my sources. I found a way to get past it by removing the Docker volumes airbyte_data and airbyte_workspaces.
Hope it helps!
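For reference, that cleanup on a Docker Compose deployment looks roughly like this (volume names are often prefixed with the compose project name, and removing them wipes local Airbyte state, so double-check what you’re deleting first):

```
docker compose down
docker volume ls | grep airbyte                    # confirm the exact volume names on your host
docker volume rm airbyte_data airbyte_workspaces   # adjust to the names listed above
docker compose up -d
```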