We need the ability to provide OAuth access tokens to connections at sync time, instead of allowing connections to retrieve their own access tokens from the third party's OAuth endpoints (using a client id, client secret, etc.). Since access tokens are extremely short-lived, it doesn't make sense to put them in the connection configuration.
We currently have an OAuth implementation that allows our clients to connect third-party services to their accounts in our system. The OAuth authorization code flow results in access tokens and refresh tokens stored in our own (non-Airbyte) database.
We are looking to use Airbyte to ELT our clients’ data from third-party systems into our database for use by our user-facing analysis tools. We will have a separate Airbyte connection for each user’s third-party integration.
We cannot provide our OAuth client id, client secrets, refresh tokens, etc. to Airbyte, since we will end up with a "split-brain" scenario. Because many providers rotate (invalidate) a refresh token each time it is used to obtain an access token, whichever system refreshes second will find its refresh token revoked and lose access.
Here are some of the approaches we are considering:
Update connectors to allow overriding the OAuth token endpoint URL. This would let us host our own token endpoint that serves access tokens from our internal system.
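To illustrate the first approach, here is a minimal sketch of what such an internal token endpoint could look like. The `/oauth/token` path, the in-memory `TOKEN_STORE`, and the hard-coded client id are all hypothetical placeholders for our real token store and auth; a connector pointed at this URL would receive a standard token response without ever contacting the third party.

```python
# Sketch of an internal OAuth token endpoint a connector could be pointed at
# instead of the third party's endpoint. TOKEN_STORE stands in for our real
# internal token database; a real handler would parse and validate the request.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory stand-in for our internal token store.
TOKEN_STORE = {"client-123": {"access_token": "at-example", "expires_in": 3600}}

def token_response(client_id: str) -> bytes:
    """Build the JSON body a connector expects from a token endpoint."""
    record = TOKEN_STORE[client_id]
    return json.dumps({
        "access_token": record["access_token"],
        "token_type": "Bearer",
        "expires_in": record["expires_in"],
    }).encode()

class TokenHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # A real implementation would read the form body, look up the caller,
        # and reject unknown clients instead of answering for "client-123".
        body = token_response("client-123")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8080), TokenHandler).serve_forever()
```

The response shape (access_token, token_type, expires_in) follows the standard OAuth 2.0 token response, so an unmodified connector should accept it as long as the endpoint URL is configurable.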
Use Docker's HTTP(S) proxy support to route all connector traffic through a custom proxy server that intercepts OAuth token requests and responds with access tokens obtained from our internal system.
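The core of the second approach is the proxy's routing decision: answer known token-endpoint requests locally, forward everything else. Below is a sketch of just that decision logic; the endpoint list and the `fetch_internal_token` callback are hypothetical. Note that intercepting HTTPS traffic also requires the proxy to terminate TLS with a CA certificate the connector container trusts.

```python
# Sketch of a custom proxy's intercept logic: requests to known third-party
# OAuth token endpoints are answered from our internal store, everything else
# is forwarded unchanged. Endpoint list and callbacks are hypothetical.
from urllib.parse import urlparse

# Hypothetical set of (host, path) pairs for third-party token endpoints.
INTERCEPTED_TOKEN_ENDPOINTS = {
    ("oauth2.googleapis.com", "/token"),
    ("api.hubapi.com", "/oauth/v1/token"),
}

def should_intercept(url: str) -> bool:
    """True when the proxied request targets a known OAuth token endpoint."""
    parsed = urlparse(url)
    return (parsed.hostname, parsed.path) in INTERCEPTED_TOKEN_ENDPOINTS

def handle(url: str, forward, fetch_internal_token):
    """Answer token requests from our internal store; forward the rest."""
    if should_intercept(url):
        return fetch_internal_token(url)
    return forward(url)
```

The proxy itself could then be wired up to the connector container via the usual `HTTP_PROXY`/`HTTPS_PROXY` environment variables.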
Create a new connector that wraps the target connector's Docker image. When our connector starts, it first obtains a fresh access token from our internal system and passes it into the target connector image (as an environment variable or a config setting?).
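For the wrapper-image approach, the interesting part is splicing the fresh token into the target connector's config before handing off. A sketch, where the `credentials.access_token` layout and the `/airbyte/base.sh` entrypoint path are assumptions about the wrapped image:

```python
# Sketch of a wrapper entrypoint's token injection: take the target
# connector's config, splice in a freshly fetched access token, then hand off
# to the wrapped image's own entrypoint with the patched config.
import json

def inject_token(config: dict, access_token: str) -> dict:
    """Return a copy of the connector config with the fresh token spliced in.
    The "credentials" key layout is an assumption about the target connector."""
    patched = dict(config)
    creds = dict(patched.get("credentials", {}))
    creds["access_token"] = access_token
    patched["credentials"] = creds
    return patched

def wrapped_command(config_path: str, mode: str = "read") -> list:
    """Command line to exec for the wrapped connector.
    "/airbyte/base.sh" is a placeholder for the target image's entrypoint."""
    return ["/airbyte/base.sh", mode, "--config", config_path]
```

The wrapper would write `inject_token(...)` output to a temp file and `exec` `wrapped_command(...)`, so the wrapped connector sees a config that already contains a valid token and never needs to refresh one itself.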
Fork the connector in question and change the OAuth access token retrieval code to contact our system instead of the third-party system.
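In a fork, the change would likely be confined to the connector's token-refresh path: swap the call to the third party's token endpoint for a call to our internal service. The sketch below shows the shape of such a replacement authenticator written standalone; the exact class it would subclass in a real connector (e.g. the Airbyte CDK's OAuth authenticator) and the internal fetch callback are assumptions.

```python
# Sketch of the fork approach: a token-caching authenticator that refreshes
# from our internal system instead of the third party. fetch_token is a
# hypothetical callback returning (access_token, expires_in_seconds).
import time
from typing import Callable, Tuple

class InternalTokenAuthenticator:
    """Caches a token and refreshes it from our system, not the third party."""

    def __init__(self, fetch_token: Callable[[], Tuple[str, int]]):
        self._fetch_token = fetch_token
        self._token = None
        self._expires_at = 0.0

    def get_access_token(self) -> str:
        if self._token is None or time.time() >= self._expires_at:
            self._token, expires_in = self._fetch_token()
            # Refresh slightly early so in-flight requests don't race expiry.
            self._expires_at = time.time() + expires_in - 30
        return self._token

    def auth_header(self) -> dict:
        return {"Authorization": f"Bearer {self.get_access_token()}"}
```

The obvious downside, of course, is carrying the fork forward across upstream connector releases.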
I’d love to hear thoughts on these or other potential approaches. Thanks all.