Error setting up Snowflake destination with Airbyte on k3s

Summary

Error message ‘Airbyte is temporarily unavailable. Please try again (HTTP 502)’ when creating a Snowflake destination. Logs show warnings and a failure to find output files from the connector.


Question

Hello guys, I’m having issues setting up a Snowflake destination on my Airbyte server (installed with helm on k3s).
When I try to create the destination using Airbyte’s web UI, I get an error saying “Airbyte is temporarily unavailable. Please try again (HTTP 502)”. I’m using the latest version of the Snowflake connector (3.11.5).
When I check the logs of the pod that was created for this task, I see this:

```
2024-08-09 14:11:17 WARN c.a.l.CommonsLog(warn):113 - JAXB is unavailable. Will fallback to SDK implementation which may be less performant.If you are using Java 9+, you will need to include javax.xml.bind:jaxb-api as a dependency.
2024-08-09 14:20:16 WARN i.a.c.ConnectorWatcher(run):74 - Failed to find output files from connector within timeout 9 minute(s). Is the connector still running?
2024-08-09 14:20:16 INFO i.a.c.ConnectorWatcher(failWorkload):277 - Failing workload 424892c4-daac-4491-b35d-c6688ba547ba_93c985f5-0dde-474a-b7d3-a1e2343dfc78_0_check.
2024-08-09 14:20:16 INFO i.a.c.ConnectorWatcher(exitFileNotFound):201 - Deliberately exiting process with code 2.
2024-08-09 14:20:16 WARN c.v.l.l.Log4j2Appender(close):108 - Already shutting down. Cannot remove shutdown hook.
```
Needless to say, I’ve tried again many times and hit the same problem every time.
I'm using a Debian 11 instance on GCP with 30 GB of memory and 8 vCPUs, and since I'm not synchronizing any data (yet), I don't think it's a resource limitation issue. Any ideas?
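For anyone hitting the same 502, this is roughly how I was pulling the logs of the short-lived check pod. A sketch only: the `airbyte` namespace and the placeholder pod name are assumptions, so adjust them to your install.

```shell
# Assumes Airbyte was installed into the "airbyte" namespace (assumption; adjust to yours).
# List pods newest-last to spot the check job pod Airbyte spawned for the connector:
kubectl get pods -n airbyte --sort-by=.metadata.creationTimestamp

# Tail all containers of the check pod (replace the placeholder with the real name):
kubectl logs -n airbyte <check-pod-name> --all-containers

# Look for image-pull errors, OOMKills, or scheduling problems:
kubectl describe pod -n airbyte <check-pod-name>
```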


---

This topic has been created from a Slack thread to give it more visibility.
It will be on Read-Only mode here. [Click here](https://airbytehq.slack.com/archives/C021JANJ6TY/p1723214787674919) if you want to access the original thread.

[Join the conversation on Slack](https://slack.airbyte.com)

<sub>
["snowflake-destination", "airbyte-server", "k3s", "http-502", "connector-warnings", "resource-limitation"]
</sub>

Update: When checking the server pod logs, I see a `java.net.ConnectException: finishConnect(..) failed: Connection refused` error. This seems to be related to Temporal.

When checking the server pod logs, I see an error related to Temporal:

```
io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
        at io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:268) ~[grpc-stub-1.64.0.jar:1.64.0]
```
And when checking the temporal logs, I see this error:

```unable to health check "temporal.api.workflowservice.v1.WorkflowService" service: connection error: desc = "transport: Error while dialing: dial tcp 10.42.0.19:7233: connect: connection refused"```
When running `kubectl get services`, I see that the temporal service is actually at 10.43.112.175:7233. In fact, all services are in the 10.43.x.x range, while the health check is dialing 10.42.0.19. In k3s, 10.42.0.0/16 is the default pod CIDR and 10.43.0.0/16 is the default service CIDR, so Temporal is being dialed at a pod IP instead of its service IP. I think this mismatch might be the cause of this whole issue.
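A sketch of how to confirm the service/endpoint IP mismatch. The `airbyte` namespace and the `airbyte-temporal` service name are assumptions based on typical Helm chart defaults; substitute whatever `kubectl get svc` shows in your cluster.

```shell
# The ClusterIP the service should be reached at (expect a 10.43.x.x address on default k3s):
kubectl get svc -n airbyte airbyte-temporal -o wide

# The pod IPs actually backing the service (expect 10.42.x.x addresses on default k3s):
kubectl get endpoints -n airbyte airbyte-temporal

# Resolve the service name from inside the cluster to see which IP DNS hands out:
kubectl run -n airbyte dns-test --rm -it --restart=Never --image=busybox -- \
  nslookup airbyte-temporal
```

If the name resolves to a pod IP rather than the ClusterIP, the server is bypassing the service, which would match the `connection refused` in the health check above.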

Update: I was using a k3s deployment and decided to switch to minikube. All my problems disappeared like magic.

I’ve seen temporal work in local K8s in docker but not work in K8s in orbstack. /shrug.