Fixing HTTP 413 error when creating connection between Salesforce and Databricks

Summary

The user is encountering an HTTP 413 error when trying to create a connection between Salesforce and Databricks; even selecting just a single object triggers the error, which prevents them from saving their changes.


Question

Kindly help with how to fix this: "An unknown error occurred. (HTTP 413)". I am trying to create a connection from Salesforce to Databricks; even selecting just one object has this issue and I can't save my changes.



This topic has been created from a Slack thread to give it more visibility.
It will be in read-only mode here. Click here if you want
to access the original thread.

Join the conversation on Slack

["http-413-error", "salesforce-connector", "databricks-connector", "connection-issue"]

I recommend using search in Slack

Check this thread:
https://airbytehq.slack.com/archives/C021JANJ6TY/p1726756825963869?thread_ts=1726756277.348269&cid=C021JANJ6TY
You should find solutions for Helm charts and abctl there.

<@U05JENRCF7C> I tried `docker exec -it airbyte-abctl-control-plane kubectl -n airbyte-abctl annotate ingress ingress-abctl nginx.ingress.kubernetes.io/proxy-body-size=200m --overwrite` and restarted the nginx controller, but the error is still happening.
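(For anyone hitting this later: a quick way to double-check that the annotation actually landed and that the controller re-read it. This is only a sketch; the ingress-nginx namespace and deployment names below are assumptions for a default abctl/kind install, so adjust them to your cluster.)

```bash
# Sketch, assuming a default abctl (kind) install; names may differ on your cluster.
# Confirm the annotation actually landed on the ingress:
docker exec -it airbyte-abctl-control-plane \
  kubectl -n airbyte-abctl get ingress ingress-abctl \
  -o jsonpath='{.metadata.annotations.nginx\.ingress\.kubernetes\.io/proxy-body-size}'

# Restart the ingress-nginx controller so it re-renders its config
# (the ingress-nginx namespace/deployment names are assumptions; check with kubectl get deploy -A):
docker exec -it airbyte-abctl-control-plane \
  kubectl -n ingress-nginx rollout restart deployment/ingress-nginx-controller
```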

This is what the ingress currently looks like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/client-body-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-body-size: 1024m
    nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  creationTimestamp: "2024-10-07T05:18:04Z"
  generation: 1
  name: ingress-abctl
  namespace: airbyte-abctl
  resourceVersion: "778486"
  uid: f4bbd4bc-346f-4cb4-829a-cb3415c12728
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: airbyte-abctl-airbyte-webapp-svc
            port:
              name: http
        path: /
        pathType: Prefix
status:
  loadBalancer:
```
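(Side note: if you edit the manifest in a file instead of annotating in place, it can be applied through the control-plane container as sketched below; the file name is just an example.)

```bash
# Sketch: apply an edited copy of the manifest by piping it from the host
# into kubectl inside the abctl control-plane container (file name is an example).
docker exec -i airbyte-abctl-control-plane kubectl apply -f - < ingress-abctl.yaml
```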

What do you get for `docker exec -it airbyte-abctl-control-plane kubectl describe ingress ingress-abctl -n airbyte-abctl`?
Do you access that ingress directly, or do you have a load balancer in front of it?

```
Name:             ingress-abctl
Labels:           <none>
Namespace:        airbyte-abctl
Address:          10.96.24.156
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host  Path  Backends
        /     airbyte-abctl-airbyte-webapp-svc:http (10.254.0.10:8080)
Annotations:  nginx.ingress.kubernetes.io/client-body-buffer-size: 128k
              nginx.ingress.kubernetes.io/proxy-body-size: 1024m
              nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
              nginx.ingress.kubernetes.io/proxy-buffering: off
              nginx.ingress.kubernetes.io/proxy-connect-timeout: 600
              nginx.ingress.kubernetes.io/proxy-read-timeout: 600
              nginx.ingress.kubernetes.io/proxy-send-timeout: 600
Events:       <none>
```
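(Worth noting: the annotations show proxy-body-size: 1024m, but you can also confirm the controller actually rendered that limit into its nginx config. A sketch; the ingress-nginx namespace and deployment names are assumptions for a default install.)

```bash
# Sketch: confirm the controller rendered the new limit into its nginx config
# (ingress-nginx namespace/deployment names are assumptions for a default install).
docker exec -it airbyte-abctl-control-plane \
  kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
  grep client_max_body_size /etc/nginx/nginx.conf
```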

I access it directly on the Azure VM via SSH.

Can you check in the browser's developer tools how big a payload is sent for the request that fails with status 413? (Size column)

https://developer.chrome.com/docs/devtools/open
https://firefox-source-docs.mozilla.org/devtools-user/
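(If you want to separate the two failure modes without going through the UI, one option is to push an oversized dummy request through the ingress and see whether nginx itself answers 413. This is only a sketch: the URL, port, and path below assume a default local abctl install on http://localhost:8000, so adjust as needed.)

```bash
# Sketch: push an oversized dummy POST through the ingress to check whether nginx
# itself answers 413. URL, port, and path are assumptions for a local abctl install;
# any path behind the ingress works, since nginx rejects on Content-Length first.
head -c 10M /dev/zero > /tmp/big.bin
curl -s -o /dev/null -w '%{http_code}\n' \
  -X POST --data-binary @/tmp/big.bin \
  http://localhost:8000/api/v1/health
```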

I have the same issue. My body size for 33 of the 4k streams is 5 MB,
so I hit both the schema-retrieval timeout (5 minutes) and the payload size limit.

<@U07QPGM74LB> From the browser you don't get two different HTTP statuses, so I assume you hit the timeout, because the payload size would cause an issue immediately. Search for HTTP_IDLE_TIMEOUT and READ_TIMEOUT on Slack.
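(For the timeout side, HTTP_IDLE_TIMEOUT and READ_TIMEOUT can be raised on the Airbyte server via extra environment variables. The sketch below is an assumption about the values-file layout; double-check the key path and values against your Airbyte Helm chart version before relying on it.)

```bash
# Sketch: raise server-side timeouts through a values file passed to abctl.
# The key path (server.extraEnv) and the 20m values are assumptions; verify
# against your Airbyte Helm chart version before using.
cat > values.yaml <<'EOF'
server:
  extraEnv:
    - name: HTTP_IDLE_TIMEOUT
      value: "20m"
    - name: READ_TIMEOUT
      value: "20m"
EOF
abctl local install --values values.yaml
```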

Yeah, I have temporary fixes in place by editing the nginx config; just providing context.