So the short answer is . . . it’s complicated.
We have this working (and it’s wonderful), but getting there is a little complex.
Here’s what we’re doing (just so you have the lay of the land, as there’s a lot of variation in different people’s setups and some are easier than others):
• Networking: Shared VPC from VPC host project
• GKE: Autopilot mode, private
• DB: Cloud SQL, also private
• Logs/state: Cloud Storage (GCS)
• LB: Native “Application (Classic)” HTTP/S LB, including HTTP->HTTPS redirect, modified backend timeout
• SSL: Google-issued (automated on LB)
• IP: Static reserved IP for inbound LB, static reserved IP for outbound (using Cloud NAT) to allow for IP-whitelisting with APIs
• Auth: IAP
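For the static IP piece of the setup above, here's roughly what the reservations and Cloud NAT wiring look like in `gcloud`. All the names and the region are hypothetical placeholders, and this is a sketch, not a tested end-to-end script:

```shell
# Inbound: a global static IP for the external HTTPS LB frontend
gcloud compute addresses create airbyte-lb-ip --global

# Outbound: a regional static IP for Cloud NAT, so external APIs
# can whitelist a single egress address
gcloud compute addresses create airbyte-nat-ip --region=us-central1

# Cloud Router + Cloud NAT pinned to the reserved egress IP
gcloud compute routers create airbyte-router \
  --network=your-shared-vpc --region=us-central1
gcloud compute routers nats create airbyte-nat \
  --router=airbyte-router --region=us-central1 \
  --nat-external-ip-pool=airbyte-nat-ip \
  --nat-all-subnet-ip-ranges
```

With a Shared VPC, note that the addresses and NAT live in whichever project owns the network, so you may need to run these against the host project.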
I’ve toyed a LOT with trying to get Airbyte’s Helm charts to deploy/re-deploy the load balancer correctly with only changes to `values.yaml`, but I haven’t quite gotten there without some manual intervention still needed.
So my recommendation is not to fight it, and instead just do the minimum you need to keep it from breaking a manually configured LB. That’s most likely just disabling the ingress section of the chart and making your load balancer yourself.
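Disabling the chart's ingress is just a `values.yaml` override. The exact key layout depends on your chart version, so double-check against the chart's own `values.yaml`, but it's along the lines of:

```yaml
webapp:
  ingress:
    enabled: false
```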
Note that depending on your setup, you may not be able to point the LB directly at the pods you want (called “container-native load balancing”). For example, it’s disabled by default for us because we use a Shared VPC. In these cases, you need to make sure that this annotation is present in `values.yaml`:
```yaml
service:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
```
. . . which should allow you to point traffic at the service you want. From there, you can either set the ingress up from within the cluster (from the Services list in GKE, check the box for `*-airbyte-webapp-svc` and then click Create Ingress), or, if you want the LB to live outside of the cluster (which makes it less prone to being nuked when you're fiddling with your deployment), you can create it independently under Network Services > Load Balancers in GCP. There are trade-offs from a visibility/management standpoint, so choose your poison :slightly_smiling_face:
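If you go the standalone route, the `gcloud` version is roughly the following. Every name here is a hypothetical placeholder, and the NEG name/zone come from whatever GKE auto-created for the service (see `gcloud compute network-endpoint-groups list`), so treat this as a sketch of the moving parts rather than a copy-paste recipe:

```shell
# Health check + backend service fronting the webapp NEG
gcloud compute health-checks create http airbyte-hc --port=80
gcloud compute backend-services create airbyte-be \
  --global --protocol=HTTP --health-checks=airbyte-hc
gcloud compute backend-services add-backend airbyte-be --global \
  --network-endpoint-group=your-webapp-neg \
  --network-endpoint-group-zone=us-central1-a \
  --balancing-mode=RATE --max-rate-per-endpoint=100

# URL map -> HTTPS proxy -> forwarding rule on the reserved IP
gcloud compute url-maps create airbyte-lb --default-service=airbyte-be
gcloud compute target-https-proxies create airbyte-https-proxy \
  --url-map=airbyte-lb --ssl-certificates=your-google-managed-cert
gcloud compute forwarding-rules create airbyte-fe \
  --global --address=your-static-reserved-ingress-ip \
  --target-https-proxy=airbyte-https-proxy --ports=443
```

This is also where you'd bump the backend timeout (`--timeout` on the backend service) and attach IAP, since the chart no longer manages any of it.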
*So, that's probably the easy way.*
But, if you don't want to listen to me (I wouldn't, because I'm a glutton for punishment), you would need to figure out some combination of annotations on the service and ingress to make everything play nice together. Here's part of the config I've been playing with, in case it helps.
`values.yaml`:
```yaml
webapp:
  service:
    annotations:
      # Note: you'd need to create this BackendConfig via kubectl before deploying; example below
      cloud.google.com/backend-config: '{"default": "your-custom-backend-config"}'
      cloud.google.com/neg: '{"ingress": true}'
  ingress:
    enabled: true
    annotations:
      # Note: different values here trigger different LB types
      kubernetes.io/ingress.class: gce
      kubernetes.io/ingress.global-static-ip-name: your-static-reserved-ingress-ip
      networking.gke.io/managed-certificates: your-generated-cert-if-you-already-created-it
      # Note: provisioning the cert takes a long time, so I wanted to pre-provision it
      # and pass the CertMap, but haven't been able to get it to work right
      # networking.gke.io/certmap: your-cert-map
      # Note: I only do this to configure the HTTP->HTTPS redirect
      networking.gke.io/v1beta1.FrontendConfig: your-custom-frontend-config
    hosts:
      - host: your-hostname.example.com
        paths:
          - path: /*
            pathType: ImplementationSpecific
```
`your-custom-backend-config.yaml`:
```yaml
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: your-custom-backend-config
spec:
  timeoutSec: 600
  iap:
    enabled: true
    oauthclientCredentials:
      # you'd create this like any other secret, and it's based on your OAuth config
      secretName: your-iap-secret
```
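Creating that IAP secret is a one-liner; GKE's IAP integration expects the keys to be named `client_id` and `client_secret`, with the values coming from your OAuth client in the GCP console (placeholders below):

```shell
kubectl create secret generic your-iap-secret \
  --from-literal=client_id=YOUR_OAUTH_CLIENT_ID \
  --from-literal=client_secret=YOUR_OAUTH_CLIENT_SECRET
```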
`your-custom-frontend-config.yaml`:
```yaml
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: your-custom-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
```
That gets it painfully close to working right, but cert provisioning is slow (and while that's happening you can't connect), and IAP likes to toggle off sometimes during upgrades/re-deploys. It also seems to try to auto-link a second set of network endpoint groups, which conflicts with the specified NEGs, so you have to manually remove them and reset the defaults.
Google has added a lot more annotations (some of which might help here), but I haven't been able to spend a lot of time fiddling with it again.
If you come up with anything to make it smoother, let me know!