Issue with S3 credentials configuration in Airbyte deployment

Summary

The user is facing an issue configuring S3 credentials in an Airbyte deployment. The error message indicates that a key cannot be found in the Secret. The user has provided details of the values file, deployment YAML, and Secret YAML.


Question

Hello Everyone,
I am using S3 as storage for logs, and I want to configure credentials for accessing it. I have passed the parameters in the values file as below, but when the airbyte-server and worker pods are created we get this error:
```
Error: couldn't find key s3-access-key-id in Secret dev/airbyte-airbyte-secrets
```
In these deployments, the env vars are configured from the template as:
```
          valueFrom:
            configMapKeyRef:
              name: airbyte-airbyte-env
              key: STORAGE_BUCKET_WORKLOAD_OUTPUT
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: airbyte-airbyte-secrets
              key: s3-access-key-id
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: airbyte-airbyte-secrets
              key: s3-secret-access-key
```
While in the secret, the keys are the following:
```
apiVersion: v1
kind: Secret
metadata:
  name: airbyte-airbyte-secrets
  namespace: dev
data:
  AWS_ACCESS_KEY_ID: <keyid>
  AWS_SECRET_ACCESS_KEY: <key-sec>
  DATABASE_PASSWORD: <db-pass>
  DATABASE_USER: <user>
```
So in the deployment YAML of the airbyte server and worker, it should be like this:
```
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: airbyte-airbyte-secrets
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: airbyte-airbyte-secrets
              key: AWS_SECRET_ACCESS_KEY
```
Please let me know if I am missing anything here, or is this really a bug in the Helm chart template that needs to be fixed?
Thanks in advance!

*Airbyte version: v1.0.0*
*Helm chart version: 0.634.3*


---

This topic has been created from a Slack thread to give it more visibility.
It will be in Read-Only mode here. [Click here](https://airbytehq.slack.com/archives/C021JANJ6TY/p1730354182751879) if you want to access the original thread.

[Join the conversation on Slack](https://slack.airbyte.com)

<sub>
["s3-credentials", "airbyte-deployment", "helm", "s3-access-key-id", "aws-secret-access-key"]
</sub>

thank you <@U05JENRCF7C> :raised_hands:

I have one more query related to the v1.0 deployment.
I have set up the JOB_KUBE_TOLERATIONS env variable correctly in my Helm values, and we can see it in the airbyte-workload-launcher pod as well. But my sync pods are not being created with this toleration applied. Can you please help me out here? I want my sync pods (check/read/write) to have the tolerations applied to them.

Please use a new thread for a new issue.

You might check if this works

```
  jobs:
    kube:
      tolerations: ...
```
[deployment.yaml#L186-L192](https://github.com/airbytehq/airbyte-platform/blob/595a4c945b8ccaebbe08896affbe264678431386/charts/airbyte-workload-launcher/templates/deployment.yaml#L186-L192)
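For reference, a filled-in version might look like the following, assuming the `jobs` block nests under `global:` as the indentation in the snippet suggests; the taint key, value, and effect here are hypothetical placeholders, not from this thread:

```
global:
  jobs:
    kube:
      tolerations:
        - key: "dedicated"       # hypothetical taint key
          operator: "Equal"
          value: "airbyte"       # hypothetical taint value
          effect: "NoSchedule"
```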

<https://github.com/airbytehq/airbyte/issues/28389#issuecomment-2443887014>

When in doubt, just extract the specific version of the Helm chart, e.g. `tar xzf airbyte-1.1.1.tgz`, and check the code:
<https://github.com/airbytehq/helm-charts>

Yes, I tried this. Thanks

There’s another issue here: https://github.com/airbytehq/airbyte/issues/45903#issuecomment-2450333362

You can find the released Helm charts here: https://github.com/airbytehq/helm-charts

With a command like `tar xzf airbyte-0.634.3.tgz` you can extract a specific version and check the Helm chart code:

```
{{/*
Returns S3 environment variables.
*/}}
{{- define "airbyte.storage.s3.envs" }}
{{- if eq .Values.global.storage.s3.authenticationType "credentials" }}
- name: AWS_ACCESS_KEY_ID 
  valueFrom:
    secretKeyRef:
      name: {{ include "airbyte.storage.secretName" . }}
      key: {{ .Values.global.storage.s3.accessKeyIdSecretKey | default "s3-access-key-id" }}
- name: AWS_SECRET_ACCESS_KEY 
  valueFrom:
    secretKeyRef:
      name: {{ include "airbyte.storage.secretName" . }}
      key: {{ .Values.global.storage.s3.secretAccessKeySecretKey | default "s3-secret-access-key" }}
{{- end }}
{{- if .Values.global.storage.s3.region }}
- name: AWS_DEFAULT_REGION 
  valueFrom:
    configMapKeyRef:
      name: {{ .Release.Name }}-airbyte-env
      key: AWS_DEFAULT_REGION
{{- end }}
{{- end}}
```
Probably you need something like this in your values.yaml to override the defaults:
```
global:
  storage:
    s3:
      accessKeyIdSecretKey: ...
      secretAccessKeySecretKey: ...
```
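An alternative, if you create the Secret yourself, would be to keep the chart's defaults and name the Secret's data keys accordingly instead. A sketch based on the default key names in the template above; the values are base64-encoded placeholders:

```
apiVersion: v1
kind: Secret
metadata:
  name: airbyte-airbyte-secrets
  namespace: dev
data:
  s3-access-key-id: <keyid>         # chart's default key name
  s3-secret-access-key: <key-sec>   # chart's default key name
```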

Thanks, this worked for me, like this:
```
  storage:
    s3:
      secretAccessKeySecretKey: "AWS_SECRET_ACCESS_KEY"
      accessKeyIdSecretKey: "AWS_ACCESS_KEY_ID"
```
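With that override in place, the template above should render the env vars against the existing Secret keys, roughly like this (a sketch of the expected rendered output, not taken from an actual render):

```
- name: AWS_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      name: airbyte-airbyte-secrets
      key: AWS_ACCESS_KEY_ID       # from the accessKeyIdSecretKey override
- name: AWS_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      name: airbyte-airbyte-secrets
      key: AWS_SECRET_ACCESS_KEY   # from the secretAccessKeySecretKey override
```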

I am still getting some errors in the airbyte-server logs, as it is not able to access the right S3 bucket. Could someone help me with it?

```
java.lang.RuntimeException: Cannot end publishing: Cannot publish to S3: The bucket is in this region: eu-west-1. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: RBBZHF447NKK8FSH; S3 Extended Request ID: NhvCwGZRVVylHOAgcalBEfoLt+dd/Kl+T/d/nSuI=; Proxy: null)
	at com.van.logging.AbstractFilePublishHelper.end(AbstractFilePublishHelper.java:66) ~[appender-core-5.3.2.jar:?]
	at com.van.logging.BufferPublisher.endPublish(BufferPublisher.java:67) ~[appender-core-5.3.2.jar:?]
	at com.van.logging.LoggingEventCache.publishEventsFromFile(LoggingEventCache.java:198) ~[appender-core-5.3.2.jar:?]
	at com.van.logging.LoggingEventCache.lambda$publishCache$0(LoggingEventCache.java:243) ~[appender-core-5.3.2.jar:?]
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572) ~[?:?]
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317) ~[?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144) ~[?:?]
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642) ~[?:?]
	at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
Caused by: java.lang.RuntimeException: Cannot publish to S3: The bucket is in this region: eu-west-1. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: RBBZHF447NKK8FSH; S3 Extended Request ID: +bMnkdRkZDZlIdddw5K8sSE/K/nSuI=; Proxy: null)
	at com.van.logging.aws.S3PublishHelper.publishFile(S3PublishHelper.java:131) ~[appender-core-5.3.2.jar:?]
	at com.van.logging.AbstractFilePublishHelper.end(AbstractFilePublishHelper.java:61) ~[appender-core-5.3.2.jar:?]
	... 8 more
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this region: eu-west-1. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: PermanentRedirect; Request ID: RBBZHF447NKK8FSH; S3 Extended Request ID: NhvCwGZRVVylHOAdddgcalBEfoLt+ddd/Kl+T//nSuI=; Proxy: null)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1880) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1418) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1387) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1157) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:814) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:781) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:755) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:715) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:697) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:561) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:541) ~[aws-java-sdk-core-1.12.770.jar:?]
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5575) ~[aws-java-sdk-s3-1.12.770.jar:?]
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5522) ~[aws-java-sdk-s3-1.12.770.jar:?]
```


Here is my config in the values file:
```
global:
  storage:
    daps-dev-airbyte-logs: ## S3 bucket name
      log: airbyte-bucket
      state: airbyte-bucket
      workloadOutput: airbyte-bucket
    s3:
      secretAccessKeySecretKey: "AWS_SECRET_ACCESS_KEY"
      accessKeyIdSecretKey: "AWS_ACCESS_KEY_ID" 
      secretAccessKey: "<>"
      accessKeyId: "<>"
      region: "us-east-2" ## e.g. us-east-1
      authenticationType: credentials ## Use "credentials" or "instanceProfile"
```

Have you read the error message `Cannot end publishing: Cannot publish to S3: The bucket is in this region: eu-west-1.`?
In your values.yaml you have `region: "us-east-2"`.

But my bucket is in the us-east-2 region, and in the values file I have provided us-east-2 as well. I am not able to understand where it picks up the eu-west-1 region from.

If you copied your values.yaml exactly, then this part

```
    daps-dev-airbyte-logs: ## S3 bucket name
      log: airbyte-bucket
      state: airbyte-bucket
      workloadOutput: airbyte-bucket
```

should be replaced with

```
    bucket:
      log: daps-dev-airbyte-logs
      state: daps-dev-airbyte-logs
      workloadOutput: daps-dev-airbyte-logs
```
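Putting the thread's fixes together, the storage section of the values.yaml would look roughly like this (a sketch assembled only from the snippets above, using the bucket name and region mentioned in this thread):

```
global:
  storage:
    bucket:
      log: daps-dev-airbyte-logs
      state: daps-dev-airbyte-logs
      workloadOutput: daps-dev-airbyte-logs
    s3:
      region: "us-east-2"
      authenticationType: credentials
      accessKeyIdSecretKey: "AWS_ACCESS_KEY_ID"
      secretAccessKeySecretKey: "AWS_SECRET_ACCESS_KEY"
```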