airbyte_* network subnets increment after docker-compose down/up

  • Is this your first time deploying Airbyte?: Yes
  • OS Version / Instance: Oracle Linux 8 Ec2 Instance
  • Memory / Disk: 2 GB / 100 GB
  • Deployment: Docker
  • Airbyte Version: 0.40.22
  • Source name/version: Postgres
  • Destination name/version: S3
  • Step: The issue is happening when running docker-compose up
  • Description:

I’ve occasionally been getting locked out of my SSH session to my Airbyte instance after running docker-compose up.
I’ve tracked this down to the subnets of the three airbyte_* Docker networks incrementing after every docker-compose down followed by docker-compose up.
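
A convenience one-liner (not something from my original debugging, just an easier way to watch the subnets across restarts) dumps every network’s subnet at once:

docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' $(docker network ls -q)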

[airbyte@development-airbyte airbyte]$ docker network ls
NETWORK ID     NAME                       DRIVER    SCOPE
3943d367a72e   airbyte_airbyte_internal   bridge    local
c0a50636e38f   airbyte_airbyte_public     bridge    local
4c78a1d7bf79   airbyte_default            bridge    local
f404daae64c4   bridge                     bridge    local
b828a33e4272   host                       host      local
4645af91faf8   none                       null      local

[airbyte@development-airbyte airbyte]$ docker network inspect c0a50636e38f | grep Subnet
                    "Subnet": "172.20.0.0/16",
[airbyte@development-airbyte airbyte]$ docker network inspect 4c78a1d7bf79 | grep Subnet
                    "Subnet": "172.18.0.0/16",
[airbyte@development-airbyte airbyte]$ docker network inspect f404daae64c4 | grep Subnet
                    "Subnet": "172.17.0.0/16"

After a docker-compose down and up:

[root@development-airbyte ~]# docker network ls
NETWORK ID     NAME                       DRIVER    SCOPE
eb3bc555c420   airbyte_airbyte_internal   bridge    local
1cc8a85e2472   airbyte_airbyte_public     bridge    local
863c56b7b3ed   airbyte_default            bridge    local
f404daae64c4   bridge                     bridge    local
b828a33e4272   host                       host      local
4645af91faf8   none                       null      local
[root@development-airbyte ~]# docker network inspect eb3bc555c420 | grep Subnet
                    "Subnet": "172.22.0.0/16",
[root@development-airbyte ~]# docker network inspect 1cc8a85e2472 | grep Subnet
                    "Subnet": "172.23.0.0/16",
[root@development-airbyte ~]# docker network inspect 863c56b7b3ed | grep Subnet
                    "Subnet": "172.21.0.0/16",

My EC2 instance happens to sit in a subnet of 172.31.33.0/24, so once the incrementing Docker subnets reach 172.31.0.0/16 they overlap it.
After a few stop/starts I’m locked out of my SSH session and have to destroy and recreate the EC2 instance and redeploy Airbyte.
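
A daemon-level alternative (which I haven’t tried here) would be to constrain which subnets the Docker engine hands out in the first place: /etc/docker/daemon.json accepts a default-address-pools setting, and networks created without an explicit subnet should then stay inside that range. A sketch, using an arbitrary example pool; it needs a Docker restart to take effect:

{
  "default-address-pools": [
    { "base": "172.118.0.0/16", "size": 24 }
  ]
}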

I’m attempting to define the subnets for each of the airbyte_* Docker networks via docker-compose.yaml and have managed to do this for all but the airbyte_default network, which is not defined in docker-compose.yaml but is somehow created anyway.

networks:
  airbyte_public:
    ipam:
      driver: default
      config:
        - subnet: 172.118.0.0/24
  airbyte_internal:
    ipam:
      driver: default
      config:
        - subnet: 172.118.1.0/24

I’ve tried defining the subnet for airbyte_default in the same way; however, this results in a warning at docker-compose up:

WARNING: Some networks were defined but are not used by any service: airbyte_default

and the airbyte_default subnet increments anyway.

I’ve cracked this by overriding Compose’s implicitly created default network (Compose keys it as default and names it <project>_default, i.e. airbyte_default here) rather than declaring a separate network called airbyte_default, using the following config:

networks:
  airbyte_public:
    ipam:
      driver: default
      config:
        - subnet: 172.118.0.0/24
  airbyte_internal:
    ipam:
      driver: default
      config:
        - subnet: 172.118.1.0/24
  default:
    name: airbyte_default
    ipam:
      driver: default
      config:
        - subnet: 172.118.2.0/24

However, the original network config included a gateway, which I seem to be unable to set in docker-compose.yaml.

Original network:

[
    {
        "Name": "airbyte_default",
        "Id": "f8fac5a5a28e13fdaea94443e517701160adab1ca0f81f3dbb24cd347825c009",
        "Created": "2022-12-01T15:21:08.409036476Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.22.0.0/16",
                    "Gateway": "172.22.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "airbyte",
            "com.docker.compose.version": "1.26.2"
        }
    }
]

Modified network:

[airbyte@scholarpack-development-airbyte airbyte]$ docker network inspect airbyte_default
[
    {
        "Name": "airbyte_default",
        "Id": "74a13dec68bd557c07af26fdab4fa2fde75ba5a60a9735947e36a2dd03977c8f",
        "Created": "2022-12-01T15:38:57.523644415Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.118.2.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "airbyte_default",
            "com.docker.compose.project": "airbyte",
            "com.docker.compose.version": "1.26.2"
        }
    }
]
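
For completeness: newer Compose releases (the version 2.x file schema and the Compose Specification used by docker compose v2) do accept a gateway under the ipam config, though I haven’t verified this against the 3.8 file and Compose 1.26.2 used here, and 172.118.2.1 below is just an assumed address inside the subnet:

networks:
  default:
    name: airbyte_default
    ipam:
      driver: default
      config:
        - subnet: 172.118.2.0/24
          gateway: 172.118.2.1

In any case, Docker appears to assign the first usable address of the subnet to the bridge when no gateway is given, so the missing Gateway entry in the modified inspect output above may be cosmetic.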

After testing a Postgres-to-S3 connection this configuration looks okay, but someone may want to take a look and confirm that the network configuration is valid in this state.

Hello there! You are receiving this message because none of your fellow community members has stepped in to respond to your topic post. (If you are a community member and you are reading this response, feel free to jump in if you have the answer!) As a result, the Community Assistance Team has been made aware of this topic and will be investigating and responding as quickly as possible.
Some important considerations that will help you get your issue solved faster:

  • It is best to use our topic creation template; if you haven’t yet, we recommend posting a follow-up with the requested information. With that information the team will be able to more quickly search for similar issues with connectors and the platform and more quickly troubleshoot your specific question or problem.
  • Make sure to upload the complete log file; a common investigation roadblock is that sometimes the error for the issue happens well before the problem is surfaced to the user, and so having the tail of the log is less useful than having the whole log to scan through.
  • Be as descriptive and specific as possible; when investigating it is extremely valuable to know what steps were taken to encounter the issue, what version of connector / platform / Java / Python / docker / k8s was used, etc. The more context supplied, the quicker the investigation can start on your topic and the faster we can drive towards an answer.
  • We in the Community Assistance Team are glad you’ve made yourself part of our community, and we’ll do our best to answer your questions and resolve the problems as quickly as possible. Expect to hear from a specific team member as soon as possible.

Thank you for your time and attention.
Best,
The Community Assistance Team

Do you need any assistance here, Billy?

Hey, no, I managed to resolve this myself, but the Datadog Setup documentation could use an update to make this easier to set up.

Feel free to update the docs! Any help is welcome.