
Docker network - complete guide

Everything you need to know about Docker networks - bridge, macvlan, and overlay


Containers need to communicate with each other and with the outside world.
Docker has extensive networking capabilities.

What can we do with Docker networking, and how do we use it?

Docker Network Theory

What are the building blocks of Docker networking?

  • CNM
  • libnetwork
  • drivers

Docker networking is based on an open-source design specification called the Container Network Model (CNM). The CNM assumes that network drivers should be pluggable.
Docker's CNM implementation lives in the libnetwork library.

What can we do with Docker's CNM?

  • single-host bridges
  • multi-host overlay networks
  • networks plugged into VLAN’s
  • ingress networks with load balancing

We also get automatic service discovery for our containers.

What elements make up Docker's CNM implementation?

  • sandboxes - an isolated container network stack: Ethernet interfaces, ports, routing tables and DNS configuration
  • endpoints - virtual interfaces inside containers (veth pairs)
  • networks - virtual switches (bridges) that connect endpoints

Which drivers are built into Docker by default (on Linux)?

  • bridge
  • overlay
  • macvlan

Single-host network bridge

By default Docker creates one bridge network during installation.

All containers are connected to it by default unless we override this.
To attach a container to a different network we use the --network flag of docker run.

What is the difference between the default bridge and a user-created one?

  • The default bridge is less secure - it provides less isolation, because every container ends up connected to it by default
  • We can connect and disconnect containers from user-defined bridges without restarting them (see the sketch after this list)
  • The default bridge has no DNS - we must use IP addresses on that network. On a user-defined bridge we get DNS: every container added to the network is automatically registered, so we can simply connect a container to the bridge and reach the other containers by name (this works for containers named with --name at creation time)
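A minimal sketch of the second point - attaching and detaching a running container; the network name demo-bridge and container name demo_ctr are only examples:

# create a user-defined bridge and a container (names are illustrative)
docker network create -d bridge demo-bridge
docker container run -d --name demo_ctr httpd

# attach the running container to the new bridge, then detach it - no restart needed
docker network connect demo-bridge demo_ctr
docker network disconnect demo-bridge demo_ctr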

All of the default networks have local scope, which means we cannot use them to connect containers running on multiple Docker hosts.

[root@docker-host1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
740857ece0a5        bridge              bridge              local
13396ccbb663        host                host                local
44424eae56f4        none                null                local

We can check the details of the default bridge network:

[root@docker-host1 ~]# docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "740857ece0a5fc38891b919d2f7506e32e699b4e79a449f8fef9824d0cde39b2",
        "Created": "2020-05-07T21:07:19.137450998+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

As we can see, the Docker bridge is built on top of a Linux bridge:

"com.docker.network.bridge.name": "docker0"
[root@docker-host1 ~]# ip link show docker0
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:9d:73:fa:48 brd ff:ff:ff:ff:ff:ff
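Instead of reading the whole JSON, single fields of docker network inspect can be pulled out with a Go template - for example the subnet of the default bridge:

# print only the subnet of the default bridge network
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'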

Creating a network bridge

[root@docker-host1 ~]# docker network create -d bridge lukas-bridge
3836e34c6950d80e032e322f915935b51472059ea19cfd65c409263c74ba2b45
[root@docker-host1 ~]# ip a
<snip>
5: br-3836e34c6950: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:0b:b8:19:54 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.1/16 brd 172.19.255.255 scope global br-3836e34c6950
       valid_lft forever preferred_lft forever

As we can see, a new bridge appeared at the OS level, with a name corresponding to the ID of our newly created Docker network.
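We can also pin down the addressing ourselves when creating a bridge. A quick sketch - the network name, subnet and gateway below are only illustrative:

# example values - adjust the subnet, gateway and name to your environment
docker network create -d bridge --subnet 172.30.0.0/16 --gateway 172.30.0.1 lukas-bridge2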

Starting a container connected to the new bridge

[root@docker-host1 ~]# docker container run -d --name web_server --network lukas-bridge httpd
08e885099d0ebca6ab5b533bffd0245dccfaab79149657ad9161fe26c370b879

List the containers connected to the network

[root@docker-host1 ~]# docker network inspect lukas-bridge --format "{{json .Containers}}"
{"08e885099d0ebca6ab5b533bffd0245dccfaab79149657ad9161fe26c370b879":{"Name":"web_server","EndpointID":"ade95c4b553f8cebfbf8f24338b6d0f9e7b2d41744c501816ac734d1bb8ccc1f","MacAddress":"02:42:ac:14:00:02","IPv4Address":"172.20.0.2/16","IPv6Address":""}}

Test the connection between containers on the bridge

If we use the --name flag when creating a container, we can rely on Docker's built-in DNS: containers automatically know the hostname-to-IP mapping of the other containers on the network.

Create a second container:

[root@docker-host1 ~]# docker container run -d --name web_server2 --network lukas-bridge httpd
34bb16b0c43c64c935c8a44c0ed13fa2a7972f79700db290144eba54aa93336a

Get into one of the containers and ping the other one:

[root@docker-host1 ~]# docker exec -it web_server2 /bin/bash
root@34bb16b0c43c:/usr/local/apache2# ping web_server

PING web_server (172.20.0.2) 56(84) bytes of data.
64 bytes from web_server (172.20.0.2): icmp_seq=1 ttl=64 time=0.712 ms
64 bytes from web_server (172.20.0.2): icmp_seq=2 ttl=64 time=0.649 ms
^C

Changing DNS in a container

We can set an additional DNS server for a container - it will be queried whenever Docker's built-in DNS cannot resolve a request.

[root@docker-host1 ~]# docker run -d --dns 8.8.8.8 httpd
4bde944868b492a5cef84676de1cfb1a211069c67c494d4ab742115d4fc17ee9

[root@docker-host1 ~]# docker exec -it 4bde944868b492a5cef84676de1cfb1a211069c67c494d4ab742115d4fc17ee9 bash
root@4bde944868b4:/usr/local/apache2# cat /etc/resolv.conf
search lukas.int
nameserver 8.8.8.8
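The same DNS server can also be configured for all containers at the daemon level instead of per container - a sketch assuming a systemd-based host (the file may not exist yet; back it up if it does):

# write the daemon config
cat > /etc/docker/daemon.json <<'EOF'
{
  "dns": ["8.8.8.8"]
}
EOF

# apply the change
systemctl restart docker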

Port mapping

We can tell Docker to expose a container port on a Docker host port, so that other servers or clients can reach the service running inside the container.
We achieve this with the --publish flag.

Start a container with a published port

We start a container with the httpd service, which exposes port 80 - to reach this port in the container, we connect to the Docker host on port 1234.

[root@docker-host1 ~]# docker run -d --name web_server --network lukas-bridge --publish published=1234,target=80  httpd
304ddf53a801763c769545eb91c6c5522fc2eedf7cf5f4d252bace1d5e0b8a37

Check which ports are published from the container

[root@docker-host1 ~]# docker port web_server
80/tcp -> 0.0.0.0:1234
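The shorter -p syntax achieves the same result, and we can additionally bind the published port to a single host address. The container names and ports below are illustrative (1234 is already taken by the container above):

# equivalent short form (example name and port)
docker run -d --name web_server_b --network lukas-bridge -p 8080:80 httpd

# publish only on the loopback interface of the Docker host
docker run -d --name web_server_c --network lukas-bridge -p 127.0.0.1:8081:80 httpd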

Connecting a container directly to the host network stack

Docker gives us the possibility to integrate a container's network stack directly into the host's stack.
In this case the container does not get its own IP address or MAC (neither externally from the Docker host's network, nor internally from a network created by the Docker engine) - it is reachable as if it were ordinary software running on the host.

This is a good solution if we want to run networked software while still keeping full isolation of the process (pid), mount (mnt), user and IPC namespaces.

To activate this mode we pass --network=host to docker run.
In this mode --publish has no effect - if the service in the container listens on port 80, port 80 on the Docker host will be taken.

[root@docker-host1 ~]# docker run -d --name web_server --network host httpd
dedc6c9e1c05ccb3621fa4272975036f7b9266371a20221f3ba9dbc237eff3b0

If we inspect the container we will see that there is no IP address and the network mode is set to host:

[root@docker-host1 ~]# docker container inspect web_server
[...]
"NetworkMode": "host",
            "PortBindings": {
                "80/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "1234"
                    }
[..]
"Networks": {
    "host": {
        "IPAMConfig": null,
        "Links": null,
        "Aliases": null,
        "NetworkID": "13396ccbb6635016d4381588204f9724401cf7200b67cb8f44f065ebfbb8f069",
        "EndpointID": "256edba4bd98f86e4a3af9331ff24aca958805ff5f0a3bb202fa67c0b8d4ab07",
        "Gateway": "",
        "IPAddress": "",

The Apache server is listening correctly:

[root@docker-host1 ~]# curl -X GET 127.0.0.1:80
<html><body><h1>It works!</h1></body></html>

Connecting a container directly to the host network - MACVLAN

If we want to expose a container to the world, and it is important that the container gets its own IP and MAC address from the network the Docker host is connected to, we can use the MACVLAN network driver. It also allows us to connect to a specific VLAN.

It creates a Docker network in which every connected container is visible on the Docker host's network as if it were a separate machine, independent of the host that runs it.

Remember that:

  • with MACVLAN you can easily exhaust the IP addresses in your network
  • you have to put the network card into promiscuous mode
  • this solution is intended mainly for legacy applications or network monitoring tools

Important!

To use MACVLAN mode, our network card has to be in promiscuous mode!
Activating promiscuous mode on CentOS (ens18 is my main network card):

[root@docker-host1 ~]# ip link set ens18 promisc on

Check whether the mode was activated - look for the PROMISC flag:

[root@docker-host1 ~]# ip a
[..]
2: ens18: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 62:ae:db:67:8c:08 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.20/24 brd 10.10.10.255 scope global noprefixroute ens18
       valid_lft forever preferred_lft forever
    inet6 fe80::8f98:eb7:2724:a548/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
[...]

Since my Docker host sits in the 10.10.10.0/24 network via the ens18 network card, I will create a matching network with the macvlan driver.

I also add the --ip-range flag because I want containers to get IPs from the second half of my network (my Docker hosts sit in the first half, so this flag avoids IP collisions).

[root@docker-host1 ~]# docker network create -d macvlan --subnet 10.10.10.0/24 --ip-range 10.10.10.128/25 --gateway 10.10.10.1 -o parent=ens18 lukas_macvlan
75a25e8867e6d2a6e91f3bfecfe70d5420b2049a474b4aeb97e792f23fce1ddf

Let's create a container connected to the macvlan network.

[root@docker-host1 ~]# docker run -d --network lukas_macvlan --name web_server httpd
f856de1521116a70632126d014ff1b14e61b8309d3e41cb593b8be78bb29693f

Check connectivity and the IP configuration from inside the container:

[root@docker-host1 ~]# docker exec -it web_server bash
root@f856de152111:/usr/local/apache2# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=38.2 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=52 time=41.1 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=52 time=83.1 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=52 time=107 ms
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 8ms
rtt min/avg/max/mdev = 38.157/67.276/106.762/28.910 ms

root@f856de152111:/usr/local/apache2# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.10.10.128  netmask 255.255.255.0  broadcast 10.10.10.255
        ether 02:42:0a:0a:0a:80  txqueuelen 0  (Ethernet)
        RX packets 3714  bytes 8949950 (8.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2252  bytes 159458 (155.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

We can see that our container is now directly on the Docker host's network.
Let's check whether we can ping it from another Docker host on the network:

[root@docker-host2 ~]# ping 10.10.10.128
PING 10.10.10.128 (10.10.10.128) 56(84) bytes of data.
64 bytes from 10.10.10.128: icmp_seq=1 ttl=64 time=0.703 ms
64 bytes from 10.10.10.128: icmp_seq=2 ttl=64 time=0.514 ms
^C
--- 10.10.10.128 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 60ms
rtt min/avg/max/mdev = 0.514/0.608/0.703/0.097 ms

This feature also lets us connect to multiple VLANs - we can have containers in different external networks, all trunked over a single network card of our Docker host.
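A sketch of such a trunk setup - the VLAN IDs, subnets and network names are illustrative; passing a sub-interface such as ens18.10 to -o parent makes the macvlan driver use 802.1Q tagging for that VLAN:

# VLAN 10 - example subnet and name
docker network create -d macvlan --subnet 10.10.20.0/24 --gateway 10.10.20.1 -o parent=ens18.10 macvlan_vlan10

# VLAN 20 - example subnet and name
docker network create -d macvlan --subnet 10.10.30.0/24 --gateway 10.10.30.1 -o parent=ens18.20 macvlan_vlan20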

Overlay network - multi-host - Docker Swarm

An overlay network lets containers deployed on multiple nodes communicate with each other. We use this type of network in Swarm clusters.


For more information about Docker Swarm, see:

Docker Swarm - complete guide


Environment used - two hosts in different networks, connected by a router:

Role in Swarm   Server                   IP
Manager         docker-host1.lukas.int   10.10.10.20
Worker1         docker-host2.lukas.int   192.168.1.100
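Both hosts are assumed to already form a swarm - roughly like this (the worker token is a placeholder printed by swarm init):

# on docker-host1 (manager)
docker swarm init --advertise-addr 10.10.10.20

# on docker-host2 (worker), using the token from the command above
docker swarm join --token <worker-token> 10.10.10.20:2377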

Create an overlay network

Creating a new network - we optionally add -o encrypted to encrypt the traffic on the network:

[root@docker-host1 ~]# docker network create -d overlay -o encrypted lukas-overnet
fyb4s4ydv3vnl6cjhq62ojdo6
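If we also want standalone containers (not only swarm services) to be able to join the overlay, we can create it with the --attachable flag - the network and container names below are illustrative:

# create an attachable overlay and attach a standalone container to it (example names)
docker network create -d overlay --attachable -o encrypted lukas-overnet-attach
docker run -d --name standalone_web --network lukas-overnet-attach httpd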

Check the overlay network

On manager node:

[root@docker-host1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
6b9cea7d8900        bridge              bridge              local
5cbd99a08e42        docker_gwbridge     bridge              local
13396ccbb663        host                host                local
di48hsm4e4fw        ingress             overlay             swarm
l4hcqght8ulh        lukas-overnet       overlay             swarm
44424eae56f4        none                null                local

On worker node:

[root@docker-host2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7e16c6986b6c        bridge              bridge              local
b905ff175794        docker_gwbridge     bridge              local
1b02943d42e5        host                host                local
di48hsm4e4fw        ingress             overlay             swarm
cde72e9ea1c3        none                null                local

As we can see, newly created custom overlay networks are always visible on the manager.
The network becomes visible on a worker node as soon as a container is attached to it there.
After creating a service connected to lukas-overnet, the network shows up on the docker-host2 worker:

[root@docker-host1 ~]# docker service create --name web_server --replicas 2 --network lukas-overnet httpd
jktxvf185ab8d52uw1g3f74gw
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
[root@docker-host2 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
7e16c6986b6c        bridge              bridge              local
b905ff175794        docker_gwbridge     bridge              local
1b02943d42e5        host                host                local
di48hsm4e4fw        ingress             overlay             swarm
l4hcqght8ulh        lukas-overnet       overlay             swarm
cde72e9ea1c3        none                null                local

Inspect the overlay network

We can see that the addressing of the created network does not match any of the Docker host networks.

[root@docker-host1 ~]# docker network inspect lukas-overnet
[
    {
        "Name": "lukas-overnet",
        "Id": "l4hcqght8ulhxydypdnh9l45f",
        "Created": "2020-05-11T20:39:30.261903606+02:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.2.0/24",
                    "Gateway": "10.0.2.1"
                }
  [...]
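If the automatically assigned range would collide with something in our environment, we can choose it explicitly when creating the network - the subnet and name below are illustrative:

# example subnet and network name
docker network create -d overlay --subnet 10.100.0.0/24 -o encrypted lukas-overnet2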

Check the container's IP address

[root@docker-host2 ~]# docker container inspect 71b279ce322b --format "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}"
10.0.2.3

Test the connection on the overlay network

The connection works both by IP and by container (task) name:

root@25d3dea663a0:/usr/local/apache2# ping 10.0.2.3
PING 10.0.2.3 (10.0.2.3) 56(84) bytes of data.
64 bytes from 10.0.2.3: icmp_seq=1 ttl=64 time=0.774 ms
64 bytes from 10.0.2.3: icmp_seq=2 ttl=64 time=0.777 ms
64 bytes from 10.0.2.3: icmp_seq=3 ttl=64 time=0.838 ms
64 bytes from 10.0.2.3: icmp_seq=4 ttl=64 time=0.602 ms
^C
--- 10.0.2.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 84ms
rtt min/avg/max/mdev = 0.602/0.747/0.838/0.094 ms

root@25d3dea663a0:/usr/local/apache2# ping web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5
PING web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5 (10.0.2.3) 56(84) bytes of data.
64 bytes from web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5.lukas-overnet (10.0.2.3): icmp_seq=1 ttl=64 time=0.497 ms
64 bytes from web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5.lukas-overnet (10.0.2.3): icmp_seq=2 ttl=64 time=0.568 ms
64 bytes from web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5.lukas-overnet (10.0.2.3): icmp_seq=3 ttl=64 time=0.627 ms
64 bytes from web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5.lukas-overnet (10.0.2.3): icmp_seq=4 ttl=64 time=0.687 ms
64 bytes from web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5.lukas-overnet (10.0.2.3): icmp_seq=5 ttl=64 time=0.759 ms
^C
--- web_server.1.ptbr9gbf7ygzn4i4dgoww6ne5 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 65ms
rtt min/avg/max/mdev = 0.497/0.627/0.759/0.095 ms
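Besides the individual task names, Docker's built-in DNS also resolves the service name itself to a virtual IP that load balances across the replicas. A quick check from inside one of the containers - the output will differ in your environment:

# resolve the service name (returns the service's virtual IP)
getent hosts web_server

# the VIP answers pings from within the overlay network
ping -c 2 web_server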