#DockerDays Day 3 - Networking in Docker

In the previous part of the #DockerDays series, we learned to run our first container and to connect to it from the host machine.

As mentioned in the closing comments of the last part, so far we have worked in a single-container environment. What happens when you have multiple containers running concurrently? How do the containers interact with each other? Before we answer these questions, let us outline the agenda for Day 3 of #DockerDays.

Agenda

  1. Need for Network
  2. Docker Networking
    • 2.1. Bridge network
      • 2.1.1 Default Bridge
      • 2.1.2 User Defined Bridge
      • 2.1.3 Create a Bridge network
      • 2.1.4 Connect container to network
      • 2.1.5 Disconnect container from network
    • 2.2. Host network
  3. Conclusion

Need for Network

To demonstrate the problem we are attempting to address, let us work on a scenario that you are likely to run into in real life.

Similar to how you ran SQL Server in a container in the previous part of this tutorial, in this blog we will run Postgres in a container. However, there is a difference: instead of using a client on the host machine to connect to the container, we will use a containerized pgAdmin 4 to connect to the Postgres instance. This requires us to connect the two containers.

Let us begin by downloading the Postgres image and running the container.

> docker run --name nt-auth-postgres -e POSTGRES_PASSWORD=YourPassword -d -p 5432:5432 postgres

We have already familiarized ourselves with the docker run command, so we will not go into its details again. The important things to remember here are the container name and the password, as we will need both shortly.
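Before moving on, it is worth confirming that the container actually started. A minimal sanity check (guarded so it degrades gracefully when the Docker daemon is not reachable) could look like this:

```shell
# Confirm the Postgres container started above is up, and peek at its logs.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker ps --filter "name=nt-auth-postgres" --format "{{.Names}}\t{{.Status}}"
  docker logs --tail 5 nt-auth-postgres || echo "container nt-auth-postgres not found"
else
  echo "Docker daemon not reachable; start Docker first"
fi
```

The logs should end with a line similar to "database system is ready to accept connections" once Postgres has finished initializing.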

Let us now download and run the pgAdmin image. pgAdmin 4 ships a containerized web client for Postgres, which we will use to connect to our Postgres instance.

> docker run --name pgadmin -e "PGADMIN_DEFAULT_EMAIL=anu.viswan@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=Admin123" -p 5050:80 -d dpage/pgadmin4

Once again, the docker run command is quite self-explanatory. Here, we are running the container with the name pgadmin. You can now access the pgAdmin instance at http://localhost:5050.
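You can also verify reachability from the command line instead of the browser. A quick check with curl (assuming curl is installed, and using the port 5050 published above):

```shell
# Ask pgAdmin for an HTTP status code on the published port.
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5050 || echo "pgAdmin not reachable yet"
else
  echo "curl not installed"
fi
```

A 200 or a 3xx redirect to the login page both indicate the web client is up; pgAdmin can take a few seconds after the container starts before it begins answering.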

At this point, you have two containers running: one for the Postgres database and one for the pgAdmin client. How do they connect to each other?

Imagine you have two independent machines, one with the database and one with the client installed. How do you connect them? Of course, you place them in the same network. That is exactly what you will do here.

Docker Networking

We have already seen how powerful Docker is with its ability to containerize applications. But that is not all of it. You can also connect different containers together using Docker's networking capabilities. In fact, the applications do not even need to know they are running in containers; they work as if they were on ordinary hosts connected via a network.

Docker's pluggable driver system supports the following network drivers by default:

  • bridge
  • host
  • overlay
  • ipvlan
  • macvlan

In this introductory part on networking, we will focus on the single-host networks, bridge and host, as it could be a bit overwhelming to take on multi-host drivers at this point. We will discuss those in detail in a separate blog post in this series.

You can list all networks in Docker by running the following command.

docker network ls

By default, it would show the following networks.

NETWORK ID     NAME      DRIVER    SCOPE
eadb763f28f3   bridge    bridge    local
4b5894a15921   host      host      local
c76c80eddd04   none      null      local
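The list can also be narrowed down with filters. For example, to show only the networks that use the bridge driver:

```shell
# List only the networks backed by the bridge driver.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker network ls --filter driver=bridge
else
  echo "Docker daemon not reachable"
fi
```

On a fresh installation this would show just the single default network named bridge.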

Bridge Network

A bridge in Docker is a software network bridge that allows communication between containers connected to the same bridge, while isolating them from containers that are not connected to it. Bridge networks apply only to containers running on the same daemon host.

Default Bridge

Bridge is the default network driver in Docker. When you run a container without specifying a network, it is attached to the default bridge network (itself named bridge).

You can verify this by running the inspect command on the container.

docker inspect pgadmin

If you look at the Networks key of the JSON result, you can see the associated networks, which in this case contain the default bridge network.

"Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "eadb763f28f3ec616977319344f78758229e54036083b5239b95b9a15197057d",
                    "EndpointID": "72b52e1544dd5cf62d8dcf37ca7a0267ef5225ba68d68d2c7c3786056469f5cd",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
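Since the full inspect output is rather long, you can also pull out just the Networks block using a Go template via the --format (-f) flag:

```shell
# Print only the Networks section of the container's settings as JSON.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker inspect -f '{{json .NetworkSettings.Networks}}' pgadmin || echo "container pgadmin not found"
else
  echo "Docker daemon not reachable"
fi
```

This prints a single JSON object keyed by network name, equivalent to the Networks snippet shown above.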

Another way to view the same is to use the inspect command on docker network.

docker network inspect bridge

Inspecting the Containers key of the JSON result reveals all the containers associated with the network.

 "Containers": {
            "139bf91bd9f498536b679566e5cbc79ca05a6ae6f9585d9a9e4d73c66cc5e125": {
                "Name": "pgadmin",
                "EndpointID": "72b52e1544dd5cf62d8dcf37ca7a0267ef5225ba68d68d2c7c3786056469f5cd",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        }

User-Defined Bridges

In addition to the default bridge, you can create your own custom bridges. There are some significant differences between the two:

  • User-defined bridges provide automatic DNS resolution, so containers on the same network can address each other by name. On the default bridge, containers can only reach each other by IP address.
  • Containers can be attached to and detached from user-defined bridges on the fly. To move a container off the default bridge, you need to stop it first.

User-defined bridges let you group related containers together, ensuring that only those containers can communicate with each other. Unrelated containers can live in separate networks, which provides isolation.

Create a Bridge Network

User-defined bridges can be created using the following command.

// docker network create --driver bridge [networkName]

> docker network create --driver bridge pgnetwork
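You can verify that the new network exists and observe that, for now, no containers are attached to it:

```shell
# The new network should appear in the list; its Containers map starts empty.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker network ls --filter name=pgnetwork
  docker network inspect -f '{{len .Containers}} container(s) attached' pgnetwork || echo "pgnetwork not found"
else
  echo "Docker daemon not reachable"
fi
```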

Connect a container to a network 

In order to connect our previously created pgadmin and nt-auth-postgres containers to the network we created, we can use the docker network connect command.

// docker network connect [networkName] [containerName]

> docker network connect pgnetwork pgadmin
> docker network connect pgnetwork nt-auth-postgres
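Once both containers are attached, you can confirm their membership and see the user-defined bridge's DNS in action. The getent lookup below assumes the image provides getent (the Debian-based official postgres image does):

```shell
# List the containers attached to pgnetwork, then resolve the pgadmin
# container by name from inside the postgres container.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker network inspect -f '{{range .Containers}}{{.Name}} {{end}}' pgnetwork || echo "pgnetwork not found"
  docker exec nt-auth-postgres getent hosts pgadmin || echo "DNS check skipped"
else
  echo "Docker daemon not reachable"
fi
```

This name resolution is what lets you register the server in pgAdmin using nt-auth-postgres as the host name and 5432 as the port, instead of hunting for the container's IP address.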

Disconnect container from a network

A container can be detached from the network using the docker network disconnect command.

docker network disconnect pgnetwork pgadmin

You can also specify the network when running/starting a container using the --network flag. For example:

docker run --name pgadmin --network pgnetwork -e "PGADMIN_DEFAULT_EMAIL=anu.viswan@gmail.com" -e "PGADMIN_DEFAULT_PASSWORD=Admin123" -p 5050:80 -d dpage/pgadmin4

Host Network

When using host networking, containers are not allocated their own IP addresses. In fact, they are not isolated from the host at all: the container shares the host's network stack, so there is no need to publish ports for the host to reach it.

Let us understand the difference with an example. First, let us run the nginx container on the bridge network.

> docker run --name nginxServer -d nginx

At this point, the nginxServer instance of the nginx is connected to the default bridge network. You can verify this using the docker inspect nginxServer command.

Notice that I have intentionally skipped the port mapping. As one would expect, an attempt to reach the nginx instance from the host machine (http://localhost) fails; it is unreachable.

This can be rectified by removing the container (docker rm -f nginxServer) and re-running it with the port published.

docker run --name nginxServer -d -p 80:80 nginx

As you noticed, with the bridge network, one has to publish ports to make the container accessible from the host machine. The host and the container are completely isolated unless ports are published.

Things are different when one uses host networking. Let us run the nginx instance connected to the default host network, using the --network flag with docker run.

docker run --name nginxServer -d --network host nginx

Once again, you can verify that the nginxServer is connected to the desired host network using the docker inspect nginxServer command.

Let us now attempt to access the docker instance from the host machine (http://localhost).

As you can observe, the host machine can now access the nginx instance on port 80 (the port nginx listens on) without any additional configuration. Host networking ensures that the host machine and the container are not isolated, so no ports need to be published.
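You can confirm this from the shell as well. Note that this works as described on Linux hosts; host networking behaves differently on Docker Desktop for Mac and Windows:

```shell
# nginx should answer on port 80 without any -p mapping when on the host network.
if command -v curl >/dev/null 2>&1; then
  curl -s -o /dev/null -w "%{http_code}\n" http://localhost || echo "nginx not reachable"
else
  echo "curl not installed"
fi
```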

Conclusion

In this part of the tutorial, we familiarized ourselves with some of Docker's networking capabilities. We also examined the difference between the two single-host networks, host and bridge. We will address multi-host networking in a later chapter. But before that, in the next part of this series, we will look at providing persistence for Docker containers.
