
Docker has been around for a long time now. Honestly, I am one of the few who started using Docker only recently. In this series of short blogs, I am attempting to document my own understanding of the various things involving Docker. Do note that this is not an in-depth analysis, but one that focuses on getting you familiar with Docker and what you can achieve with it. For the same reason, this series will not attempt to go deep into the architecture or other internal details.
Docker Concepts
The first and foremost thing that needs to be done is of course understanding the vocabulary. As the first part of the series, this post focuses on understanding the basic concepts involved in Docker.
Docker and Containers
The first question that needs an answer is _"What is Docker?"_. In the simplest words, Docker is essentially a container management platform. It allows developers to simplify the process of building, running, managing, and distributing applications by introducing the concept of containers. Containers are standardized executable components that package application code along with its dependencies, allowing the application to run isolated from other processes on the host operating system.
Hold on – doesn’t that sound similar to _Virtual Machines_? Both containers and virtual machines are resource virtualization technologies, and they do sound similar in many ways, but there are some essential differences. Virtual machines, or VMs, virtualize the hardware layer, emulating a physical computer. The hypervisor, the software (or firmware) component that virtualizes the hardware and creates/runs virtual machines, allows the host machine to support multiple VMs by virtually sharing resources such as memory and processing power. VMs are heavy packages that run in isolation as fully standalone operating systems and hence consume a lot of resources even for simpler scenarios.
Containers
Containers, on the other hand, do not emulate hardware. In fact, they do not have an operating system of their own. They are sandboxed processes that run on the host operating system, isolated from the other processes running there. Each container carries and manages its own software and dependencies. Containers are extremely lightweight, portable, and easy to distribute.
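To make that concrete, here is a minimal sketch showing that a container is simply an isolated process on the host. The image and container names are purely illustrative:

```
# A container is a sandboxed process on the host.
docker run -d --name demo alpine sleep 300   # start a lightweight container from the public alpine image
docker top demo                              # the container's process, as seen from the host
docker exec demo ps aux                      # from inside, only the container's own processes are visible
```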
Docker allows you to manage your application and its dependencies in a much easier and more efficient way, thereby supporting your CI/CD pipelines. It also helps developers work in a standardized environment by using containers that provide your application with the required services.
Image
Docker images are blueprints for containers; in other words, containers are running instances of an image. Docker images are read-only templates that are used to build containers. They contain the instructions needed to build a Docker container and consist of a collection of files (or layers) that include the application code and the dependencies required to run the application.
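As a quick illustration of the image/container relationship, here is a rough sketch using the public nginx image as an example (the container names are arbitrary):

```
docker pull nginx:alpine                   # download the read-only image (a stack of layers)
docker image ls nginx                      # list the image we just pulled
docker run -d --name web1 nginx:alpine     # container #1: a running instance of the image
docker run -d --name web2 nginx:alpine     # container #2: same blueprint, separate instance
```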
Images make it easier to share the environment with your teammates. Developers working on the same project can create a Docker image of the required services and share it with each other, so that everyone works in a similar environment.
In many ways, it is comparable to snapshots in Virtual machines.

Volumes
If a Docker container needs to persist any data, it has to be stored in what are known as volumes. Volumes are managed by Docker through the Docker API and exist outside the container, in the host file system. They help in persisting data and in sharing data between containers. Essentially, they are directories and files that live on the host file system.
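A minimal sketch of how this looks in practice – the volume and container names are illustrative, and the postgres image is just an example of a service that needs persistent storage:

```
docker volume create app-data                        # a volume managed by Docker on the host
docker run -d --name db -e POSTGRES_PASSWORD=example \
  -v app-data:/var/lib/postgresql/data postgres:15   # mount the volume so the database files persist
docker run --rm -v app-data:/data alpine ls /data    # the same volume can be shared with another container
```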
Docker Hub
Docker Hub is the largest registry of Docker images. It facilitates sharing of container images with your team, and it also provides a large collection of public, open-source image repositories, hosting images of the most commonly used databases and other software. As a developer, you can pull and push images from and to Docker Hub.
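For example, pulling a public image and pushing one of your own could look roughly like this; `<your-dockerhub-user>` is a placeholder for your own Docker Hub namespace:

```
docker pull redis:7                                # pull a public image from Docker Hub
docker login                                       # authenticate with your Docker Hub account
docker tag redis:7 <your-dockerhub-user>/redis:7   # retag the image under your own namespace
docker push <your-dockerhub-user>/redis:7          # push it to your repository
```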
Daemon
The daemon is a background service that runs on the host operating system and is part of the Docker Engine. It allows the creation and management of Docker objects, including containers, images, networks, and volumes.
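On most Linux installations the daemon runs as the `dockerd` system service. A rough sketch of checking it, assuming a systemd-based host:

```
sudo systemctl status docker   # is the dockerd service running on this host?
docker version                 # prints both the Client and the Server (daemon) versions
```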
Docker Client
The Docker client is the primary way of interacting with the Docker daemon. Users interact with the daemon and manage their containers using the Docker client, which talks to the daemon through Docker commands (internally using the Docker APIs).
A Docker client can communicate with more than one daemon, including a remote one.
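A rough sketch of what that looks like; the remote host address here is a placeholder:

```
docker info                                 # by default, the client talks to the local daemon
docker -H ssh://user@remote-host ps         # point a single command at a remote daemon over SSH
export DOCKER_HOST=ssh://user@remote-host   # or switch the default daemon for this shell session
docker ps                                   # now lists the containers running on the remote host
```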
In this first part of #DockerDays we familiarized ourselves with different concepts of Docker. In the next part, we will launch our first container and understand the different commands required to manage containers.