Introduction to Docker: Simplifying Application Deployment

Docker has revolutionized the way we build, test, and deploy applications. It is a powerful container platform that enables developers to package applications and their dependencies into lightweight, portable containers. These containers can then be deployed consistently across different environments, ensuring that the application behaves the same way everywhere.

Understanding Docker's Working Principle

At the core of Docker's functionality is the Dockerfile, a text-based script that defines the specifications of an application's environment. By leveraging Dockerfiles, developers can easily create Docker images, which are self-contained snapshots of an application along with its dependencies.

Unlocking the Potential: Use Cases for Docker

Docker finds applications in a wide range of scenarios, making it a versatile tool in the world of software development. Some popular use cases include:

  1. Microservices: Docker enables the seamless deployment and scaling of microservices-based architectures. By encapsulating each microservice within its own container, Docker promotes modularity and flexibility in building complex systems.

  2. Data Processing: Docker simplifies the deployment of data processing applications by providing a consistent environment for running tools such as Apache Spark or Hadoop. It ensures that the required dependencies and configurations are properly set up, saving time and effort.

  3. Continuous Integration and Delivery: Docker has become an integral part of CI/CD pipelines. With Docker, developers can create standardized build environments that ensure consistent testing and deployment across various stages of the software development lifecycle.

  4. Containers as a Service: Docker provides a foundation for container-based services, allowing organizations to offer scalable and managed container environments. It enables the provisioning of containers on-demand, facilitating the deployment of applications without the need for manual configuration.

Embracing Containerization: Lightweight and Efficient

One of Docker's key advantages lies in its utilization of containerization. Unlike traditional virtual machines that run complete operating systems on top of a host OS, Docker containers operate on a shared OS kernel. This approach eliminates the overhead associated with running multiple operating systems simultaneously.

By leveraging OS virtualization, Docker containers are lightweight, portable, and fast to start. They encapsulate the application along with its dependencies, ensuring consistent behavior regardless of the underlying host environment.

Unveiling Docker's Architecture

Docker follows a client-server architecture model, which enables smooth communication and interaction between different components:

  • Docker Client: The Docker client serves as the primary interface for developers to interact with Docker. It sends commands and requests to the Docker daemon for building, running, and managing containers.

  • Docker Daemon: The Docker daemon is responsible for the core functionalities of Docker. It handles tasks such as building Docker images, managing containers, and orchestrating their execution.
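The client/daemon split is visible from the CLI itself. As a sketch (output fields vary by Docker version), `docker version` reports the two halves separately; the block below skips gracefully on machines without Docker installed:

```shell
# Show the client/server split: the "Client" section comes from the
# local CLI binary, the "Server" section is answered by the daemon
# over its API socket. Guarded so the sketch is safe to run anywhere.
if command -v docker >/dev/null 2>&1; then
    docker version || echo "client found, but no daemon reachable"
else
    echo "docker CLI not installed; output shown is illustrative only"
fi
DEMO_DONE=yes
```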

Here's a visual representation of Docker's architecture:

[Figure: Docker's client-server architecture]

With this architecture, Docker enables seamless collaboration between developers and system administrators, allowing for efficient application deployment and management.

In the next section, we'll dive deeper into the fundamental concepts of Docker and explore how to get started with using Docker to create and deploy applications.

Important Docker Terminologies

Docker Daemon

It listens for API requests made through the Docker client and manages Docker objects such as images, containers, networks, and volumes.

Docker client

It is used to interact with Docker. When we run a command, the client sends the request to the daemon, which performs the required actions. One client can interact with numerous daemons.

Docker registries

It is used to store Docker images. Docker Hub is a public registry that anyone can use, and Docker looks for images there by default.

  • When you pull an image, Docker fetches it from the configured registry (Docker Hub by default) and saves it locally on the DOCKER_HOST.

  • You can keep images on your local machine or push them to a registry to share them.
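A typical registry round-trip looks like the following sketch; the repository name `username/myimage` is a placeholder, and the whole block is guarded so it runs safely on machines without a Docker daemon:

```shell
# Pull from the default registry (Docker Hub), re-tag the image under
# your own repository name, then push it back. "username/myimage" is
# a hypothetical placeholder for your own Docker Hub repository.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker pull ubuntu                      # fetch from Docker Hub
    docker tag ubuntu username/myimage:1.0  # give it a new name:tag
    # docker push username/myimage:1.0      # requires docker login first
else
    echo "no Docker daemon available; commands shown for illustration"
fi
REGISTRY_DEMO=done
```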

Dockerfile

It contains the steps to create a Docker image. These images can be pulled to create containers in any environment, and they can also be stored in an online registry such as Docker Hub.

  • When you run a Docker image, you get a Docker container.

Docker Image

It is a read-only template that defines a Docker container (the instructions to make a container). It is similar to a snapshot of a VM.

  • Docker Images are run to create Docker containers.

  • Images are immutable. The files making up an image do not change.

  • Images can be stored locally or in remote locations.

  • An image is built from a stack of layers, and layers can be shared between images. This is called layering.

  • The format name:tag (for example, ubuntu:16.04) is commonly used for image tagging in Docker.

Containers

A container is a runnable instance of an image, basically, the place where our application is running.

  • We can manage containers using the Docker API or CLI.

  • We can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.

  • By default, a container is isolated from the outside world unless it is connected to a network. A container contains a small version of the OS and the dependency files that are needed to run the application.

  • Every container has a unique ID.
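The points above can be sketched from the CLI; each command below is standard Docker CLI, guarded in case no daemon is available:

```shell
# Create a container from an image, read back its unique ID, and
# inspect its state through the Docker API (via the CLI).
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    CID=$(docker run -d nginx)              # run returns the container ID
    docker inspect --format '{{.State.Status}}' "$CID"
    docker stop "$CID" && docker rm "$CID"  # clean up
else
    echo "no Docker daemon available; commands shown for illustration"
fi
CONTAINER_DEMO=done
```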

Let's visualize it in a simpler form:

  +------------+            +-------+          +-----------+
  | Dockerfile | --build--> | Image | --run--> | Container |
  +------------+            +-------+          +-----------+

Parts of Docker

1. Docker Runtime

It allows us to start and stop containers. The Docker daemon works with the runtime to run our commands. There are two layers:

  • runc: It is a low-level runtime. Its role is to work with the OS and start/stop the container.

  • containerd: It is a high-level runtime. Its role is to manage the runc (low-level runtime).

    • It also manages other things like interaction with the network, pulling the data and images, etc.

    • containerd can also serve as the container runtime for Kubernetes.

2. Docker Engine

Docker Engine is the client-server application at the heart of Docker: the daemon (dockerd), a REST API, and the CLI that talks to the daemon through that API.

  • Example component: the Docker daemon.

3. Docker Orchestration

Docker Orchestration is the management of multiple Docker containers, automating their deployment, scaling, and coordination to work together seamlessly, ensuring efficient and reliable operation of containerized applications.

  • Think of Docker containers as individual building blocks that contain different parts of your application; Docker Orchestration is like the conductor that brings these building blocks together and ensures they work in harmony.
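As a sketch of what orchestration looks like in practice: Docker ships a built-in orchestrator, Swarm mode, which can run several replicas of a container and keep them running. The commands are left commented out because `swarm init` reconfigures the local daemon:

```shell
# Swarm mode: the daemon becomes a manager node and can schedule
# "services" (replicated containers) across a cluster.
# Left commented out because it changes local daemon state:
#
# docker swarm init                               # turn on Swarm mode
# docker service create --replicas 3 --name web nginx
# docker service ls                               # shows replica counts
# docker service scale web=5                      # scale up
echo "swarm commands shown for illustration"
ORCH_DEMO=done
```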

Docker Commands

Let us now look at the various commands associated with Docker.

1. To run an image.

docker run hello-world
  • It means: hey Docker, run the image named hello-world; if it is not found locally, then download it from Docker Hub.

  • We can also use inspect command for the same (using both container-ID and image-name). Example:

      docker inspect container-ID
    
      docker inspect image-name
    
  • We can also run the image in the background using the -d flag. Example:

      docker run -d ubuntu
    
    • Here -d flag means detach mode.
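Note that a detached container only stays up while its main process is running; a bare ubuntu image has no long-running process, so it exits immediately. A guarded sketch:

```shell
# Run ubuntu detached with a long-running command so it stays alive,
# then list it. Without a command, the container would exit at once.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    CID=$(docker run -d ubuntu sleep 60)   # -d prints the container ID
    docker ps --filter "id=$CID"           # shows the running container
    docker stop "$CID" && docker rm "$CID" # clean up
else
    echo "no Docker daemon available; commands shown for illustration"
fi
DETACH_DEMO=done
```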

2. To check all the locally downloaded images.

docker images

3. To download an image.

docker pull image-name
  • We can also download a specific version using the : symbol. Example:

      docker pull ubuntu:16.04
    
  • We can also check only the IDs of the images:

      docker images -q
    

4. To list down the containers.

docker container ls

5. To start interactive shell or environment.

docker run -it image-name
  • We can also attach a specific shell to the container. For example, if we want to attach a bash shell, we can specify it with the container's ID.

      docker container exec -it container-ID bash
    
    • To run the above command, the container must be running in the background.
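Putting steps 5 and 6 together, a typical session might look like the following guarded sketch (the exec step is run non-interactively here, but `-it` gives the same shell interactively):

```shell
# Start a container in the background, run a command in a bash shell
# inside it, then stop it. exec requires a running container.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    CID=$(docker run -d ubuntu sleep 300)
    docker container exec "$CID" bash -c 'echo "hello from inside"'
    docker stop "$CID"
else
    echo "no Docker daemon available; commands shown for illustration"
fi
EXEC_DEMO=done
```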

6. To stop a container.

docker stop container-ID

7. To check all the containers (also the stopped ones).

docker ps -a

8. To remove a container.

docker rm container-ID

9. To delete all the stopped containers.

docker container prune -f
  • Here the -f flag means: don't ask for confirmation, just delete them.

10. To remove the images.

docker rmi image-name -f
  • To remove all images, we can pass the image IDs of all images to the rmi command:

      docker rmi $(docker images -q)
    
    • Note: an image cannot be deleted while a container based on it is running.

11. To share our container.

First, we can commit the changes just like we do in Git.

  • Command:

      docker commit -m "message" container-ID Image-name
    
  • Example:

      docker commit -m "modified info.txt" 408bd12547vg MyApplication:1.01
    

Notes:

  • We can also specify the port to run a container using the -p flag. Example: docker run -p 8080:80 nginx maps host port 8080 to container port 80.
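For example, a guarded sketch of publishing a port and checking it from the host:

```shell
# Map host port 8080 to container port 80 (where nginx listens),
# then request the page from the host side.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    CID=$(docker run -d -p 8080:80 nginx)
    sleep 2                                    # give nginx a moment
    curl -s http://localhost:8080 | head -n 4  # nginx welcome page
    docker stop "$CID" && docker rm "$CID"
else
    echo "no Docker daemon available; commands shown for illustration"
fi
PORT_DEMO=done
```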

Creating our own Docker Images.

  1. First, create a Docker file. By convention, it is named Dockerfile. Example:

     touch Dockerfile
    
  2. Write the content of the Dockerfile. Example:

     FROM ubuntu
     MAINTAINER maintainer-name <maintainer-email-id>
     RUN apt-get update
     CMD ["echo", "Hi"]
    

    In the above command:

    1. First, we are giving the name of the base image.

    2. We are giving Maintainer Details.

    3. Writing the command that should run while the image is being built, i.e. RUN apt-get update.

    4. Finally, writing the command that should run when the container starts, i.e. CMD ["echo", "Hi"]. Here we are using the CMD instruction to specify the executable command to run.

      • We are giving the commands in the form of an array like: ["command-1", "command-2", ...].
  3. To build the image out of the docker file, we can use the docker build command. Example:

     docker build -t myimage:1.01 <path-to-docker-file>
    

    Here the -t flag (tag flag) is used to tag the image being built with a specific name and optionally a version number.

  4. For pushing the data to the Docker repository, we first need to log in to the Docker with the terminal. We can use the docker login command and then enter the username and password.

  5. Finally, to push the Docker image to a Docker repository, such as Docker Hub, we can use the docker push command. Command:

     docker push [OPTIONS] NAME[:TAG]
    

    Example:

     docker push username/myimage:1.01
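The five steps above, end to end, as one guarded sketch (the repository name `username` is a placeholder for your own Docker Hub account; note that image repository names must be lowercase):

```shell
# Build, tag, and push in one pass. Guarded, and the push itself is
# commented out because it needs valid Docker Hub credentials.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    mkdir -p /tmp/docker-demo && cd /tmp/docker-demo
    printf 'FROM ubuntu\nCMD ["echo", "Hi"]\n' > Dockerfile
    docker build -t username/myimage:1.01 .   # step 3: build and tag
    # docker login                            # step 4: authenticate
    # docker push username/myimage:1.01       # step 5: publish
else
    echo "no Docker daemon available; commands shown for illustration"
fi
WORKFLOW_DEMO=done
```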