What is Docker Orchestration?
Docker orchestration is a set of practices and technologies for managing Docker containers at scale. Once a containerized application grows to a large number of containers, it needs container orchestration capabilities, such as provisioning containers, scaling up and down, managing networking, load balancing, and other concerns.
There are currently two main options for Docker orchestration:
- Kubernetes—the de-facto standard for container orchestration, commonly used to orchestrate Docker containers. Kubernetes is a powerful platform but is complex and has a steep learning curve. There are other orchestrators, such as Nomad and OpenShift Container Platform (which has Kubernetes at its core but expands its capabilities), which can also be used for Docker orchestration, but are outside the scope of this article.
- Swarm and other Docker-native tools—the Docker ecosystem offers several tools for orchestration, which are considered less powerful than Kubernetes, but are easy to use and are readily available for Docker users. These include Docker Swarm, a lightweight container orchestrator, and Docker Compose, which automatically runs applications consisting of multiple containers.
- Cloud container platforms—such as Amazon Elastic Container Service (ECS), Google Cloud Run, or Azure Container Instances (ACI), which provide basic orchestration capabilities and are less complex than full-scale orchestrators like Kubernetes.
Important note: At the time of this writing, Docker Swarm is not dead. It is included in the Docker Community edition and Docker has not announced plans to deprecate it. Learn more below.
In this article:
- What are Docker Containers?
- What is Container Orchestration?
- Do You Need Orchestration for Your Docker Containers?
- Kubernetes Orchestration for Docker
- Docker Orchestration Using Docker-Native Tools
- What is Docker Swarm?
- Docker Swarm vs Kubernetes: How to Choose
- What is Docker Compose?
What are Docker Containers?
Docker is the world’s most popular container runtime, and has driven the massive adoption of containerized architecture in recent years. Docker offers a user-friendly way to develop containerized applications, providing a set of tools developers can use to package applications in container images, run containers and manage their lifecycle.
Docker container images, which were adopted as an industry standard by the Open Container Initiative (OCI), are easy to create, version, and share. They allow developers to deploy containers consistently on any Docker-compatible host.
Related content: Read our guide to Docker architecture ›
What is Container Orchestration?
The main purpose of container orchestration is to manage the entire lifecycle of containers at large scale. Container orchestration technology helps software development teams control and automate a variety of tasks, including:
- Provisioning and deploying containers
- Managing availability and redundancy of containerized applications
- Auto scaling containers in response to changes in application load
- Ensuring containers run on a host with appropriate hardware resources, and moving them away from a host that does not provide the required resources
- Exposing container services to the outside world
- Performing load balancing and service discovery between containers
- Monitoring the health of containers and hosts
- Managing persistent storage for containerized applications
Do You Need Orchestration for Your Docker Containers?
Here are some guidelines that can help you determine whether your application requires orchestration or not:
- Production applications—applications running in production typically require orchestration to provide capabilities like high availability and auto scaling in response to application load. Even for non-production workloads, a containerized application with more than a few containers can typically benefit from orchestration. Read our guide to Docker in production ›
- Applications with high demands for resilience and scaling—container orchestrators like Kubernetes enable you to balance loads and quickly scale containers up to meet demand. You can do this declaratively, by describing a desired state, instead of manually coding responses to changing conditions.
- CI/CD techniques—container orchestration platforms support a range of deployment patterns. For example, you can use blue/green deployment as well as rolling upgrades.
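The declarative approach described above can be illustrated with a short Kubernetes Deployment excerpt. This is a minimal sketch: the application name and image are hypothetical, while the `strategy` fields are standard Deployment settings that request a rolling upgrade by describing the desired rollout behavior rather than scripting it.

```yaml
# Hypothetical Deployment excerpt: a declaratively configured rolling upgrade.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical application name
spec:
  replicas: 4                # desired state: four instances at all times
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most one replica down during the upgrade
      maxSurge: 1            # at most one extra replica created during the upgrade
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.2.0   # hypothetical image
```

Changing the image tag and re-applying this manifest is all that is needed to trigger the rolling upgrade; the orchestrator works out the sequence of container replacements.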
Kubernetes Orchestration for Docker
Kubernetes, an open-source project developed by Google and currently governed by the Cloud Native Computing Foundation (CNCF), is the world’s most popular container orchestrator. It is a highly mature project used to run thousands of large-scale production clusters. Kubernetes offers powerful orchestration features, but is also highly complex and has a steep learning curve.
Here are the main architecture components of Kubernetes:
| Component | Description |
|-----------|-------------|
| Cluster | A collection of multiple nodes, typically including at least one master node and several worker nodes (also known as minions). |
| Node | A physical or virtual machine. |
| Control plane | A component that schedules and deploys application instances across Kubernetes nodes. The control plane communicates with nodes via the Kubernetes API server. |
| Kubelet | An agent process running on each node. It is responsible for managing the state of the node, and it can perform several actions to maintain the desired state. For example, once the kubelet receives information from the Kubernetes API server, it can start, stop, and maintain containers according to the instructions provided by the control plane. |
| Pods | A basic scheduling unit. A pod consists of one or more containers that are co-located on a host machine and can share resources. Each pod gets a unique IP address in the cluster, which enables applications to use ports without risking conflicts. The desired state of a pod’s containers is described in a JSON or YAML object called a PodSpec, which is passed to the kubelet via the API server. |
| Deployments, replicas, and ReplicaSets | A deployment is a resource object that specifies the pods to run and the number of replicas for each pod. A replica is a single running instance of a pod. The number of replicas that should run in a cluster is defined by a ReplicaSet, which is a component of the deployment object. |
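The pod concept from the table above can be made concrete with a short manifest. The sketch below is illustrative: the pod name and labels are hypothetical, and a public nginx image stands in for a real application. Everything under `spec` is the PodSpec that the API server passes to the kubelet.

```yaml
# Hypothetical single-container pod; the PodSpec is everything under "spec".
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # hypothetical pod name
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: nginx:1.25      # public image used for illustration
      ports:
        - containerPort: 80  # the pod has its own IP, so no host port conflicts
```

In practice, pods are rarely created directly like this; they are usually described in the pod template of a deployment, which keeps the desired number of replicas running.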
Docker Orchestration Using Docker-Native Tools
For small-scale deployments, or for development and testing environments, Kubernetes may be overkill. Many developers use orchestration options built into the Docker stack, which provide some of the capabilities of Kubernetes and are easier to learn and use. Here are several orchestration tools offered by Docker:
- Swarm Mode—creates a set of nodes managed by one or more manager nodes, similar to a Kubernetes cluster, and provides full orchestration features.
- Docker Compose—automatically creates containers for multi-container applications, but without full orchestration capabilities.
- docker-machine—installs Docker Engine and provisions hosts. Can be used on its own or in combination with Swarm Mode or Docker Compose, to provision the resources necessary for a containerized application.
What is Docker Swarm?
Swarm is Docker’s native container orchestration tool. It can package and run your applications as containers. Swarm can find the relevant container images and deploy containers on laptops, servers, public clouds, and private clouds. Docker Swarm is easier to configure and use than Kubernetes, making it an attractive option for small-scale usage scenarios and development teams without Kubernetes expertise.
Here are key architecture components of Docker Swarm:
| Component | Description |
|-----------|-------------|
| Swarm | A collection of nodes that includes at least one manager node and several worker nodes. Nodes can be either virtual or physical machines. A Docker swarm is the equivalent of a Kubernetes cluster. |
| Service | A task that worker or manager nodes are required to perform on the swarm. Administrators can specify service tasks. Each service defines the required container images and the commands the swarm should run in each container. |
| Manager node | A node tasked with delivering work—in the form of tasks—to worker nodes. The manager node also manages the state of the Docker swarm it belongs to. Large swarms can run dedicated manager nodes, but in smaller swarms it is possible for manager nodes to run worker tasks as well. |
| Worker node | A node responsible for running tasks distributed by the swarm’s manager nodes. Each worker node must run an agent, which reports the state of its tasks back to the manager node. The agent helps ensure that the manager node tracks the tasks and services running in the swarm. |
| Task | A Docker container responsible for executing the commands defined within a service. Manager nodes assign tasks to worker nodes. Once assigned, a task cannot be moved to another node. If a task fails within a service’s replica set, the manager assigns a new instance of the same task to another available node in the swarm. |
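The service and task concepts above map directly onto a Compose-format stack file, deployed with `docker stack deploy`. The sketch below uses hypothetical names and a public nginx image; the `deploy` section is what Swarm mode acts on.

```yaml
# Hypothetical stack file, e.g. "docker stack deploy -c stack.yml mystack".
version: "3.8"
services:
  web:
    image: nginx:1.25          # the image each task (container) will run
    ports:
      - "8080:80"              # published via the swarm routing mesh
    deploy:
      replicas: 3              # the manager schedules three tasks across nodes
      restart_policy:
        condition: on-failure  # failed tasks are replaced with new tasks
```

Here the `web` service results in three tasks; if a node running one of them fails, the manager starts a replacement task on another available node.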
Learn more in our detailed guide to Docker Swarm ›
Is Docker Swarm Dead?
This question has a different answer for two main editions of Docker:
- Docker Community Edition—this is the open source version of Docker. It includes Docker Swarm, and there are no official plans to retire or deprecate it. Mirantis, the new owner of Docker Enterprise Edition, has announced that it will continue to support and develop Docker Swarm for the foreseeable future.
- Docker Enterprise Edition (Docker EE)—this is a “Docker as a service” solution, which was acquired by Mirantis from Docker in 2019. Mirantis announced that its managed Docker service will gradually switch from Swarm to Kubernetes as its main orchestrator.
This means that:
- Users of Docker Community Edition can safely continue to use Docker Swarm. Watch the official documentation for any notice of changes to Docker’s development plans.
- Users of Docker EE managed services should transition from Swarm to Kubernetes, because Swarm will not be supported in Docker EE in the future.
Docker Swarm vs Kubernetes: How to Choose
Here are several key differences between Docker Swarm and Kubernetes, which can help you choose the right orchestrator for your Docker containers.
Scalability
- Kubernetes – provides a rich set of features, which can be complex to learn and use. Offers unified APIs at the cluster level and strong cluster resource guarantees.
- Docker Swarm – allows even inexperienced users to quickly deploy and scale containers. The downside is that Swarm’s automated scaling features are less mature than those offered by Kubernetes.
Networking
- Kubernetes – operates using a flat networking model. This means all pods in a cluster are able to communicate with each other. Users can leverage network policies to restrict communication. Network policy implementation is typically handled by a container networking provider such as Calico or Cilium.
- Swarm – Swarm networking relies on two key components. An overlay network spans the nodes in the swarm and is used to expose services to the cluster, while each node also has a host-only bridge network dedicated to inter-container communication. To encrypt traffic, users can enable encryption on the overlay network. Docker Swarm does not support network policies – to achieve segmentation, you must create separate networks for different groups of services.
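The Kubernetes network-policy mechanism mentioned above can be sketched as follows. The namespace and labels are hypothetical, and enforcing the policy requires a container networking provider that supports network policies, such as Calico or Cilium.

```yaml
# Hypothetical NetworkPolicy: only pods labeled app=frontend may reach app=backend.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo              # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend             # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # the only pods allowed to connect
```

Because Swarm has no equivalent object, achieving the same segmentation there means attaching the frontend and backend services to separate overlay networks.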
Container setup
- Kubernetes – offers its own YAML format for declarative setup, as well as APIs. Both of these can be used to configure and deploy Kubernetes clusters, and offer powerful functionality, but neither is compatible with Docker Compose and the Docker API.
- Swarm – offers the Docker Swarm API, which lets users leverage Docker capabilities such as Docker Compose. However, this API is limited in its functionality and does not provide the majority of enterprise capabilities offered by Kubernetes.
High availability
- Kubernetes – comes with built-in high availability, including features for failover and self-healing of failed nodes. Kubernetes can identify unhealthy pods, replace them with healthy ones, and perform load balancing.
- Swarm – comes with limited high availability functionality, which is primarily based on replicating services across nodes. If a node fails, its tasks can be rescheduled onto another available node.
Load balancing
- Kubernetes – when pods are configured to expose services externally, Kubernetes can perform load balancing for all cluster services, usually by using an ingress.
- Swarm – can distribute incoming requests to service names using DNS. Swarm can automatically assign addresses to services. Alternatively, users can define specific ports for each service to run on.
Graphical UI
- Kubernetes – offers a dashboard accessible through a web UI. The official Kubernetes Dashboard lets you view and control the status of your Kubernetes cluster.
- Swarm – does not come with a built-in dashboard. Users can integrate with third-party or open source tools to add dashboard functionality.
Security
- Kubernetes – offers enterprise-grade security controls, such as authentication, pod security policies, RBAC authorization, secrets management, SSL/TLS, and network policies. Users can extend these capabilities by integrating with cloud-native security solutions.
- Swarm – offers network-level security in the form of authenticated TLS (mTLS). It can rotate security certificates between nodes at regular intervals.
What is Docker Compose?
Compose is a Docker tool designed to help you define and run multi-container applications. Compose lets you use YAML files to configure application services. You can then use a single command to create and start all services from the configuration file. You can use Compose in all environments, including staging, testing, development, production, and CI/CD workflows.
Here is how to use Docker Compose:
- Use a Dockerfile to define an environment for the application. This step ensures that the environment can be reproduced in multiple locations.
- Use the docker-compose.yml to define services for your application. This step ensures that services can run together within an isolated environment.
- Start and run the application by running the docker compose up command. Alternatively, you can use the standalone docker-compose binary and run docker-compose up.
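The docker-compose.yml from the steps above might look like the following minimal sketch. The service names and the Redis dependency are illustrative, not prescribed by Compose; the web service is assumed to be built from a Dockerfile in the same directory.

```yaml
# Hypothetical docker-compose.yml: a web service built from a local Dockerfile
# plus a Redis cache, started together with "docker compose up".
version: "3.8"
services:
  web:
    build: .                # uses the Dockerfile in the current directory
    ports:
      - "8000:8000"
    depends_on:
      - cache               # start the cache service before the web service
  cache:
    image: redis:7          # public image used for illustration
```

Running docker compose up in the directory containing this file builds the web image if needed, creates an isolated network for the two services, and starts them together.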