Frequently Asked Questions

What is Container Orchestration?

Container orchestration is the practice of organizing containers and allocating resources to them at scale. This differs from containerization software, such as Docker, which creates containers and acts as their runtime. Container orchestration software typically coordinates several virtual and physical machines, each with its own containerization software installed.

Docker runs containerized applications by creating a virtualized container, deploying an application and its required libraries inside it, and giving it its own allocation of the resources available on the machine where it runs. To the application, the container looks like a fully functioning machine running a single app, when in reality it is one container among many others running microservices on a single hardware system, itself part of still larger groupings of hardware. Container orchestration steps in to manage many of these container deployments, with the aim of scaling across multiple servers.

While there are several orchestration technologies (even Docker has its own, called Docker Swarm), Kubernetes is the most popular container orchestrator and works exceptionally well with Docker as its container creator.

Kubernetes, when deployed, creates a cluster (in Docker, the "swarm" is the analogous feature), which contains several worker machines called nodes, each running containerized applications. A node can be a VM or a physical machine. A cluster also has a control plane, which usually, though not necessarily, runs on its own node and provides decision-making capabilities for the entire cluster.

Nodes are made up of several components:

  • A kubelet, an agent that runs on the node and ensures that the containers described in its pods are running, starting and stopping as expected.
  • A kube-proxy, which maintains network rules that allow communication to the pods from inside and outside the cluster.
  • A container runtime that actually runs the containers (for example, Docker).

A pod is a grouping of running containers within the cluster. Each pod is assigned its own IP address and usually encompasses closely associated containers located on the same host/node for efficient resource sharing. This grouping lets the orchestrator schedule containers easily and use ports without conflicts.
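For illustration, a minimal Pod manifest grouping two closely related containers might look like the following sketch (the Pod name, container names, and images are hypothetical placeholders):

```yaml
# A minimal Kubernetes Pod manifest (illustrative).
# Both containers share the Pod's network namespace and IP address.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache        # hypothetical Pod name
spec:
  containers:
    - name: web
      image: nginx:1.25       # placeholder image/tag
      ports:
        - containerPort: 80
    - name: cache
      image: redis:7          # placeholder image/tag
```

Because both containers live in one Pod, they are scheduled onto the same node and can reach each other over localhost.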

Depending on the use case, clusters can grow quite large: the Kubernetes documentation states that a cluster can hold up to 5,000 nodes, though in practice most clusters are far smaller. In short, the container orchestrator is the piece of software that manages all of these components, virtual and physical, to ensure that resources are used and shared effectively and efficiently within a container architecture.

Container orchestration is employed to automate the deployment, networking, monitoring, scaling, scheduling, and management of containers, including:

  • Allocating resources to components.
  • Configuring and scheduling containers.
  • Configuring applications based on container runtime.
  • Keeping containers and information sharing secure.
  • Load balancing.
  • Making containers available.
  • Provisioning and deploying containers.
  • Scaling up or down containers.
  • Traffic routing.

Container orchestration software is built on two concepts: ephemeral computing and desired state.

Ephemeral computing is the idea that processes, applications, containers, and machines all die at some point, and there should be a contingency for when that point is reached. For container orchestration, the contingency usually means spinning up a new container to replace the failed one, which is then destroyed. The principle keeps services alive by assuming that every component, including the container itself, will eventually fail, and by anticipating and planning for that failure.

Container orchestrators follow the desired state philosophy: admins configure, beforehand, the state they want the system to be in, and the orchestrator then maintains that standard. Compare this to a scripted approach, in which scripts instruct the system to, for example, create a new database server. If the script fails, the database is not created, and staff must fix the script and rerun it. With desired state, the state is defined and declared to the orchestrator as something like:

  • Always have 1 replica of the database.
  • Always have 2 replicas of API servers.
  • Always have 3 replicas of frontend servers.

In this way, the orchestrator ensures that exactly that many replicas of each type are always running. If one goes down, it spins up another; if there are too many replicas, the extras are terminated. This desired state is declared to the container orchestration tool in either a YAML or JSON file.
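For example, the "always have 3 replicas of frontend servers" rule above could be declared to Kubernetes in a Deployment manifest like this sketch (the names, labels, and image are hypothetical placeholders):

```yaml
# Declaring desired state: "always have 3 replicas of the frontend".
# The orchestrator continuously reconciles reality against this file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend               # hypothetical Deployment name
spec:
  replicas: 3                  # Kubernetes keeps exactly 3 Pods running
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: example/frontend:1.0   # placeholder image
```

If a Pod crashes, Kubernetes notices the count has dropped below 3 and starts a replacement; if an extra appears, it is terminated.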

Container orchestration is akin to a sports team, all working together to get the ball past the goal. If one player goes down (and is not immediately replaced on the field during game time), the remaining players continue to push the ball forward, staying on mission, which is much like how orchestrators are directed to operate.

Given the complexity that arises when developing microservices architecture and managing the containers that support those microservices, container orchestration aims to benefit organizations with simplified container organization and coordination and greater automation of container management.

  • Automation — Automation is the root of many container orchestration benefits and is arguably the number-one reason companies adopt containers. Automation also helps support agile and DevOps approaches.
  • Simplified Operations — Containers, especially at the enterprise level, cannot feasibly or efficiently be managed manually. Container orchestration software enables organizations to organize and control container deployments.
  • Resilient Systems — Demonstrating something like a self-healing characteristic, container orchestration helps to prevent critical service failures by automatically restarting containers with those vital applications.
  • Reduced Errors — Tied closely to simplified operations, automation reduces human error by centralizing configuration. Also, as soon as bugs are fixed, containers can quickly be replaced and updated with fresh images.
  • Scalability, Agility, Flexibility — Container orchestration is the technology that allows organizations to scale their operations quickly in the cloud. It also provides the agility to choose new approaches, and the flexibility to choose specific technologies.

From a high-level viewpoint, cloud computing technology has evolved to become highly accessible; for many, it is simply the next utility. This availability has reduced the cost of operating in the cloud and has enabled developers to innovate much more rapidly, circumstances brought about in large part by containers and container orchestration software. These two technologies, exemplified by the open-source Docker and Kubernetes, are essential to cloud computing and app delivery today.

Containers themselves were the solution for an automated software delivery approach: Dockerfiles define how images are built, streamlining complex multi-container deployments. Running Docker alone, though, was limited to a single computer, since it is OS-level virtualization software. Kubernetes takes that capability and extends it across multiple machines, managing those containers as a single system made of multiple systems.

Orchestration has made it possible for anyone to scale their apps and services. Whether you are a Fortune 500 company or a single developer whose app went viral, Docker combined with Kubernetes makes reaching many end users possible.

Container orchestration platforms can be found for every major cloud provider. However, many of them are based on the popular open-source container orchestration software Kubernetes. The following are some of the most familiar names in container cloud services.

  • Amazon Elastic Container Service (Amazon ECS) — Amazon ECS is Amazon's home-grown orchestrator for running and managing Docker containers. It is a fully managed service that integrates tightly with the rest of the Amazon suite of services, essentially offering consumers a serverless experience.
  • Amazon Elastic Kubernetes Service (Amazon EKS) — Amazon EKS is Amazon's managed Kubernetes platform, offered as part of an AWS services subscription. Unlike ECS, this setup allows for hybrid and multicloud environments.
  • Kubernetes — The de facto container orchestration software, and it’s open-source.
  • Mirantis Kubernetes Engine (formerly Docker Enterprise) — A set of advanced enterprise development features that work with Docker and Kubernetes to provide a shared platform across dev and ops for deploying to containers. For developers and enterprises, it is positioned as an industry-leading DevOps platform for building and running modern containerized applications.
  • Google Kubernetes Engine (GKE) — Google's managed Kubernetes service, running on Google's cloud infrastructure.
  • Red Hat OpenShift Container Platform — An open-source, out-of-the-box container orchestration solution for Linux.
  • Azure Kubernetes Service (AKS) — Popular Kubernetes container orchestration on the Azure platform.

Kubernetes is open-source and largely considered the gold standard for container orchestration; because it is highly portable, there are many vendors to choose from that can accommodate it, as listed above. Kubernetes is highly flexible and used in the delivery of complex applications. Docker Swarm is Docker's own flavor of orchestration software, included with Docker. Both are solid, effective solutions for massively scaling deployments, as well as for their implementation and management.

  • Kubernetes focuses on high-demand use cases with complex configurations.
  • Docker Swarm promotes ease of use and simple, quickly deployed use cases.

The following table highlights several comparisons between the two.


|                                 | Docker Swarm                                                                  | Kubernetes                                                                    |
| ------------------------------- | ----------------------------------------------------------------------------- | ----------------------------------------------------------------------------- |
| App Definition & Deployment     | Desired state definition in YAML file                                          | Desired state definition                                                       |
| Autoscaling                     | No autoscaling possible                                                        | Cluster autoscaling, horizontal pod autoscaling                                |
| High Availability               | Service replication at Swarm node level                                        | Stacked control plane nodes with load balancing inside or outside the cluster  |
| Cloud Support                   |                                                                                | AWS, Azure, Google                                                             |
| Graphical User Interface (GUI)  | GUI not available; must use third-party tools                                  | GUI available; web interface                                                   |
| Load Balancing                  | No auto load balancing, but port exposure for external load-balancing services | Horizontal scaling & load balancing                                            |
| Networking                      | Multi-layered overlay network with peer-to-peer distribution among hosts       | Flat peer-to-peer connections between pods and nodes                           |
| Storage Volume Sharing          | Shares storage with other containers                                           | Shares storage within the same pod                                             |
| Updates & Rollbacks             | Rolling updates and service health monitoring                                  | Automated rollouts & rollbacks                                                 |
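As a concrete point of comparison, both tools read their desired state from YAML. A minimal Docker Swarm stack file might look like this sketch (the service name and image are hypothetical placeholders):

```yaml
# A Docker Swarm stack file, deployed with `docker stack deploy`.
# Like Kubernetes, Swarm reconciles running tasks against this YAML.
version: "3.8"
services:
  frontend:
    image: example/frontend:1.0   # placeholder image
    deploy:
      replicas: 3                 # Swarm keeps 3 tasks of this service running
      update_config:
        parallelism: 1            # rolling updates, one task at a time
```

The `deploy` section mirrors the role of a Kubernetes Deployment spec: replica count and update strategy are declared once, and the orchestrator maintains them.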