Kubernetes (k8s from here on out) is a sophisticated container orchestration system that runs and manages containerized applications. Yes, everybody knows this, but what does that truly mean?
Before we proceed: I’ll use the word “node” a lot, and for those who might not already know, a node is a physical server or a virtual machine.
Now let’s break it down. Fundamentally, k8s is a sophisticated yet simple system.
For the simple part, k8s is essentially a bunch of containers used to run and manage other containers — you’ll see for yourself later on. The sophistication comes into play when dealing with the details of a k8s cluster. A k8s cluster is a collection of nodes that work together to make k8s efficient and highly available. In a cluster, we can have one or more master nodes (called the control plane) and as many worker nodes as we need.
The master nodes oversee everything happening within the cluster and are composed of several components:
API Server: Exposes the k8s HTTP API, enabling communication between k8s components and external tools like dashboards or third-party integrations.
Scheduler: Assigns pods to worker nodes, distributing workloads to ensure balanced resource usage.
etcd: The consistent key-value store that acts as the cluster’s database, holding all cluster state data.
Controller Manager: Runs the built-in controllers. A controller is a control loop that continuously watches the state of the cluster and works to move the current state toward the desired state.
Cloud Controller Manager (optional): Handles cloud-specific control logic, but we'll skip this for now.
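The control-loop idea behind the Controller Manager is worth internalizing, because it drives almost everything in k8s. Here’s a minimal conceptual sketch in Python — the function names (`get_desired_state`, `get_current_state`, `reconcile`) are illustrative stand-ins, not real Kubernetes APIs:

```python
import time

def control_loop(get_desired_state, get_current_state, reconcile,
                 interval=1.0, max_iterations=None):
    """A minimal control loop: observe, diff, act. Conceptual sketch only."""
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        desired = get_desired_state()   # what the user asked for
        current = get_current_state()   # what actually exists
        if current != desired:
            reconcile(current, desired) # take action to close the gap
        iterations += 1
        time.sleep(interval)

# Toy usage: drive a "replica count" from 1 up to the desired 3.
state = {"replicas": 1}
desired = {"replicas": 3}

control_loop(
    get_desired_state=lambda: dict(desired),
    get_current_state=lambda: dict(state),
    reconcile=lambda cur, des: state.update(des),  # stand-in for real work
    interval=0.01,
    max_iterations=3,
)
print(state["replicas"])  # 3 — the loop has reconciled current to desired
```

Real controllers (Deployment, ReplicaSet, Node, and so on) all follow this same observe-diff-act shape; they just differ in what “state” means and what “reconcile” does.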
These components run individually as containers within the master node but operate in sync and communicate with one another—essentially, a set of containers that run other containers😅. Don’t believe me? See for yourself: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/images/images.go
Now, let’s talk about worker nodes and kubelets, which form the backbone of any Kubernetes cluster.
Worker Nodes: These are the nodes where your application workloads actually run. Each worker node hosts the pods that contain your application containers. Worker nodes are responsible for running containerized applications and managing the necessary compute resources (CPU, memory, storage) to keep your applications running smoothly.
Kubelet: Every worker node has a kubelet, a crucial agent that ensures the containers are running as specified by their PodSpecs. The kubelet communicates with the control plane (mainly the API server) to receive instructions and report back on the status of running workloads. It manages the pod lifecycle, ensuring that the desired state is maintained—starting containers, restarting them if they fail, and reporting node and pod status to the control plane.
In essence, while the control plane dictates what should happen, the kubelet on each worker node ensures it actually happens. It’s the worker bee of the Kubernetes cluster, constantly checking and maintaining the health of your applications.
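To make that concrete, here’s a hypothetical sketch of one pass of a kubelet-style sync in Python. The names (`sync_node`, `start`, `report_status`) are illustrative, not the kubelet’s real internals — the point is the shape: compare assigned pods against what’s actually running, fix the gaps, and report back:

```python
def sync_node(assigned_pods, running, start, report_status):
    """One pass of a kubelet-style sync: make `running` match `assigned_pods`."""
    for pod in assigned_pods:
        if running.get(pod) != "Running":
            start(pod)            # start (or restart) the container(s) for this pod
            running[pod] = "Running"
    report_status(dict(running))  # tell the control plane what this node is doing

# Toy usage: two pods assigned to this node; one is healthy, one has failed.
running = {"web-1": "Running", "web-2": "Failed"}
statuses = []
sync_node(
    assigned_pods=["web-1", "web-2"],
    running=running,
    start=lambda pod: None,       # no-op stand-in for the container runtime
    report_status=statuses.append,
)
print(running["web-2"])  # "Running" — the failed pod was restarted
```

The real kubelet runs this kind of loop continuously, talking to a container runtime to do the actual starting and stopping, and streaming status back to the API server.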
Together, the control plane and worker nodes make Kubernetes a powerful system for managing containerized applications at scale, providing resilience, scalability, and automation to modern infrastructure.