Kubernetes Architecture Series - Part 1: From Containers to Cloud-Native Orchestration
Part 1 of a three-part blog series on Kubernetes architecture

Khurram Mahmood
July 23, 2025

A few years ago, life as a software developer was different. We didn’t have the luxury of containers or the orchestration power of Kubernetes. We ran multiple applications on shared servers—sometimes physical, sometimes EC2 instances—and with that came a familiar set of problems that most developers of that era will remember all too well.
Every new application we deployed brought with it a fresh round of “what’s broken now?” Slow startup times were a given, especially if the server had gone down and needed to be restarted. We often faced the dreaded "but it works on my machine" syndrome, where code that behaved perfectly in development would mysteriously fail in staging or production. Configuration drift was another persistent ghost in the system—no matter how tight our automation scripts were, something always slipped through.
And scaling? That was a project in itself. Provisioning a new server wasn’t just a few lines of code—it was tickets, approvals, golden images, and hours (if not days) of setup and testing. Even when we automated the obvious bits with shell scripts or Ansible, the margin for error remained stubbornly wide.
We developed a whole ecosystem of checks, tests, and processes to fight these challenges. CI/CD pipelines helped, monitoring tools kept us informed, and config management tools brought some order. But the pain never truly went away—until containerization came along and changed everything.
When Containers Became the Currency of Deployment
When Docker first emerged as a mainstream tool, it felt like discovering fire. Suddenly, we could package not just the application code, but the entire runtime environment. Dependencies, configurations, binaries—everything traveled together in a neat, self-contained unit. Containers booted up in seconds, replicated easily, and ran the same way whether it was on my laptop, a staging server, or production in the cloud.
This new way of working gave us a newfound confidence and consistency across environments. It felt like magic. But like every powerful tool, Docker came with its own set of challenges—especially once the number of services started to grow.
We began running dozens of containers, and soon the simple Docker CLI commands weren’t enough. We needed to schedule workloads, manage communication between services, maintain availability, and ensure that failed components could recover automatically. That’s when I realized we didn’t just need containers—we needed something to orchestrate them. And that led us to Kubernetes.
Why Kubernetes? Why a Container Orchestrator at All?
To appreciate Kubernetes, it's important to understand what containers don’t solve on their own.
Containers are great at packaging and running software consistently across environments. They are fast, lightweight, and portable—especially when compared to virtual machines, which tend to be heavier, slower to start, and harder to replicate consistently.
But containers by themselves are like raw ingredients in a kitchen. You still need a chef who knows what goes where, when to add what, how to keep the temperature consistent, and how to serve meals to hundreds of guests at once. In other words, you need orchestration.
That’s the role Kubernetes plays. It takes the raw power of containers and turns it into a scalable, resilient, cloud-native platform that can run thousands of workloads in a reliable, repeatable way.
How Kubernetes Brings Order to the Chaos
Kubernetes isn’t just a tool—it’s a system. A beautifully designed system, in fact, with a clear separation of concerns. At the heart of Kubernetes lies a well-thought-out architecture consisting of a control plane and a data plane, with the machines in each often referred to as control plane (historically “master”) nodes and worker nodes.
The control plane is where decisions are made. Think of it as mission control. It includes components like the API server, which acts as the front door to the cluster, the scheduler that decides which pod goes where, the controller manager that ensures the desired state is maintained, and etcd, the key-value store that keeps track of the whole cluster’s state. A highly available cluster runs multiple control plane replicas. Most managed offerings, such as Amazon EKS, provide a highly available control plane with replicas distributed across multiple availability zones.
The data plane, on the other hand, is where the actual work happens. These are the worker nodes, and they’re responsible for running our applications. Inside these nodes, we find components like the kubelet, which communicates with the control plane to receive and execute instructions, and the kube-proxy, which manages networking and ensures traffic gets to the right pod. Every node also runs a container runtime, like containerd or CRI-O, which actually runs the containers inside the pods.
Ah, pods. That’s where the real magic happens.
A pod is the smallest deployable unit in Kubernetes. It wraps one or more containers into a single, logical unit. When I first learned about pods, it immediately reminded me of the pain we had trying to scale our services or keep them alive during server failures. With Kubernetes, I could define replicas of a pod and let the system maintain the desired number, scaling them up or down as needed.
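To make that concrete, here is a minimal sketch using the official Kubernetes Python client (installed with pip install kubernetes). The deployment name, labels, and image are illustrative; in practice you would more often write the same thing as a YAML manifest and apply it with kubectl, but the shape of the object is the same.

    from kubernetes import client, config

    config.load_kube_config()  # uses the same credentials as kubectl (~/.kube/config)
    apps_v1 = client.AppsV1Api()

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="nginx-demo"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the desired number of pod replicas Kubernetes will maintain
            selector=client.V1LabelSelector(match_labels={"app": "nginx-demo"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "nginx-demo"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="nginx", image="nginx:1.27")]
                ),
            ),
        ),
    )

    apps_v1.create_namespaced_deployment(namespace="default", body=deployment)

Once this Deployment exists, the control plane keeps three replicas of the pod running, recreating any that disappear.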
Better yet, Kubernetes introduced the idea of self-healing—if a pod crashes, the system automatically spins up a new one. If a node goes down, the pods are redistributed. All of this was the kind of reliability we once tried to script ourselves with endless cron jobs, health checks, and alerts—only now it was built in and worked at scale.
The Components in Kubernetes Architecture
To truly appreciate the elegance of Kubernetes, you need to understand how each piece plays its part in the larger machine.

The control plane acts as the brain. Its job is to decide what needs to happen and when. The API server is the gateway that all commands flow through, whether they’re coming from the user or internal components. It authenticates, validates, and processes all changes to the cluster’s state. Note that these interactions involve cluster operators and the cluster’s own components, not the end users of the applications hosted in the containers.
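As a quick illustration, this small sketch with the official Python client authenticates the way kubectl does and asks the API server for every pod in the cluster—each read is simply an HTTP request to that front door.

    from kubernetes import client, config

    config.load_kube_config()   # authenticate with the credentials in ~/.kube/config
    v1 = client.CoreV1Api()

    # Every one of these reads is a request handled by the API server.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)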
The scheduler assigns pods to nodes based on resource availability and other constraints. If a node has enough free CPU and memory, and the pod’s tolerations and affinity rules are compatible with that node, the pod gets scheduled there.
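Here is an illustrative pod spec, again built with the Python client and with made-up names and values, showing the kinds of hints the scheduler reads: resource requests, a toleration for a tainted node group, and a required node affinity.

    from kubernetes import client

    pod_spec = client.V1PodSpec(
        containers=[
            client.V1Container(
                name="api",
                image="example/api:1.0",
                # The scheduler only considers nodes with this much unreserved capacity.
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "500m", "memory": "256Mi"}
                ),
            )
        ],
        # Allows the pod onto nodes tainted dedicated=batch:NoSchedule.
        tolerations=[
            client.V1Toleration(
                key="dedicated", operator="Equal", value="batch", effect="NoSchedule"
            )
        ],
        # Requires the pod to land in a specific availability zone.
        affinity=client.V1Affinity(
            node_affinity=client.V1NodeAffinity(
                required_during_scheduling_ignored_during_execution=client.V1NodeSelector(
                    node_selector_terms=[
                        client.V1NodeSelectorTerm(
                            match_expressions=[
                                client.V1NodeSelectorRequirement(
                                    key="topology.kubernetes.io/zone",
                                    operator="In",
                                    values=["us-east-1a"],
                                )
                            ]
                        )
                    ]
                )
            )
        ),
    )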
The controller manager is constantly watching the cluster and comparing its current state with the desired state defined in the configuration. If something’s out of sync—say a pod has disappeared—it takes action to bring things back into alignment.
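The pattern is easy to picture in a few lines. This is not the controller manager’s actual code—just a toy sketch of the reconciliation idea: observe current state, compare it with desired state, and act on the difference.

    desired_replicas = 3
    running_pods = ["web-1"]   # pretend this is what the cluster currently reports

    def reconcile():
        # Observe current state, compare with desired state, act on the difference.
        while len(running_pods) < desired_replicas:
            running_pods.append(f"web-{len(running_pods) + 1}")   # "create" a missing pod
        while len(running_pods) > desired_replicas:
            running_pods.pop()                                    # "remove" a surplus pod

    reconcile()
    print(running_pods)   # ['web-1', 'web-2', 'web-3']

In a real controller this loop runs continuously, triggered whenever the watched objects change.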
The etcd database keeps a consistent and highly available record of everything happening in the cluster. It’s like the cluster’s memory.
On the data plane, the kubelet on each node ensures that the containers specified in the pod spec are actually running and healthy. It listens to the control plane and acts on its instructions. The kube-proxy handles network routing, ensuring that services can find and talk to each other. And, of course, the container runtime is the engine that runs each container.
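A small sketch, once more with the official Python client, ties this together: we ask the API server for the nodes, and each node reports the kubelet version and container runtime its data plane is running.

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        info = node.status.node_info
        # Each node reports its kubelet version and the container runtime in use.
        print(node.metadata.name, info.kubelet_version, info.container_runtime_version)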
Together, these components form a resilient, intelligent, and scalable system that abstracts away much of the complexity we used to wrestle with manually.
Looking Ahead
In this post, we explored how Kubernetes came to be the orchestrator of choice for modern cloud-native applications, driven by the very real challenges we faced with traditional deployments and even early container usage. We also looked at how its architecture—divided between a control plane and a data plane—makes it capable of scaling, healing, and operating in a distributed environment.
But Kubernetes doesn’t stop at just running containers. In the next blog post, we’ll dive deeper into the Kubernetes object model—deployments, replica sets, namespaces, secrets, volumes, persistent volumes, and persistent volume claims—to understand how they work together to create production-ready, scalable architectures.
Stay tuned. The next chapter will help you understand not just what Kubernetes does, but how to use it effectively.