Kubernetes Architecture Series - Part 3: ConfigMaps, Secrets, Multi-Tenancy, and Storage


This is the third and final part of my blog series on Kubernetes architecture.
In Part 1 we explored the foundations: the control plane, the data plane, and how Pods form the basic execution unit of Kubernetes. In Part 2, we shifted perspective to the application layer, covering ReplicaSets, Deployments, scaling strategies, Services, and Ingress — essentially, how applications come to life and evolve inside a cluster.
In this part, we’ll dive deeper into what makes Kubernetes enterprise-ready: configuration management, secrets management, multi-tenancy, and persistent storage. These are the capabilities that transform Kubernetes from a platform for running containers into a true application platform that organizations can rely on.
ConfigMaps in Kubernetes
I am old enough as a developer to remember the pain of managing application configuration across environments. We used to have .env or properties files lying around, sometimes even different config servers for staging and production. The challenge was always the same: how do you keep configuration flexible without baking it into the code or the container image?
Kubernetes solves this elegantly with ConfigMaps. A ConfigMap is simply a key-value store that lets you externalize configuration and inject it into containers at runtime. This way, the same container image can be used across dev, test, and production, with only the configuration changing.
Here’s an example of a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: 'production'
  APP_DEBUG: 'false'
  LOG_LEVEL: 'info'
You can then consume this ConfigMap in a Pod as environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: myapp:1.0
    envFrom:
    - configMapRef:
        name: app-config
Or mount it as a file inside the container:
  volumeMounts:
  - name: config-volume
    mountPath: /etc/config
volumes:
- name: config-volume
  configMap:
    name: app-config
With this, your developers can focus on code, while your DevOps team controls configuration per environment.
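When a container needs only a single value rather than the whole map, it can reference one key with configMapKeyRef. A minimal sketch against the app-config ConfigMap above (fragment of a container spec):

```yaml
# Inject just LOG_LEVEL from app-config as an environment variable
env:
- name: LOG_LEVEL
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: LOG_LEVEL
```

Note that environment variables are read once at container start, so a ConfigMap change does not propagate to running Pods; files mounted from a ConfigMap, by contrast, are eventually refreshed in place by the kubelet.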

Secrets Management
Now let’s talk about something riskier: secrets. We’ve all seen it — API keys hardcoded into source code, passwords in config files, or tokens in Slack messages. It works… until it doesn’t. A single leaked secret can bring down systems or open security holes.
Kubernetes provides Secrets to handle sensitive data. They look a lot like ConfigMaps but are designed specifically for passwords, tokens, certificates, and keys.
Here’s an example of a Secret:
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=          # "admin" base64 encoded
  password: c2VjdXJlcGFzcw==  # "securepass" base64 encoded
And here’s how to mount it in a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: db-client
spec:
  containers:
  - name: client
    image: mysql:5.7
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: DB_PASS
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
It’s important to understand that Kubernetes Secrets are base64 encoded, not encrypted by default. This means that without enabling encryption at rest for etcd, or integrating an external secret manager like HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault, you’re not truly securing secrets. Many enterprises extend Kubernetes with these tools for production-grade security.
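You can see for yourself how thin this protection is — base64 is an encoding, not encryption, and the values from the db-secret manifest above decode in one command:

```shell
# A Secret's data is only base64-encoded; anyone who can read the
# manifest can trivially recover the plaintext.
echo -n 'admin' | base64            # YWRtaW4=
echo -n 'securepass' | base64       # c2VjdXJlcGFzcw==

# Decoding is just as easy:
echo -n 'YWRtaW4=' | base64 -d      # admin
```

In practice you rarely encode by hand: kubectl create secret generic db-secret --from-literal=username=admin --from-literal=password=securepass does the encoding for you.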
Namespaces and Multi-Tenancy
In real organizations, multiple teams and applications share the same Kubernetes cluster. How do you prevent one team’s workload from interfering with another? This is where Namespaces come in.
Namespaces provide a logical partition within a cluster. Think of them as virtual clusters inside the physical cluster. They’re not just for organizing workloads — they’re essential for:
- Resource isolation (CPU/memory quotas per namespace)
- Access control (RBAC tied to namespaces)
- Environment separation (dev, test, prod)
Here’s an example of creating a namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team
Deploying an application into that namespace is as simple as adding the namespace field in your manifests:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api
  namespace: dev-team
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-api
  template:
    metadata:
      labels:
        app: demo-api
    spec:
      containers:
      - name: api
        image: demo/api:1.0
In a real-world scenario, you might have dev, staging, and production namespaces, each with its own policies and quotas. This gives both developers and operations peace of mind — workloads stay isolated, and resource usage is predictable.
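Those per-namespace quotas are expressed with a ResourceQuota object. A minimal sketch for the dev-team namespace created earlier — the limits here are illustrative, not recommendations:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota
  namespace: dev-team
spec:
  hard:
    requests.cpu: "4"       # total CPU all Pods may request
    requests.memory: 8Gi    # total memory all Pods may request
    limits.cpu: "8"         # total CPU limit across the namespace
    limits.memory: 16Gi
    pods: "20"              # cap on the number of Pods
```

Once this is applied, any Pod that would push the namespace past these totals is rejected at admission time rather than starving its neighbors at runtime.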
While namespaces provide logical separation for multi-tenant applications, they are not intended for isolating arbitrary groups of users, and they are not a mechanism for secure separation. Shamaila discusses this in detail, with a real-world example, in this article.
Storage in Kubernetes
Containers are ephemeral by design. Delete a Pod, and all the data inside is gone. That’s fine for stateless applications, but what about databases, user uploads, or logs that must persist?
Kubernetes solves this with Volumes. Volumes provide storage that survives container restarts within a Pod. But for enterprise needs, we often need storage that survives Pod replacement or migration across nodes. This is where PersistentVolumes (PV) and PersistentVolumeClaims (PVC) come into play.
A PersistentVolume is a piece of storage provisioned by an admin (or dynamically by a storage class). A PersistentVolumeClaim is how applications request that storage.
Here’s a simple PV and PVC example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And here’s how to use it in a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: pod-using-pvc
spec:
  containers:
  - name: app
    image: myapp:1.0
    volumeMounts:
    - mountPath: '/data'
      name: app-storage
  volumes:
  - name: app-storage
    persistentVolumeClaim:
      claimName: pvc-demo
In production, these volumes are usually backed by cloud storage systems (EBS on AWS, Persistent Disks on GCP, Azure Disks, or NFS/GlusterFS in on-premises clusters). This allows you to run stateful applications — like PostgreSQL, Redis, or file-based systems — safely in Kubernetes.
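In such clusters, storage is usually provisioned dynamically: instead of binding to a hand-created PV, the PVC names a StorageClass and Kubernetes creates the volume on demand. A sketch assuming the AWS EBS CSI driver is installed — the provisioner and parameters vary by cloud:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver; differs per cloud
parameters:
  type: gp3                         # EBS volume type
reclaimPolicy: Delete               # delete the backing disk when the PVC goes away
volumeBindingMode: WaitForFirstConsumer
```

A PVC opts in by setting storageClassName: fast-ssd in its spec; WaitForFirstConsumer delays provisioning until a Pod is scheduled, so the disk is created in the same availability zone as the node that will mount it.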
Conclusion
ConfigMaps, Secrets, Namespaces, and persistent storage are the building blocks that make Kubernetes truly enterprise-ready. Together, they allow teams to manage configuration cleanly, secure sensitive information, isolate workloads across teams and environments, and run applications that need durable storage.
This completes our three-part journey:
- Part 1: Kubernetes architecture foundations (control plane, data plane, Pods)
- Part 2: Applications, scaling, and deployments (ReplicaSets, Deployments, Services, Ingress, strategies)
- Part 3: Configuration, multi-tenancy, and storage (ConfigMaps, Secrets, Namespaces, PV/PVC)
Kubernetes is a deep ecosystem, and these three parts are just the beginning. From here, the next frontier lies in advanced patterns like Operators, GitOps, and service meshes — the tools and practices that take Kubernetes from infrastructure to full application lifecycle automation.