In today’s fast-paced digital landscape, businesses strive to innovate and scale their applications efficiently. This has led to the rise of microservices architecture and the adoption of Kubernetes as the go-to platform for container orchestration. But what exactly do these technologies entail under the hood? In this blog, we’ll explore the shift from monolithic to microservices architectures, understand containers and Kubernetes, and examine DevOps tools like Jenkins X to boost productivity.
From monolithic to microservices architecture
Back in the day, before the cloud became a trend, monolithic architecture was popular. Monolithic applications tend to layer their structure into three tiers (or, more generally, N-tier applications):
- The presentation layer, also called the front-end layer, includes Single Page Applications (SPAs), traditional and mobile web applications, hybrid applications, and native mobile applications.
- The business logic layer, also known as middleware, might expose an API for third parties to consume. It should also integrate with its microservices or external applications asynchronously, which helps the microservices stay resilient in case of partial failures.
- The data access layer, also known as the back-end layer, is responsible for bringing data to other layers. Data comes in different forms and shapes, and the choice of data source depends on your data model.
A representative architecture looks like the following:

In the cloud age, microservices architecture is becoming more popular than monolithic architecture for scalability and availability. Transforming a monolithic application into microservices is a prelude to application modernization.
The cloud journey is an iterative process with different approaches, such as lift-and-shift, optimization, and app modernization. These approaches depend on requirements, budget, and more complex factors, which we’ll discuss in the following section.
Cloud migration journey versus innovation journey
If we look at the cloud migration journey, it intersects with the innovation journey. Migrating a monolithic application can be straightforward: lift and shift it into a VM or into containers using Docker Compose, or deploy it as multi-container Pods in Kubernetes. This approach, despite a few technical drawbacks, will get your application to the cloud.
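As a minimal sketch of lift-and-shift, a hypothetical monolith and its database can move into containers as-is with Docker Compose (the application image name below is a placeholder):

```yaml
# docker-compose.yml -- lift-and-shift sketch: the monolith and its
# database run in containers without any code refactoring.
version: "3.8"
services:
  shop-monolith:
    image: contoso/shop-monolith:1.0   # hypothetical monolith image
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

The application itself is unchanged; only its packaging and hosting move to containers.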

However, when it comes to agility, scalability, and availability, lift-and-shift alone is definitely not a win. To make a real difference, we have to transform the monolith into microservices. Instead of one big refactor, or rebuilding everything from scratch, we can do this gradually and incrementally, and this is where containers show their best advantage.
Before discussing why a container-based solution is a great approach to implementing microservices architecture compared to serverless frameworks, let’s go back a little bit by looking at the principles of microservices.
Principle of microservices
If we look at microservices, the main idea is to break down your business logic and data access layers into independent modules, as in the following diagram, which you can compare with the earlier diagram describing the monolith architecture. This is also known as the ‘one process, one microservice’ principle. Each microservice is loosely coupled and combines with other microservices to construct a complete business application.

Pathway to Building Microservices
As you can see, each service has its own data store and targets a specific business requirement.
Here are three main reasons to apply the principle of microservice architecture:
- Independent modules help each module scale at its own pace and with ease.
- Enabling scalability reinforces availability. Each module can be deployed independently, reducing downtime. If one microservice goes offline, we can easily restore that module without affecting other functionalities.
- Different technologies can be used within the same application across various modules. For example, in a past project, I programmed both in Node.js and C# to develop RESTful APIs. These APIs were exposed behind a common API Gateway, allowing front-end applications or external applications to consume them. This enabled us to use diverse technical resources within the same organization to build a global API integration platform and boost business performance.
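To make the “own data store per service” idea concrete, here is a hedged Docker Compose sketch (service and image names are hypothetical) with two independently deployable services, each backed by its own database and even different database technologies:

```yaml
# Two independent services, each owning its data store.
version: "3.8"
services:
  catalog-api:
    image: contoso/catalog-api:1.0     # hypothetical Node.js service
    depends_on: [catalog-db]
  catalog-db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
  ordering-api:
    image: contoso/ordering-api:1.0    # hypothetical C# service
    depends_on: [ordering-db]
  ordering-db:
    image: mongo:7
```

Because each service owns its database, either one can be scaled, redeployed, or rewritten without touching the other.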
A great example of implementing microservice architecture is Microsoft’s eShopOnContainers reference application. It uses .NET Core and Docker and contains different types of microservices.
Pathway to containers
As I mentioned before, containers can be an excellent option for building microservices, alongside serverless functions like Azure Functions. Containers are faster and more lightweight than deploying the same application in a VM. They provide a structured way to integrate with your CI/CD pipeline, generally without needing significant changes in code and configuration.
Now, what are containers? Generally speaking, containerization manages an application or service by combining its dependencies and configuration into a container image. This containerized application can be tested as a unit, regardless of its hosted operating system. Many container runtimes exist, such as Docker, CRI-O, and Containerd. Docker is the most popular and common enterprise container platform. You can learn more about Docker from Docker’s official documentation: Docker Documentation.
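As a sketch, a container image for a hypothetical Node.js service could be described with a Dockerfile like this (paths, ports, and file names are assumptions for illustration):

```dockerfile
# Package the app, its dependencies, and configuration into one image.
FROM node:20-alpine          # base image providing the runtime
WORKDIR /app
COPY package*.json ./        # copy dependency manifests first so installs are cached
RUN npm ci --omit=dev        # install production dependencies only
COPY . .                     # copy application code and configuration
EXPOSE 3000
CMD ["node", "server.js"]    # hypothetical entry point
```

Everything the service needs travels inside the image, which is why the same image behaves consistently regardless of the host operating system.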
Kubernetes can use different container runtimes, with Docker being the first choice for Azure Kubernetes runtime. We’ll take a closer look at Kubernetes in the upcoming section.
Big picture of Container Registries
As I mentioned, containerization solutions like Docker encapsulate and package an application and its dependencies into a container image. An image serves as a static representation of the application, its configuration, and its dependencies. Once you have your container image, you can run your application on any host operating system that supports Docker or other container runtimes. Running the same image on-premises or in the cloud provides consistent results.
A container registry stores your container images. Public container registries like Docker Hub, maintained by Docker, are widely used. Other vendors, such as Nginx, also offer registries. Microsoft Azure provides Azure Container Registry as a cloud-based option. Alternatively, some legacy applications may use a private on-premises registry for all their Docker images.

Storing your images in a container registry helps you version them and keep track of their dependencies. All of this provides a consistent deployment unit when integrated with your DevOps process.
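The typical build, tag, and push workflow against a registry looks roughly like this (registry and image names are placeholders; for Azure Container Registry you log in with the Azure CLI first):

```shell
# Build the image locally and tag it with a version.
docker build -t myapp:1.0.0 .

# Log in to a hypothetical Azure Container Registry, then retag and push.
az acr login --name myregistry
docker tag myapp:1.0.0 myregistry.azurecr.io/myapp:1.0.0
docker push myregistry.azurecr.io/myapp:1.0.0
```

Tagging each build with an explicit version is what makes the registry a reliable source of versioned, traceable deployment units.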
Kubernetes architecture
Imagine managing one or even a few containers on-premises or in the cloud for development and testing purposes. In production, however, especially for mission-critical solutions like e-commerce sites or financial systems, you might manage hundreds or thousands of containers, and managing networking, deployments, and configurations becomes challenging. This is where Kubernetes comes in.
Kubernetes is a portable, highly extensible, open-source orchestration platform that manages containerized workloads and services. Think of Kubernetes as a big orchestrator engine that helps your containers achieve their “desired state” and manages all these containers across different worker nodes. It becomes your best friend in handling complex container environments.
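“Desired state” can be made concrete with a small Deployment manifest: you declare that three replicas of a (hypothetical) image should always be running, and Kubernetes continuously reconciles reality toward that declaration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog-api
spec:
  replicas: 3                # desired state: keep three pods running at all times
  selector:
    matchLabels:
      app: catalog-api
  template:
    metadata:
      labels:
        app: catalog-api
    spec:
      containers:
        - name: catalog-api
          image: contoso/catalog-api:1.0   # hypothetical image
          ports:
            - containerPort: 3000
```

If a pod crashes or a node goes down, Kubernetes notices the gap between desired and actual state and schedules replacement pods automatically.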
Vanilla Kubernetes architecture
Kubernetes follows a master/worker (historically called master/slave) architecture and is made up of several core components, as shown in the following diagram:

Basically, the Kubernetes master (control plane) contains the following components:
- kube-apiserver, which you can see as the communication manager between different tools and the Kubernetes cluster; every request to the cluster goes through it.
- etcd is a distributed, reliable key-value store that is simple, secure, and fast. The etcd data store holds information about the cluster such as the nodes, pods, configs, secrets, accounts, roles, and bindings (the state of the cluster and information about the cluster itself).
- kube-scheduler is responsible for assigning pods to nodes. You can think of it as a postal worker: it decides which node each pod should be delivered to, and the kubelet agent on that node then actually runs the pod.
- kube-controller-manager is responsible for running Kubernetes controllers, for example, the node controller that responds to changes in a node’s status.
Recall that every node in a Kubernetes cluster has the following components:
- kubelet is the node agent that accepts pod specifications sent from the API server (or defined locally as static pods) and actually provisions those pods on its node.
- The container runtime is what actually runs the containers within the pods, such as Docker, CRI-O, or containerd.
- kube-proxy, which maintains network rules and forwards traffic on each node, implementing part of the Kubernetes Service concept.
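A few kubectl commands are enough to see these components at work on a running cluster (output naturally varies by cluster; the node name is a placeholder):

```shell
kubectl get nodes -o wide            # worker nodes registered by their kubelets
kubectl get pods -n kube-system      # control-plane and node components running as pods
kubectl describe node <node-name>    # capacity, conditions, and the pods scheduled there
```

On managed offerings like AKS, the control-plane components themselves are hidden from you, but the node-side components are still visible this way.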
Running Kubernetes on Microsoft Azure
Kubernetes can run on various platforms: from your laptop, to VMs on a cloud provider, to a rack of bare metal servers. The effort required to set up a cluster varies from running a single command to crafting your own customized cluster.
Users have several options to run Kubernetes on Azure. The easiest way is to start with Azure Kubernetes Services (AKS), a managed Kubernetes service.
AKS offers a friendly experience for beginners. It helps you deploy a Kubernetes cluster within minutes and is free to use: you only pay for the nodes. AKS simplifies working with Kubernetes, offering a more manageable approach.
The general architecture of Azure Kubernetes is structured as follows:

In Azure Kubernetes Service (AKS), Microsoft Azure manages the control plane, including the master components, for you. You can create an AKS cluster via the Azure portal, the Azure CLI, or Azure PowerShell. Using Infrastructure as Code (IaC) templating options such as Azure Resource Manager (ARM) templates, Bicep, or Terraform makes provisioning AKS throughout the DevOps process straightforward. After deploying an AKS cluster, Azure configures the Kubernetes cluster for you. You can also use features like CNI, Virtual Kubelet, KEDA, and monitoring, configurable as extensions during deployment. Windows Server container support is currently in preview in AKS.
These features simplify managing Kubernetes.
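As a sketch with the Azure CLI (resource group, cluster name, and region are placeholders), creating and connecting to an AKS cluster takes only a few commands:

```shell
# Create a resource group, an AKS cluster, and fetch credentials for kubectl.
az group create --name demo-rg --location westeurope
az aks create --resource-group demo-rg --name demo-aks \
  --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group demo-rg --name demo-aks
kubectl get nodes
```

Once `az aks get-credentials` has merged the cluster context into your kubeconfig, plain kubectl works against the managed cluster just as it would against any other.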

I must also mention AKS-Engine, the core of Azure Kubernetes Service. It is open source and available for public contribution on GitHub: AKS-Engine on GitHub.
AKS-Engine allows users to customize deployment features beyond what Azure Kubernetes Service officially supports. Excellent contributions to AKS-Engine often get integrated into future AKS roadmaps.
Looking forward
The main idea of microservices is to break down business logic and data access layers into independent modules. This follows the “one process, one microservice” principle. Each microservice should be stateless and autonomous in a “self-contained” way. Each microservice has its own data store, and all resources in the microservice share the same lifecycle. Containerization manages an application or service by combining its dependencies and configuration, abstracted as manifest files and packaged together as a container image. If you’re interested in Kubernetes certifications, check out this post about Kubestronaut. Stay tuned here on Medium with me, and subscribe to my newsletter if you’re interested in similar topics. See you in the next one!