About three years ago, I worked on a serverless Kubernetes showcase using Azure Kubernetes Service (AKS) with Azure Container Instances (ACI) as a Virtual Kubelet provider, together with Azure Functions. The showcase is based on two Cloud Native Computing Foundation (CNCF) projects:
- Virtual Kubelet (VK)
- Kubernetes Event-Driven Autoscaling (KEDA)
Naturally, I started to play with them again and recreated my samples in another, more open-source-centric GitHub repository called Cloud-native Serverless.
A little backstory about VK & KEDA
For those familiar with Kubernetes cluster architecture: the kubelet is an agent installed on every Kubernetes node; you could call it the captain of the worker node. The official documentation describes the kubelet as the primary "node agent" that runs on each node. Its main responsibility is to take a set of PodSpecs, provision the pods, and ensure that the containers described in those PodSpecs are running and healthy. Besides that, it registers the worker node with the API server, matching the metadata.name field of the Node object. A node has to be healthy and available for scheduling before a workload can be placed on it.
Virtual kubelet
As an open-source project, Virtual Kubelet was accepted into the CNCF in April 2018 and is at the Sandbox project maturity level (you can learn more about CNCF maturity levels here).
Virtual Kubelet is an implementation of the Kubernetes kubelet that connects a Kubernetes cluster to other APIs. It enables extending the Kubernetes API to work in conjunction with serverless container platforms in the cloud, or even in air-gapped environments.
KEDA
KEDA (Kubernetes Event-Driven Autoscaling) was created in May 2019 as a partnership between Microsoft and Red Hat and joined the CNCF Sandbox in March 2020. In August 2021, it was accepted as an incubating project. As an event-driven autoscaler, KEDA focuses on scaling applications in Kubernetes based on the demand of incoming events.
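To make this concrete, here is a minimal sketch of a KEDA ScaledObject. The order-processor Deployment, the queue name, and the connection environment variable are all hypothetical placeholders, assuming a workload that drains an Azure Service Bus queue:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    name: order-processor          # hypothetical Deployment to scale
  minReplicaCount: 0               # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: orders          # placeholder queue name
        messageCount: "5"          # target messages per replica
        connectionFromEnv: SERVICEBUS_CONNECTION
```

KEDA polls the event source itself and drives the replica count of the target, including scaling it down to zero when there is nothing to process.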
Capabilities & features
At a high level, the following aspects determine whether Virtual Kubelet, awesome as the technology is, becomes meaningful for real-life business use cases.
Provisioning & Scalability
Compare this to a worker node, which is typically a virtual machine or a pool of virtual machines sharing common resources (such as VMSS in Azure). The value of a virtual node is being able to spin up a new instance in a matter of seconds (reality may not be ideal yet, but this is the goal). Together with KEDA, it can scale containerized applications based on on-demand events. These technologies are extremely helpful for organizations such as retail businesses facing significant demand spikes on Black Friday (a typical cloud-bursting scenario). Imagine how quickly a user would give up if the website were not responsive enough when they added items to their cart or, worse, if they encountered exceptions during the payment process. Both are highly impactful to user experience and business continuity.

Scheduling
Virtual Kubelet (VK) provides the opportunity to allocate a virtual node specifically for on-demand workloads, tailoring compute resources to their specific requirements. This is achieved by leveraging the standard Kubernetes mechanisms for assigning pods to worker nodes: nodeSelector, node affinity, anti-affinity, taints, and tolerations. It also enables us to efficiently reclaim these virtual nodes once the job is completed.
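As a sketch of those scheduling mechanics, this is roughly how a pod is steered onto an AKS virtual node backed by ACI: the nodeSelector targets the virtual node, and the tolerations accept the taint that keeps ordinary workloads off it. The pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-job                  # illustrative workload name
spec:
  nodeSelector:
    kubernetes.io/role: agent
    type: virtual-kubelet          # target the virtual node only
  tolerations:
    - key: virtual-kubelet.io/provider
      operator: Exists             # accept the virtual node's taint
    - key: azure.com/aci
      effect: NoSchedule
  containers:
    - name: worker
      image: mcr.microsoft.com/azuredocs/aci-helloworld
```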
Performance
Ensuring the effectiveness of executing workloads and jobs is challenging, as many other factors, such as networking throughput and latency, may impact it. Effectiveness also shows in the autoscaling aspect while other workloads are up and running.
Availability & reliability
Reliability is critical while executing workloads and jobs relying on on-demand infrastructure, as it directly affects task completion. Simultaneously, availability plays a crucial role in whether the workload is accessible, consumable, or monitorable.
Security & compliance
Despite the on-demand nature, security matters. As part of the Kubernetes cluster, a virtual node needs to fit into the big picture of the overall security and compliance requirements. A possible use case for VK is using a virtual node to execute external source code, minimizing the security risk to the other members of the same cluster. Even then, we need to ensure there are no blind spots during its execution, since an attack could happen at any point in time. Effective networking, security, and governance policies can greatly help in this scenario.
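One way to shrink that blast radius is a default-deny NetworkPolicy on the namespace hosting the untrusted code. This is a hedged sketch; the namespace name and the cluster-internal CIDR are assumptions for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-untrusted
  namespace: sandbox               # hypothetical namespace for external code
spec:
  podSelector: {}                  # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress: []                      # deny all inbound traffic
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8         # example cluster-internal range to block
```

Keep in mind that NetworkPolicy enforcement depends on the CNI plugin in use, and some VK providers do not enforce it at all.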
Observability
To ensure effective management, comprehensive end-to-end, full-stack monitoring is crucial when running VK or KEDA. Solutions such as New Relic, Datadog, Tigera, and Dynatrace aid in understanding resource usage and performance.
Those solutions provide insights into several critical aspects: the real-time need for instances during high-demand workloads, the time required to set up new instances or pods, and whether we are approaching or exceeding various limits. These limits encompass resource thresholds like CPU and memory, networking limitations such as IP addresses, and storage constraints like IOPS. A clearly defined Pod Disruption Budget (PDB) is important in this scenario, and it helps define monitoring KPIs.
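For reference, a PDB is a small object. This minimal sketch, assuming a hypothetical app: web label, keeps at least two replicas available during voluntary disruptions such as node drains:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                    # illustrative name
spec:
  minAvailable: 2                  # keep at least 2 replicas up during drains
  selector:
    matchLabels:
      app: web                     # hypothetical workload label
```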
Cost-effectiveness
You no longer need to maintain a compute instance, so there is minimal maintenance, OS patching, or other administration effort. Workloads are scheduled on demand only when you need them and deallocated when no longer needed. Since billing is pay-per-use, you only pay for the vCPU and memory resources your containerized application requests: an optimized and flexible billing model.
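Since billing follows the requests, it pays to set them deliberately. A hedged fragment of a container spec, with illustrative values and a placeholder image:

```yaml
containers:
  - name: app                            # illustrative container
    image: myregistry.example/app:1.0    # placeholder image
    resources:
      requests:
        cpu: "1"                         # on ACI, billing is based on this vCPU
        memory: 1.5Gi                    # and this memory, per second of use
```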
Challenges & Resolutions
On-demand workloads
There are various Virtual Kubelet (VK) providers available, including traditional ones like Microsoft’s Azure Container Instances (ACI) and Azure Batch. These providers handle on-demand workloads by rapidly deploying additional instances, particularly effective in data analytics and machine learning scenarios. As cloud-native data patterns grow in significance, these capabilities become more pivotal in the cloud-native landscape.
In addition, they are backed by other Azure services, including the Azure serverless platform. On the AWS side, AWS Fargate builds on Amazon ECS to run containers without managing clusters of Amazon EC2 instances, and it has similarly great potential for platform integration with other AWS services.
From the open-source realm, Virtual Kubelet (VK) providers like Elotl Kip, who raised $5M to build nodeless Kubernetes, offer compatibility with AWS and GCP and are actively striving to integrate Azure support into their services. Other providers, such as HashiCorp Nomad, are more of an alternative to Kubernetes that happens to use VK technology. Others still, such as Admiralty Multi-Cluster Scheduler and Liqo, focus on Kubernetes cluster-federation use cases.
You can find the complete and up-to-date list of Virtual Kubelet providers here.
Event-driven serverless functions
Function as a Service, or FaaS, is a subcategory of serverless. Compared to Platform as a Service (PaaS), FaaS is more about an individual "function": a code snippet containing a piece of business logic. It's lightweight and flexible. It resolves the challenges around infrastructure, dependency setup, and configuration, laying the groundwork for developers to get started on their actual job and focus on the programming that matters most to them.
Many of us are familiar with Azure functions, AWS Lambda functions, and GCP cloud functions. These serverless functions operate within the public cloud and are renowned for their robustness. They excel in event-driven integration with various cloud services, offering a wide range of language runtimes and an extensive suite of development tools, all integral components of this ecosystem.
While all the public cloud providers offer serverless functions as a service in the cloud, it's nice to see some interesting open-source, Kubernetes-native serverless function offerings such as Fission, OpenFaaS, and Knative. To learn more, see Serverless and hidden details about Serverless.
Thinking outside of the box
Take Fission as an example of serverless functions on Kubernetes. Fission supports popular programming languages such as Java, Node.js, Golang, and Python, and it lets you write short-lived code snippets called functions. In addition, it supports HTTP, timer, and other triggers. However, it's not as rich as the offerings already available from the major cloud providers.
More sophisticated serverless function platforms on Kubernetes, such as OpenFaaS and Knative, come in handy when deploying event-driven functions and microservices. They have effective mechanisms for autoscaling, traffic splitting, and better observability. Their main advantage is the capability to run across platforms. Some may work better than others, for sure, but technically this makes it possible to be up and running even in an air-gapped environment. Notably, it means we can literally run Kubernetes anywhere while taking advantage of autoscaling and resilience.
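As a sketch of what that looks like in practice, here is a minimal Knative Service using the well-known helloworld-go sample image; the name and scale annotation values are illustrative, and the annotations are where scale-to-zero behavior is configured:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                      # illustrative function name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"    # allow scale to zero
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Kubernetes"
```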
Speaking of which, Microsoft brought Azure Functions to Arc-enabled Kubernetes a while back, and it is now also possible to deploy Azure Container Apps on Arc-enabled Kubernetes. We can run Azure Functions in a custom location, which could be on-premises or in an air-gapped environment, with all the richness Azure Functions brings in language runtimes and the many other integration benefits with Azure services.
Limitations
A few important limitations are worth mentioning here:
- VK providers often lack support for persistent volumes, restricting options for stateful workloads.
- Despite supporting ConfigMaps and Secret mounts, many providers do not manage the update process for these configurations.
- Limitations in Virtual Kubelet can notably affect networking scenarios, such as the absence of support for exposing the pod IP through the downward API.
- SecurityContext management for cluster workloads, including PodSecurityContext fields like FSGroup and RunAsNonRoot, is not fully supported by VK providers like Kip.
Showcases
Therefore, I went ahead and built a showcase on my YouTube channel about serverless functions on Kubernetes with Fission:
Keep an eye on the CVisiona channel; more videos about serverless functions on Kubernetes are coming soon.
Conclusion
All in all, there were lots of interesting findings from my recent playthrough. I hope that one day VK becomes a better option than traditional worker nodes or node pools, driving more innovation and making a difference with Kubernetes. Serverless functions will bring true freedom to developers by allowing them to write code in any language, on any platform, and from anywhere. Even in the age of AI, I'm very positive about future development in the cloud-native space. And you? Feel free to comment below with your thoughts. Let's stay tuned!