It may seem like an odd comparison to make, considering how different the ideas behind the two architectures are. But there are valid reasons to evaluate both as options for your needs. More often, however, you hear Kubernetes compared to another container tool such as Docker, and there are merits to that comparison as well.
So why choose Serverless? There is a lot of value in it. Focusing on the code itself, rather than the underlying systems needed to run it, keeps attention on the business problem and is very efficient. Because they concentrate on the application instead of the hosting components, developers can build better solutions without getting bogged down in all the gotchas of the DevOps layer.
Serverless can also prove to be cheaper as well as easier to manage. Cloud providers like AWS or Azure handle the underlying components and offer excellent logging and diagnostics for those systems. Serverless can also reduce the overall complexity of a system by splitting an application into smaller components in a microservice architecture. And since Serverless billing is based on actual compute usage, it costs you nothing when idle. Kubernetes, by contrast, has a minimum number of nodes, so you are always paying for something even when the application is not being used. Over time, this can become quite costly: if your business isn't 24/7, you are paying for resources you don't use off-hours, and depending on the size of the nodes required to run your applications, you could have large virtual machines that need to stay up all the time.
Aside from being the simpler solution, Serverless functions scale incredibly fast, and their ability to scale is nearly limitless. Other solutions like Kubernetes also scale, but they cannot compete with the speed and seamlessness of scaling Serverless functions. When Kubernetes scales, the impact on end users can be quite noticeable, and depending on the application, that impact may be unacceptable for your business needs.
However, with all the positives of Serverless, there are some downsides. One that can be quite problematic, depending on your needs, is the run-time limit. A workload that cannot fit within the time allotted for a function to run will require rework. In most cases, long-running workloads can be broken down into chunks; if yours cannot, then perhaps Serverless isn't the right solution for your needs.
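The chunking approach can be sketched in a few lines of Python. This is a minimal illustration, not a real cloud integration: `enqueue` is a hypothetical stand-in for publishing to a queue service (such as SQS or Azure Storage Queues) that would trigger one function invocation per chunk.

```python
# A minimal sketch of splitting a long-running job into chunks so each
# piece fits inside a serverless time limit.
queue = []

def enqueue(message):
    """Placeholder for publishing to a cloud queue that triggers a function."""
    queue.append(message)

def split_job(record_ids, chunk_size=100):
    """Fan a large batch out into independently runnable chunks."""
    for start in range(0, len(record_ids), chunk_size):
        enqueue({"ids": record_ids[start:start + chunk_size]})

def handle_chunk(message):
    """Each invocation processes one small chunk well under the timeout."""
    return [i * 2 for i in message["ids"]]  # stand-in for real work

split_job(list(range(250)), chunk_size=100)
print(len(queue))  # 3 chunks: 100 + 100 + 50
```

Each chunk runs as its own short invocation, so no single function call approaches the platform timeout.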
A Case for Kubernetes
Kubernetes, on the other hand, lets you set up a solution that can shrink or grow within specified limits, giving you more control over the whole landscape of the solution. Its multiple layers let you host different workloads with different needs within a single cluster. Based on how applications are labeled, their pods can automatically be scheduled onto the machines where they will perform best, and the filtering logic you can set up is quite advanced: using labels and selectors, workloads are placed without any manual intervention. If your workloads need to communicate with each other, Kubernetes allows that too, though it requires some initial setup to ensure an efficient pathway. Developers should seriously consider a service mesh that allows for a Serverless-style communication model; otherwise, a Kubernetes deployment can quickly turn into a monolith that is very difficult to maintain and diagnose.
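As a sketch of the labels-and-selectors mechanism, here is a hypothetical Deployment manifest. The workload name, image, and node label are all illustrative; the key pieces are the `matchLabels` selector tying the Deployment to its pods and the `nodeSelector` steering those pods onto suitable machines.

```yaml
# Hypothetical manifest: the nodeSelector tells the scheduler to place
# these pods only on nodes labeled disktype=ssd.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker            # hypothetical workload name
  labels:
    app: batch-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker         # binds this Deployment to its pods
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        disktype: ssd           # only schedule onto nodes with this label
      containers:
        - name: worker
          image: example.com/batch-worker:latest  # hypothetical image
```

Once nodes carry the matching label, the scheduler handles placement with no further intervention.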
Another huge win with Kubernetes is that it can run agnostic of a specific cloud platform. So, if there is a scenario where a business needs the ability to be in a multi-cloud architecture, it’s a great way to do that without needing to have specific solutions for each cloud platform.
One area that requires additional setup and planning is managing upgrades to the cluster itself. Workloads can be rolled out as rolling updates, but the cluster itself sometimes requires a rebuild. For example, enabling High Availability can require redeploying the cluster. To deploy a change like that, you need a backup cluster while the upgrade takes place, and once two clusters are in play, you also need a traffic manager in front of them. This can quickly become much more complicated than a Serverless deployment model.
Making a Decision
In many cases, the decision to use Kubernetes or Serverless may be driven by forces outside of technical considerations. But if the choice is strictly technical, I'd suggest first trying Serverless if it is reasonable to do so. Working on isolated, simpler components helps drive improved throughput for the development team, and letting components scale independently lets your application grow with your needs. The out-of-the-box diagnostics for Serverless applications are also excellent. That's not to say you cannot achieve the same with Kubernetes; it's just something you have to build into your solution and manage yourself.
If you have a workload that will extend beyond the maximum time allotted for Serverless, or there is a requirement for durable storage, then Kubernetes may make more sense. There are some advanced Serverless options to accommodate workloads like that, but they are not the norm. As stated above, if your workload extends beyond the typical timeout of Serverless functions, evaluate whether its run time can reasonably be optimized.
Another consideration is whether your organization uses a single cloud offering or multiple cloud platforms. If multiple platforms are in use, Serverless may not make sense: the Serverless functions of each platform are quite different, and creating an AWS Lambda function is vastly different from creating an Azure Function. With multiple cloud platforms, you may end up rewriting code for each one, which is not an effective use of time.
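To illustrate how platform-specific the entry points are, here is a minimal AWS Lambda-style handler in Python. The `(event, context)` signature is Lambda's convention; an Azure Function written in Python instead receives platform objects such as an `azure.functions.HttpRequest` and returns an `HttpResponse`, so the same business logic has to be re-wrapped for each provider. The event shape below is a simplified assumption, not a real trigger payload.

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes; the 'event' shape depends on the trigger.
    An Azure Function would expose this same logic behind a different,
    incompatible signature -- hence the per-platform rewrite."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation for illustration; in production AWS supplies the arguments.
print(lambda_handler({"name": "k8s"}, None))
```

Only the thin handler wrapper needs to change per platform, which is why keeping business logic out of the handler itself softens, but does not eliminate, the multi-cloud rewrite cost.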