What do I expect from serverless or, for that matter, from any other type of deployment service?
I expect us to be able to develop our applications locally and to run them in clusters. Ideally, local environments should be the same as clusters, but that is not critical. It's okay if they are similar. As long as we can easily spin up the application we are working on, together with its direct dependencies, we should be able to work locally. Our laptops tend to have quite a few processors and gigabytes of memory, so why not use them? That does not mean I exclude development that relies entirely on servers in the cloud, but rather that I believe the ability to work locally is still essential. That might change in the future, but that future is not here yet.
I want to develop and run applications locally before deploying them to other, "real" environments. It does not matter much whether those applications are monoliths, microservices, or functions. Similarly, it should be irrelevant whether they will be deployed as serverless, as Kubernetes resources, as processes running on servers, or as anything else. I want to be able to develop locally, no matter what the final deployment target is.
We also need a common denominator before we switch to higher-level, specific implementations. Today, that common denominator is a container image. We might argue whether we should run on bare metal or VMs, whether we should deploy to servers or clusters, whether we should use a scheduler or not, and so on, and so forth. One of the things almost no one argues about anymore is that our applications should be packaged as container images. That is the highest common denominator we have today. It does not matter whether we use Docker, Docker Compose, Mesos, Kubernetes, a service that does not provide visibility into what is below it, or anything else. What matters is that it is always based on container images. We can even convert those into VMs and skip running containers altogether. Container images are a universal packaging mechanism for our applications.
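To make that packaging concrete, here is a minimal sketch of a multi-stage Dockerfile; the base images, the Go toolchain, and the port are hypothetical placeholders rather than anything prescribed by this text.

```dockerfile
# Build stage: compile a (hypothetical) Go application into a static binary
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .

# Final stage: a small image that any container runtime (or image-to-VM converter) can run
FROM alpine:3.19
COPY --from=build /bin/app /usr/local/bin/app
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/app"]
```

Once built (e.g., `docker image build --tag my-registry/my-app:1.0.0 .`), the same artifact can be pushed to a registry and consumed by whichever platform ends up running it.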
I just realized that container images are NOT a common denominator. There are still those using mainframes. I'll ignore them. There are also those developing for macOS; they are the exception that proves the rule.
Container images are so beneficial and so commonly used that I want to be able to say, "here's my image, run it." The primary question is whether that should be done by executing docker-compose, kubectl, or something else. There is nothing necessarily wrong with adding additional layers of abstraction if that abstracts away some of the complexity.
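As a sketch of that "here's my image, run it" idea, the same hypothetical image can be launched through either of those tools; the image name, service name, and port are placeholders made up for illustration.

```yaml
# docker-compose.yaml -- run the image locally (hypothetical image and port)
version: "3"
services:
  my-app:
    image: my-registry/my-app:1.0.0
    ports:
      - "8080:8080"
```

```bash
# Run it locally with Docker Compose...
docker-compose up --detach

# ...or run the very same image in a Kubernetes cluster
kubectl create deployment my-app --image=my-registry/my-app:1.0.0
kubectl expose deployment my-app --port=8080
```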
Then there is the emergence of standards. We can say that having a standard in an area of software engineering is a sign of maturity. Such standards are often de facto, rather than something decided by a few people. Container images and container runtimes are one such standard. No matter which tool you use to build a container image or run containers, most rely on the same formats and the same APIs. Standards often emerge when a sufficient number of people use something for a sufficient period. That does not mean that everyone uses them, but rather that adoption is so high that we can say the majority does.
So, I want to have some sort of a standard, and let service providers compete on top of it. I do not want to be locked in more than necessary. That's why we love Kubernetes. It provides a common API that is, more or less, the same, no matter who is in charge of it or where it is running. It does not matter whether Kubernetes is running in AWS, Google, Azure, DigitalOcean, Linode, my own datacenter, or anywhere else. It is the same API. I can learn it, and I can have the confidence that I can use that knowledge no matter where I work or where my servers are running. Can we have something similar for serverless deployments? Can't we get a common API and let service vendors compete on top of it with lower prices, more reliable service, additional features, or any other way they see fit?
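A short sketch of what that portability looks like in practice; the context names below are hypothetical placeholders for clusters that could be running anywhere.

```bash
# The same manifest, the same API, regardless of who operates the cluster
kubectl --context eks-production apply --filename deployment.yaml
kubectl --context gke-production apply --filename deployment.yaml
kubectl --context onprem-production apply --filename deployment.yaml
```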
Then there is the issue of restrictions. They are unavoidable. There is no such thing as an unlimited and unrestricted platform. Still, some limitations are acceptable, while others are not. I don't want anyone to tell me which language to use to write my applications. That does not mean that I am unwilling to accept advice or to admit that some languages are better than others. I do. Still, I do not want to be constrained either. If I feel that Rust is better suited for a given task, I want to use it. The platform I'm going to use to deploy my application should not dictate which language I will use. It can "suggest" that something is a better choice than something else, but it should not restrict my "creativity".
To put it bluntly, it should not matter which language I use to write my applications.
I also might want to choose the level of involvement I have. For example, having a single replica of an application for each request might fit some use cases. But there can be (and usually are) those in which I might want to serve up to a thousand concurrent requests with a single replica. That cannot be a decision made only by the platform where my application is running. It is part of the architecture of the application as well.
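To illustrate that kind of involvement, Knative Serving is one platform that exposes such a knob through `containerConcurrency`; the sketch below assumes a hypothetical application name and image.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                 # hypothetical name
spec:
  template:
    spec:
      # Allow up to a thousand concurrent requests per replica,
      # instead of leaving that decision entirely to the platform
      containerConcurrency: 1000
      containers:
        - image: my-registry/my-app:1.0.0   # hypothetical image
```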
I do believe that the number of choices given to users by serverless service providers must be restricted. It cannot be limited only by our imagination. Nevertheless, there should be a healthy balance between simplicity, reliability, and the freedom to tweak a service to meet the goals of specific use cases.
Then there is the issue of types of applications. Functions are great, but they are not the solution to all the problems in the universe. For some use cases, microservices are a better fit, while in others, we might be better off with monoliths. Should we be restricted to functions when performing serverless deployments? Is there a "serverless manifesto" that says it must be a function?
I am fully aware that some types of applications are better candidates to be serverless than others. That is not much different than, let’s say, Kubernetes. Some applications benefit more from running in Kubernetes than others. Still, it is my decision which applications go where.
I want to be able to leverage serverless deployments for my applications, no matter their size, or even whether they are stateless or stateful. I want to give someone else the job of provisioning and managing the infrastructure, taking care of the scaling, and making the applications highly available. That allows me to focus on my core business and to deliver that next "killer" feature as fast as possible.
Overall, the following list represents features and abilities I believe are essential when evaluating serverless solutions.
- It should allow local development
- It should leverage common and widely accepted denominators, such as container images
- It should be based on some sort of a standard
- It should not be too restrictive
- It should support (almost) any type of application
None of the items from my "wish list" exclude those we mentioned earlier. Instead, they complement the basic features of managed serverless services that allow us to avoid dealing with infrastructure, scaling, and high availability, and to pay only for what we use. We can say that those are table stakes, while the items in my "wish list" are things that I value a lot and that can be used to evaluate which solution is a better fit for my needs.