The difference between continuous integration, delivery, and deployment is not in processes, but in the level of confidence we have in them.
The continuous deployment process is relatively easy to explain, even though implementation might get tricky. We’ll split our requirements into two groups. We’ll start with a discussion about the overall goals that should be applied to the whole process. To be more precise, we’ll talk about what I consider non-negotiable requirements.
A pipeline needs to be secure. Before Kubernetes, that was usually not a problem; we would run the pipeline steps on separate servers. We'd have one dedicated to building and another for testing. We might have one for integration and another for performance tests. Once we adopt container schedulers and move into clusters, we lose control of the servers. Even though it is possible to pin a process to a specific server, that is highly discouraged in Kubernetes. We should let it schedule Pods with as few constraints as possible. That means that our builds and tests might run in the production cluster, which might not be secure. If we are not careful, a malicious user might exploit the shared space. Even more likely, our tests might contain an unwanted side-effect that could put production applications at risk.
We could create separate clusters. One can be dedicated to the production and the other to everything else. While that is indeed an option we should explore, Kubernetes already provides the tools we need to make a cluster secure. We have RBAC, ServiceAccounts, Namespaces, PodSecurityPolicies, NetworkPolicies, and a few other resources at our disposal. We can share the same cluster and be reasonably secure at the same time.
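As an illustration of the kind of in-cluster isolation those resources offer, a minimal sketch might dedicate a Namespace to pipeline workloads and confine a ServiceAccount to it with a Role and RoleBinding. All names here (`build`, `pipeline`) are hypothetical, not prescribed by any particular tool.

```yaml
# Hypothetical setup: a dedicated Namespace for pipeline builds,
# with a ServiceAccount that may manage Pods only inside it.
apiVersion: v1
kind: Namespace
metadata:
  name: build
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
  namespace: build
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline
  namespace: build
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline
  namespace: build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pipeline
subjects:
- kind: ServiceAccount
  name: build
  namespace: build
```

Because the binding is a RoleBinding rather than a ClusterRoleBinding, a build step running under that ServiceAccount cannot touch Pods in any other Namespace, production included.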
Security is not the only requirement. Even when everything is secured, we still need to make sure that our pipelines do not negatively affect the other applications running inside a cluster. If we are not careful, tests might, for example, request or use too many resources and, as a result, we might be left with insufficient memory for the other applications and processes running inside our cluster. Fortunately, Kubernetes has a solution for those problems as well. We can combine Namespaces with LimitRanges and ResourceQuotas. While they do not offer a complete guarantee that nothing will go wrong (nothing does), they do give us a set of tools that, when used correctly, provide reasonable assurance that the processes in a Namespace will not go “wild”.
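A sketch of that combination, again with hypothetical names and numbers, might cap the total resources all pipeline Pods in a `build` Namespace can claim with a ResourceQuota, while a LimitRange supplies per-container defaults and maximums:

```yaml
# Hypothetical limits for a "build" Namespace. The quota bounds the
# Namespace as a whole; the LimitRange constrains individual containers
# and fills in requests/limits for Pods that do not declare any.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: build
  namespace: build
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 3Gi
    limits.cpu: "4"
    limits.memory: 6Gi
    pods: "15"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: build
  namespace: build
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 200m
      memory: 256Mi
    max:
      cpu: "1"
      memory: 1Gi
```

With something like this in place, a runaway test can exhaust only the budget of its own Namespace; the scheduler will refuse additional Pods once the quota is reached instead of starving production workloads.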
Our pipeline should be fast. If it takes too long to execute, we might be compelled to start working on a new feature before the pipeline run is finished. If it fails, we have to decide whether to stop working on the new feature and incur the context-switching penalty, or to ignore the problem until we are free to deal with it. While both scenarios are bad, the latter is worse and should be avoided at all costs. A failed pipeline must have the highest priority. Otherwise, what’s the point of having automated and continuous processes if we deal with problems only eventually?
A continuous deployment pipeline must be secure, it should produce no side-effects for the rest of the applications in the cluster, and it should be fast.
The problem is that we often cannot accomplish those goals independently. We might be forced to make tradeoffs. Security often clashes with speed, and we might need to strike a balance between the two.
Finally, the primary goal, the one that stands above all others, is that our continuous deployment pipeline must be executed on every commit to the master branch. That provides continuous feedback about the readiness of the system, and, in a way, it forces people to merge to master often. When we create a branch, it is invisible to everyone else until it gets merged back into master, or whatever the name of the production-ready branch is. The more time that passes until the merge, the bigger the chance that our code does not integrate with the work of our colleagues.
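The "every commit to master" rule usually reduces to a one-line trigger in whatever CI/CD tool we pick. As an illustration only (the book covers several tools, and this snippet assumes a GitHub Actions-style workflow), the trigger might look like:

```yaml
# Hypothetical workflow fragment: run the pipeline on every push
# to the master branch, and on nothing else.
name: continuous-deployment
on:
  push:
    branches:
      - master
jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build, test, and deploy steps would follow here
```

The important part is what is absent: no manual trigger, no schedule, no opt-in. Every merge to master pays the full price of the pipeline, which is precisely what keeps the feedback continuous.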
The DevOps 2.4 Toolkit: Continuous Deployment To Kubernetes
The article you just read is an extract from The DevOps 2.4 Toolkit: Continuous Deployment To Kubernetes.
This book explores continuous deployment to a Kubernetes cluster. It uses a wide range of Kubernetes platforms and provides instructions on how to develop a pipeline with a few of the most commonly used CI/CD tools.
I am assuming that you are already proficient with Deployments, ReplicaSets, Pods, Ingress, Services, PersistentVolumes, PersistentVolumeClaims, Namespaces, and a few other things. This book assumes that we do not need to go through the basic stuff. At least, not through all of it. The book assumes a certain level of Kubernetes knowledge and hands-on experience. If that’s not the case, what follows might be too confusing and advanced. Please read The DevOps 2.3 Toolkit: Kubernetes first, or consult the Kubernetes documentation. Come back once you’re done and once you think you can claim that you understand at least the basic Kubernetes concepts and resource types.
Give it a try and let me know what you think.