Category Archives: Cloud Computing

A Quick Introduction To Prometheus And Alertmanager

Kubernetes HorizontalPodAutoscaler (HPA) and Cluster Autoscaler (CA) provide essential, yet rudimentary, mechanisms to scale our Pods and clusters. While they handle scaling decently well, they do not solve our need to be alerted when there’s something wrong, nor do they provide the information required to find the cause of an issue. We’ll need to expand our setup with additional tools that will allow us to store and query metrics, as well as to receive notifications when there is an issue.

If we focus on tools that we can install and manage ourselves, there is very little doubt about what to use. If we look at the list of Cloud Native Computing Foundation (CNCF) projects, only two have graduated so far (October 2018). Those are Kubernetes and Prometheus. Given that we are looking for a tool that will allow us to store and query metrics, and that Prometheus fulfills that need, the choice is straightforward. That is not to say that there are no other similar tools worth considering. There are, but they are all service-based. We might explore them later but, for now, we’re focused on those that we can run inside our cluster. So, we’ll add Prometheus to the mix and try to answer a simple question. What is Prometheus?
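
Once Prometheus is up, we’ll be able to query such metrics through its HTTP API. The commands below are only an illustrative sketch, assuming a Prometheus instance reachable on its default port 9090; the metric and label names may differ depending on the exporters and the Kubernetes version in use.

# Query the current value of the built-in `up` metric for every scraped target
curl "http://localhost:9090/api/v1/query?query=up"

# A more useful example: CPU usage per Pod over the last five minutes (cAdvisor metric)
curl -G "http://localhost:9090/api/v1/query" \
    --data-urlencode "query=sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)"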
Continue reading

Kubernetes’ Cluster Autoscaler Compared in GKE, EKS, and AKS

Kubernetes’ Cluster Autoscaler is a prime example of the differences between managed Kubernetes offerings. We’ll use it to compare the three major Kubernetes-as-a-Service providers.

I’ll limit the comparison between the vendors to the topics related to Cluster Autoscaling.
Continue reading

Setting Up Cluster Autoscaler In EKS

Unlike GKE, EKS does not come with Cluster Autoscaler. We’ll have to configure it ourselves. We’ll need to add a few tags to the Autoscaling Group dedicated to worker nodes, grant additional permissions to the Role we’re using, and install Cluster Autoscaler.
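
As a sketch of the first of those steps, the tags below are the ones Cluster Autoscaler’s auto-discovery looks for. The Autoscaling Group name (my-workers-asg) and the cluster name (my-cluster) are placeholders; substitute whatever your setup created.

# Tag the worker-node Autoscaling Group so Cluster Autoscaler can discover it
aws autoscaling create-or-update-tags \
    --tags \
    ResourceId=my-workers-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true \
    ResourceId=my-workers-asg,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/my-cluster,Value=true,PropagateAtLaunch=true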
Continue reading

To Replicas Or Not To Replicas In Kubernetes Deployments And StatefulSets?

Knowing that HorizontalPodAutoscaler (HPA) manages auto-scaling of our applications, the question of replicas might arise. Should we define them in our Deployments and StatefulSets, or should we rely solely on HPA to manage them? Instead of answering that question directly, we’ll explore different combinations and, based on the results, define the strategy.

First, let’s see how many Pods we have in our cluster right now.

You might not be able to use the same commands since they assume that the go-demo-5 application is already running, that the cluster has HPA enabled, that you cloned the code, and a few other things. I presented the outputs so that you can follow the logic without running the same commands.

kubectl -n go-demo-5 get pods

The output is as follows.

NAME    READY STATUS  RESTARTS AGE
api-... 1/1   Running 0        27m
api-... 1/1   Running 2        31m
db-0    2/2   Running 0        20m
db-1    2/2   Running 0        20m
db-2    2/2   Running 0        21m

We can see that there are two replicas of the api Deployment, and three replicas of the db StatefulSet.
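
For reference, an HPA roughly like the one below is what manages the replica count of the api Deployment. The thresholds are illustrative, not necessarily the ones used in the go-demo-5 definition.

# Scale the api Deployment between 2 and 5 replicas based on CPU usage (illustrative values)
kubectl -n go-demo-5 autoscale deployment api \
    --min=2 --max=5 --cpu-percent=80

# Confirm the current and desired replica counts
kubectl -n go-demo-5 get hpa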
Continue reading

Four Phases Of Kubernetes Adoption

Kubernetes is probably the biggest project we know. It is vast, and yet many think that after a few weeks or months of reading and practice they know all there is to know about it. It’s much bigger than that, and it is growing faster than most of us can follow. How far did you get in Kubernetes adoption?

From my experience, there are four main phases in Kubernetes adoption.

In the first phase, we create a cluster and learn the intricacies of the Kube API and the different types of resources (e.g., Pods, Ingress, Deployments, StatefulSets, and so on). Once we are comfortable with the way Kubernetes works, we start deploying and managing our applications. By the end of this phase, we can shout “look at me, I have things running in my production Kubernetes cluster, and nothing blew up!” I explained most of this phase in The DevOps 2.3 Toolkit: Kubernetes.
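
If you’re wondering how far you got with that first phase, a quick self-check is to browse the API itself. The commands below are generic and should work against any cluster.

# List every resource type the cluster’s API knows about
kubectl api-resources

# Read the built-in documentation for a specific type
kubectl explain deployment.spec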
Continue reading

The DevOps Toolkit Series Explores Continuous Deployment To Kubernetes

Soon after I started working on The DevOps 2.3 Toolkit: Kubernetes, I realized that a single book could only scratch the surface. Kubernetes is vast, and no single book can cover even all the core components. If we add community projects, the scope becomes even more extensive. Then we need to include hosting vendors and different ways to set up and manage Kubernetes. That would inevitably lead us to third-party solutions like OpenShift, Rancher, and DockerEE, to name a few. It doesn’t end there. We’d need to explore other types of community and third-party additions like those related to networking and storage. And don’t forget processes like, for example, continuous delivery and deployment. All those things could not be explored in a single book, so The DevOps 2.3 Toolkit: Kubernetes ended up being an introduction to Kubernetes. It can serve as the base for exploring everything else.

The moment I published the last chapter of The DevOps 2.3 Toolkit: Kubernetes, I started working on the next material. A lot of ideas and tryouts came out of it. It took me a while until the subject and the form of the forthcoming book materialized. After a lot of consultation with the readers of the previous book, the decision was made to explore continuous delivery and deployment processes in a Kubernetes cluster. The high-level scope of the book you are reading right now was born.
Continue reading

Building A Self-Sufficient Docker Cluster

A self-sufficient system is a system capable of healing and adaptation. Healing means that the cluster will always be in the desired state. As an example, if a replica of a service goes down, the system needs to bring it back up again. Adaptation, on the other hand, is about modifications of the desired state so that the system can deal with changed conditions. A simple example would be increased traffic. When it happens, services need to be scaled up. When healing and adaptation are automated, we get self-healing and self-adaptation. Together, they form a self-sufficient system that can operate without human intervention.
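
As a minimal sketch of the healing part, the commands below declare a desired state for a service, assuming the cluster runs in Docker Swarm mode; the service name and image are placeholders.

# Declare the desired state: three replicas of the service.
# If a replica (or the node it runs on) fails, Swarm recreates it.
docker service create --name go-demo --replicas 3 vfarcic/go-demo

# Adaptation done manually: change the desired state to five replicas.
# A self-adapting system would make an equivalent change automatically.
docker service scale go-demo=5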

How does a self-sufficient system look? What are its principal parts? Who are the actors?
Continue reading