Category Archives: Cloud Computing

Kubernetes’ Cluster Autoscaler Compared in GKE, EKS, and AKS

Kubernetes' Cluster Autoscaler is a prime example of the differences between managed Kubernetes offerings. We'll use it to compare the three major Kubernetes-as-a-Service providers.

I'll limit the comparison between the vendors to the topics related to Cluster Autoscaling.
Continue reading

Setting Up Cluster Autoscaler In EKS

Unlike GKE, EKS does not come with Cluster Autoscaler. We'll have to configure it ourselves. We'll need to add a few tags to the Auto Scaling Group dedicated to worker nodes, grant additional permissions to the role we're using, and install Cluster Autoscaler.
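
As a rough sketch of the tagging step (the cluster name devops25 is a placeholder, and the worker nodes' Auto Scaling Group name is assumed to be in ASG_NAME), the tags that enable Cluster Autoscaler's auto-discovery might be added like this:

# both tags are required for Cluster Autoscaler's node-group auto-discovery
aws autoscaling create-or-update-tags \
    --tags \
    ResourceId=$ASG_NAME,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/enabled,Value=true,PropagateAtLaunch=true \
    ResourceId=$ASG_NAME,ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/devops25,Value=true,PropagateAtLaunch=true

With the tags and the additional IAM permissions in place, Cluster Autoscaler itself can be installed (for example, with Helm's stable/cluster-autoscaler chart) and pointed at the AWS region and those discovery tags.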
Continue reading

To Replicas Or Not To Replicas In Kubernetes Deployments And StatefulSets?

Knowing that HorizontalPodAutoscaler (HPA) manages the auto-scaling of our applications, a question arises regarding the replicas field. Should we define it in our Deployments and StatefulSets, or should we rely solely on HPA to manage the number of replicas? Instead of answering that question directly, we'll explore different combinations and, based on the results, define a strategy.
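
For context, creating an HPA for a Deployment can be as simple as the command below (a sketch; the Namespace, replica bounds, and CPU threshold are assumptions, not values from this excerpt):

kubectl -n go-demo-5 autoscale deployment api \
    --min=2 --max=5 --cpu-percent=80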

First, let's see how many Pods we have in our cluster right now.

You might not be able to use the same commands since they assume that the go-demo-5 application is already running, that the cluster has HPA enabled, that you cloned the code, and a few other things. I presented the outputs so that you can follow the logic without running the same commands.
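
The check itself is a plain Pod listing; assuming the application lives in the go-demo-5 Namespace (an assumption on my part), it might look like this:

kubectl -n go-demo-5 get pods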

The output is as follows (illustrative; your Pod names and ages will differ).
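
NAME     READY   STATUS    RESTARTS   AGE
api-...  1/1     Running   0          5m
api-...  1/1     Running   0          5m
db-0     2/2     Running   0          6m
db-1     2/2     Running   0          6m
db-2     2/2     Running   0          6m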

We can see that there are two replicas of the api Deployment, and three replicas of the db StatefulSet.
Continue reading

Four Phases Of Kubernetes Adoption

Kubernetes is probably the biggest open source project we know. It is vast, and yet many think that after a few weeks or months of reading and practice they know all there is to know about it. It's much bigger than that, and it is growing faster than most of us can follow. How far did you get in your Kubernetes adoption?

From my experience, there are four main phases in Kubernetes adoption.

In the first phase, we create a cluster and learn the intricacies of the Kube API and the different types of resources (Pods, Ingress, Deployments, StatefulSets, and so on). Once we are comfortable with the way Kubernetes works, we start deploying and managing our applications. By the end of this phase, we can shout "look at me, I have things running in my production Kubernetes cluster, and nothing blew up!" I explained most of this phase in The DevOps 2.3 Toolkit: Kubernetes.
Continue reading

The DevOps Toolkit Series Explores Continuous Deployment To Kubernetes

Soon after I started working on The DevOps 2.3 Toolkit: Kubernetes, I realized that a single book could only scratch the surface. Kubernetes is vast, and no single book can cover even all of its core components. If we add community projects, the scope becomes even more extensive. Then we need to include hosting vendors and different ways to set up and manage Kubernetes. That would inevitably lead us to third-party solutions like OpenShift, Rancher, and Docker EE, to name a few. It doesn't end there. We'd need to explore other types of community and third-party additions like those related to networking and storage. And don't forget processes like, for example, continuous delivery and deployment. All those things could not be explored in a single book, so The DevOps 2.3 Toolkit: Kubernetes ended up being an introduction to Kubernetes. It can serve as the base for exploring everything else.

The moment I published the last chapter of The DevOps 2.3 Toolkit: Kubernetes, I started working on the next material. A lot of ideas and experiments came out of it. It took me a while until the subject and the form of the forthcoming book materialized. After a lot of consultation with the readers of the previous book, I decided to explore continuous delivery and deployment processes in a Kubernetes cluster. The high-level scope of the book you are reading right now was born.
Continue reading

Building A Self-Sufficient Docker Cluster

A self-sufficient system is a system capable of healing and adaptation. Healing means that the cluster will always be in the desired state. As an example, if a replica of a service goes down, the system needs to bring it back up again. Adaptation, on the other hand, is about modifying the desired state so that the system can deal with changed conditions. A simple example would be increased traffic. When it happens, services need to be scaled up. When healing and adaptation are automated, we get self-healing and self-adaptation. Together, they form a self-sufficient system that can operate without human intervention.
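
The healing half is already built into Docker Swarm. As a minimal sketch (the service name and image are placeholders, not taken from the article), declaring a desired replica count is enough for Swarm to restore it whenever a replica dies:

docker service create --name go-demo --replicas 3 vfarcic/go-demo

# kill one of the replicas and Swarm reschedules a new one
docker service ps go-demo

Adaptation is the part Swarm does not do on its own; something needs to observe metrics (e.g., traffic) and change the desired state, for instance with docker service scale go-demo=5.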

What does a self-sufficient system look like? What are its principal parts? Who are the actors?
Continue reading

Managing Secrets In Docker Swarm Clusters

Docker 1.13 introduced a set of features that allow us to centrally manage secrets and pass them only to the services that need them. They give us a much-needed mechanism for distributing information that should be hidden from everyone except designated services.

A secret (at least from Docker's point of view) is a blob of data. Typical use cases would be certificates, SSH private keys, passwords, and so on. Secrets should stay secret, meaning that they should not be stored unencrypted or transmitted over a network.
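
As a minimal sketch (the secret and service names are made up for illustration), creating a secret and attaching it to a service goes along these lines:

echo "my-db-password" | docker secret create db_pass -

docker service create --name db --secret db_pass mongo

# inside the service's containers, the secret is mounted as an in-memory
# file at /run/secrets/db_pass, never written to disk unencrypted

Swarm stores the secret encrypted in its Raft log and sends it only to the nodes that run the service.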
Continue reading

Choosing The Right Tools To Create And Manage Docker Swarm Clusters In AWS

Swarm Mode is taking Docker scheduling by storm. You tried it in your lab or by setting up a few VMs locally and joining them into a Swarm cluster. Now it's time to move to the next level and create a cluster in AWS. Which tools should you use to do that? How should you set up your AWS Swarm cluster?

If you are like me, you have already discarded the option of creating servers manually through the AWS Console, SSHing into each, and running the commands that install Docker Engine and join the node with the others. You know better than that and want a reliable and automated process that will create and maintain the cluster.

Among the myriad of options we can choose from to create a Docker Swarm cluster in AWS, three stand out from the crowd. We can use Docker Machine with the AWS CLI, Docker for AWS as a CloudFormation template, or Packer with Terraform. That is, by no means, the final list of the tools we can use. Time is limited, and I promised myself that this article would be shorter than War and Peace, so I had to draw the line somewhere. Those three combinations are, in my opinion, the best candidates as your tools of choice. Even if you do choose something else, this article will hopefully give you an insight into the direction you might want to take.
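
To make the first combination concrete, here is a rough sketch of bootstrapping a one-manager cluster with Docker Machine's amazonec2 driver (the machine name and region are assumptions; a real setup needs more nodes, security groups, and so on):

docker-machine create --driver amazonec2 --amazonec2-region us-east-1 swarm-1

eval $(docker-machine env swarm-1)

docker swarm init --advertise-addr $(docker-machine ip swarm-1)

From there, docker swarm join-token worker prints the command that additional nodes run to join the cluster.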
Continue reading

To Terraform Or Not To Terraform: Configuration Management In AWS (And Other Cloud Computing Providers)

The primary objective of configuration management tools is to make sure a server is always in the desired state. If a web server stops, it will be started again. If a configuration file changes, it will be restored. No matter what happens to a server, its desired state will be restored. Except when there is no fix for the issue. If a hard disk fails, there's nothing configuration management can do.

The problem with configuration management tools is that they were designed to work with physical, not virtual, servers. Why would we fix a faulty virtual server when we can create a new one in a matter of seconds? Terraform understands how cloud computing works better than most tools and embraces the idea that our servers are not pets anymore. They are cattle. It'll make sure that all your resources are available. When something is wrong with a server, it will not try to fix it. Instead, it will destroy it and create a new one based on the image we choose.
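
As a minimal sketch of that cattle-not-pets workflow (the resource name and AMI ID are placeholders), we declare a server and let Terraform converge to the declaration; when the server misbehaves, we mark it for recreation instead of repairing it:

cat > main.tf <<EOF
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"
}
EOF

terraform init

terraform apply

# something is wrong with the server? destroy and recreate it
terraform taint aws_instance.web

terraform apply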
Continue reading