To Terraform Or Not To Terraform: Configuration Management In AWS (And Other Cloud Computing Providers)

The primary objective of configuration management tools is to keep a server in its desired state. If a web server stops, it is started again. If a configuration file changes, it is restored. No matter what happens to a server, its desired state is restored. Except when there is no fix for the issue: if a hard disk fails, there is nothing configuration management can do.

The problem with configuration management tools is that they were designed to work with physical, not virtual, servers. Why would we fix a faulty virtual server when we can create a new one in a matter of seconds? Terraform embraces the way cloud computing works and the idea that our servers are not pets anymore. They are cattle. It will make sure that all our resources are available. When something is wrong with a server, it will not try to fix it. Instead, it will destroy it and create a new one based on the image we choose.
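To make that concrete, here is a minimal sketch of the destroy-and-recreate flow using Terraform's CLI. The resource name aws_instance.web is a hypothetical example, not one from the article:

    terraform taint aws_instance.web  # mark the faulty instance for replacement
    terraform plan                    # the plan shows the instance will be destroyed and recreated
    terraform apply                   # replaces the instance with a fresh one built from the image

Instead of repairing the server in place, the next apply provisions a brand new one.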
Continue reading

Collecting Metrics and Monitoring Docker Swarm Clusters

In the Forwarding Logs From All Containers Running Anywhere Inside A Docker Swarm Cluster article, we managed to add centralized logging to our cluster. Logs from any container running inside any of the nodes are shipped to a central location. They are stored in ElasticSearch and available through Kibana. However, easy access to all the logs does not mean that we have all the information we need to debug a problem or, better yet, prevent it from happening in the first place. We need to complement our logs with the rest of the information about the system. Logs alone are not enough.
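For a taste of the raw data involved (a hypothetical single-node example, not the article's setup), the Docker CLI alone can already show per-container metrics, though only for one node at a time:

    docker stats --no-stream  # one-shot CPU, memory, network, and block I/O usage of local containers

Collecting such metrics across a whole Swarm cluster, and storing them over time, is what calls for the additional tooling discussed in the article.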
Continue reading

Forwarding Logs From All Containers Running Anywhere Inside A Docker Swarm Cluster

In this article, we’ll discuss a way to forward logs from containers created as Docker Swarm services inside our clusters. We’ll use the ELK stack: logs will be forwarded from containers to LogStash and, from there, to ElasticSearch. Once in the database, they will be available through Kibana.
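One common way to get a container's logs to LogStash, shown here only as a sketch and not necessarily the approach the article takes, is Docker's gelf logging driver. The address logstash:12201 is a hypothetical endpoint:

    # Hypothetical sketch: ship this container's output to LogStash over GELF
    docker run -d \
        --log-driver gelf \
        --log-opt gelf-address=udp://logstash:12201 \
        nginx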
Continue reading

Service Discovery Inside A Docker Swarm Cluster

The text that follows contains excerpts from the Service Discovery Inside A Swarm Cluster chapter of The DevOps 2.1 Toolkit: Docker Swarm book.

Service Discovery In The Swarm Cluster

The old (standalone) Swarm required a service registry so that all its managers could have the same view of the cluster state. When instantiating the old Swarm nodes, we had to specify the address of a service registry. However, if you take a look at the setup instructions of the new Swarm (Swarm Mode, introduced in Docker 1.12), you’ll notice that nothing is required beyond Docker Engines. You will not find any mention of an external service registry or a key-value store.
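As an illustration of what that buys us (the network, service, and image names below are made up, not taken from the chapter), services attached to the same overlay network can find each other by name through the DNS server built into Docker Engine, with no registry to install or configure:

    docker network create -d overlay go-demo          # overlay network with built-in DNS
    docker service create --name db --network go-demo mongo
    docker service create --name app --network go-demo my-app
    # containers of "app" can now reach the "db" service simply by the name "db"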
Continue reading

The DevOps Toolkit Series Is Born

At the beginning of 2016, I published The DevOps 2.0 Toolkit. It took me a long time to finish it. Much longer than I imagined.

I started by writing blog posts on TechnologyConversations.com. They became popular, and I received a lot of feedback. Through them, I clarified the idea behind the book. The goal was to provide a guide for those who want to implement DevOps practices and tools. At the same time, I did not want to write material applicable to any situation. I wanted to concentrate only on people who truly want to implement the latest and greatest practices. I hoped to make it go beyond the “traditional” DevOps. I wished to show that the DevOps movement had matured and evolved over the years and that we needed a new name. A reset from the way DevOps is implemented in some organizations. Hence the name: The DevOps 2.0 Toolkit.
Continue reading

Distributed Application Bundles (Tour Around Docker 1.12 Series)

The new Swarm bundled in Docker 1.12+ is a vast improvement over the old orchestration and scheduling. There is no longer a need to run a separate set of Swarm containers (Swarm is bundled into Docker Engine), failover strategies are much more reliable, service discovery is baked in, the new networking works like a charm, and so on. The list of features and improvements is quite long. If you haven’t had a chance to give it a spin, please read the Docker Swarm Introduction and Integrating Proxy With Docker Swarm articles.

In those articles, we used commands to run Docker services inside the Swarm cluster. If you are like me, you are probably used to the simplicity Docker Compose provides. Instead of trying to memorize commands with all the arguments required for running services, you probably specified everything in Docker Compose files and ran containers with a simple docker-compose up -d command. After all, that is much more convenient than docker service create followed by an endless number of arguments. My brain does not have the capacity to memorize all of them.
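To see the contrast the previous paragraph describes (the service name, flags, and image are illustrative, not taken from the articles), compare a typical service invocation with the Compose workflow:

    # the kind of command you would have to remember for every service
    docker service create --name app \
        --network go-demo --replicas 3 \
        -e DB=db -p 8080:8080 \
        my-app

    # versus bringing up everything defined in docker-compose.yml
    docker-compose up -d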

When I started “playing” with the new Swarm, my first impression was “this is great”, followed by “I don’t want to memorize different arguments for all of my services. I want my Compose files back into the game.”
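As a sketch of where that leads (both commands were experimental at the time of Docker 1.12, and the bundle name vote is made up), a Compose file could be converted into a Distributed Application Bundle and deployed as a whole:

    docker-compose bundle -o vote.dab  # generate a .dab bundle from the project's docker-compose.yml
    docker deploy vote                 # experimental: create a stack of Swarm services from vote.dab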
Continue reading