Tag Archives: Ansible

The DevOps 2.0 Toolkit

Today is an exciting day for me. I just decided that the book I spent the last eight months writing is ready for the general public.

What made me write the book? Certainly not the promise of wealth since, as any author of technical books will confirm, no amount of money can compensate for the number of hours involved in writing one. The reasons behind this endeavor are of a different nature. I realized that this blog is a great way for me to explore different subjects and share my experience with the community. However, due to the format, blog posts do not give me enough space to explore, in more detail, the subjects I’m passionate about. So, around eight months ago, I decided to start working on The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book. It covers subjects similar to those I write about in this blog, but in much more detail. More importantly, the book allowed me to organize my experience into a much more coherent story.

The book is about different techniques that help us architect software in a better and more efficient way with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It’s about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It’s about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We’ll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We’ll go through many practices and even more tools.

At this moment, around 70% of the book is finished and you’ll receive regular updates if you decide to purchase it. The truth is that my motivation for writing the book is the same as with this blog. I like sharing my experience, and this book is one more way to accomplish that. You can set your own price and, if you feel that the minimum amount is still too high, please send me a private message and I’ll get back to you with a free copy.

Please give The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices a try and let me know what you think. Any feedback is welcome and appreciated.

Microservices: The Essential Practices

Before we jump in and try to explore the practices we must master in order to successfully implement the microservices architecture, let us briefly refresh our understanding of monolithic applications.
Continue reading

Configuration Management in the Docker World

Anyone managing more than a few servers can confirm that doing such a task manually is a waste of time and risky. Configuration management (CM) has existed for a long time, and I cannot think of a single reason why one would not use one of the tools. The question is not whether to adopt one of them but which one to choose. Those that have already embraced one or the other and invested a lot of time and money will probably argue that the best tool is the one they chose. As things usually go, the choices change over time and the reasons for picking one over the other might not be the same today as they were yesterday. In most cases, choices are not based on the available options but on the architecture of the legacy system we are sworn to maintain. If such systems could be ignored, or if someone with enough courage and deep pockets were willing to modernize them, today’s reality would be dominated by containers and microservices. In such a situation, the choices we made yesterday are definitely different from the choices we could make today.
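To make the contrast with manual work concrete, here is a minimal sketch of the kind of automation a CM tool like Ansible provides. The inventory file, group name, and playbook below are placeholders made up for illustration, not part of any particular setup.

    # Ensure nginx is installed on every server in the (hypothetical) web group,
    # instead of logging into each of them and installing it by hand.
    ansible web -i hosts -m apt -a "name=nginx state=present" --become

    # The same idea, expressed as a repeatable playbook run.
    ansible-playbook -i hosts site.yml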
Continue reading

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 4/4) – Scaling Individual Services

This article should be considered deprecated since it describes the old (standalone) Swarm. To get more up-to-date information about the new Swarm mode, please read the Docker Swarm Introduction (Tour Around Docker 1.12 Series) article or consider getting The DevOps 2.1 Toolkit: Docker Swarm book.

This series is split into the following articles.

In the previous article we switched from manual to automatic deployment with Jenkins and Ansible. In the quest for zero downtime, we employed Consul to check the health of our services and, if one of them fails, initiate a new deployment through Jenkins.
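As a rough illustration of that mechanism, a service can be registered in Consul together with an HTTP health check, and a failing check can be used to trigger a Jenkins job remotely. The service name, addresses, ports, and the Jenkins job below are assumptions made for the sake of the example, not the exact setup used in the articles.

    # Register a service in Consul together with an HTTP health check.
    curl -X PUT http://localhost:8500/v1/agent/service/register -d '{
      "Name": "books-ms",
      "Address": "10.100.192.201",
      "Port": 8080,
      "Check": {
        "HTTP": "http://10.100.192.201:8080/api/v1/books",
        "Interval": "10s"
      }
    }'

    # When the check turns critical, a Consul watch (or any small script) can
    # ask Jenkins to redeploy the service by triggering the corresponding job.
    curl -X POST "http://jenkins:8080/job/books-ms-deployment/build"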

In this article we’ll explore how to scale individual services.
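As a taste of what’s coming, scaling with the old standalone Swarm and Docker Compose boils down to something along these lines. The Swarm master address and the service name are placeholders.

    # Point the Docker client at the Swarm master instead of a single engine.
    export DOCKER_HOST=tcp://10.100.192.200:2375

    # Scale a single service defined in docker-compose.yml to three instances.
    # Swarm decides on which nodes the additional containers will run.
    docker-compose scale app=3

    # Verify where the instances ended up.
    docker ps --filter name=app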

Continue reading

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 3/4) – Blue-Green Deployment, Automation and Self-Healing Procedure

This article should be considered deprecated since it describes the old (standalone) Swarm. To get more up-to-date information about the new Swarm mode, please read the Docker Swarm Introduction (Tour Around Docker 1.12 Series) article or consider getting The DevOps 2.1 Toolkit: Docker Swarm book.

This series is split into the following articles.

In the previous article we manually deployed the first version of our service together with a separate instance of the MongoDB container. Both are (probably) running on different servers. Docker Swarm decided where to run our containers, and Consul stored information about service IPs and ports, as well as other useful information. That data was used to link one service with another, as well as to provide the information nginx needed to create the proxy configuration.
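To get a feeling for how that discovery data looks, the information Consul stores can be queried through its HTTP API or its DNS interface, and later fed into the nginx configuration. The service name below is a placeholder.

    # Ask Consul where the instances of a service are running.
    curl http://localhost:8500/v1/catalog/service/books-ms

    # The same information is available through Consul's DNS interface.
    dig @localhost -p 8600 books-ms.service.consul SRV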

We’ll continue where we left off and deploy a second version of our service. Since we’re practicing blue-green deployment, the first version was called blue and the next one will be green. This time, there are additional things to consider, especially since our goal is to have no downtime.
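The gist of the flow we’ll follow can be sketched with a few commands. The container names, image, ports, and the health endpoint are illustrative only, not the exact values used in the article.

    # The blue release is already running; start the green one alongside it.
    docker run -d --name app-green -p 8081:8080 vfarcic/app:1.1

    # Test the green release while blue still serves all the traffic.
    curl -f http://localhost:8081/api/v1/health

    # Once green is verified, point the nginx upstream to it, reload nginx
    # without dropping connections, and remove the blue release.
    nginx -s reload
    docker rm -f app-blue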

Continue reading

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 2/4) – Manually Deploying Services

This article should be considered deprecated since it describes the old (standalone) Swarm. To get more up-to-date information about the new Swarm mode, please read the Docker Swarm Introduction (Tour Around Docker 1.12 Series) article or consider getting The DevOps 2.1 Toolkit: Docker Swarm book.

This series is split into the following articles.

The previous article showed what scaling across the server farm looks like. We’ll continue where we left off and explore the details behind the presented implementation. Orchestration has been done through Ansible. Besides the details behind the tasks in the Ansible playbooks, we’ll see how the same result could be accomplished using manual commands, in case you prefer a different orchestration/deployment framework.
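As a preview of the difference between the two approaches, the orchestrated run boils down to a single ansible-playbook invocation, while the manual route means repeating commands on every server. The inventory, playbook, and server address below are placeholders.

    # Orchestrated: run the provisioning and deployment playbook against the cluster.
    ansible-playbook -i hosts/prod swarm.yml

    # Manual equivalent of just one of its tasks: install Docker on a single node.
    ssh ubuntu@10.100.192.201 "sudo apt-get update && sudo apt-get install -y docker.io"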

Continue reading

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 1/4) – A Taste of What Is To Come

This article should be considered deprecated since it describes the old (standalone) Swarm. To get more up-to-date information about the new Swarm mode, please read the Docker Swarm Introduction (Tour Around Docker 1.12 Series) article or consider getting The DevOps 2.1 Toolkit: Docker Swarm book.

This series is split into the following articles.

Previous articles put a lot of focus on Continuous Delivery and containers with Docker. In Continuous Integration, Delivery or Deployment with Jenkins, Docker and Ansible I explained how to continuously build, test and deploy microservices packaged into containers, and how to do that across multiple servers, without downtime and with the ability to roll back. We used Ansible, Docker, Jenkins and a few other tools to accomplish that goal.

Now it’s time to extend what we did in previous articles and scale services across any number of servers. We’ll treat all servers as one server farm and deploy containers not to predefined locations but to those servers that have the least number of containers running. Instead of thinking about each server as an individual place where we deploy, we’ll treat all of them as one unit.
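The core idea is that we point the Docker client at the Swarm master and let the cluster decide placement. A minimal sketch, with a placeholder address and image, looks like this.

    # Talk to the Swarm master instead of individual Docker engines.
    export DOCKER_HOST=tcp://10.100.192.200:2375

    # Start a container without specifying where it should run; with the default
    # spread strategy, Swarm places it on the node running the fewest containers.
    docker run -d --name app vfarcic/app

    # List the containers to see which node each of them ended up on.
    docker ps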

Continue reading

Centralized System and Docker Logging with ELK Stack

With Docker, there is not supposed to be a need to store logs in files. We should output information to stdout/stderr, and the rest will be taken care of by Docker itself. When we need to inspect logs, all we are supposed to do is run docker logs [CONTAINER_NAME].
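As a quick reminder of what that looks like in practice (the image and container name are placeholders):

    # Start a container that writes its output to stdout.
    docker run -d --name app vfarcic/app

    # Inspect everything the container wrote to stdout/stderr...
    docker logs app

    # ...or follow the output as it is being produced.
    docker logs -f app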

With Docker and the ever more popular usage of microservices, the number of deployed containers is increasing rapidly. Monitoring logs for each container separately quickly becomes a nightmare. Monitoring a few or even ten containers individually is not hard. When that number starts moving towards tens or hundreds, individual logging is impractical at best. If we add distributed services, the situation gets even worse. Not only do we have many containers, but they are distributed across many servers.

The solution is to use some kind of centralized logging. Our favourite combination is the ELK stack (ElasticSearch, LogStash and Kibana). However, centralized logging with Docker at a large scale was not a trivial thing to do (until version 1.6 was released). We had a couple of solutions, but none of them seemed good enough.
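Docker 1.6 introduced logging drivers, which is what makes centralized logging considerably easier. A hedged sketch of the idea, with a placeholder image and container name, follows.

    # Select a logging driver per container so the output no longer stays only
    # in the default json-file on the local host.
    docker run -d --name app --log-driver syslog vfarcic/app

    # From there, syslog entries can be forwarded to LogStash, indexed in
    # ElasticSearch, and explored through Kibana.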
Continue reading

Continuous Integration, Delivery or Deployment with Jenkins, Docker and Ansible

This article tries to provide one possible way to set up a Continuous Integration, Delivery or Deployment pipeline. We’ll use Jenkins, Docker, Ansible and Vagrant to set up two servers. One will be used as a Jenkins server and the other one as an imitation of production servers. The first one will check out, test and build applications, as well as perform the deployment and post-deployment tests against the second.
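Very roughly, and with every name below being a placeholder rather than the actual repository, image or playbook, the pipeline amounts to steps along these lines.

    # Bring up the two VMs defined in the Vagrantfile.
    vagrant up cd prod

    # On the Jenkins server, a build boils down to the following steps.
    git clone https://github.com/example/service.git && cd service   # checkout
    docker build -t example/service .                                 # test and build
    docker push example/service                                       # publish
    ansible-playbook -i hosts/prod deploy.yml                         # deploy and run post-deployment tests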
Continue reading

Continuous Deployment: Implementation with Ansible and Docker

This article is part of the Continuous Integration, Delivery and Deployment series.

The previous article described several ways to implement Continuous Deployment. Specifically, it described, among other things, how to implement it using Docker to deploy applications as containers, and nginx as the reverse proxy necessary for the successful utilization of the blue-green deployment technique. All of that was running on top of CoreOS, an operating system specifically designed for running Docker containers.

In this article we’ll try to do the same process using Ansible (an open-source platform for configuring and managing computers). Instead of CoreOS, we’ll be using Ubuntu.

Source code used in this article can be found in the GitHub repo vfarcic/provisioning (directory ansible).
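To follow along, the repository can be cloned and one of the playbooks run from the ansible directory. The playbook and inventory names below are placeholders; check the repository for the actual ones.

    git clone https://github.com/vfarcic/provisioning.git
    cd provisioning/ansible
    ansible-playbook -i hosts/prod site.yml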
Continue reading