Continuous Integration, Delivery or Deployment with Jenkins, Docker and Ansible

This article tries to provide one possible way to set up a Continuous Integration, Delivery or Deployment pipeline. We’ll use Jenkins, Docker, Ansible and Vagrant to set up two servers. One will be used as a Jenkins server and the other as an imitation of production servers. The first one will check out, test and build […]

Continuous Deployment: Implementation with Ansible and Docker

This article is part of the Continuous Integration, Delivery and Deployment series. The previous article described several ways to implement Continuous Deployment. Specifically, it described, among other things, how to implement it using Docker to deploy applications as containers and nginx as the reverse proxy required by the blue-green deployment technique. All that was […]
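In a nutshell, the proxy side of that technique boils down to pointing nginx at whichever release is currently live. The sketch below illustrates the idea only; the config path, ports and upstream addresses are assumptions for illustration, not the setup used in the article.

```python
# Illustrative sketch of a blue-green proxy switch. The upstream addresses
# and config path are assumptions, not the article's actual setup.
import subprocess

UPSTREAMS = {"blue": "localhost:8081", "green": "localhost:8082"}
NGINX_CONF = "/etc/nginx/conf.d/app-upstream.conf"

def switch_traffic(active_color: str) -> None:
    """Point the nginx upstream at the active color and reload nginx."""
    with open(NGINX_CONF, "w") as conf:
        conf.write(f"upstream app {{\n    server {UPSTREAMS[active_color]};\n}}\n")
    # A reload applies the new upstream without dropping open connections.
    subprocess.run(["nginx", "-s", "reload"], check=True)

switch_traffic("green")  # e.g. once the green release passes its tests
```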

Microservices Development with Scala, Spray, MongoDB, Docker and Ansible

This article tries to provide one possible approach to building microservices. We’ll use Scala as the programming language. The API will be RESTful JSON provided by Spray and Akka. MongoDB will be used as the database. Once everything is done, we’ll pack it all into a Docker container. Vagrant with Ansible will take care of our environment and […]

To Terraform Or Not To Terraform: Configuration Management In AWS (And Other Cloud Computing Providers)

The primary objective of configuration management tools is to keep a server in its desired state at all times. If a web server stops, it will be started again. If a configuration file changes, it will be restored. No matter what happens to a server, its desired state will be restored. Except, when there […]
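Stripped of any particular tool, that reconciliation idea looks roughly like the sketch below; the service list and systemd commands are assumptions for illustration, not what any specific tool runs.

```python
# Minimal illustration of the "desired state" idea behind configuration
# management tools. The service list and commands are assumptions.
import subprocess
import time

DESIRED_SERVICES = ["nginx"]  # services that must always be running

def is_running(service: str) -> bool:
    """Ask systemd whether the service is active."""
    result = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return result.returncode == 0

def converge() -> None:
    """Compare the actual state with the desired state and correct any drift."""
    for service in DESIRED_SERVICES:
        if not is_running(service):
            subprocess.run(["systemctl", "start", service], check=True)

while True:  # configuration management tools apply checks like this on every run
    converge()
    time.sleep(300)
```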

Blue-Green Deployment To Docker Swarm with Jenkins Workflow Plugin

The idea behind this article is to explore ways to deploy releases with Jenkins to Docker Swarm without downtime. We’ll use the blue-green procedure. More info about the process and one possible implementation can be found in the Blue-Green Deployment, Automation and Self-Healing Procedure article. One of the downsides of the process we used in that […]

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 4/4) – Scaling Individual Services

In the [previous article](http://technologyconversations.com/2015/07/02/scaling-to-infinity-with-docker-swarm-docker-compose-and-consul-part-34-blue-green-deployment-automation-and-self-healing-procedure/) we switched from manual to automatic deployment with Jenkins and Ansible. In the quest for **zero-downtime** we employed Consul to check the **health** of our services and, if one of them fails, initiate deployment through Jenkins.
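As a rough sketch of that self-healing loop (the series implements it with Consul health checks and a Jenkins job; the URLs, job name and credentials below are made-up assumptions):

```python
# Rough sketch of the self-healing loop: if Consul reports a service as
# failing, trigger the Jenkins job that redeploys it. The URLs, job name
# and credentials are made-up assumptions.
import time
import requests

CONSUL = "http://localhost:8500"
JENKINS_JOB = "http://jenkins.example.com/job/service-redeploy/build"

def failing_services() -> set:
    """Return the names of services whose health checks are critical."""
    checks = requests.get(f"{CONSUL}/v1/health/state/critical").json()
    return {check["ServiceName"] for check in checks if check["ServiceName"]}

while True:
    for service in failing_services():
        print(f"{service} is failing; triggering redeployment")
        requests.post(JENKINS_JOB, auth=("deployer", "api-token"))
    time.sleep(30)
```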

In this article we’ll explore how to scale individual services.

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 2/4) – Manually Deploying Services

The [previous article](http://technologyconversations.com/2015/07/02/scaling-to-infinity-with-docker-swarm-docker-compose-and-consul-part-14-a-taste-of-what-is-to-come/) showed what scaling across the server farm looks like. We’ll continue where we left off and explore the details behind the presented implementation. Orchestration was done through [Ansible](http://www.ansible.com/home). Besides the details behind the tasks in the Ansible playbooks, we’ll see how the same result could be accomplished with manual commands, in case you prefer a different orchestration/deployment framework.

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 1/4) – A Taste of What Is To Come

It’s time to extend what we did in the previous articles and scale services across any number of servers. We’ll treat all servers as one **server farm** and deploy containers not to predefined locations but to those with the fewest containers running. Instead of thinking about each server as an individual place where we deploy, we’ll treat all of them as one unit.
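The selection logic itself is tiny (classic Docker Swarm's spread strategy makes this choice for us); the node names and counts below are a made-up example.

```python
# The scheduling idea in miniature: deploy to the node that currently runs
# the fewest containers. The node names and counts are a made-up example.
running_containers = {
    "swarm-node-1": 4,
    "swarm-node-2": 2,
    "swarm-node-3": 7,
}

def least_loaded(nodes: dict) -> str:
    """Return the node with the fewest running containers."""
    return min(nodes, key=nodes.get)

print(least_loaded(running_containers))  # -> swarm-node-2
```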

Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 3/4) – Blue-Green Deployment, Automation and Self-Healing Procedure

We’ll continue where we left off and deploy a second version of our service. Since we’re practicing blue-green deployment, the first version was called **blue** and the next one will be **green**. Deploying the second time is a bit more complicated since there are additional things to consider, especially because our goal is to have no downtime.
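In outline, the second (and every later) release follows a sequence like the sketch below; the helper functions are placeholders for the real deploy, test and proxy steps described in the series, not its actual code.

```python
# Outline of a blue-green release cycle. The helpers are placeholders for
# the real deploy, test and proxy steps, not the series' actual code.
def deploy_containers(color: str) -> None:
    print(f"deploying the {color} release")

def run_integration_tests(color: str) -> None:
    print(f"testing the {color} release")

def switch_proxy_to(color: str) -> None:
    print(f"pointing the reverse proxy at {color}")

def stop_containers(color: str) -> None:
    print(f"stopping the {color} release")

def next_color(current: str) -> str:
    return "green" if current == "blue" else "blue"

def release(current: str) -> str:
    """Deploy the new color next to the old one, then switch traffic."""
    target = next_color(current)
    deploy_containers(target)      # the new release runs alongside the old one
    run_integration_tests(target)  # verify it before exposing it to users
    switch_proxy_to(target)        # traffic moves only now, so there is no downtime
    stop_containers(current)       # the old color is removed after the switch
    return target                  # becomes "current" for the next release

current = release("blue")  # the first release was blue, so this one deploys green
```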