Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 4/4) – Scaling Individual Services

This article should be considered deprecated since it describes the old (standalone) Swarm. For more up-to-date information about the new Swarm mode, please read the Docker Swarm Introduction (Tour Around Docker 1.12 Series) article or consider getting The DevOps 2.1 Toolkit: Docker Swarm book.

This series is split into the following articles.

In the previous article we switched from manual to automatic deployment with Jenkins and Ansible. In the quest for zero downtime, we employed Consul to check the health of our services and, if one of them fails, initiate redeployment through Jenkins.

In this article we’ll explore how to scale individual services.

Setup

For those of you who stopped the VMs we created in the previous article (vagrant halt) or turned off your laptops, here’s how to quickly get back to the state where we left off. The rest of you can skip this chapter.

vagrant up
vagrant ssh swarm-master
ansible-playbook /vagrant/ansible/infra.yml -i /vagrant/ansible/hosts/prod
ansible-playbook /vagrant/ansible/books-service.yml -i /vagrant/ansible/hosts/prod
export DOCKER_HOST=tcp://10.100.199.200:2375

We can verify that everything is in order by running the following commands.

docker ps
curl http://10.100.199.200/api/v1/books | jq .

The first command should list, among other things, the booksservice_[COLOR]_1 and booksservice_db_1 containers. The second one should retrieve a JSON response with the three books we inserted before.

With that out of the way, we can continue where we left off.

Scaling Services

Let us scale our books-service so that it runs on at least two nodes. That way, if one of them fails, the other will keep serving requests while the rescue procedure we set up in the previous article redeploys the failed service.

docker ps | grep booksservice
cd /data/compose/config/books-service
docker-compose scale blue=2
docker ps | grep booksservice

If you are currently running green, please change the above command to docker-compose scale green=2.

The last docker ps command listed two running instances of our service: booksservice_blue_1 and booksservice_blue_2.

As with everything else we’ve done so far, the scaling option is already part of the Ansible setup. Let’s see how to do the equivalent of the command above with Ansible. We’ll deploy the latest version of the service scaled to three instances.

ansible-playbook /vagrant/ansible/books-service.yml -i /vagrant/ansible/hosts/prod --extra-vars "service_instances=3"
docker ps | grep booksservice

With a single run of the books-service playbook, we deployed a new version of the service scaled to three instances.
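For reference, the playbook ends up doing the equivalent of the manual steps we performed earlier. A minimal sketch of those steps (assuming blue is the currently active release) would be the following.

cd /data/compose/config/books-service
docker-compose pull blue      # fetch the latest image before scaling
docker-compose scale blue=3   # run three instances of the active release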

We won’t go into details, but you can probably imagine the potential this has beyond simple scaling whose only goal is to keep one instance running while another fails. We could, for example, create a system that scales services that are under heavy load. Consul could monitor service response times and, if they reach some threshold, scale the service to meet the increased traffic demand. A rough sketch of such a check follows.
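The sketch below is hypothetical and not part of this article’s setup: the threshold, the assumption that blue is the active release, and the instance counting via docker ps are all illustrative.

#!/usr/bin/env bash
# Hypothetical auto-scaling check (assumes DOCKER_HOST points to the Swarm master).
THRESHOLD=1.0
RESPONSE_TIME=$(curl -o /dev/null -s -w '%{time_total}' http://10.100.199.200/api/v1/books)
if [ "$(echo "$RESPONSE_TIME > $THRESHOLD" | bc -l)" = "1" ]; then
    # Count the running blue instances and add one more.
    INSTANCES=$(docker ps | grep -c booksservice_blue)
    cd /data/compose/config/books-service
    docker-compose scale blue=$((INSTANCES + 1))
fi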

Just as easily, we can scale back down to two instances.

ansible-playbook /vagrant/ansible/books-service.yml -i /vagrant/ansible/hosts/prod --extra-vars "service_instances=2"
docker ps | grep booksservice

All this would be pointless if our nginx configuration did not support it. Even though we have multiple instances of the same service, nginx needs to know about them and load balance requests across all of them. The Ansible playbook that we’ve been using already handles this scenario.

Let’s take a look at the nginx configuration related to the books-service.

cat /data/nginx/includes/books-service.conf

The output is as follows.

location /api/v1/books {
  proxy_pass http://books-service/api/v1/books;
}

This tells nginx that whenever someone requests an address that starts with /api/v1/books, the request should be proxied to http://books-service/api/v1/books. Let’s take a look at the configuration behind the books-service address (after all, it’s not a real domain).

cat /data/nginx/upstreams/books-service.conf
docker ps | grep booksservice

The output will differ from case to case. The important part is that the list of nginx upstream servers should match the list of containers we obtained with docker ps. One possible output of the first command is the following.

upstream books-service {
    server  10.100.199.202:32770;
    server  10.100.199.203:32781;
}

This tells nginx to balance requests between those two servers and ports.

We already mentioned in the previous articles that we are creating nginx configurations using Consul Template. Let us go through it again. The blue template looks like this.

upstream books-service {
    {{range service "books-service-blue" "any" }}
    server {{.Address}}:{{.Port}};
    {{end}}
}

It tells Consul Template to iterate over all instances (range) of the service called books-service-blue, ignoring their status (any). For each of those instances, it writes the IP (.Address) and port (.Port). We created a template for both the blue and the green version. When we ran the last deployment, Ansible took care of creating the template (with the correct color), copying it to the server, and running Consul Template which, in turn, reloaded nginx at the end of the process.
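As an illustration, the Consul Template invocation could look something like the following. The template path is an assumption based on the directories used above, and the reload command depends on how nginx is run (here assumed to be a container named nginx).

consul-template \
    -consul 10.100.199.200:8500 \
    -template "/data/consul_templates/books-service-blue-upstream.ctmpl:/data/nginx/upstreams/books-service.conf:docker kill -s HUP nginx" \
    -once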

The current setup does not scale MongoDB. I’ll leave that up to you. The process should be the same as with the service itself, with the additional caveat that MongoDB should be configured as a replica set, with one instance acting as primary and the rest as secondaries.
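As a starting point, a minimal sketch of the replica set initiation might look like this. The addresses, ports, and the replica set name are hypothetical and assume each mongod container was started with --replSet books.

mongo --host 10.100.199.202 --eval '
rs.initiate({
    _id: "books",
    members: [
        {_id: 0, host: "10.100.199.202:27017"},
        {_id: 1, host: "10.100.199.203:27017"},
        {_id: 2, host: "10.100.199.204:27017"}
    ]
})'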

The End (For Now)

We covered a lot of ground in these four articles and left even more possibilities unexplored. We could, for example, host on the same cluster not only different services but also copies of the same services for multiple customers. We could create logic that deploys services not only to the nodes with the fewest containers but also to those with enough CPU or memory. We could add Kubernetes or Mesos to the setup and gain more powerful and precise ways to schedule deployments. We already set up CPU, memory, and HD checks in Consul, but no action is taken when they reach their thresholds. However, time is limited and not everything can be explored at once.

I’d like to hear from you about what the next subject to explore should be, which parts of these articles require more details, and your experience after trying to apply this in your organization.

If you have any trouble following these examples, please let me know and I’ll do my best to help you out.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It’s about fast, reliable, and continuous deployments with zero downtime and the ability to roll back. It’s about scaling to any number of servers, designing self-healing systems capable of recovering from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book encompasses the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We’ll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We’ll go through many practices and even more tools.

7 thoughts on “Scaling To Infinity with Docker Swarm, Docker Compose and Consul (Part 4/4) – Scaling Individual Services”

  1. Florian

    Hi Viktor, thank you so much for this great series of posts – it was a pleasure reading them and exploring further opportunities.

    Just one little question regarding the first scaling operation: You said “If you are currently running green, please change the above command to docker-compose scale blue=2” – am I right that this should rather be “… to docker-compose scale green=2”?

    All the best from Berlin,
    Florian

  2. ErPe

    Hey! Really nice article. My question would be how this fits into production environments, especially with only one Consul server? Isn’t that a SPOF?

    1. Viktor Farcic (post author)

      In this simple demo, there was a Consul agent on each node, with one of them acting as a server. In a more complicated setting, with more nodes, you would run more Consul servers and, maybe, even put them on dedicated nodes. It all depends on the size of the project, case by case.

      Even though Consul is not officially production-ready, it’s been used in production without any problem and is becoming the preferred tool for that type of job.

      The major question is whether it is worthwhile going with the bleeding edge of technology or waiting until more adventurous adopters have proven it worthy. That’s for each of us to decide. Even if you do not choose Swarm, Consul, and the other tools from this article, the logic behind their implementation would still be the same. Part of it comes out of the box in tools like Kubernetes and Mesos, while other parts need to be custom made.

      1. ErPe

        In my specific case scenario, throwing Consul as a server onto nodes which need to be hosting services would probably be overkill.

        But thanks for the explanation – I will definitely take some learning points from this great article!

  3. Jeff G

    Great article. The challenge for us will be dealing with sticky sessions. I assume we would just extend your scripts to tell NGINX (or maybe HAProxy) to drain the existing sticky sessions from the green deployment for so many minutes before completely removing it.

    1. Viktor Farcic (post author)

      I’m not sure I understood your question. Are you referring to the removal of the green release after blue is deployed? If that’s the case, Consul Template takes care of that by reconfiguring nginx after, in this scenario, the blue release is deployed. On the other hand, nginx maintains existing connections (in this case to the green release) until they receive responses. Since some time is spent on testing between deployment and removal of the old release, those requests to the old release should have plenty of time to complete.

      Then again, I might have misunderstood your question. If that’s the case, can you please elaborate a bit more? Feel free to send me an email (it’s in the about section) if you’d prefer a one-on-one conversation.

